Aug 13 01:31:59.880100 kernel: Linux version 6.12.40-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 21:42:48 -00 2025
Aug 13 01:31:59.880123 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21
Aug 13 01:31:59.880132 kernel: BIOS-provided physical RAM map:
Aug 13 01:31:59.880141 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Aug 13 01:31:59.880146 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Aug 13 01:31:59.880151 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Aug 13 01:31:59.880158 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Aug 13 01:31:59.880164 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Aug 13 01:31:59.880169 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Aug 13 01:31:59.880175 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Aug 13 01:31:59.880180 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Aug 13 01:31:59.880186 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Aug 13 01:31:59.880193 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Aug 13 01:31:59.880199 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Aug 13 01:31:59.880205 kernel: NX (Execute Disable) protection: active
Aug 13 01:31:59.880211 kernel: APIC: Static calls initialized
Aug 13 01:31:59.880217 kernel: SMBIOS 2.8 present.
Aug 13 01:31:59.880225 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Aug 13 01:31:59.880231 kernel: DMI: Memory slots populated: 1/1
Aug 13 01:31:59.880237 kernel: Hypervisor detected: KVM
Aug 13 01:31:59.880243 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 13 01:31:59.880249 kernel: kvm-clock: using sched offset of 5665786237 cycles
Aug 13 01:31:59.880255 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 13 01:31:59.880261 kernel: tsc: Detected 1999.999 MHz processor
Aug 13 01:31:59.880268 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 01:31:59.880274 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 01:31:59.880280 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Aug 13 01:31:59.880289 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Aug 13 01:31:59.880295 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 01:31:59.880301 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Aug 13 01:31:59.880307 kernel: Using GB pages for direct mapping
Aug 13 01:31:59.880313 kernel: ACPI: Early table checksum verification disabled
Aug 13 01:31:59.880319 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Aug 13 01:31:59.880325 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:31:59.880331 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:31:59.880337 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:31:59.880345 kernel: ACPI: FACS 0x000000007FFE0000 000040
Aug 13 01:31:59.880351 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:31:59.880357 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:31:59.880364 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:31:59.880373 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:31:59.880379 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Aug 13 01:31:59.880388 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Aug 13 01:31:59.880410 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Aug 13 01:31:59.880417 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Aug 13 01:31:59.880423 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Aug 13 01:31:59.880429 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Aug 13 01:31:59.880436 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Aug 13 01:31:59.880442 kernel: No NUMA configuration found
Aug 13 01:31:59.880448 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Aug 13 01:31:59.880457 kernel: NODE_DATA(0) allocated [mem 0x17fff6dc0-0x17fffdfff]
Aug 13 01:31:59.880464 kernel: Zone ranges:
Aug 13 01:31:59.880470 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 01:31:59.880476 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Aug 13 01:31:59.880482 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Aug 13 01:31:59.880489 kernel: Device empty
Aug 13 01:31:59.880495 kernel: Movable zone start for each node
Aug 13 01:31:59.880501 kernel: Early memory node ranges
Aug 13 01:31:59.880507 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Aug 13 01:31:59.880514 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Aug 13 01:31:59.880522 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Aug 13 01:31:59.880528 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Aug 13 01:31:59.880534 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 01:31:59.880541 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Aug 13 01:31:59.880547 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Aug 13 01:31:59.880553 kernel: ACPI: PM-Timer IO Port: 0x608
Aug 13 01:31:59.880559 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 13 01:31:59.880566 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 13 01:31:59.880572 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 13 01:31:59.880580 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 13 01:31:59.880587 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 01:31:59.880593 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 13 01:31:59.880599 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 13 01:31:59.880606 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 01:31:59.880612 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 13 01:31:59.880618 kernel: TSC deadline timer available
Aug 13 01:31:59.880624 kernel: CPU topo: Max. logical packages: 1
Aug 13 01:31:59.880630 kernel: CPU topo: Max. logical dies: 1
Aug 13 01:31:59.880639 kernel: CPU topo: Max. dies per package: 1
Aug 13 01:31:59.880645 kernel: CPU topo: Max. threads per core: 1
Aug 13 01:31:59.880651 kernel: CPU topo: Num. cores per package: 2
Aug 13 01:31:59.880657 kernel: CPU topo: Num. threads per package: 2
Aug 13 01:31:59.880663 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Aug 13 01:31:59.880670 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug 13 01:31:59.880676 kernel: kvm-guest: KVM setup pv remote TLB flush
Aug 13 01:31:59.880682 kernel: kvm-guest: setup PV sched yield
Aug 13 01:31:59.880688 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Aug 13 01:31:59.880697 kernel: Booting paravirtualized kernel on KVM
Aug 13 01:31:59.880703 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 01:31:59.880709 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Aug 13 01:31:59.880716 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Aug 13 01:31:59.880722 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Aug 13 01:31:59.880728 kernel: pcpu-alloc: [0] 0 1
Aug 13 01:31:59.880734 kernel: kvm-guest: PV spinlocks enabled
Aug 13 01:31:59.880741 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 13 01:31:59.880748 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21
Aug 13 01:31:59.880757 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 01:31:59.880763 kernel: random: crng init done
Aug 13 01:31:59.880769 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 01:31:59.880776 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 01:31:59.880782 kernel: Fallback order for Node 0: 0
Aug 13 01:31:59.880788 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Aug 13 01:31:59.880795 kernel: Policy zone: Normal
Aug 13 01:31:59.880801 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 01:31:59.880809 kernel: software IO TLB: area num 2.
Aug 13 01:31:59.880816 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 13 01:31:59.880822 kernel: ftrace: allocating 40098 entries in 157 pages
Aug 13 01:31:59.880828 kernel: ftrace: allocated 157 pages with 5 groups
Aug 13 01:31:59.880835 kernel: Dynamic Preempt: voluntary
Aug 13 01:31:59.880841 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 01:31:59.880853 kernel: rcu: RCU event tracing is enabled.
Aug 13 01:31:59.880860 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 13 01:31:59.880867 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 01:31:59.880873 kernel: Rude variant of Tasks RCU enabled.
Aug 13 01:31:59.880881 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 01:31:59.880888 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 01:31:59.880894 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 13 01:31:59.880901 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 01:31:59.880914 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 01:31:59.880923 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 01:31:59.880929 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Aug 13 01:31:59.880936 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 01:31:59.880942 kernel: Console: colour VGA+ 80x25
Aug 13 01:31:59.880949 kernel: printk: legacy console [tty0] enabled
Aug 13 01:31:59.880955 kernel: printk: legacy console [ttyS0] enabled
Aug 13 01:31:59.880964 kernel: ACPI: Core revision 20240827
Aug 13 01:31:59.880971 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Aug 13 01:31:59.880977 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 01:31:59.880984 kernel: x2apic enabled
Aug 13 01:31:59.880990 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 13 01:31:59.880999 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Aug 13 01:31:59.881006 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Aug 13 01:31:59.881013 kernel: kvm-guest: setup PV IPIs
Aug 13 01:31:59.881019 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 13 01:31:59.881026 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns
Aug 13 01:31:59.881033 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999999)
Aug 13 01:31:59.881039 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Aug 13 01:31:59.881046 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Aug 13 01:31:59.881052 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Aug 13 01:31:59.881061 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 01:31:59.881068 kernel: Spectre V2 : Mitigation: Retpolines
Aug 13 01:31:59.881075 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 01:31:59.881081 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Aug 13 01:31:59.881088 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 13 01:31:59.881095 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 13 01:31:59.881101 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Aug 13 01:31:59.881108 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Aug 13 01:31:59.881117 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Aug 13 01:31:59.881123 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Aug 13 01:31:59.881130 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 13 01:31:59.881137 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 01:31:59.881143 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 01:31:59.881150 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 01:31:59.881156 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Aug 13 01:31:59.881163 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 01:31:59.881169 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Aug 13 01:31:59.881178 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Aug 13 01:31:59.881185 kernel: Freeing SMP alternatives memory: 32K
Aug 13 01:31:59.881191 kernel: pid_max: default: 32768 minimum: 301
Aug 13 01:31:59.881198 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Aug 13 01:31:59.881204 kernel: landlock: Up and running.
Aug 13 01:31:59.881211 kernel: SELinux: Initializing.
Aug 13 01:31:59.881217 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 01:31:59.881224 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 01:31:59.881231 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Aug 13 01:31:59.881240 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Aug 13 01:31:59.881246 kernel: ... version: 0
Aug 13 01:31:59.881253 kernel: ... bit width: 48
Aug 13 01:31:59.881259 kernel: ... generic registers: 6
Aug 13 01:31:59.881266 kernel: ... value mask: 0000ffffffffffff
Aug 13 01:31:59.881272 kernel: ... max period: 00007fffffffffff
Aug 13 01:31:59.881279 kernel: ... fixed-purpose events: 0
Aug 13 01:31:59.881285 kernel: ... event mask: 000000000000003f
Aug 13 01:31:59.881292 kernel: signal: max sigframe size: 3376
Aug 13 01:31:59.881300 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 01:31:59.881307 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 01:31:59.881313 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Aug 13 01:31:59.881320 kernel: smp: Bringing up secondary CPUs ...
Aug 13 01:31:59.881326 kernel: smpboot: x86: Booting SMP configuration:
Aug 13 01:31:59.881333 kernel: .... node #0, CPUs: #1
Aug 13 01:31:59.881339 kernel: smp: Brought up 1 node, 2 CPUs
Aug 13 01:31:59.881346 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS)
Aug 13 01:31:59.881353 kernel: Memory: 3961048K/4193772K available (14336K kernel code, 2430K rwdata, 9960K rodata, 54444K init, 2524K bss, 227296K reserved, 0K cma-reserved)
Aug 13 01:31:59.881362 kernel: devtmpfs: initialized
Aug 13 01:31:59.881368 kernel: x86/mm: Memory block size: 128MB
Aug 13 01:31:59.881375 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 01:31:59.881381 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 13 01:31:59.883118 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 01:31:59.883136 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 01:31:59.883144 kernel: audit: initializing netlink subsys (disabled)
Aug 13 01:31:59.883151 kernel: audit: type=2000 audit(1755048717.850:1): state=initialized audit_enabled=0 res=1
Aug 13 01:31:59.883158 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 01:31:59.883169 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 01:31:59.883176 kernel: cpuidle: using governor menu
Aug 13 01:31:59.883182 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 01:31:59.883189 kernel: dca service started, version 1.12.1
Aug 13 01:31:59.883196 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Aug 13 01:31:59.883202 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Aug 13 01:31:59.883209 kernel: PCI: Using configuration type 1 for base access
Aug 13 01:31:59.883216 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 01:31:59.883222 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 01:31:59.883231 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 01:31:59.883238 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 01:31:59.883244 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 01:31:59.883251 kernel: ACPI: Added _OSI(Module Device)
Aug 13 01:31:59.883257 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 01:31:59.883264 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 01:31:59.883270 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 01:31:59.883277 kernel: ACPI: Interpreter enabled
Aug 13 01:31:59.883283 kernel: ACPI: PM: (supports S0 S3 S5)
Aug 13 01:31:59.883292 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 01:31:59.883299 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 01:31:59.883305 kernel: PCI: Using E820 reservations for host bridge windows
Aug 13 01:31:59.883312 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Aug 13 01:31:59.883319 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 01:31:59.884777 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 01:31:59.884898 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Aug 13 01:31:59.885013 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Aug 13 01:31:59.885022 kernel: PCI host bridge to bus 0000:00
Aug 13 01:31:59.885142 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 01:31:59.885242 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 01:31:59.885339 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 01:31:59.886742 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Aug 13 01:31:59.886844 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Aug 13 01:31:59.886940 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Aug 13 01:31:59.887043 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 01:31:59.887173 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Aug 13 01:31:59.887300 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Aug 13 01:31:59.887430 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Aug 13 01:31:59.887542 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Aug 13 01:31:59.887648 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Aug 13 01:31:59.887758 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 13 01:31:59.887876 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Aug 13 01:31:59.887989 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Aug 13 01:31:59.888096 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Aug 13 01:31:59.888203 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Aug 13 01:31:59.888319 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Aug 13 01:31:59.890219 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Aug 13 01:31:59.890349 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Aug 13 01:31:59.890484 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Aug 13 01:31:59.890593 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Aug 13 01:31:59.890713 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Aug 13 01:31:59.890820 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Aug 13 01:31:59.890940 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Aug 13 01:31:59.891055 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Aug 13 01:31:59.891160 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Aug 13 01:31:59.891274 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Aug 13 01:31:59.891380 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Aug 13 01:31:59.892519 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 01:31:59.892547 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 01:31:59.892555 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 01:31:59.892566 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 01:31:59.892572 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Aug 13 01:31:59.892579 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Aug 13 01:31:59.892585 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Aug 13 01:31:59.892592 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Aug 13 01:31:59.892599 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Aug 13 01:31:59.892605 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Aug 13 01:31:59.892612 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Aug 13 01:31:59.892618 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Aug 13 01:31:59.892627 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Aug 13 01:31:59.892634 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Aug 13 01:31:59.892641 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Aug 13 01:31:59.892647 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Aug 13 01:31:59.892654 kernel: iommu: Default domain type: Translated
Aug 13 01:31:59.892661 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 01:31:59.892667 kernel: PCI: Using ACPI for IRQ routing
Aug 13 01:31:59.892674 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 01:31:59.892681 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Aug 13 01:31:59.892689 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Aug 13 01:31:59.892819 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Aug 13 01:31:59.892932 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Aug 13 01:31:59.893039 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 13 01:31:59.893049 kernel: vgaarb: loaded
Aug 13 01:31:59.893056 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Aug 13 01:31:59.893063 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Aug 13 01:31:59.893069 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 01:31:59.893076 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 01:31:59.893086 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 01:31:59.893093 kernel: pnp: PnP ACPI init
Aug 13 01:31:59.893228 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Aug 13 01:31:59.893239 kernel: pnp: PnP ACPI: found 5 devices
Aug 13 01:31:59.893246 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 01:31:59.893253 kernel: NET: Registered PF_INET protocol family
Aug 13 01:31:59.893260 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 01:31:59.893266 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 13 01:31:59.893276 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 01:31:59.893283 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 01:31:59.893290 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 13 01:31:59.893296 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 13 01:31:59.893303 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 01:31:59.893310 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 01:31:59.893316 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 01:31:59.893323 kernel: NET: Registered PF_XDP protocol family
Aug 13 01:31:59.894617 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 01:31:59.894731 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 01:31:59.894829 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 01:31:59.894926 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Aug 13 01:31:59.895021 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Aug 13 01:31:59.895117 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Aug 13 01:31:59.895126 kernel: PCI: CLS 0 bytes, default 64
Aug 13 01:31:59.895133 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Aug 13 01:31:59.895140 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Aug 13 01:31:59.895151 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns
Aug 13 01:31:59.895157 kernel: Initialise system trusted keyrings
Aug 13 01:31:59.895164 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 13 01:31:59.895171 kernel: Key type asymmetric registered
Aug 13 01:31:59.895178 kernel: Asymmetric key parser 'x509' registered
Aug 13 01:31:59.895184 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Aug 13 01:31:59.895191 kernel: io scheduler mq-deadline registered
Aug 13 01:31:59.895198 kernel: io scheduler kyber registered
Aug 13 01:31:59.895204 kernel: io scheduler bfq registered
Aug 13 01:31:59.895214 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 01:31:59.895221 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Aug 13 01:31:59.895228 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Aug 13 01:31:59.895235 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 01:31:59.895241 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 01:31:59.895248 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 13 01:31:59.895255 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 13 01:31:59.895261 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 13 01:31:59.895377 kernel: rtc_cmos 00:03: RTC can wake from S4
Aug 13 01:31:59.896414 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 13 01:31:59.896531 kernel: rtc_cmos 00:03: registered as rtc0
Aug 13 01:31:59.896633 kernel: rtc_cmos 00:03: setting system clock to 2025-08-13T01:31:59 UTC (1755048719)
Aug 13 01:31:59.896738 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Aug 13 01:31:59.896748 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Aug 13 01:31:59.896755 kernel: NET: Registered PF_INET6 protocol family
Aug 13 01:31:59.896762 kernel: Segment Routing with IPv6
Aug 13 01:31:59.896768 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 01:31:59.896778 kernel: NET: Registered PF_PACKET protocol family
Aug 13 01:31:59.896785 kernel: Key type dns_resolver registered
Aug 13 01:31:59.896792 kernel: IPI shorthand broadcast: enabled
Aug 13 01:31:59.896799 kernel: sched_clock: Marking stable (2744004194, 219776825)->(3001790940, -38009921)
Aug 13 01:31:59.896805 kernel: registered taskstats version 1
Aug 13 01:31:59.896812 kernel: Loading compiled-in X.509 certificates
Aug 13 01:31:59.896819 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.40-flatcar: dee0b464d3f7f8d09744a2392f69dde258bc95c0'
Aug 13 01:31:59.896826 kernel: Demotion targets for Node 0: null
Aug 13 01:31:59.896832 kernel: Key type .fscrypt registered
Aug 13 01:31:59.896841 kernel: Key type fscrypt-provisioning registered
Aug 13 01:31:59.896847 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 01:31:59.896854 kernel: ima: Allocated hash algorithm: sha1
Aug 13 01:31:59.896861 kernel: ima: No architecture policies found
Aug 13 01:31:59.896867 kernel: clk: Disabling unused clocks
Aug 13 01:31:59.896874 kernel: Warning: unable to open an initial console.
Aug 13 01:31:59.896881 kernel: Freeing unused kernel image (initmem) memory: 54444K
Aug 13 01:31:59.896888 kernel: Write protecting the kernel read-only data: 24576k
Aug 13 01:31:59.896894 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K
Aug 13 01:31:59.896903 kernel: Run /init as init process
Aug 13 01:31:59.896909 kernel: with arguments:
Aug 13 01:31:59.896916 kernel: /init
Aug 13 01:31:59.896922 kernel: with environment:
Aug 13 01:31:59.896929 kernel: HOME=/
Aug 13 01:31:59.896951 kernel: TERM=linux
Aug 13 01:31:59.896960 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 01:31:59.896968 systemd[1]: Successfully made /usr/ read-only.
Aug 13 01:31:59.896980 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 13 01:31:59.896988 systemd[1]: Detected virtualization kvm.
Aug 13 01:31:59.896995 systemd[1]: Detected architecture x86-64.
Aug 13 01:31:59.897002 systemd[1]: Running in initrd.
Aug 13 01:31:59.897010 systemd[1]: No hostname configured, using default hostname.
Aug 13 01:31:59.897017 systemd[1]: Hostname set to .
Aug 13 01:31:59.897024 systemd[1]: Initializing machine ID from random generator.
Aug 13 01:31:59.897032 systemd[1]: Queued start job for default target initrd.target.
Aug 13 01:31:59.897041 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 01:31:59.897049 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 01:31:59.897057 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 01:31:59.897064 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 01:31:59.897072 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 01:31:59.897080 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 01:31:59.897090 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 01:31:59.897098 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 01:31:59.897105 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 01:31:59.897113 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 01:31:59.897120 systemd[1]: Reached target paths.target - Path Units.
Aug 13 01:31:59.897128 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 01:31:59.897135 systemd[1]: Reached target swap.target - Swaps.
Aug 13 01:31:59.897142 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 01:31:59.897150 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 01:31:59.897159 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 01:31:59.897167 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 01:31:59.897174 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Aug 13 01:31:59.897181 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 01:31:59.897189 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 01:31:59.897196 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 01:31:59.897203 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 01:31:59.897213 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 13 01:31:59.897220 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 01:31:59.897228 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 13 01:31:59.897235 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Aug 13 01:31:59.897243 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 01:31:59.897250 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 01:31:59.897259 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 01:31:59.897267 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 01:31:59.897274 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 13 01:31:59.897282 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 01:31:59.897308 systemd-journald[207]: Collecting audit messages is disabled.
Aug 13 01:31:59.897328 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 01:31:59.897336 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 01:31:59.897345 systemd-journald[207]: Journal started
Aug 13 01:31:59.897370 systemd-journald[207]: Runtime Journal (/run/log/journal/ec8521fca0b54fc59df57ed11a09833e) is 8M, max 78.5M, 70.5M free.
Aug 13 01:31:59.891621 systemd-modules-load[208]: Inserted module 'overlay'
Aug 13 01:31:59.972621 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 01:31:59.972637 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 01:31:59.972647 kernel: Bridge firewalling registered
Aug 13 01:31:59.918729 systemd-modules-load[208]: Inserted module 'br_netfilter'
Aug 13 01:31:59.974090 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 01:31:59.975377 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 01:31:59.976963 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 01:31:59.982762 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 01:31:59.985793 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 01:31:59.990543 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 01:31:59.999979 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 01:32:00.011775 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 01:32:00.017979 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 01:32:00.024073 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 01:32:00.024824 systemd-tmpfiles[227]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Aug 13 01:32:00.028673 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 01:32:00.030487 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 01:32:00.045002 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 01:32:00.059855 dracut-cmdline[244]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21
Aug 13 01:32:00.096815 systemd-resolved[246]: Positive Trust Anchors:
Aug 13 01:32:00.097580 systemd-resolved[246]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 01:32:00.097609 systemd-resolved[246]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 01:32:00.103877 systemd-resolved[246]: Defaulting to hostname 'linux'.
Aug 13 01:32:00.105536 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 01:32:00.106463 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 01:32:00.160624 kernel: SCSI subsystem initialized
Aug 13 01:32:00.170425 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 01:32:00.182430 kernel: iscsi: registered transport (tcp)
Aug 13 01:32:00.202630 kernel: iscsi: registered transport (qla4xxx)
Aug 13 01:32:00.202757 kernel: QLogic iSCSI HBA Driver
Aug 13 01:32:00.225353 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 13 01:32:00.241533 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 01:32:00.244680 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 13 01:32:00.304005 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 01:32:00.306568 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 01:32:00.365430 kernel: raid6: avx2x4 gen() 29564 MB/s
Aug 13 01:32:00.383426 kernel: raid6: avx2x2 gen() 24313 MB/s
Aug 13 01:32:00.401751 kernel: raid6: avx2x1 gen() 16995 MB/s
Aug 13 01:32:00.401811 kernel: raid6: using algorithm avx2x4 gen() 29564 MB/s
Aug 13 01:32:00.420738 kernel: raid6: .... xor() 3048 MB/s, rmw enabled
Aug 13 01:32:00.420767 kernel: raid6: using avx2x2 recovery algorithm
Aug 13 01:32:00.440619 kernel: xor: automatically using best checksumming function avx
Aug 13 01:32:00.571446 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 01:32:00.580763 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 01:32:00.583335 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 01:32:00.612520 systemd-udevd[455]: Using default interface naming scheme 'v255'.
Aug 13 01:32:00.617890 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 01:32:00.621303 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 01:32:00.653479 dracut-pre-trigger[460]: rd.md=0: removing MD RAID activation
Aug 13 01:32:00.688833 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 01:32:00.690959 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 01:32:00.758077 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 01:32:00.762833 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 01:32:00.826459 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
Aug 13 01:32:00.835440 kernel: scsi host0: Virtio SCSI HBA
Aug 13 01:32:00.844414 kernel: cryptd: max_cpu_qlen set to 1000
Aug 13 01:32:00.854692 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Aug 13 01:32:00.866453 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Aug 13 01:32:00.886914 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 01:32:00.887047 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 01:32:01.018284 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 01:32:01.025423 kernel: libata version 3.00 loaded.
Aug 13 01:32:01.025466 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 01:32:01.030236 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Aug 13 01:32:01.076806 kernel: AES CTR mode by8 optimization enabled
Aug 13 01:32:01.079315 kernel: ahci 0000:00:1f.2: version 3.0
Aug 13 01:32:01.081333 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Aug 13 01:32:01.090623 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Aug 13 01:32:01.090785 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Aug 13 01:32:01.090919 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Aug 13 01:32:01.092704 kernel: sd 0:0:0:0: Power-on or device reset occurred
Aug 13 01:32:01.096587 kernel: sd 0:0:0:0: [sda] 9297920 512-byte logical blocks: (4.76 GB/4.43 GiB)
Aug 13 01:32:01.098167 kernel: sd 0:0:0:0: [sda] Write Protect is off
Aug 13 01:32:01.098495 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Aug 13 01:32:01.098687 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Aug 13 01:32:01.098868 kernel: scsi host1: ahci
Aug 13 01:32:01.100424 kernel: scsi host2: ahci
Aug 13 01:32:01.103778 kernel: scsi host3: ahci
Aug 13 01:32:01.103940 kernel: scsi host4: ahci
Aug 13 01:32:01.108140 kernel: scsi host5: ahci
Aug 13 01:32:01.108311 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 01:32:01.108324 kernel: GPT:9289727 != 9297919
Aug 13 01:32:01.108334 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 01:32:01.108344 kernel: GPT:9289727 != 9297919
Aug 13 01:32:01.108358 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 01:32:01.108368 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 01:32:01.110554 kernel: scsi host6: ahci
Aug 13 01:32:01.110717 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 lpm-pol 0
Aug 13 01:32:01.110729 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 lpm-pol 0
Aug 13 01:32:01.110739 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 lpm-pol 0
Aug 13 01:32:01.110748 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 lpm-pol 0
Aug 13 01:32:01.110757 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 lpm-pol 0
Aug 13 01:32:01.110766 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 lpm-pol 0
Aug 13 01:32:01.110779 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Aug 13 01:32:01.205857 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 01:32:01.416431 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Aug 13 01:32:01.416482 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Aug 13 01:32:01.426716 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Aug 13 01:32:01.426763 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Aug 13 01:32:01.426775 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Aug 13 01:32:01.427408 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Aug 13 01:32:01.491639 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Aug 13 01:32:01.506846 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Aug 13 01:32:01.515632 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Aug 13 01:32:01.516519 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 01:32:01.524475 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Aug 13 01:32:01.525128 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Aug 13 01:32:01.528013 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 01:32:01.528720 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 01:32:01.530011 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 01:32:01.532206 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 01:32:01.534262 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 01:32:01.546096 disk-uuid[634]: Primary Header is updated.
Aug 13 01:32:01.546096 disk-uuid[634]: Secondary Entries is updated.
Aug 13 01:32:01.546096 disk-uuid[634]: Secondary Header is updated.
Aug 13 01:32:01.551818 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 01:32:01.557420 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 01:32:02.574247 disk-uuid[638]: The operation has completed successfully.
Aug 13 01:32:02.575057 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 01:32:02.627443 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 01:32:02.627582 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 01:32:02.655649 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 01:32:02.667780 sh[656]: Success
Aug 13 01:32:02.685623 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 01:32:02.685669 kernel: device-mapper: uevent: version 1.0.3
Aug 13 01:32:02.688583 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Aug 13 01:32:02.700428 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Aug 13 01:32:02.751669 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 01:32:02.757670 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 01:32:02.772076 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 01:32:02.781417 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Aug 13 01:32:02.781480 kernel: BTRFS: device fsid 0c0338fb-9434-41c1-99a2-737cbe2351c4 devid 1 transid 44 /dev/mapper/usr (254:0) scanned by mount (668)
Aug 13 01:32:02.787671 kernel: BTRFS info (device dm-0): first mount of filesystem 0c0338fb-9434-41c1-99a2-737cbe2351c4
Aug 13 01:32:02.787693 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 13 01:32:02.790366 kernel: BTRFS info (device dm-0): using free-space-tree
Aug 13 01:32:02.798633 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 01:32:02.799653 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Aug 13 01:32:02.800519 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 01:32:02.801495 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 01:32:02.804315 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 01:32:02.828780 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (699)
Aug 13 01:32:02.828830 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 01:32:02.831618 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 01:32:02.833587 kernel: BTRFS info (device sda6): using free-space-tree
Aug 13 01:32:02.845626 kernel: BTRFS info (device sda6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 01:32:02.847160 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 01:32:02.850510 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 01:32:02.953607 ignition[758]: Ignition 2.21.0
Aug 13 01:32:02.954437 ignition[758]: Stage: fetch-offline
Aug 13 01:32:02.955671 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 01:32:02.954471 ignition[758]: no configs at "/usr/lib/ignition/base.d"
Aug 13 01:32:02.954481 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:32:02.954566 ignition[758]: parsed url from cmdline: ""
Aug 13 01:32:02.954570 ignition[758]: no config URL provided
Aug 13 01:32:02.954575 ignition[758]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 01:32:02.954583 ignition[758]: no config at "/usr/lib/ignition/user.ign"
Aug 13 01:32:02.960527 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 01:32:02.954587 ignition[758]: failed to fetch config: resource requires networking
Aug 13 01:32:02.961853 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 01:32:02.954741 ignition[758]: Ignition finished successfully
Aug 13 01:32:02.996429 systemd-networkd[844]: lo: Link UP
Aug 13 01:32:02.996490 systemd-networkd[844]: lo: Gained carrier
Aug 13 01:32:02.998179 systemd-networkd[844]: Enumeration completed
Aug 13 01:32:02.998898 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 01:32:02.999182 systemd-networkd[844]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 01:32:02.999187 systemd-networkd[844]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 01:32:03.001550 systemd[1]: Reached target network.target - Network.
Aug 13 01:32:03.001798 systemd-networkd[844]: eth0: Link UP
Aug 13 01:32:03.001973 systemd-networkd[844]: eth0: Gained carrier
Aug 13 01:32:03.001982 systemd-networkd[844]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 01:32:03.003832 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Aug 13 01:32:03.029812 ignition[848]: Ignition 2.21.0
Aug 13 01:32:03.030650 ignition[848]: Stage: fetch
Aug 13 01:32:03.030772 ignition[848]: no configs at "/usr/lib/ignition/base.d"
Aug 13 01:32:03.030782 ignition[848]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:32:03.030853 ignition[848]: parsed url from cmdline: ""
Aug 13 01:32:03.030857 ignition[848]: no config URL provided
Aug 13 01:32:03.030862 ignition[848]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 01:32:03.030870 ignition[848]: no config at "/usr/lib/ignition/user.ign"
Aug 13 01:32:03.030903 ignition[848]: PUT http://169.254.169.254/v1/token: attempt #1
Aug 13 01:32:03.031066 ignition[848]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Aug 13 01:32:03.231687 ignition[848]: PUT http://169.254.169.254/v1/token: attempt #2
Aug 13 01:32:03.231862 ignition[848]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Aug 13 01:32:03.530455 systemd-networkd[844]: eth0: DHCPv4 address 172.234.27.175/24, gateway 172.234.27.1 acquired from 23.194.118.74
Aug 13 01:32:03.632125 ignition[848]: PUT http://169.254.169.254/v1/token: attempt #3
Aug 13 01:32:03.738631 ignition[848]: PUT result: OK
Aug 13 01:32:03.738713 ignition[848]: GET http://169.254.169.254/v1/user-data: attempt #1
Aug 13 01:32:03.870473 ignition[848]: GET result: OK
Aug 13 01:32:03.870660 ignition[848]: parsing config with SHA512: ae23ba0b4b3b6a4272b92c444983de4479b6ca86f6337f327683977393405c0fe7a62643f154a535c560e6f89c98d940cc15033b1145b268b5f13b491b1e44d1
Aug 13 01:32:03.873669 unknown[848]: fetched base config from "system"
Aug 13 01:32:03.874252 unknown[848]: fetched base config from "system"
Aug 13 01:32:03.874263 unknown[848]: fetched user config from "akamai"
Aug 13 01:32:03.874494 ignition[848]: fetch: fetch complete
Aug 13 01:32:03.874499 ignition[848]: fetch: fetch passed
Aug 13 01:32:03.874537 ignition[848]: Ignition finished successfully
Aug 13 01:32:03.878588 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Aug 13 01:32:03.900252 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 01:32:03.924066 ignition[856]: Ignition 2.21.0
Aug 13 01:32:03.924081 ignition[856]: Stage: kargs
Aug 13 01:32:03.924186 ignition[856]: no configs at "/usr/lib/ignition/base.d"
Aug 13 01:32:03.924196 ignition[856]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:32:03.924737 ignition[856]: kargs: kargs passed
Aug 13 01:32:03.926277 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 13 01:32:03.924769 ignition[856]: Ignition finished successfully
Aug 13 01:32:03.929527 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 13 01:32:03.946008 ignition[862]: Ignition 2.21.0
Aug 13 01:32:03.946022 ignition[862]: Stage: disks
Aug 13 01:32:03.946133 ignition[862]: no configs at "/usr/lib/ignition/base.d"
Aug 13 01:32:03.946142 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:32:03.946803 ignition[862]: disks: disks passed
Aug 13 01:32:03.948095 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 01:32:03.946837 ignition[862]: Ignition finished successfully
Aug 13 01:32:03.949243 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 01:32:03.950059 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 01:32:03.951110 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 01:32:03.952080 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 01:32:03.953464 systemd[1]: Reached target basic.target - Basic System.
Aug 13 01:32:03.955325 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 01:32:03.989310 systemd-fsck[870]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Aug 13 01:32:03.996304 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 01:32:04.000478 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 01:32:04.080623 systemd-networkd[844]: eth0: Gained IPv6LL
Aug 13 01:32:04.109416 kernel: EXT4-fs (sda9): mounted filesystem 069caac6-7833-4acd-8940-01a7ff7d1281 r/w with ordered data mode. Quota mode: none.
Aug 13 01:32:04.109994 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 01:32:04.110938 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 01:32:04.113693 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 01:32:04.117455 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 01:32:04.118914 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 13 01:32:04.119775 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 01:32:04.119799 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 01:32:04.124627 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 01:32:04.126969 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 01:32:04.134412 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (878)
Aug 13 01:32:04.137541 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 01:32:04.137565 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 01:32:04.140626 kernel: BTRFS info (device sda6): using free-space-tree
Aug 13 01:32:04.144101 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 01:32:04.173145 initrd-setup-root[902]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 01:32:04.177263 initrd-setup-root[909]: cut: /sysroot/etc/group: No such file or directory
Aug 13 01:32:04.181457 initrd-setup-root[916]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 01:32:04.185925 initrd-setup-root[923]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 01:32:04.261556 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 01:32:04.263463 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 01:32:04.265515 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 01:32:04.277743 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 01:32:04.281535 kernel: BTRFS info (device sda6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 01:32:04.296383 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 01:32:04.304221 ignition[992]: INFO : Ignition 2.21.0
Aug 13 01:32:04.305843 ignition[992]: INFO : Stage: mount
Aug 13 01:32:04.305843 ignition[992]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 01:32:04.305843 ignition[992]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:32:04.308425 ignition[992]: INFO : mount: mount passed
Aug 13 01:32:04.308425 ignition[992]: INFO : Ignition finished successfully
Aug 13 01:32:04.310293 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 13 01:32:04.312215 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 13 01:32:05.111651 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 01:32:05.138428 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1003)
Aug 13 01:32:05.141458 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 01:32:05.141479 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 01:32:05.144201 kernel: BTRFS info (device sda6): using free-space-tree
Aug 13 01:32:05.147892 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 01:32:05.170030 ignition[1019]: INFO : Ignition 2.21.0
Aug 13 01:32:05.170030 ignition[1019]: INFO : Stage: files
Aug 13 01:32:05.171429 ignition[1019]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 01:32:05.171429 ignition[1019]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:32:05.171429 ignition[1019]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 01:32:05.173929 ignition[1019]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 01:32:05.173929 ignition[1019]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 01:32:05.175730 ignition[1019]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 01:32:05.175730 ignition[1019]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 01:32:05.175730 ignition[1019]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 01:32:05.174681 unknown[1019]: wrote ssh authorized keys file for user: core
Aug 13 01:32:05.179171 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 01:32:05.180151 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Aug 13 01:32:05.471619 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 13 01:32:05.634102 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 01:32:05.634102 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 01:32:05.634102 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 01:32:05.634102 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 01:32:05.634102 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 01:32:05.634102 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 01:32:05.634102 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 01:32:05.634102 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 01:32:05.634102 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 01:32:05.645079 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 01:32:05.645079 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 01:32:05.645079 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 01:32:05.645079 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 01:32:05.645079 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 01:32:05.645079 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Aug 13 01:32:06.218081 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Aug 13 01:32:06.692358 ignition[1019]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 01:32:06.692358 ignition[1019]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Aug 13 01:32:06.694726 ignition[1019]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 01:32:06.695713 ignition[1019]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 01:32:06.695713 ignition[1019]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Aug 13 01:32:06.695713 ignition[1019]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Aug 13 01:32:06.695713 ignition[1019]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Aug 13 01:32:06.701458 ignition[1019]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Aug 13 01:32:06.701458 ignition[1019]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Aug 13 01:32:06.701458 ignition[1019]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 01:32:06.701458 ignition[1019]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 01:32:06.701458 ignition[1019]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 01:32:06.701458 ignition[1019]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 01:32:06.701458 ignition[1019]: INFO : files: files passed
Aug 13 01:32:06.701458 ignition[1019]: INFO : Ignition finished successfully
Aug 13 01:32:06.699048 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 13 01:32:06.700634 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 13 01:32:06.704467 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 01:32:06.714923 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 01:32:06.715027 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 13 01:32:06.723384 initrd-setup-root-after-ignition[1049]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 01:32:06.723384 initrd-setup-root-after-ignition[1049]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 01:32:06.725111 initrd-setup-root-after-ignition[1053]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 01:32:06.726708 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 01:32:06.728190 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 13 01:32:06.729513 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 01:32:06.784418 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Aug 13 01:32:06.784532 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Aug 13 01:32:06.785767 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Aug 13 01:32:06.786768 systemd[1]: Reached target initrd.target - Initrd Default Target.
Aug 13 01:32:06.787989 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Aug 13 01:32:06.788657 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Aug 13 01:32:06.818195 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 01:32:06.819832 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Aug 13 01:32:06.832215 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Aug 13 01:32:06.832892 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 01:32:06.834192 systemd[1]: Stopped target timers.target - Timer Units.
Aug 13 01:32:06.835376 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Aug 13 01:32:06.835526 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Aug 13 01:32:06.836731 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Aug 13 01:32:06.837501 systemd[1]: Stopped target basic.target - Basic System.
Aug 13 01:32:06.838651 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Aug 13 01:32:06.839658 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 01:32:06.840768 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Aug 13 01:32:06.841909 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Aug 13 01:32:06.843185 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Aug 13 01:32:06.844302 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 01:32:06.845829 systemd[1]: Stopped target sysinit.target - System Initialization.
Aug 13 01:32:06.846799 systemd[1]: Stopped target local-fs.target - Local File Systems.
Aug 13 01:32:06.848064 systemd[1]: Stopped target swap.target - Swaps.
Aug 13 01:32:06.849127 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Aug 13 01:32:06.849219 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 01:32:06.850491 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Aug 13 01:32:06.851262 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 01:32:06.852241 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Aug 13 01:32:06.852841 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 01:32:06.854157 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Aug 13 01:32:06.854290 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Aug 13 01:32:06.855601 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Aug 13 01:32:06.855742 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 01:32:06.856468 systemd[1]: ignition-files.service: Deactivated successfully.
Aug 13 01:32:06.856556 systemd[1]: Stopped ignition-files.service - Ignition (files).
Aug 13 01:32:06.859470 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Aug 13 01:32:06.862552 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Aug 13 01:32:06.863067 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Aug 13 01:32:06.863188 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 01:32:06.864248 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Aug 13 01:32:06.864821 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 01:32:06.869777 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Aug 13 01:32:06.869868 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Aug 13 01:32:06.885159 ignition[1073]: INFO : Ignition 2.21.0
Aug 13 01:32:06.885159 ignition[1073]: INFO : Stage: umount
Aug 13 01:32:06.888051 ignition[1073]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 01:32:06.888051 ignition[1073]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:32:06.888051 ignition[1073]: INFO : umount: umount passed
Aug 13 01:32:06.888051 ignition[1073]: INFO : Ignition finished successfully
Aug 13 01:32:06.889253 systemd[1]: ignition-mount.service: Deactivated successfully.
Aug 13 01:32:06.889375 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Aug 13 01:32:06.891903 systemd[1]: ignition-disks.service: Deactivated successfully.
Aug 13 01:32:06.891953 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Aug 13 01:32:06.921667 systemd[1]: ignition-kargs.service: Deactivated successfully.
Aug 13 01:32:06.921740 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Aug 13 01:32:06.924566 systemd[1]: ignition-fetch.service: Deactivated successfully.
Aug 13 01:32:06.924626 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Aug 13 01:32:06.926711 systemd[1]: Stopped target network.target - Network.
Aug 13 01:32:06.928603 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Aug 13 01:32:06.928657 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 01:32:06.929506 systemd[1]: Stopped target paths.target - Path Units.
Aug 13 01:32:06.929996 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Aug 13 01:32:06.930048 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 01:32:06.931046 systemd[1]: Stopped target slices.target - Slice Units.
Aug 13 01:32:06.932218 systemd[1]: Stopped target sockets.target - Socket Units.
Aug 13 01:32:06.933215 systemd[1]: iscsid.socket: Deactivated successfully.
Aug 13 01:32:06.933257 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 01:32:06.934310 systemd[1]: iscsiuio.socket: Deactivated successfully.
Aug 13 01:32:06.934347 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 01:32:06.935449 systemd[1]: ignition-setup.service: Deactivated successfully.
Aug 13 01:32:06.935498 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Aug 13 01:32:06.936440 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Aug 13 01:32:06.936483 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Aug 13 01:32:06.937829 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Aug 13 01:32:06.938995 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Aug 13 01:32:06.941526 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Aug 13 01:32:06.942099 systemd[1]: sysroot-boot.service: Deactivated successfully.
Aug 13 01:32:06.942206 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Aug 13 01:32:06.944720 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Aug 13 01:32:06.944804 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Aug 13 01:32:06.946455 systemd[1]: systemd-resolved.service: Deactivated successfully.
Aug 13 01:32:06.946586 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Aug 13 01:32:06.951958 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Aug 13 01:32:06.952297 systemd[1]: systemd-networkd.service: Deactivated successfully.
Aug 13 01:32:06.952463 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Aug 13 01:32:06.954501 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Aug 13 01:32:06.955846 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Aug 13 01:32:06.957058 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Aug 13 01:32:06.957101 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 01:32:06.958987 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Aug 13 01:32:06.960759 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Aug 13 01:32:06.960822 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 01:32:06.961431 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Aug 13 01:32:06.961488 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Aug 13 01:32:06.963915 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Aug 13 01:32:06.963968 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Aug 13 01:32:06.966116 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Aug 13 01:32:06.966165 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 01:32:06.968813 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 01:32:06.972726 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Aug 13 01:32:06.972809 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Aug 13 01:32:06.987179 systemd[1]: network-cleanup.service: Deactivated successfully.
Aug 13 01:32:06.987898 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Aug 13 01:32:06.988909 systemd[1]: systemd-udevd.service: Deactivated successfully.
Aug 13 01:32:06.989084 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 01:32:06.990625 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Aug 13 01:32:06.990699 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Aug 13 01:32:06.992038 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Aug 13 01:32:06.992077 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 01:32:06.993474 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Aug 13 01:32:06.993527 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 01:32:06.995232 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Aug 13 01:32:06.995280 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Aug 13 01:32:06.996489 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Aug 13 01:32:06.996543 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 01:32:06.998618 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Aug 13 01:32:06.999854 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Aug 13 01:32:06.999907 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 01:32:07.003120 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Aug 13 01:32:07.003197 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 01:32:07.005570 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully.
Aug 13 01:32:07.005621 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 01:32:07.007796 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Aug 13 01:32:07.007843 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 01:32:07.009968 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 01:32:07.010015 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 01:32:07.012620 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully.
Aug 13 01:32:07.012682 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully.
Aug 13 01:32:07.012727 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Aug 13 01:32:07.012774 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Aug 13 01:32:07.014731 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Aug 13 01:32:07.014858 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Aug 13 01:32:07.016687 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Aug 13 01:32:07.019496 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Aug 13 01:32:07.033686 systemd[1]: Switching root.
Aug 13 01:32:07.067444 systemd-journald[207]: Received SIGTERM from PID 1 (systemd).
Aug 13 01:32:07.067504 systemd-journald[207]: Journal stopped
Aug 13 01:32:08.228772 kernel: SELinux: policy capability network_peer_controls=1
Aug 13 01:32:08.228799 kernel: SELinux: policy capability open_perms=1
Aug 13 01:32:08.228810 kernel: SELinux: policy capability extended_socket_class=1
Aug 13 01:32:08.228819 kernel: SELinux: policy capability always_check_network=0
Aug 13 01:32:08.228832 kernel: SELinux: policy capability cgroup_seclabel=1
Aug 13 01:32:08.228841 kernel: SELinux: policy capability nnp_nosuid_transition=1
Aug 13 01:32:08.228850 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Aug 13 01:32:08.228859 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Aug 13 01:32:08.228868 kernel: SELinux: policy capability userspace_initial_context=0
Aug 13 01:32:08.228876 kernel: audit: type=1403 audit(1755048727.222:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Aug 13 01:32:08.228888 systemd[1]: Successfully loaded SELinux policy in 79.176ms.
Aug 13 01:32:08.228901 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.077ms.
Aug 13 01:32:08.228912 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 13 01:32:08.228922 systemd[1]: Detected virtualization kvm.
Aug 13 01:32:08.228934 systemd[1]: Detected architecture x86-64.
Aug 13 01:32:08.228946 systemd[1]: Detected first boot.
Aug 13 01:32:08.228956 systemd[1]: Initializing machine ID from random generator.
Aug 13 01:32:08.228966 zram_generator::config[1122]: No configuration found.
Aug 13 01:32:08.228977 kernel: Guest personality initialized and is inactive
Aug 13 01:32:08.228986 kernel: VMCI host device registered (name=vmci, major=10, minor=125)
Aug 13 01:32:08.228996 kernel: Initialized host personality
Aug 13 01:32:08.229006 kernel: NET: Registered PF_VSOCK protocol family
Aug 13 01:32:08.229017 systemd[1]: Populated /etc with preset unit settings.
Aug 13 01:32:08.229028 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Aug 13 01:32:08.229038 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Aug 13 01:32:08.229048 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Aug 13 01:32:08.229058 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Aug 13 01:32:08.229067 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Aug 13 01:32:08.229078 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Aug 13 01:32:08.229090 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Aug 13 01:32:08.229100 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Aug 13 01:32:08.229110 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Aug 13 01:32:08.229120 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Aug 13 01:32:08.229131 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Aug 13 01:32:08.229141 systemd[1]: Created slice user.slice - User and Session Slice.
Aug 13 01:32:08.229152 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 01:32:08.229162 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 01:32:08.229174 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Aug 13 01:32:08.229185 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Aug 13 01:32:08.229196 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Aug 13 01:32:08.229209 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 01:32:08.229219 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Aug 13 01:32:08.229230 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 01:32:08.229240 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 01:32:08.229252 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Aug 13 01:32:08.229262 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Aug 13 01:32:08.229272 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Aug 13 01:32:08.229282 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Aug 13 01:32:08.229292 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 01:32:08.229302 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 01:32:08.229311 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 01:32:08.229321 systemd[1]: Reached target swap.target - Swaps.
Aug 13 01:32:08.229331 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Aug 13 01:32:08.229344 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Aug 13 01:32:08.229354 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Aug 13 01:32:08.229364 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 01:32:08.229376 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 01:32:08.229408 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 01:32:08.229420 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Aug 13 01:32:08.229430 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Aug 13 01:32:08.229440 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Aug 13 01:32:08.229450 systemd[1]: Mounting media.mount - External Media Directory...
Aug 13 01:32:08.229461 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:32:08.229471 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Aug 13 01:32:08.229481 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Aug 13 01:32:08.229491 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Aug 13 01:32:08.229505 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Aug 13 01:32:08.229515 systemd[1]: Reached target machines.target - Containers.
Aug 13 01:32:08.229525 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Aug 13 01:32:08.229536 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 01:32:08.229546 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 01:32:08.229556 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Aug 13 01:32:08.229566 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 01:32:08.229577 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 01:32:08.229589 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 01:32:08.229599 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Aug 13 01:32:08.229610 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 01:32:08.229620 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Aug 13 01:32:08.229631 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Aug 13 01:32:08.229641 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Aug 13 01:32:08.229653 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Aug 13 01:32:08.229664 systemd[1]: Stopped systemd-fsck-usr.service.
Aug 13 01:32:08.229677 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 13 01:32:08.229687 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 01:32:08.229697 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 01:32:08.229708 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 13 01:32:08.229718 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Aug 13 01:32:08.229728 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Aug 13 01:32:08.229738 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 01:32:08.229748 systemd[1]: verity-setup.service: Deactivated successfully.
Aug 13 01:32:08.229760 kernel: loop: module loaded
Aug 13 01:32:08.229770 systemd[1]: Stopped verity-setup.service.
Aug 13 01:32:08.229780 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:32:08.229791 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Aug 13 01:32:08.229801 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Aug 13 01:32:08.229811 systemd[1]: Mounted media.mount - External Media Directory.
Aug 13 01:32:08.229820 kernel: fuse: init (API version 7.41)
Aug 13 01:32:08.229830 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Aug 13 01:32:08.229840 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Aug 13 01:32:08.229852 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Aug 13 01:32:08.229863 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Aug 13 01:32:08.229874 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 01:32:08.229885 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Aug 13 01:32:08.229895 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Aug 13 01:32:08.229905 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 01:32:08.229915 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 01:32:08.229925 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 01:32:08.229936 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 01:32:08.229948 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Aug 13 01:32:08.229958 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Aug 13 01:32:08.229968 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 01:32:08.229978 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 01:32:08.229988 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 01:32:08.229998 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Aug 13 01:32:08.230008 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Aug 13 01:32:08.230017 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Aug 13 01:32:08.230030 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Aug 13 01:32:08.230040 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 01:32:08.230055 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Aug 13 01:32:08.230087 systemd-journald[1209]: Collecting audit messages is disabled.
Aug 13 01:32:08.230110 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Aug 13 01:32:08.230124 systemd-journald[1209]: Journal started
Aug 13 01:32:08.230145 systemd-journald[1209]: Runtime Journal (/run/log/journal/6bde58c30f254ddca174a507d89ba196) is 8M, max 78.5M, 70.5M free.
Aug 13 01:32:07.807634 systemd[1]: Queued start job for default target multi-user.target.
Aug 13 01:32:07.828167 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6.
Aug 13 01:32:07.828975 systemd[1]: systemd-journald.service: Deactivated successfully.
Aug 13 01:32:08.234430 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 01:32:08.243731 kernel: ACPI: bus type drm_connector registered
Aug 13 01:32:08.248604 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Aug 13 01:32:08.254421 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 01:32:08.261150 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Aug 13 01:32:08.261194 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 01:32:08.270676 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 01:32:08.283959 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Aug 13 01:32:08.284011 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 01:32:08.290433 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 01:32:08.293020 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 01:32:08.294542 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 01:32:08.298608 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 01:32:08.299667 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Aug 13 01:32:08.306382 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 01:32:08.308322 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Aug 13 01:32:08.309280 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Aug 13 01:32:08.310207 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Aug 13 01:32:08.339114 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Aug 13 01:32:08.340595 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 13 01:32:08.353828 kernel: loop0: detected capacity change from 0 to 113872
Aug 13 01:32:08.345564 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Aug 13 01:32:08.351571 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Aug 13 01:32:08.374437 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Aug 13 01:32:08.375291 systemd-tmpfiles[1225]: ACLs are not supported, ignoring.
Aug 13 01:32:08.375314 systemd-tmpfiles[1225]: ACLs are not supported, ignoring.
Aug 13 01:32:08.380663 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 01:32:08.390692 systemd-journald[1209]: Time spent on flushing to /var/log/journal/6bde58c30f254ddca174a507d89ba196 is 18.239ms for 1009 entries.
Aug 13 01:32:08.390692 systemd-journald[1209]: System Journal (/var/log/journal/6bde58c30f254ddca174a507d89ba196) is 8M, max 195.6M, 187.6M free.
Aug 13 01:32:08.418643 systemd-journald[1209]: Received client request to flush runtime journal.
Aug 13 01:32:08.418690 kernel: loop1: detected capacity change from 0 to 8
Aug 13 01:32:08.395432 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Aug 13 01:32:08.399160 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 01:32:08.403713 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Aug 13 01:32:08.422835 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Aug 13 01:32:08.429506 kernel: loop2: detected capacity change from 0 to 221472
Aug 13 01:32:08.477537 kernel: loop3: detected capacity change from 0 to 146240
Aug 13 01:32:08.482380 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Aug 13 01:32:08.486519 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 01:32:08.522413 kernel: loop4: detected capacity change from 0 to 113872
Aug 13 01:32:08.522929 systemd-tmpfiles[1265]: ACLs are not supported, ignoring.
Aug 13 01:32:08.522950 systemd-tmpfiles[1265]: ACLs are not supported, ignoring.
Aug 13 01:32:08.530855 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 01:32:08.538584 kernel: loop5: detected capacity change from 0 to 8 Aug 13 01:32:08.543601 kernel: loop6: detected capacity change from 0 to 221472 Aug 13 01:32:08.563491 kernel: loop7: detected capacity change from 0 to 146240 Aug 13 01:32:08.578125 (sd-merge)[1268]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. Aug 13 01:32:08.578732 (sd-merge)[1268]: Merged extensions into '/usr'. Aug 13 01:32:08.590311 systemd[1]: Reload requested from client PID 1224 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 01:32:08.590331 systemd[1]: Reloading... Aug 13 01:32:08.715430 zram_generator::config[1298]: No configuration found. Aug 13 01:32:08.814508 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:32:08.890418 ldconfig[1220]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 01:32:08.907374 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 01:32:08.907671 systemd[1]: Reloading finished in 316 ms. Aug 13 01:32:08.923877 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 01:32:08.925228 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 01:32:08.938722 systemd[1]: Starting ensure-sysext.service... Aug 13 01:32:08.943743 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 01:32:08.973528 systemd[1]: Reload requested from client PID 1338 ('systemctl') (unit ensure-sysext.service)... Aug 13 01:32:08.973620 systemd[1]: Reloading... Aug 13 01:32:08.989908 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. 
Aug 13 01:32:08.989944 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Aug 13 01:32:08.990293 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 01:32:08.992536 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 01:32:08.994317 systemd-tmpfiles[1339]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 01:32:08.995736 systemd-tmpfiles[1339]: ACLs are not supported, ignoring. Aug 13 01:32:08.995804 systemd-tmpfiles[1339]: ACLs are not supported, ignoring. Aug 13 01:32:09.004991 systemd-tmpfiles[1339]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 01:32:09.005007 systemd-tmpfiles[1339]: Skipping /boot Aug 13 01:32:09.030148 systemd-tmpfiles[1339]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 01:32:09.032427 systemd-tmpfiles[1339]: Skipping /boot Aug 13 01:32:09.077427 zram_generator::config[1365]: No configuration found. Aug 13 01:32:09.179711 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:32:09.249428 systemd[1]: Reloading finished in 275 ms. Aug 13 01:32:09.264986 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 13 01:32:09.278194 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 01:32:09.287139 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 01:32:09.289565 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 01:32:09.296558 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... 
Aug 13 01:32:09.302808 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 01:32:09.312739 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 01:32:09.315770 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 01:32:09.321362 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:32:09.321563 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 01:32:09.325206 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 01:32:09.330044 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 01:32:09.340604 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 01:32:09.341254 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 01:32:09.341353 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 01:32:09.341492 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:32:09.343518 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 01:32:09.354072 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 01:32:09.373594 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 01:32:09.377151 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. 
Aug 13 01:32:09.378118 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 01:32:09.380921 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:32:09.383684 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 01:32:09.383862 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 01:32:09.383941 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 01:32:09.384019 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 01:32:09.384091 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:32:09.388247 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 01:32:09.392422 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 01:32:09.403316 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:32:09.403733 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 01:32:09.406376 systemd-udevd[1416]: Using default interface naming scheme 'v255'. Aug 13 01:32:09.409643 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 01:32:09.443871 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... 
Aug 13 01:32:09.451937 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 01:32:09.452709 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 01:32:09.452811 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 01:32:09.452936 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:32:09.454922 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 01:32:09.457280 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 01:32:09.459804 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 01:32:09.460026 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 01:32:09.462053 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 01:32:09.463234 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 01:32:09.463653 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 01:32:09.475756 systemd[1]: Finished ensure-sysext.service. Aug 13 01:32:09.480655 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 01:32:09.485502 augenrules[1454]: No rules Aug 13 01:32:09.487028 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 13 01:32:09.488658 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Aug 13 01:32:09.489093 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 01:32:09.489496 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 01:32:09.490280 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 01:32:09.491569 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 01:32:09.496701 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 01:32:09.496952 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 01:32:09.498575 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 01:32:09.502136 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 01:32:09.508121 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 01:32:09.508897 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 01:32:09.656297 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 13 01:32:09.756771 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 13 01:32:09.758011 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 01:32:09.789605 systemd-networkd[1469]: lo: Link UP Aug 13 01:32:09.789621 systemd-networkd[1469]: lo: Gained carrier Aug 13 01:32:09.792274 systemd-networkd[1469]: Enumeration completed Aug 13 01:32:09.792606 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 01:32:09.794770 systemd-networkd[1469]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 01:32:09.794786 systemd-networkd[1469]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Aug 13 01:32:09.796263 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Aug 13 01:32:09.798251 systemd-networkd[1469]: eth0: Link UP Aug 13 01:32:09.798748 systemd-networkd[1469]: eth0: Gained carrier Aug 13 01:32:09.798773 systemd-networkd[1469]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 01:32:09.799122 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 01:32:09.799431 systemd-resolved[1414]: Positive Trust Anchors: Aug 13 01:32:09.799445 systemd-resolved[1414]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 01:32:09.799476 systemd-resolved[1414]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 01:32:09.806119 systemd-resolved[1414]: Defaulting to hostname 'linux'. Aug 13 01:32:09.808965 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 01:32:09.809696 systemd[1]: Reached target network.target - Network. Aug 13 01:32:09.811791 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 01:32:09.813473 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 01:32:09.814077 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. 
Aug 13 01:32:09.815487 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 01:32:09.817450 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Aug 13 01:32:09.818152 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 13 01:32:09.818797 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 01:32:09.820457 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 01:32:09.821046 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 01:32:09.821067 systemd[1]: Reached target paths.target - Path Units. Aug 13 01:32:09.822451 systemd[1]: Reached target timers.target - Timer Units. Aug 13 01:32:09.824505 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 13 01:32:09.829709 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 01:32:09.833972 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Aug 13 01:32:09.836579 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Aug 13 01:32:09.837141 systemd[1]: Reached target ssh-access.target - SSH Access Available. Aug 13 01:32:09.846552 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 13 01:32:09.847775 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Aug 13 01:32:09.850030 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 01:32:09.857679 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Aug 13 01:32:09.864053 systemd[1]: Reached target sockets.target - Socket Units. 
Aug 13 01:32:09.865455 systemd[1]: Reached target basic.target - Basic System. Aug 13 01:32:09.866228 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 13 01:32:09.866253 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 13 01:32:09.876825 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 01:32:09.881453 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Aug 13 01:32:09.885648 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 13 01:32:09.892429 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 01:32:09.893925 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 13 01:32:09.896679 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 13 01:32:09.904049 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 01:32:09.906266 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 01:32:09.907649 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 01:32:09.911441 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Aug 13 01:32:09.918585 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 01:32:09.930532 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 13 01:32:09.936145 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 13 01:32:09.943858 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... 
Aug 13 01:32:09.947733 jq[1520]: false Aug 13 01:32:09.948446 kernel: ACPI: button: Power Button [PWRF] Aug 13 01:32:09.950799 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 13 01:32:09.952154 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 01:32:09.952705 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 01:32:09.956058 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 01:32:09.960538 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 01:32:09.963808 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 01:32:09.964692 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 01:32:09.964933 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 01:32:09.984389 google_oslogin_nss_cache[1522]: oslogin_cache_refresh[1522]: Refreshing passwd entry cache Aug 13 01:32:09.982598 oslogin_cache_refresh[1522]: Refreshing passwd entry cache Aug 13 01:32:09.996552 google_oslogin_nss_cache[1522]: oslogin_cache_refresh[1522]: Failure getting users, quitting Aug 13 01:32:09.996552 google_oslogin_nss_cache[1522]: oslogin_cache_refresh[1522]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Aug 13 01:32:09.996552 google_oslogin_nss_cache[1522]: oslogin_cache_refresh[1522]: Refreshing group entry cache Aug 13 01:32:09.996552 google_oslogin_nss_cache[1522]: oslogin_cache_refresh[1522]: Failure getting groups, quitting Aug 13 01:32:09.996552 google_oslogin_nss_cache[1522]: oslogin_cache_refresh[1522]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. 
Aug 13 01:32:09.993610 oslogin_cache_refresh[1522]: Failure getting users, quitting Aug 13 01:32:09.993625 oslogin_cache_refresh[1522]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Aug 13 01:32:09.993667 oslogin_cache_refresh[1522]: Refreshing group entry cache Aug 13 01:32:09.994132 oslogin_cache_refresh[1522]: Failure getting groups, quitting Aug 13 01:32:09.994140 oslogin_cache_refresh[1522]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 13 01:32:10.003522 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 01:32:10.004202 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 13 01:32:10.011869 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Aug 13 01:32:10.014476 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Aug 13 01:32:10.022083 extend-filesystems[1521]: Found /dev/sda6 Aug 13 01:32:10.033441 jq[1531]: true Aug 13 01:32:10.035448 extend-filesystems[1521]: Found /dev/sda9 Aug 13 01:32:10.044689 update_engine[1530]: I20250813 01:32:10.040267 1530 main.cc:92] Flatcar Update Engine starting Aug 13 01:32:10.046906 extend-filesystems[1521]: Checking size of /dev/sda9 Aug 13 01:32:10.051023 (ntainerd)[1544]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 01:32:10.053252 tar[1533]: linux-amd64/helm Aug 13 01:32:10.070353 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 01:32:10.070709 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 13 01:32:10.104123 dbus-daemon[1518]: [system] SELinux support is enabled Aug 13 01:32:10.109741 jq[1554]: true Aug 13 01:32:10.104345 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Aug 13 01:32:10.120729 extend-filesystems[1521]: Resized partition /dev/sda9 Aug 13 01:32:10.156910 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 555003 blocks Aug 13 01:32:10.156962 kernel: EXT4-fs (sda9): resized filesystem to 555003 Aug 13 01:32:10.118314 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 01:32:10.157057 update_engine[1530]: I20250813 01:32:10.148599 1530 update_check_scheduler.cc:74] Next update check in 6m54s Aug 13 01:32:10.157087 extend-filesystems[1567]: resize2fs 1.47.2 (1-Jan-2025) Aug 13 01:32:10.157087 extend-filesystems[1567]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Aug 13 01:32:10.157087 extend-filesystems[1567]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 13 01:32:10.157087 extend-filesystems[1567]: The filesystem on /dev/sda9 is now 555003 (4k) blocks long. Aug 13 01:32:10.186649 kernel: EDAC MC: Ver: 3.0.0 Aug 13 01:32:10.118360 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 01:32:10.186791 extend-filesystems[1521]: Resized filesystem in /dev/sda9 Aug 13 01:32:10.140664 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 01:32:10.140687 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 01:32:10.142215 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 01:32:10.142525 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 01:32:10.154191 systemd[1]: Started update-engine.service - Update Engine. Aug 13 01:32:10.156212 systemd-logind[1529]: New seat seat0. 
Aug 13 01:32:10.201669 coreos-metadata[1516]: Aug 13 01:32:10.201 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Aug 13 01:32:10.214642 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 13 01:32:10.215386 systemd[1]: Started systemd-logind.service - User Login Management. Aug 13 01:32:10.217837 bash[1592]: Updated "/home/core/.ssh/authorized_keys" Aug 13 01:32:10.219447 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 01:32:10.226548 systemd[1]: Starting sshkeys.service... Aug 13 01:32:10.253915 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Aug 13 01:32:10.261499 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 13 01:32:10.285742 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 13 01:32:10.291734 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Aug 13 01:32:10.351823 systemd-networkd[1469]: eth0: DHCPv4 address 172.234.27.175/24, gateway 172.234.27.1 acquired from 23.194.118.74 Aug 13 01:32:10.354276 systemd-timesyncd[1459]: Network configuration changed, trying to establish connection. Aug 13 01:32:10.355538 dbus-daemon[1518]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1469 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Aug 13 01:32:10.363863 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Aug 13 01:32:10.393328 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Aug 13 01:32:10.393582 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Aug 13 01:32:10.436852 coreos-metadata[1600]: Aug 13 01:32:10.436 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Aug 13 01:32:10.457132 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 01:32:10.465424 containerd[1544]: time="2025-08-13T01:32:10Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Aug 13 01:32:10.465806 containerd[1544]: time="2025-08-13T01:32:10.465772849Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Aug 13 01:32:10.494180 containerd[1544]: time="2025-08-13T01:32:10.494138544Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="14.21µs" Aug 13 01:32:10.494889 containerd[1544]: time="2025-08-13T01:32:10.494232174Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Aug 13 01:32:10.495293 containerd[1544]: time="2025-08-13T01:32:10.495273804Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Aug 13 01:32:10.496958 containerd[1544]: time="2025-08-13T01:32:10.496938335Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Aug 13 01:32:10.497672 containerd[1544]: time="2025-08-13T01:32:10.497449875Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Aug 13 01:32:10.497672 containerd[1544]: time="2025-08-13T01:32:10.497490735Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 
Aug 13 01:32:10.497672 containerd[1544]: time="2025-08-13T01:32:10.497566235Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 13 01:32:10.497672 containerd[1544]: time="2025-08-13T01:32:10.497578605Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 13 01:32:10.498662 systemd-logind[1529]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 01:32:10.521996 systemd-logind[1529]: Watching system buttons on /dev/input/event2 (Power Button) Aug 13 01:32:10.538251 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 01:32:10.564989 containerd[1544]: time="2025-08-13T01:32:10.563647218Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 13 01:32:10.564989 containerd[1544]: time="2025-08-13T01:32:10.563672378Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 13 01:32:10.564989 containerd[1544]: time="2025-08-13T01:32:10.563695188Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 13 01:32:10.564989 containerd[1544]: time="2025-08-13T01:32:10.563726128Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Aug 13 01:32:10.564989 containerd[1544]: time="2025-08-13T01:32:10.563824828Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Aug 13 01:32:10.564989 containerd[1544]: time="2025-08-13T01:32:10.564049938Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 13 01:32:10.564989 containerd[1544]: time="2025-08-13T01:32:10.564081578Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 13 01:32:10.564989 containerd[1544]: time="2025-08-13T01:32:10.564091728Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Aug 13 01:32:10.564989 containerd[1544]: time="2025-08-13T01:32:10.564170028Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Aug 13 01:32:10.564989 containerd[1544]: time="2025-08-13T01:32:10.564538909Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Aug 13 01:32:10.564989 containerd[1544]: time="2025-08-13T01:32:10.564625609Z" level=info msg="metadata content store policy set" policy=shared Aug 13 01:32:10.568987 coreos-metadata[1600]: Aug 13 01:32:10.568 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Aug 13 01:32:10.575145 containerd[1544]: time="2025-08-13T01:32:10.573790313Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Aug 13 01:32:10.575145 containerd[1544]: time="2025-08-13T01:32:10.573851113Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Aug 13 01:32:10.575145 containerd[1544]: time="2025-08-13T01:32:10.573865773Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Aug 13 01:32:10.575145 containerd[1544]: time="2025-08-13T01:32:10.573878223Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Aug 13 01:32:10.575145 containerd[1544]: time="2025-08-13T01:32:10.573889363Z" level=info msg="loading plugin" 
id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Aug 13 01:32:10.575145 containerd[1544]: time="2025-08-13T01:32:10.573899233Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Aug 13 01:32:10.575145 containerd[1544]: time="2025-08-13T01:32:10.573941263Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Aug 13 01:32:10.575145 containerd[1544]: time="2025-08-13T01:32:10.573954633Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Aug 13 01:32:10.575145 containerd[1544]: time="2025-08-13T01:32:10.573965143Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Aug 13 01:32:10.575145 containerd[1544]: time="2025-08-13T01:32:10.573974593Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Aug 13 01:32:10.575145 containerd[1544]: time="2025-08-13T01:32:10.573984613Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Aug 13 01:32:10.575145 containerd[1544]: time="2025-08-13T01:32:10.574020913Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Aug 13 01:32:10.575145 containerd[1544]: time="2025-08-13T01:32:10.574201764Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Aug 13 01:32:10.575145 containerd[1544]: time="2025-08-13T01:32:10.574221724Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Aug 13 01:32:10.575379 containerd[1544]: time="2025-08-13T01:32:10.574273974Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Aug 13 01:32:10.575379 containerd[1544]: time="2025-08-13T01:32:10.574288084Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Aug 13 01:32:10.575379 containerd[1544]: time="2025-08-13T01:32:10.574299374Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Aug 13 01:32:10.575379 containerd[1544]: time="2025-08-13T01:32:10.574309494Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Aug 13 01:32:10.575379 containerd[1544]: time="2025-08-13T01:32:10.574335444Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Aug 13 01:32:10.575379 containerd[1544]: time="2025-08-13T01:32:10.574346164Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Aug 13 01:32:10.575379 containerd[1544]: time="2025-08-13T01:32:10.574358954Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Aug 13 01:32:10.575379 containerd[1544]: time="2025-08-13T01:32:10.574372554Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Aug 13 01:32:10.575379 containerd[1544]: time="2025-08-13T01:32:10.574382694Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Aug 13 01:32:10.575379 containerd[1544]: time="2025-08-13T01:32:10.574472754Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Aug 13 01:32:10.575379 containerd[1544]: time="2025-08-13T01:32:10.574508644Z" level=info msg="Start snapshots syncer" Aug 13 01:32:10.575379 containerd[1544]: time="2025-08-13T01:32:10.574536914Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Aug 13 01:32:10.575672 containerd[1544]: time="2025-08-13T01:32:10.574849784Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Aug 13 01:32:10.575672 containerd[1544]: time="2025-08-13T01:32:10.574907564Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Aug 13 01:32:10.580326 containerd[1544]: time="2025-08-13T01:32:10.579718916Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Aug 13 01:32:10.580326 containerd[1544]: time="2025-08-13T01:32:10.579827086Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Aug 13 01:32:10.580326 containerd[1544]: time="2025-08-13T01:32:10.579848356Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Aug 13 01:32:10.580326 containerd[1544]: time="2025-08-13T01:32:10.579859446Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Aug 13 01:32:10.580326 containerd[1544]: time="2025-08-13T01:32:10.579870926Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Aug 13 01:32:10.580326 containerd[1544]: time="2025-08-13T01:32:10.579884486Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Aug 13 01:32:10.580326 containerd[1544]: time="2025-08-13T01:32:10.579896986Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Aug 13 01:32:10.580326 containerd[1544]: time="2025-08-13T01:32:10.579916106Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Aug 13 01:32:10.580326 containerd[1544]: time="2025-08-13T01:32:10.579941516Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Aug 13 01:32:10.580326 containerd[1544]: time="2025-08-13T01:32:10.580152806Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Aug 13 01:32:10.580326 containerd[1544]: time="2025-08-13T01:32:10.580173976Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Aug 13 01:32:10.584179 containerd[1544]: time="2025-08-13T01:32:10.581909967Z" level=info msg="loading plugin" 
id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 13 01:32:10.584179 containerd[1544]: time="2025-08-13T01:32:10.581984867Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 13 01:32:10.584179 containerd[1544]: time="2025-08-13T01:32:10.581997497Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 13 01:32:10.584179 containerd[1544]: time="2025-08-13T01:32:10.582013907Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 13 01:32:10.584179 containerd[1544]: time="2025-08-13T01:32:10.582026527Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Aug 13 01:32:10.584179 containerd[1544]: time="2025-08-13T01:32:10.582041487Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Aug 13 01:32:10.584179 containerd[1544]: time="2025-08-13T01:32:10.582056607Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Aug 13 01:32:10.584179 containerd[1544]: time="2025-08-13T01:32:10.582082147Z" level=info msg="runtime interface created" Aug 13 01:32:10.584179 containerd[1544]: time="2025-08-13T01:32:10.582088067Z" level=info msg="created NRI interface" Aug 13 01:32:10.584179 containerd[1544]: time="2025-08-13T01:32:10.582101447Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Aug 13 01:32:10.584179 containerd[1544]: time="2025-08-13T01:32:10.582124167Z" level=info msg="Connect containerd service" Aug 13 01:32:10.584179 containerd[1544]: time="2025-08-13T01:32:10.582193277Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 01:32:10.584416 
containerd[1544]: time="2025-08-13T01:32:10.584332359Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 01:32:10.724728 coreos-metadata[1600]: Aug 13 01:32:10.724 INFO Fetch successful Aug 13 01:32:10.787164 update-ssh-keys[1636]: Updated "/home/core/.ssh/authorized_keys" Aug 13 01:32:10.787671 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 13 01:32:10.796366 systemd[1]: Finished sshkeys.service. Aug 13 01:32:10.870784 locksmithd[1577]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 01:32:10.876797 containerd[1544]: time="2025-08-13T01:32:10.876714675Z" level=info msg="Start subscribing containerd event" Aug 13 01:32:10.876797 containerd[1544]: time="2025-08-13T01:32:10.876780355Z" level=info msg="Start recovering state" Aug 13 01:32:10.876933 containerd[1544]: time="2025-08-13T01:32:10.876903115Z" level=info msg="Start event monitor" Aug 13 01:32:10.876933 containerd[1544]: time="2025-08-13T01:32:10.876931535Z" level=info msg="Start cni network conf syncer for default" Aug 13 01:32:10.876977 containerd[1544]: time="2025-08-13T01:32:10.876942315Z" level=info msg="Start streaming server" Aug 13 01:32:10.876977 containerd[1544]: time="2025-08-13T01:32:10.876952565Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Aug 13 01:32:10.876977 containerd[1544]: time="2025-08-13T01:32:10.876960455Z" level=info msg="runtime interface starting up..." Aug 13 01:32:10.876977 containerd[1544]: time="2025-08-13T01:32:10.876966365Z" level=info msg="starting plugins..." 
Aug 13 01:32:10.877045 containerd[1544]: time="2025-08-13T01:32:10.876981415Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Aug 13 01:32:10.877923 containerd[1544]: time="2025-08-13T01:32:10.877570935Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 01:32:10.877923 containerd[1544]: time="2025-08-13T01:32:10.877713915Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 01:32:10.883504 containerd[1544]: time="2025-08-13T01:32:10.883470528Z" level=info msg="containerd successfully booted in 0.418783s" Aug 13 01:32:10.883650 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 01:32:10.928751 systemd-networkd[1469]: eth0: Gained IPv6LL Aug 13 01:32:10.929344 systemd-timesyncd[1459]: Network configuration changed, trying to establish connection. Aug 13 01:32:10.977002 dbus-daemon[1518]: [system] Successfully activated service 'org.freedesktop.hostname1' Aug 13 01:32:10.977468 dbus-daemon[1518]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1611 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Aug 13 01:32:11.005463 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Aug 13 01:32:11.053073 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 01:32:11.061604 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 01:32:11.094322 sshd_keygen[1555]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 01:32:11.103659 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:32:11.130982 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 01:32:11.139094 systemd[1]: Starting polkit.service - Authorization Manager... Aug 13 01:32:11.159762 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. 
Aug 13 01:32:11.190331 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 01:32:11.203663 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 01:32:11.206727 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 01:32:11.227209 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 01:32:11.227506 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 01:32:11.232757 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 01:32:11.235145 coreos-metadata[1516]: Aug 13 01:32:11.235 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Aug 13 01:32:11.261806 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 01:32:11.264580 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 01:32:11.267033 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 13 01:32:11.268259 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 01:32:11.313113 tar[1533]: linux-amd64/LICENSE Aug 13 01:32:11.313113 tar[1533]: linux-amd64/README.md Aug 13 01:32:11.332429 polkitd[1656]: Started polkitd version 126 Aug 13 01:32:11.338434 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Aug 13 01:32:11.343264 polkitd[1656]: Loading rules from directory /etc/polkit-1/rules.d Aug 13 01:32:11.344120 polkitd[1656]: Loading rules from directory /run/polkit-1/rules.d Aug 13 01:32:11.344209 polkitd[1656]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Aug 13 01:32:11.344489 polkitd[1656]: Loading rules from directory /usr/local/share/polkit-1/rules.d Aug 13 01:32:11.344550 polkitd[1656]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Aug 13 01:32:11.344726 polkitd[1656]: Loading rules from directory /usr/share/polkit-1/rules.d Aug 13 01:32:11.345531 polkitd[1656]: Finished loading, compiling and executing 2 rules Aug 13 01:32:11.346158 systemd[1]: Started polkit.service - Authorization Manager. Aug 13 01:32:11.347213 coreos-metadata[1516]: Aug 13 01:32:11.347 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Aug 13 01:32:11.348357 dbus-daemon[1518]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Aug 13 01:32:11.348936 polkitd[1656]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Aug 13 01:32:11.359461 systemd-resolved[1414]: System hostname changed to '172-234-27-175'. Aug 13 01:32:11.359596 systemd-hostnamed[1611]: Hostname set to <172-234-27-175> (transient) Aug 13 01:32:11.558876 coreos-metadata[1516]: Aug 13 01:32:11.558 INFO Fetch successful Aug 13 01:32:11.559131 coreos-metadata[1516]: Aug 13 01:32:11.559 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Aug 13 01:32:11.973717 coreos-metadata[1516]: Aug 13 01:32:11.973 INFO Fetch successful Aug 13 01:32:12.083707 systemd-timesyncd[1459]: Network configuration changed, trying to establish connection. Aug 13 01:32:12.084630 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. 
Aug 13 01:32:12.086971 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 01:32:12.109528 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:32:12.110554 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 01:32:12.114225 systemd[1]: Startup finished in 2.805s (kernel) + 7.567s (initrd) + 4.969s (userspace) = 15.342s. Aug 13 01:32:12.155750 (kubelet)[1713]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 01:32:12.651917 kubelet[1713]: E0813 01:32:12.651850 1713 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:32:12.655324 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:32:12.655753 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 01:32:12.656121 systemd[1]: kubelet.service: Consumed 854ms CPU time, 266.8M memory peak. Aug 13 01:32:13.745242 systemd-timesyncd[1459]: Network configuration changed, trying to establish connection. Aug 13 01:32:14.512973 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 01:32:14.514180 systemd[1]: Started sshd@0-172.234.27.175:22-147.75.109.163:35634.service - OpenSSH per-connection server daemon (147.75.109.163:35634). 
Aug 13 01:32:14.868226 sshd[1725]: Accepted publickey for core from 147.75.109.163 port 35634 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:32:14.870220 sshd-session[1725]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:32:14.878758 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 01:32:14.880508 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 01:32:14.889212 systemd-logind[1529]: New session 1 of user core. Aug 13 01:32:14.903622 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 01:32:14.906862 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 01:32:14.918028 (systemd)[1729]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:32:14.920878 systemd-logind[1529]: New session c1 of user core. Aug 13 01:32:15.060773 systemd[1729]: Queued start job for default target default.target. Aug 13 01:32:15.067592 systemd[1729]: Created slice app.slice - User Application Slice. Aug 13 01:32:15.067628 systemd[1729]: Reached target paths.target - Paths. Aug 13 01:32:15.067672 systemd[1729]: Reached target timers.target - Timers. Aug 13 01:32:15.069704 systemd[1729]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 01:32:15.082526 systemd[1729]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 01:32:15.082669 systemd[1729]: Reached target sockets.target - Sockets. Aug 13 01:32:15.082814 systemd[1729]: Reached target basic.target - Basic System. Aug 13 01:32:15.082912 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 01:32:15.083798 systemd[1729]: Reached target default.target - Main User Target. Aug 13 01:32:15.083851 systemd[1729]: Startup finished in 154ms. Aug 13 01:32:15.090546 systemd[1]: Started session-1.scope - Session 1 of User core. 
Aug 13 01:32:15.353783 systemd[1]: Started sshd@1-172.234.27.175:22-147.75.109.163:35650.service - OpenSSH per-connection server daemon (147.75.109.163:35650). Aug 13 01:32:15.704328 sshd[1740]: Accepted publickey for core from 147.75.109.163 port 35650 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:32:15.706106 sshd-session[1740]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:32:15.712467 systemd-logind[1529]: New session 2 of user core. Aug 13 01:32:15.717542 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 01:32:15.955797 sshd[1742]: Connection closed by 147.75.109.163 port 35650 Aug 13 01:32:15.956024 sshd-session[1740]: pam_unix(sshd:session): session closed for user core Aug 13 01:32:15.961186 systemd-logind[1529]: Session 2 logged out. Waiting for processes to exit. Aug 13 01:32:15.961879 systemd[1]: sshd@1-172.234.27.175:22-147.75.109.163:35650.service: Deactivated successfully. Aug 13 01:32:15.969855 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 01:32:15.972671 systemd-logind[1529]: Removed session 2. Aug 13 01:32:16.014484 systemd[1]: Started sshd@2-172.234.27.175:22-147.75.109.163:35654.service - OpenSSH per-connection server daemon (147.75.109.163:35654). Aug 13 01:32:16.356835 sshd[1748]: Accepted publickey for core from 147.75.109.163 port 35654 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:32:16.359433 sshd-session[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:32:16.366171 systemd-logind[1529]: New session 3 of user core. Aug 13 01:32:16.372552 systemd[1]: Started session-3.scope - Session 3 of User core. 
Aug 13 01:32:16.596930 sshd[1750]: Connection closed by 147.75.109.163 port 35654 Aug 13 01:32:16.597875 sshd-session[1748]: pam_unix(sshd:session): session closed for user core Aug 13 01:32:16.602636 systemd[1]: sshd@2-172.234.27.175:22-147.75.109.163:35654.service: Deactivated successfully. Aug 13 01:32:16.604936 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 01:32:16.606481 systemd-logind[1529]: Session 3 logged out. Waiting for processes to exit. Aug 13 01:32:16.608006 systemd-logind[1529]: Removed session 3. Aug 13 01:32:16.662120 systemd[1]: Started sshd@3-172.234.27.175:22-147.75.109.163:35658.service - OpenSSH per-connection server daemon (147.75.109.163:35658). Aug 13 01:32:17.006625 sshd[1756]: Accepted publickey for core from 147.75.109.163 port 35658 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:32:17.008122 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:32:17.012205 systemd-logind[1529]: New session 4 of user core. Aug 13 01:32:17.018523 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 01:32:17.257054 sshd[1758]: Connection closed by 147.75.109.163 port 35658 Aug 13 01:32:17.257769 sshd-session[1756]: pam_unix(sshd:session): session closed for user core Aug 13 01:32:17.261694 systemd-logind[1529]: Session 4 logged out. Waiting for processes to exit. Aug 13 01:32:17.262369 systemd[1]: sshd@3-172.234.27.175:22-147.75.109.163:35658.service: Deactivated successfully. Aug 13 01:32:17.264215 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 01:32:17.265997 systemd-logind[1529]: Removed session 4. Aug 13 01:32:17.319891 systemd[1]: Started sshd@4-172.234.27.175:22-147.75.109.163:56850.service - OpenSSH per-connection server daemon (147.75.109.163:56850). 
Aug 13 01:32:17.652759 sshd[1764]: Accepted publickey for core from 147.75.109.163 port 56850 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:32:17.655056 sshd-session[1764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:32:17.660132 systemd-logind[1529]: New session 5 of user core. Aug 13 01:32:17.670524 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 01:32:17.853615 sudo[1767]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 01:32:17.853978 sudo[1767]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 01:32:17.872097 sudo[1767]: pam_unix(sudo:session): session closed for user root Aug 13 01:32:17.921994 sshd[1766]: Connection closed by 147.75.109.163 port 56850 Aug 13 01:32:17.923216 sshd-session[1764]: pam_unix(sshd:session): session closed for user core Aug 13 01:32:17.927217 systemd[1]: sshd@4-172.234.27.175:22-147.75.109.163:56850.service: Deactivated successfully. Aug 13 01:32:17.929576 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 01:32:17.931453 systemd-logind[1529]: Session 5 logged out. Waiting for processes to exit. Aug 13 01:32:17.937420 systemd-logind[1529]: Removed session 5. Aug 13 01:32:17.984226 systemd[1]: Started sshd@5-172.234.27.175:22-147.75.109.163:56860.service - OpenSSH per-connection server daemon (147.75.109.163:56860). Aug 13 01:32:18.321458 sshd[1773]: Accepted publickey for core from 147.75.109.163 port 56860 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:32:18.323418 sshd-session[1773]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:32:18.329315 systemd-logind[1529]: New session 6 of user core. Aug 13 01:32:18.336570 systemd[1]: Started session-6.scope - Session 6 of User core. 
Aug 13 01:32:18.519175 sudo[1777]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 01:32:18.519562 sudo[1777]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 01:32:18.525352 sudo[1777]: pam_unix(sudo:session): session closed for user root Aug 13 01:32:18.539456 sudo[1776]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Aug 13 01:32:18.540073 sudo[1776]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 01:32:18.552233 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 01:32:18.596324 augenrules[1799]: No rules Aug 13 01:32:18.598357 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 01:32:18.598764 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 01:32:18.600092 sudo[1776]: pam_unix(sudo:session): session closed for user root Aug 13 01:32:18.650538 sshd[1775]: Connection closed by 147.75.109.163 port 56860 Aug 13 01:32:18.651116 sshd-session[1773]: pam_unix(sshd:session): session closed for user core Aug 13 01:32:18.656502 systemd-logind[1529]: Session 6 logged out. Waiting for processes to exit. Aug 13 01:32:18.656901 systemd[1]: sshd@5-172.234.27.175:22-147.75.109.163:56860.service: Deactivated successfully. Aug 13 01:32:18.659683 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 01:32:18.661731 systemd-logind[1529]: Removed session 6. Aug 13 01:32:18.716717 systemd[1]: Started sshd@6-172.234.27.175:22-147.75.109.163:56876.service - OpenSSH per-connection server daemon (147.75.109.163:56876). 
Aug 13 01:32:19.058037 sshd[1808]: Accepted publickey for core from 147.75.109.163 port 56876 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:32:19.059708 sshd-session[1808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:32:19.065622 systemd-logind[1529]: New session 7 of user core. Aug 13 01:32:19.076555 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 01:32:19.256364 sudo[1811]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 01:32:19.256885 sudo[1811]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 01:32:19.536871 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 13 01:32:19.551714 (dockerd)[1829]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 01:32:19.731164 dockerd[1829]: time="2025-08-13T01:32:19.731100809Z" level=info msg="Starting up" Aug 13 01:32:19.732388 dockerd[1829]: time="2025-08-13T01:32:19.732364709Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Aug 13 01:32:19.763944 systemd[1]: var-lib-docker-metacopy\x2dcheck1327679170-merged.mount: Deactivated successfully. Aug 13 01:32:19.784211 dockerd[1829]: time="2025-08-13T01:32:19.784186955Z" level=info msg="Loading containers: start." Aug 13 01:32:19.794434 kernel: Initializing XFRM netlink socket Aug 13 01:32:19.977523 systemd-timesyncd[1459]: Network configuration changed, trying to establish connection. Aug 13 01:32:20.019079 systemd-networkd[1469]: docker0: Link UP Aug 13 01:32:20.021711 dockerd[1829]: time="2025-08-13T01:32:20.021675384Z" level=info msg="Loading containers: done." 
Aug 13 01:32:20.035010 dockerd[1829]: time="2025-08-13T01:32:20.033808330Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 01:32:20.035010 dockerd[1829]: time="2025-08-13T01:32:20.033865760Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Aug 13 01:32:20.035010 dockerd[1829]: time="2025-08-13T01:32:20.033956890Z" level=info msg="Initializing buildkit" Aug 13 01:32:20.033902 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3541930913-merged.mount: Deactivated successfully. Aug 13 01:32:20.053200 dockerd[1829]: time="2025-08-13T01:32:20.053177250Z" level=info msg="Completed buildkit initialization" Aug 13 01:32:20.058756 dockerd[1829]: time="2025-08-13T01:32:20.058736642Z" level=info msg="Daemon has completed initialization" Aug 13 01:32:20.059117 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 01:32:20.059451 dockerd[1829]: time="2025-08-13T01:32:20.058841243Z" level=info msg="API listen on /run/docker.sock" Aug 13 01:32:21.285340 systemd-resolved[1414]: Clock change detected. Flushing caches. Aug 13 01:32:21.285757 systemd-timesyncd[1459]: Contacted time server [2600:3c03::f03c:91ff:fedf:1e98]:123 (2.flatcar.pool.ntp.org). Aug 13 01:32:21.285807 systemd-timesyncd[1459]: Initial clock synchronization to Wed 2025-08-13 01:32:21.285118 UTC. Aug 13 01:32:21.792689 containerd[1544]: time="2025-08-13T01:32:21.792393870Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\"" Aug 13 01:32:22.589010 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1742994034.mount: Deactivated successfully. Aug 13 01:32:23.968356 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Aug 13 01:32:23.970662 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 01:32:24.073469 containerd[1544]: time="2025-08-13T01:32:24.073424399Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:32:24.075374 containerd[1544]: time="2025-08-13T01:32:24.075351510Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.11: active requests=0, bytes read=28077759"
Aug 13 01:32:24.075899 containerd[1544]: time="2025-08-13T01:32:24.075877931Z" level=info msg="ImageCreate event name:\"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:32:24.079158 containerd[1544]: time="2025-08-13T01:32:24.079118452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:32:24.079773 containerd[1544]: time="2025-08-13T01:32:24.079646083Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.11\" with image id \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\", size \"28074559\" in 2.287204503s"
Aug 13 01:32:24.080302 containerd[1544]: time="2025-08-13T01:32:24.080174733Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\""
Aug 13 01:32:24.080867 containerd[1544]: time="2025-08-13T01:32:24.080847543Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\""
Aug 13 01:32:24.153701 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 01:32:24.157288 (kubelet)[2092]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 01:32:24.196574 kubelet[2092]: E0813 01:32:24.196526 2092 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 01:32:24.201476 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 01:32:24.201663 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 01:32:24.202113 systemd[1]: kubelet.service: Consumed 193ms CPU time, 108.3M memory peak.
Aug 13 01:32:25.782760 containerd[1544]: time="2025-08-13T01:32:25.782677314Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:32:25.783895 containerd[1544]: time="2025-08-13T01:32:25.783673554Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.11: active requests=0, bytes read=24713245"
Aug 13 01:32:25.784701 containerd[1544]: time="2025-08-13T01:32:25.784675775Z" level=info msg="ImageCreate event name:\"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:32:25.787004 containerd[1544]: time="2025-08-13T01:32:25.786978996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:32:25.788153 containerd[1544]: time="2025-08-13T01:32:25.788113176Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.11\" with image id \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\", size \"26315079\" in 1.707171663s"
Aug 13 01:32:25.788190 containerd[1544]: time="2025-08-13T01:32:25.788164476Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\""
Aug 13 01:32:25.788918 containerd[1544]: time="2025-08-13T01:32:25.788877317Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\""
Aug 13 01:32:27.171337 containerd[1544]: time="2025-08-13T01:32:27.170700347Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.11: active requests=0, bytes read=18783700"
Aug 13 01:32:27.171337 containerd[1544]: time="2025-08-13T01:32:27.171263807Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:32:27.177614 containerd[1544]: time="2025-08-13T01:32:27.173008818Z" level=info msg="ImageCreate event name:\"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:32:27.177614 containerd[1544]: time="2025-08-13T01:32:27.174248439Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:32:27.177614 containerd[1544]: time="2025-08-13T01:32:27.174896749Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.11\" with image id \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\", size \"20385552\" in 1.385992782s"
Aug 13 01:32:27.177614 containerd[1544]: time="2025-08-13T01:32:27.174938809Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\""
Aug 13 01:32:27.178414 containerd[1544]: time="2025-08-13T01:32:27.178232121Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\""
Aug 13 01:32:28.419178 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3909266930.mount: Deactivated successfully.
Aug 13 01:32:28.727421 containerd[1544]: time="2025-08-13T01:32:28.727361735Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:32:28.728325 containerd[1544]: time="2025-08-13T01:32:28.728064395Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.11: active requests=0, bytes read=30383612"
Aug 13 01:32:28.728831 containerd[1544]: time="2025-08-13T01:32:28.728796306Z" level=info msg="ImageCreate event name:\"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:32:28.730286 containerd[1544]: time="2025-08-13T01:32:28.730251276Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:32:28.730872 containerd[1544]: time="2025-08-13T01:32:28.730838967Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.11\" with image id \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\", repo tag \"registry.k8s.io/kube-proxy:v1.31.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\", size \"30382631\" in 1.552575056s"
Aug 13 01:32:28.730946 containerd[1544]: time="2025-08-13T01:32:28.730931257Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\""
Aug 13 01:32:28.731719 containerd[1544]: time="2025-08-13T01:32:28.731686537Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\""
Aug 13 01:32:29.467651 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3585873734.mount: Deactivated successfully.
Aug 13 01:32:30.142531 containerd[1544]: time="2025-08-13T01:32:30.141162721Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:32:30.142531 containerd[1544]: time="2025-08-13T01:32:30.142395192Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241"
Aug 13 01:32:30.142531 containerd[1544]: time="2025-08-13T01:32:30.142432972Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:32:30.145188 containerd[1544]: time="2025-08-13T01:32:30.145147793Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:32:30.146542 containerd[1544]: time="2025-08-13T01:32:30.146121814Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.414400767s"
Aug 13 01:32:30.146542 containerd[1544]: time="2025-08-13T01:32:30.146182474Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\""
Aug 13 01:32:30.147348 containerd[1544]: time="2025-08-13T01:32:30.147321684Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\""
Aug 13 01:32:30.854922 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount178227126.mount: Deactivated successfully.
Aug 13 01:32:30.860346 containerd[1544]: time="2025-08-13T01:32:30.860306421Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 01:32:30.861027 containerd[1544]: time="2025-08-13T01:32:30.860927991Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138"
Aug 13 01:32:30.861587 containerd[1544]: time="2025-08-13T01:32:30.861553621Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 01:32:30.864167 containerd[1544]: time="2025-08-13T01:32:30.863166782Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}"
Aug 13 01:32:30.864167 containerd[1544]: time="2025-08-13T01:32:30.864000462Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 716.593178ms"
Aug 13 01:32:30.864167 containerd[1544]: time="2025-08-13T01:32:30.864034562Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\""
Aug 13 01:32:30.865210 containerd[1544]: time="2025-08-13T01:32:30.865177333Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\""
Aug 13 01:32:31.629381 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3666972530.mount: Deactivated successfully.
Aug 13 01:32:33.229270 containerd[1544]: time="2025-08-13T01:32:33.229200094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:32:33.231034 containerd[1544]: time="2025-08-13T01:32:33.230397185Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013"
Aug 13 01:32:33.231566 containerd[1544]: time="2025-08-13T01:32:33.231499605Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:32:33.234034 containerd[1544]: time="2025-08-13T01:32:33.233973507Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:32:33.235150 containerd[1544]: time="2025-08-13T01:32:33.235044917Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.369839804s"
Aug 13 01:32:33.235150 containerd[1544]: time="2025-08-13T01:32:33.235076347Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\""
Aug 13 01:32:34.218700 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Aug 13 01:32:34.221284 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 01:32:34.419263 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 01:32:34.424423 (kubelet)[2254]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Aug 13 01:32:34.465792 kubelet[2254]: E0813 01:32:34.465744 2254 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Aug 13 01:32:34.470373 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Aug 13 01:32:34.470770 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Aug 13 01:32:34.471589 systemd[1]: kubelet.service: Consumed 198ms CPU time, 108.4M memory peak.
Aug 13 01:32:35.000741 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 01:32:35.001384 systemd[1]: kubelet.service: Consumed 198ms CPU time, 108.4M memory peak.
Aug 13 01:32:35.003980 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 01:32:35.031316 systemd[1]: Reload requested from client PID 2268 ('systemctl') (unit session-7.scope)...
Aug 13 01:32:35.031425 systemd[1]: Reloading...
Aug 13 01:32:35.164159 zram_generator::config[2309]: No configuration found.
Aug 13 01:32:35.266047 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 01:32:35.371473 systemd[1]: Reloading finished in 339 ms.
Aug 13 01:32:35.436478 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 01:32:35.440267 systemd[1]: kubelet.service: Deactivated successfully.
Aug 13 01:32:35.440553 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 01:32:35.440590 systemd[1]: kubelet.service: Consumed 141ms CPU time, 98.3M memory peak.
Aug 13 01:32:35.442049 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 01:32:35.609107 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 01:32:35.619617 (kubelet)[2368]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 13 01:32:35.659161 kubelet[2368]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 01:32:35.659161 kubelet[2368]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug 13 01:32:35.659161 kubelet[2368]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 01:32:35.659161 kubelet[2368]: I0813 01:32:35.658703 2368 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 13 01:32:35.957461 kubelet[2368]: I0813 01:32:35.956224 2368 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Aug 13 01:32:35.957888 kubelet[2368]: I0813 01:32:35.957847 2368 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 13 01:32:35.958219 kubelet[2368]: I0813 01:32:35.958200 2368 server.go:934] "Client rotation is on, will bootstrap in background"
Aug 13 01:32:35.984336 kubelet[2368]: E0813 01:32:35.984300 2368 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.234.27.175:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.234.27.175:6443: connect: connection refused" logger="UnhandledError"
Aug 13 01:32:35.986275 kubelet[2368]: I0813 01:32:35.986164 2368 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 13 01:32:35.998651 kubelet[2368]: I0813 01:32:35.998618 2368 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Aug 13 01:32:36.003322 kubelet[2368]: I0813 01:32:36.003294 2368 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 13 01:32:36.003419 kubelet[2368]: I0813 01:32:36.003402 2368 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Aug 13 01:32:36.003556 kubelet[2368]: I0813 01:32:36.003527 2368 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 13 01:32:36.003722 kubelet[2368]: I0813 01:32:36.003554 2368 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-234-27-175","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 13 01:32:36.003846 kubelet[2368]: I0813 01:32:36.003734 2368 topology_manager.go:138] "Creating topology manager with none policy"
Aug 13 01:32:36.003846 kubelet[2368]: I0813 01:32:36.003744 2368 container_manager_linux.go:300] "Creating device plugin manager"
Aug 13 01:32:36.003890 kubelet[2368]: I0813 01:32:36.003847 2368 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 01:32:36.006328 kubelet[2368]: I0813 01:32:36.006308 2368 kubelet.go:408] "Attempting to sync node with API server"
Aug 13 01:32:36.006328 kubelet[2368]: I0813 01:32:36.006329 2368 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 13 01:32:36.006393 kubelet[2368]: I0813 01:32:36.006362 2368 kubelet.go:314] "Adding apiserver pod source"
Aug 13 01:32:36.006393 kubelet[2368]: I0813 01:32:36.006379 2368 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 13 01:32:36.015003 kubelet[2368]: W0813 01:32:36.014720 2368 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.234.27.175:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.234.27.175:6443: connect: connection refused
Aug 13 01:32:36.015003 kubelet[2368]: I0813 01:32:36.014743 2368 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Aug 13 01:32:36.015003 kubelet[2368]: E0813 01:32:36.014795 2368 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.234.27.175:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.234.27.175:6443: connect: connection refused" logger="UnhandledError"
Aug 13 01:32:36.015003 kubelet[2368]: W0813 01:32:36.014874 2368 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.234.27.175:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-27-175&limit=500&resourceVersion=0": dial tcp 172.234.27.175:6443: connect: connection refused
Aug 13 01:32:36.015003 kubelet[2368]: E0813 01:32:36.014901 2368 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.234.27.175:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-234-27-175&limit=500&resourceVersion=0\": dial tcp 172.234.27.175:6443: connect: connection refused" logger="UnhandledError"
Aug 13 01:32:36.015388 kubelet[2368]: I0813 01:32:36.015177 2368 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 13 01:32:36.015388 kubelet[2368]: W0813 01:32:36.015243 2368 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Aug 13 01:32:36.017380 kubelet[2368]: I0813 01:32:36.017115 2368 server.go:1274] "Started kubelet"
Aug 13 01:32:36.019243 kubelet[2368]: I0813 01:32:36.018977 2368 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 13 01:32:36.022151 kubelet[2368]: E0813 01:32:36.020781 2368 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.234.27.175:6443/api/v1/namespaces/default/events\": dial tcp 172.234.27.175:6443: connect: connection refused" event="&Event{ObjectMeta:{172-234-27-175.185b2f8b81e3b685 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-234-27-175,UID:172-234-27-175,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-234-27-175,},FirstTimestamp:2025-08-13 01:32:36.017084037 +0000 UTC m=+0.391987597,LastTimestamp:2025-08-13 01:32:36.017084037 +0000 UTC m=+0.391987597,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-234-27-175,}"
Aug 13 01:32:36.024890 kubelet[2368]: I0813 01:32:36.024857 2368 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Aug 13 01:32:36.025456 kubelet[2368]: I0813 01:32:36.025399 2368 volume_manager.go:289] "Starting Kubelet Volume Manager"
Aug 13 01:32:36.025688 kubelet[2368]: E0813 01:32:36.025658 2368 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-234-27-175\" not found"
Aug 13 01:32:36.025917 kubelet[2368]: I0813 01:32:36.025904 2368 server.go:449] "Adding debug handlers to kubelet server"
Aug 13 01:32:36.027800 kubelet[2368]: I0813 01:32:36.027776 2368 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 13 01:32:36.028106 kubelet[2368]: I0813 01:32:36.028092 2368 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 13 01:32:36.028355 kubelet[2368]: I0813 01:32:36.028341 2368 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 13 01:32:36.028518 kubelet[2368]: I0813 01:32:36.028491 2368 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Aug 13 01:32:36.028561 kubelet[2368]: I0813 01:32:36.028543 2368 reconciler.go:26] "Reconciler: start to sync state"
Aug 13 01:32:36.030029 kubelet[2368]: E0813 01:32:36.029986 2368 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.27.175:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-27-175?timeout=10s\": dial tcp 172.234.27.175:6443: connect: connection refused" interval="200ms"
Aug 13 01:32:36.030239 kubelet[2368]: I0813 01:32:36.030225 2368 factory.go:221] Registration of the systemd container factory successfully
Aug 13 01:32:36.030369 kubelet[2368]: I0813 01:32:36.030353 2368 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 13 01:32:36.031984 kubelet[2368]: E0813 01:32:36.031969 2368 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 13 01:32:36.032767 kubelet[2368]: I0813 01:32:36.032233 2368 factory.go:221] Registration of the containerd container factory successfully
Aug 13 01:32:36.035573 kubelet[2368]: W0813 01:32:36.035541 2368 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.234.27.175:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.234.27.175:6443: connect: connection refused
Aug 13 01:32:36.043534 kubelet[2368]: E0813 01:32:36.043492 2368 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.234.27.175:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.234.27.175:6443: connect: connection refused" logger="UnhandledError"
Aug 13 01:32:36.046167 kubelet[2368]: I0813 01:32:36.046100 2368 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 13 01:32:36.048844 kubelet[2368]: I0813 01:32:36.048810 2368 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 13 01:32:36.048844 kubelet[2368]: I0813 01:32:36.048833 2368 status_manager.go:217] "Starting to sync pod status with apiserver"
Aug 13 01:32:36.048844 kubelet[2368]: I0813 01:32:36.048849 2368 kubelet.go:2321] "Starting kubelet main sync loop"
Aug 13 01:32:36.048921 kubelet[2368]: E0813 01:32:36.048892 2368 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 13 01:32:36.065617 kubelet[2368]: W0813 01:32:36.065546 2368 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.234.27.175:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.234.27.175:6443: connect: connection refused
Aug 13 01:32:36.065617 kubelet[2368]: E0813 01:32:36.065585 2368 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.234.27.175:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.234.27.175:6443: connect: connection refused" logger="UnhandledError"
Aug 13 01:32:36.066785 kubelet[2368]: I0813 01:32:36.066767 2368 cpu_manager.go:214] "Starting CPU manager" policy="none"
Aug 13 01:32:36.066785 kubelet[2368]: I0813 01:32:36.066781 2368 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Aug 13 01:32:36.066863 kubelet[2368]: I0813 01:32:36.066796 2368 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 01:32:36.068981 kubelet[2368]: I0813 01:32:36.068931 2368 policy_none.go:49] "None policy: Start"
Aug 13 01:32:36.069836 kubelet[2368]: I0813 01:32:36.069815 2368 memory_manager.go:170] "Starting memorymanager" policy="None"
Aug 13 01:32:36.069836 kubelet[2368]: I0813 01:32:36.069835 2368 state_mem.go:35] "Initializing new in-memory state store"
Aug 13 01:32:36.076836 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Aug 13 01:32:36.088403 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Aug 13 01:32:36.091595 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Aug 13 01:32:36.103952 kubelet[2368]: I0813 01:32:36.103929 2368 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 13 01:32:36.105149 kubelet[2368]: I0813 01:32:36.105137 2368 eviction_manager.go:189] "Eviction manager: starting control loop"
Aug 13 01:32:36.105726 kubelet[2368]: I0813 01:32:36.105680 2368 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 13 01:32:36.107174 kubelet[2368]: I0813 01:32:36.107155 2368 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 13 01:32:36.108846 kubelet[2368]: E0813 01:32:36.108826 2368 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-234-27-175\" not found"
Aug 13 01:32:36.158907 systemd[1]: Created slice kubepods-burstable-pod44e1a0a7f53ff4aaaa0113bb8a2eccde.slice - libcontainer container kubepods-burstable-pod44e1a0a7f53ff4aaaa0113bb8a2eccde.slice.
Aug 13 01:32:36.183558 systemd[1]: Created slice kubepods-burstable-podbe721076e46768e56a8be3669904730e.slice - libcontainer container kubepods-burstable-podbe721076e46768e56a8be3669904730e.slice.
Aug 13 01:32:36.197195 systemd[1]: Created slice kubepods-burstable-pod756eb7c7de22f033a09e7368738f8d54.slice - libcontainer container kubepods-burstable-pod756eb7c7de22f033a09e7368738f8d54.slice.
Aug 13 01:32:36.209297 kubelet[2368]: I0813 01:32:36.209213 2368 kubelet_node_status.go:72] "Attempting to register node" node="172-234-27-175"
Aug 13 01:32:36.209808 kubelet[2368]: E0813 01:32:36.209770 2368 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.234.27.175:6443/api/v1/nodes\": dial tcp 172.234.27.175:6443: connect: connection refused" node="172-234-27-175"
Aug 13 01:32:36.230325 kubelet[2368]: I0813 01:32:36.230288 2368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/44e1a0a7f53ff4aaaa0113bb8a2eccde-ca-certs\") pod \"kube-apiserver-172-234-27-175\" (UID: \"44e1a0a7f53ff4aaaa0113bb8a2eccde\") " pod="kube-system/kube-apiserver-172-234-27-175"
Aug 13 01:32:36.230325 kubelet[2368]: I0813 01:32:36.230315 2368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/44e1a0a7f53ff4aaaa0113bb8a2eccde-k8s-certs\") pod \"kube-apiserver-172-234-27-175\" (UID: \"44e1a0a7f53ff4aaaa0113bb8a2eccde\") " pod="kube-system/kube-apiserver-172-234-27-175"
Aug 13 01:32:36.230403 kubelet[2368]: I0813 01:32:36.230333 2368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/44e1a0a7f53ff4aaaa0113bb8a2eccde-usr-share-ca-certificates\") pod \"kube-apiserver-172-234-27-175\" (UID: \"44e1a0a7f53ff4aaaa0113bb8a2eccde\") " pod="kube-system/kube-apiserver-172-234-27-175"
Aug 13 01:32:36.230403 kubelet[2368]: I0813 01:32:36.230350 2368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/be721076e46768e56a8be3669904730e-ca-certs\") pod \"kube-controller-manager-172-234-27-175\" (UID: \"be721076e46768e56a8be3669904730e\") " pod="kube-system/kube-controller-manager-172-234-27-175"
Aug 13 01:32:36.230403 kubelet[2368]: I0813 01:32:36.230366 2368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/756eb7c7de22f033a09e7368738f8d54-kubeconfig\") pod \"kube-scheduler-172-234-27-175\" (UID: \"756eb7c7de22f033a09e7368738f8d54\") " pod="kube-system/kube-scheduler-172-234-27-175"
Aug 13 01:32:36.230403 kubelet[2368]: I0813 01:32:36.230381 2368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/be721076e46768e56a8be3669904730e-flexvolume-dir\") pod \"kube-controller-manager-172-234-27-175\" (UID: \"be721076e46768e56a8be3669904730e\") " pod="kube-system/kube-controller-manager-172-234-27-175"
Aug 13 01:32:36.230403 kubelet[2368]: I0813 01:32:36.230396 2368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/be721076e46768e56a8be3669904730e-k8s-certs\") pod \"kube-controller-manager-172-234-27-175\" (UID: \"be721076e46768e56a8be3669904730e\") " pod="kube-system/kube-controller-manager-172-234-27-175"
Aug 13 01:32:36.230511 kubelet[2368]: I0813 01:32:36.230414 2368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/be721076e46768e56a8be3669904730e-kubeconfig\") pod \"kube-controller-manager-172-234-27-175\" (UID: \"be721076e46768e56a8be3669904730e\") " pod="kube-system/kube-controller-manager-172-234-27-175"
Aug 13 01:32:36.230511 kubelet[2368]: I0813 01:32:36.230434 2368 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/be721076e46768e56a8be3669904730e-usr-share-ca-certificates\") pod \"kube-controller-manager-172-234-27-175\" (UID: \"be721076e46768e56a8be3669904730e\") " pod="kube-system/kube-controller-manager-172-234-27-175"
Aug 13 01:32:36.230763 kubelet[2368]: E0813 01:32:36.230737 2368 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.27.175:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-27-175?timeout=10s\": dial tcp 172.234.27.175:6443: connect: connection refused" interval="400ms"
Aug 13 01:32:36.412368 kubelet[2368]: I0813 01:32:36.412336 2368 kubelet_node_status.go:72] "Attempting to register node" node="172-234-27-175"
Aug 13 01:32:36.412633 kubelet[2368]: E0813 01:32:36.412567 2368 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.234.27.175:6443/api/v1/nodes\": dial tcp 172.234.27.175:6443: connect: connection refused" node="172-234-27-175"
Aug 13 01:32:36.476476 kubelet[2368]: E0813 01:32:36.476429 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:32:36.477260 containerd[1544]: time="2025-08-13T01:32:36.477200077Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-234-27-175,Uid:44e1a0a7f53ff4aaaa0113bb8a2eccde,Namespace:kube-system,Attempt:0,}"
Aug 13 01:32:36.486860 kubelet[2368]: E0813 01:32:36.486591 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:32:36.487208 containerd[1544]: time="2025-08-13T01:32:36.487108842Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-234-27-175,Uid:be721076e46768e56a8be3669904730e,Namespace:kube-system,Attempt:0,}"
Aug 13 01:32:36.500903 kubelet[2368]: E0813 01:32:36.500649 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:32:36.502658 containerd[1544]: time="2025-08-13T01:32:36.502433620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-234-27-175,Uid:756eb7c7de22f033a09e7368738f8d54,Namespace:kube-system,Attempt:0,}"
Aug 13 01:32:36.505018 containerd[1544]: time="2025-08-13T01:32:36.504935171Z" level=info msg="connecting to shim ff5eed12ff9911f7290032159a5579d26de0de2dfab28d7fdefd59244ba1684e" address="unix:///run/containerd/s/496343a2b00ee84d92b9109e8371e05fff9e726fc99d437a0af291cc815916c2" namespace=k8s.io protocol=ttrpc version=3
Aug 13 01:32:36.526436 containerd[1544]: time="2025-08-13T01:32:36.526405132Z" level=info msg="connecting to shim 73121c63989692fa6b1e26cb9716c059d8a7a659d97ee473a7e4c13f6a375193" address="unix:///run/containerd/s/4374397702afecd4d42e9feac53e72a16c8dbce3e645b7da7f79a7463e59bd8a" namespace=k8s.io protocol=ttrpc version=3
Aug 13 01:32:36.550162 containerd[1544]: time="2025-08-13T01:32:36.549895983Z" level=info msg="connecting to shim c5c4b572d07b5586469a38582952fbfb9aae352ffacee9799ef87b03c4709b21" address="unix:///run/containerd/s/b2a111b63855f34244eec16422ef17b6279b96cc21ac12587fa005ca7369d69e" namespace=k8s.io protocol=ttrpc version=3
Aug 13 01:32:36.561391 systemd[1]: Started cri-containerd-ff5eed12ff9911f7290032159a5579d26de0de2dfab28d7fdefd59244ba1684e.scope - libcontainer container ff5eed12ff9911f7290032159a5579d26de0de2dfab28d7fdefd59244ba1684e.
Aug 13 01:32:36.575170 systemd[1]: Started cri-containerd-73121c63989692fa6b1e26cb9716c059d8a7a659d97ee473a7e4c13f6a375193.scope - libcontainer container 73121c63989692fa6b1e26cb9716c059d8a7a659d97ee473a7e4c13f6a375193.
Aug 13 01:32:36.599279 systemd[1]: Started cri-containerd-c5c4b572d07b5586469a38582952fbfb9aae352ffacee9799ef87b03c4709b21.scope - libcontainer container c5c4b572d07b5586469a38582952fbfb9aae352ffacee9799ef87b03c4709b21.
Aug 13 01:32:36.632394 kubelet[2368]: E0813 01:32:36.632323 2368 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.234.27.175:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-234-27-175?timeout=10s\": dial tcp 172.234.27.175:6443: connect: connection refused" interval="800ms"
Aug 13 01:32:36.645146 containerd[1544]: time="2025-08-13T01:32:36.645025181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-234-27-175,Uid:44e1a0a7f53ff4aaaa0113bb8a2eccde,Namespace:kube-system,Attempt:0,} returns sandbox id \"ff5eed12ff9911f7290032159a5579d26de0de2dfab28d7fdefd59244ba1684e\""
Aug 13 01:32:36.646378 kubelet[2368]: E0813 01:32:36.646318 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:32:36.650226 containerd[1544]: time="2025-08-13T01:32:36.649995203Z" level=info msg="CreateContainer within sandbox \"ff5eed12ff9911f7290032159a5579d26de0de2dfab28d7fdefd59244ba1684e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}"
Aug 13 01:32:36.669050 containerd[1544]: time="2025-08-13T01:32:36.669014153Z" level=info msg="Container 9911d22eb4bc92da55dbd4e7eba01692e6e22c9e0b49f66cbdd010ddd4200681: CDI devices from CRI Config.CDIDevices: []"
Aug 13 01:32:36.669478 containerd[1544]: time="2025-08-13T01:32:36.669361293Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-234-27-175,Uid:be721076e46768e56a8be3669904730e,Namespace:kube-system,Attempt:0,} returns sandbox id \"73121c63989692fa6b1e26cb9716c059d8a7a659d97ee473a7e4c13f6a375193\""
Aug 13 01:32:36.670166 kubelet[2368]: E0813 01:32:36.670111 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:32:36.673231 containerd[1544]: time="2025-08-13T01:32:36.673185135Z" level=info msg="CreateContainer within sandbox \"73121c63989692fa6b1e26cb9716c059d8a7a659d97ee473a7e4c13f6a375193\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}"
Aug 13 01:32:36.687682 containerd[1544]: time="2025-08-13T01:32:36.687553672Z" level=info msg="CreateContainer within sandbox \"ff5eed12ff9911f7290032159a5579d26de0de2dfab28d7fdefd59244ba1684e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"9911d22eb4bc92da55dbd4e7eba01692e6e22c9e0b49f66cbdd010ddd4200681\""
Aug 13 01:32:36.688247 containerd[1544]: time="2025-08-13T01:32:36.688225273Z" level=info msg="StartContainer for \"9911d22eb4bc92da55dbd4e7eba01692e6e22c9e0b49f66cbdd010ddd4200681\""
Aug 13 01:32:36.689586 containerd[1544]: time="2025-08-13T01:32:36.689561863Z" level=info msg="connecting to shim 9911d22eb4bc92da55dbd4e7eba01692e6e22c9e0b49f66cbdd010ddd4200681" address="unix:///run/containerd/s/496343a2b00ee84d92b9109e8371e05fff9e726fc99d437a0af291cc815916c2" protocol=ttrpc version=3
Aug 13 01:32:36.691195 containerd[1544]: time="2025-08-13T01:32:36.690148933Z" level=info msg="Container be17d71119d0300d5fbb86a8d436fad65f3a0945a651f7f10fe85cfe4dada3b0: CDI devices from CRI Config.CDIDevices: []"
Aug 13 01:32:36.701898 containerd[1544]: time="2025-08-13T01:32:36.701550639Z" level=info msg="CreateContainer within sandbox \"73121c63989692fa6b1e26cb9716c059d8a7a659d97ee473a7e4c13f6a375193\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"be17d71119d0300d5fbb86a8d436fad65f3a0945a651f7f10fe85cfe4dada3b0\""
Aug 13 01:32:36.702671 containerd[1544]: time="2025-08-13T01:32:36.702606430Z" level=info msg="StartContainer for \"be17d71119d0300d5fbb86a8d436fad65f3a0945a651f7f10fe85cfe4dada3b0\""
Aug 13 01:32:36.704943 containerd[1544]: time="2025-08-13T01:32:36.704905471Z" level=info msg="connecting to shim be17d71119d0300d5fbb86a8d436fad65f3a0945a651f7f10fe85cfe4dada3b0" address="unix:///run/containerd/s/4374397702afecd4d42e9feac53e72a16c8dbce3e645b7da7f79a7463e59bd8a" protocol=ttrpc version=3
Aug 13 01:32:36.709103 containerd[1544]: time="2025-08-13T01:32:36.709068353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-234-27-175,Uid:756eb7c7de22f033a09e7368738f8d54,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5c4b572d07b5586469a38582952fbfb9aae352ffacee9799ef87b03c4709b21\""
Aug 13 01:32:36.709826 kubelet[2368]: E0813 01:32:36.709798 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:32:36.713330 containerd[1544]: time="2025-08-13T01:32:36.712586815Z" level=info msg="CreateContainer within sandbox \"c5c4b572d07b5586469a38582952fbfb9aae352ffacee9799ef87b03c4709b21\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}"
Aug 13 01:32:36.721981 containerd[1544]: time="2025-08-13T01:32:36.721962479Z" level=info msg="Container 6a18018c7f1d7000fe773ee700f28b45b005f3998d12b9d5f603197603e568ff: CDI devices from CRI Config.CDIDevices: []"
Aug 13 01:32:36.727322 systemd[1]: Started cri-containerd-9911d22eb4bc92da55dbd4e7eba01692e6e22c9e0b49f66cbdd010ddd4200681.scope - libcontainer container 9911d22eb4bc92da55dbd4e7eba01692e6e22c9e0b49f66cbdd010ddd4200681.
Aug 13 01:32:36.731207 containerd[1544]: time="2025-08-13T01:32:36.731184134Z" level=info msg="CreateContainer within sandbox \"c5c4b572d07b5586469a38582952fbfb9aae352ffacee9799ef87b03c4709b21\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"6a18018c7f1d7000fe773ee700f28b45b005f3998d12b9d5f603197603e568ff\""
Aug 13 01:32:36.732298 containerd[1544]: time="2025-08-13T01:32:36.732268135Z" level=info msg="StartContainer for \"6a18018c7f1d7000fe773ee700f28b45b005f3998d12b9d5f603197603e568ff\""
Aug 13 01:32:36.735987 containerd[1544]: time="2025-08-13T01:32:36.735944136Z" level=info msg="connecting to shim 6a18018c7f1d7000fe773ee700f28b45b005f3998d12b9d5f603197603e568ff" address="unix:///run/containerd/s/b2a111b63855f34244eec16422ef17b6279b96cc21ac12587fa005ca7369d69e" protocol=ttrpc version=3
Aug 13 01:32:36.744410 systemd[1]: Started cri-containerd-be17d71119d0300d5fbb86a8d436fad65f3a0945a651f7f10fe85cfe4dada3b0.scope - libcontainer container be17d71119d0300d5fbb86a8d436fad65f3a0945a651f7f10fe85cfe4dada3b0.
Aug 13 01:32:36.766366 systemd[1]: Started cri-containerd-6a18018c7f1d7000fe773ee700f28b45b005f3998d12b9d5f603197603e568ff.scope - libcontainer container 6a18018c7f1d7000fe773ee700f28b45b005f3998d12b9d5f603197603e568ff.
Aug 13 01:32:36.820778 kubelet[2368]: I0813 01:32:36.817970 2368 kubelet_node_status.go:72] "Attempting to register node" node="172-234-27-175"
Aug 13 01:32:36.820778 kubelet[2368]: E0813 01:32:36.818839 2368 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.234.27.175:6443/api/v1/nodes\": dial tcp 172.234.27.175:6443: connect: connection refused" node="172-234-27-175"
Aug 13 01:32:36.826563 containerd[1544]: time="2025-08-13T01:32:36.826511502Z" level=info msg="StartContainer for \"9911d22eb4bc92da55dbd4e7eba01692e6e22c9e0b49f66cbdd010ddd4200681\" returns successfully"
Aug 13 01:32:36.850352 containerd[1544]: time="2025-08-13T01:32:36.850276843Z" level=info msg="StartContainer for \"be17d71119d0300d5fbb86a8d436fad65f3a0945a651f7f10fe85cfe4dada3b0\" returns successfully"
Aug 13 01:32:36.892380 containerd[1544]: time="2025-08-13T01:32:36.892326575Z" level=info msg="StartContainer for \"6a18018c7f1d7000fe773ee700f28b45b005f3998d12b9d5f603197603e568ff\" returns successfully"
Aug 13 01:32:37.077509 kubelet[2368]: E0813 01:32:37.077371 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:32:37.078989 kubelet[2368]: E0813 01:32:37.078961 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:32:37.082567 kubelet[2368]: E0813 01:32:37.082536 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:32:37.621268 kubelet[2368]: I0813 01:32:37.621220 2368 kubelet_node_status.go:72] "Attempting to register node" node="172-234-27-175"
Aug 13 01:32:38.084709 kubelet[2368]: E0813 01:32:38.084660 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:32:38.085653 kubelet[2368]: E0813 01:32:38.085622 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:32:38.295693 kubelet[2368]: E0813 01:32:38.295611 2368 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-234-27-175\" not found" node="172-234-27-175"
Aug 13 01:32:38.410227 kubelet[2368]: I0813 01:32:38.407998 2368 kubelet_node_status.go:75] "Successfully registered node" node="172-234-27-175"
Aug 13 01:32:38.410227 kubelet[2368]: E0813 01:32:38.408034 2368 kubelet_node_status.go:535] "Error updating node status, will retry" err="error getting node \"172-234-27-175\": node \"172-234-27-175\" not found"
Aug 13 01:32:39.015392 kubelet[2368]: I0813 01:32:39.015360 2368 apiserver.go:52] "Watching apiserver"
Aug 13 01:32:39.029610 kubelet[2368]: I0813 01:32:39.029591 2368 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Aug 13 01:32:39.088037 kubelet[2368]: E0813 01:32:39.088016 2368 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-172-234-27-175\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-234-27-175"
Aug 13 01:32:39.088388 kubelet[2368]: E0813 01:32:39.088175 2368 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:32:40.066660 systemd[1]: Reload requested from client PID 2644 ('systemctl') (unit session-7.scope)...
Aug 13 01:32:40.066683 systemd[1]: Reloading...
Aug 13 01:32:40.207205 zram_generator::config[2691]: No configuration found.
Aug 13 01:32:40.317755 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 01:32:40.433153 systemd[1]: Reloading finished in 366 ms.
Aug 13 01:32:40.468343 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 01:32:40.484514 systemd[1]: kubelet.service: Deactivated successfully.
Aug 13 01:32:40.484774 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 01:32:40.484824 systemd[1]: kubelet.service: Consumed 783ms CPU time, 130.5M memory peak.
Aug 13 01:32:40.488060 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Aug 13 01:32:40.693673 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Aug 13 01:32:40.704567 (kubelet)[2739]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Aug 13 01:32:40.744752 kubelet[2739]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 01:32:40.745154 kubelet[2739]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI.
Aug 13 01:32:40.745154 kubelet[2739]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Aug 13 01:32:40.745609 kubelet[2739]: I0813 01:32:40.745576 2739 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Aug 13 01:32:40.755046 kubelet[2739]: I0813 01:32:40.755010 2739 server.go:491] "Kubelet version" kubeletVersion="v1.31.8"
Aug 13 01:32:40.755163 kubelet[2739]: I0813 01:32:40.755152 2739 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Aug 13 01:32:40.755430 kubelet[2739]: I0813 01:32:40.755417 2739 server.go:934] "Client rotation is on, will bootstrap in background"
Aug 13 01:32:40.756755 kubelet[2739]: I0813 01:32:40.756729 2739 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
Aug 13 01:32:40.758879 kubelet[2739]: I0813 01:32:40.758865 2739 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Aug 13 01:32:40.763910 kubelet[2739]: I0813 01:32:40.763859 2739 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Aug 13 01:32:40.767845 kubelet[2739]: I0813 01:32:40.767811 2739 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Aug 13 01:32:40.768289 kubelet[2739]: I0813 01:32:40.768278 2739 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Aug 13 01:32:40.768574 kubelet[2739]: I0813 01:32:40.768556 2739 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Aug 13 01:32:40.770153 kubelet[2739]: I0813 01:32:40.769155 2739 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-234-27-175","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Aug 13 01:32:40.770153 kubelet[2739]: I0813 01:32:40.769366 2739 topology_manager.go:138] "Creating topology manager with none policy"
Aug 13 01:32:40.770153 kubelet[2739]: I0813 01:32:40.769376 2739 container_manager_linux.go:300] "Creating device plugin manager"
Aug 13 01:32:40.770153 kubelet[2739]: I0813 01:32:40.769407 2739 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 01:32:40.770153 kubelet[2739]: I0813 01:32:40.769495 2739 kubelet.go:408] "Attempting to sync node with API server"
Aug 13 01:32:40.770337 kubelet[2739]: I0813 01:32:40.769507 2739 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests"
Aug 13 01:32:40.770337 kubelet[2739]: I0813 01:32:40.769540 2739 kubelet.go:314] "Adding apiserver pod source"
Aug 13 01:32:40.770337 kubelet[2739]: I0813 01:32:40.769549 2739 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Aug 13 01:32:40.777448 kubelet[2739]: I0813 01:32:40.777432 2739 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1"
Aug 13 01:32:40.777963 kubelet[2739]: I0813 01:32:40.777951 2739 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Aug 13 01:32:40.778513 kubelet[2739]: I0813 01:32:40.778500 2739 server.go:1274] "Started kubelet"
Aug 13 01:32:40.779057 kubelet[2739]: I0813 01:32:40.779033 2739 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Aug 13 01:32:40.780015 kubelet[2739]: I0813 01:32:40.779976 2739 server.go:449] "Adding debug handlers to kubelet server"
Aug 13 01:32:40.780979 kubelet[2739]: I0813 01:32:40.780957 2739 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Aug 13 01:32:40.781077 kubelet[2739]: I0813 01:32:40.781043 2739 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Aug 13 01:32:40.781300 kubelet[2739]: I0813 01:32:40.781284 2739 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Aug 13 01:32:40.786179 kubelet[2739]: I0813 01:32:40.786155 2739 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Aug 13 01:32:40.788885 kubelet[2739]: I0813 01:32:40.788848 2739 volume_manager.go:289] "Starting Kubelet Volume Manager"
Aug 13 01:32:40.790261 kubelet[2739]: I0813 01:32:40.790223 2739 desired_state_of_world_populator.go:147] "Desired state populator starts to run"
Aug 13 01:32:40.790381 kubelet[2739]: I0813 01:32:40.790374 2739 reconciler.go:26] "Reconciler: start to sync state"
Aug 13 01:32:40.790785 kubelet[2739]: I0813 01:32:40.790754 2739 factory.go:221] Registration of the systemd container factory successfully
Aug 13 01:32:40.791256 kubelet[2739]: I0813 01:32:40.791170 2739 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Aug 13 01:32:40.791931 kubelet[2739]: E0813 01:32:40.791675 2739 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Aug 13 01:32:40.794646 kubelet[2739]: I0813 01:32:40.794596 2739 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Aug 13 01:32:40.795377 kubelet[2739]: I0813 01:32:40.795349 2739 factory.go:221] Registration of the containerd container factory successfully
Aug 13 01:32:40.796382 kubelet[2739]: I0813 01:32:40.796367 2739 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Aug 13 01:32:40.796447 kubelet[2739]: I0813 01:32:40.796438 2739 status_manager.go:217] "Starting to sync pod status with apiserver"
Aug 13 01:32:40.796508 kubelet[2739]: I0813 01:32:40.796499 2739 kubelet.go:2321] "Starting kubelet main sync loop"
Aug 13 01:32:40.796591 kubelet[2739]: E0813 01:32:40.796577 2739 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Aug 13 01:32:40.858306 kubelet[2739]: I0813 01:32:40.858277 2739 cpu_manager.go:214] "Starting CPU manager" policy="none"
Aug 13 01:32:40.858306 kubelet[2739]: I0813 01:32:40.858295 2739 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s"
Aug 13 01:32:40.858306 kubelet[2739]: I0813 01:32:40.858312 2739 state_mem.go:36] "Initialized new in-memory state store"
Aug 13 01:32:40.858472 kubelet[2739]: I0813 01:32:40.858449 2739 state_mem.go:88] "Updated default CPUSet" cpuSet=""
Aug 13 01:32:40.858499 kubelet[2739]: I0813 01:32:40.858466 2739 state_mem.go:96] "Updated CPUSet assignments" assignments={}
Aug 13 01:32:40.858499 kubelet[2739]: I0813 01:32:40.858488 2739 policy_none.go:49] "None policy: Start"
Aug 13 01:32:40.859049 kubelet[2739]: I0813 01:32:40.859033 2739 memory_manager.go:170] "Starting memorymanager" policy="None"
Aug 13 01:32:40.860089 kubelet[2739]: I0813 01:32:40.859245 2739 state_mem.go:35] "Initializing new in-memory state store"
Aug 13 01:32:40.860089 kubelet[2739]: I0813 01:32:40.859398 2739 state_mem.go:75] "Updated machine memory state"
Aug 13 01:32:40.864603 kubelet[2739]: I0813 01:32:40.864587 2739 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Aug 13 01:32:40.865060 kubelet[2739]: I0813 01:32:40.865048 2739 eviction_manager.go:189] "Eviction manager: starting control loop"
Aug 13 01:32:40.865174 kubelet[2739]: I0813 01:32:40.865121 2739 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Aug 13 01:32:40.865383 kubelet[2739]: I0813 01:32:40.865372 2739 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Aug 13 01:32:40.974582 kubelet[2739]: I0813 01:32:40.974552 2739 kubelet_node_status.go:72] "Attempting to register node" node="172-234-27-175"
Aug 13 01:32:40.981961 kubelet[2739]: I0813 01:32:40.981910 2739 kubelet_node_status.go:111] "Node was previously registered" node="172-234-27-175"
Aug 13 01:32:40.982196 kubelet[2739]: I0813 01:32:40.982161 2739 kubelet_node_status.go:75] "Successfully registered node" node="172-234-27-175"
Aug 13 01:32:40.990938 kubelet[2739]: I0813 01:32:40.990903 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/44e1a0a7f53ff4aaaa0113bb8a2eccde-usr-share-ca-certificates\") pod \"kube-apiserver-172-234-27-175\" (UID: \"44e1a0a7f53ff4aaaa0113bb8a2eccde\") " pod="kube-system/kube-apiserver-172-234-27-175"
Aug 13 01:32:40.990938 kubelet[2739]: I0813 01:32:40.990933 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/be721076e46768e56a8be3669904730e-ca-certs\") pod \"kube-controller-manager-172-234-27-175\" (UID: \"be721076e46768e56a8be3669904730e\") " pod="kube-system/kube-controller-manager-172-234-27-175"
Aug 13 01:32:40.991023 kubelet[2739]: I0813 01:32:40.990951 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/be721076e46768e56a8be3669904730e-flexvolume-dir\") pod \"kube-controller-manager-172-234-27-175\" (UID: \"be721076e46768e56a8be3669904730e\") " pod="kube-system/kube-controller-manager-172-234-27-175"
Aug 13 01:32:40.991023 kubelet[2739]: I0813 01:32:40.990967 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/be721076e46768e56a8be3669904730e-k8s-certs\") pod \"kube-controller-manager-172-234-27-175\" (UID: \"be721076e46768e56a8be3669904730e\") " pod="kube-system/kube-controller-manager-172-234-27-175"
Aug 13 01:32:40.991023 kubelet[2739]: I0813 01:32:40.990983 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/be721076e46768e56a8be3669904730e-usr-share-ca-certificates\") pod \"kube-controller-manager-172-234-27-175\" (UID: \"be721076e46768e56a8be3669904730e\") " pod="kube-system/kube-controller-manager-172-234-27-175"
Aug 13 01:32:40.991023 kubelet[2739]: I0813 01:32:40.990998 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/756eb7c7de22f033a09e7368738f8d54-kubeconfig\") pod \"kube-scheduler-172-234-27-175\" (UID: \"756eb7c7de22f033a09e7368738f8d54\") " pod="kube-system/kube-scheduler-172-234-27-175"
Aug 13 01:32:40.991023 kubelet[2739]: I0813 01:32:40.991013 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/44e1a0a7f53ff4aaaa0113bb8a2eccde-ca-certs\") pod \"kube-apiserver-172-234-27-175\" (UID: \"44e1a0a7f53ff4aaaa0113bb8a2eccde\") " pod="kube-system/kube-apiserver-172-234-27-175"
Aug 13 01:32:40.991237 kubelet[2739]: I0813 01:32:40.991028 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/44e1a0a7f53ff4aaaa0113bb8a2eccde-k8s-certs\") pod \"kube-apiserver-172-234-27-175\" (UID: \"44e1a0a7f53ff4aaaa0113bb8a2eccde\") " pod="kube-system/kube-apiserver-172-234-27-175"
Aug 13 01:32:40.991237 kubelet[2739]: I0813 01:32:40.991042 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/be721076e46768e56a8be3669904730e-kubeconfig\") pod \"kube-controller-manager-172-234-27-175\" (UID: \"be721076e46768e56a8be3669904730e\") " pod="kube-system/kube-controller-manager-172-234-27-175"
Aug 13 01:32:41.203235 kubelet[2739]: E0813 01:32:41.203189 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:32:41.204384 kubelet[2739]: E0813 01:32:41.204363 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:32:41.207157 kubelet[2739]: E0813 01:32:41.206681 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:32:41.776940 kubelet[2739]: I0813 01:32:41.776863 2739 apiserver.go:52] "Watching apiserver"
Aug 13 01:32:41.790400 kubelet[2739]: I0813 01:32:41.790363 2739 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world"
Aug 13 01:32:41.840260 kubelet[2739]: E0813 01:32:41.838526 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:32:41.840568 kubelet[2739]: E0813 01:32:41.840531 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:32:41.844737 kubelet[2739]: E0813 01:32:41.844708 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:32:41.874583 kubelet[2739]: I0813 01:32:41.874511 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-234-27-175" podStartSLOduration=1.874499784 podStartE2EDuration="1.874499784s" podCreationTimestamp="2025-08-13 01:32:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:32:41.868707631 +0000 UTC m=+1.157024749" watchObservedRunningTime="2025-08-13 01:32:41.874499784 +0000 UTC m=+1.162816902"
Aug 13 01:32:41.879812 kubelet[2739]: I0813 01:32:41.879496 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-234-27-175" podStartSLOduration=1.879488386 podStartE2EDuration="1.879488386s" podCreationTimestamp="2025-08-13 01:32:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:32:41.874807624 +0000 UTC m=+1.163124742" watchObservedRunningTime="2025-08-13 01:32:41.879488386 +0000 UTC m=+1.167805504"
Aug 13 01:32:42.562593 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Aug 13 01:32:42.840194 kubelet[2739]: E0813 01:32:42.840046 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:32:44.925617 kubelet[2739]: E0813 01:32:44.925379 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:32:47.276631 kubelet[2739]: I0813 01:32:47.276592 2739 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24"
Aug 13 01:32:47.277040 containerd[1544]: time="2025-08-13T01:32:47.276941473Z" level=info msg="No cni config template is specified, wait for other system components to drop the config."
Aug 13 01:32:47.277302 kubelet[2739]: I0813 01:32:47.277265 2739 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24"
Aug 13 01:32:47.813107 kubelet[2739]: E0813 01:32:47.812791 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:32:47.827122 kubelet[2739]: I0813 01:32:47.826948 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-234-27-175" podStartSLOduration=7.826926268 podStartE2EDuration="7.826926268s" podCreationTimestamp="2025-08-13 01:32:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:32:41.879627626 +0000 UTC m=+1.167944744" watchObservedRunningTime="2025-08-13 01:32:47.826926268 +0000 UTC m=+7.115243396"
Aug 13 01:32:47.846960 kubelet[2739]: E0813 01:32:47.846920 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:32:48.084835 systemd[1]: Created slice kubepods-besteffort-podd56a237f_0b7a_4d9f_aab0_d196702f4ec6.slice - libcontainer container kubepods-besteffort-podd56a237f_0b7a_4d9f_aab0_d196702f4ec6.slice.
Aug 13 01:32:48.130674 kubelet[2739]: I0813 01:32:48.130637 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/d56a237f-0b7a-4d9f-aab0-d196702f4ec6-kube-proxy\") pod \"kube-proxy-kfjpt\" (UID: \"d56a237f-0b7a-4d9f-aab0-d196702f4ec6\") " pod="kube-system/kube-proxy-kfjpt"
Aug 13 01:32:48.130674 kubelet[2739]: I0813 01:32:48.130671 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d56a237f-0b7a-4d9f-aab0-d196702f4ec6-xtables-lock\") pod \"kube-proxy-kfjpt\" (UID: \"d56a237f-0b7a-4d9f-aab0-d196702f4ec6\") " pod="kube-system/kube-proxy-kfjpt"
Aug 13 01:32:48.130796 kubelet[2739]: I0813 01:32:48.130693 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d56a237f-0b7a-4d9f-aab0-d196702f4ec6-lib-modules\") pod \"kube-proxy-kfjpt\" (UID: \"d56a237f-0b7a-4d9f-aab0-d196702f4ec6\") " pod="kube-system/kube-proxy-kfjpt"
Aug 13 01:32:48.130796 kubelet[2739]: I0813 01:32:48.130713 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sqfxk\" (UniqueName: \"kubernetes.io/projected/d56a237f-0b7a-4d9f-aab0-d196702f4ec6-kube-api-access-sqfxk\") pod \"kube-proxy-kfjpt\" (UID: \"d56a237f-0b7a-4d9f-aab0-d196702f4ec6\") " pod="kube-system/kube-proxy-kfjpt"
Aug 13 01:32:48.197211 systemd[1]: Created slice kubepods-besteffort-pod880ca925_9f4c_4ae5_8b8f_68c734e86586.slice - libcontainer container kubepods-besteffort-pod880ca925_9f4c_4ae5_8b8f_68c734e86586.slice.
Aug 13 01:32:48.231556 kubelet[2739]: I0813 01:32:48.231523 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thg2w\" (UniqueName: \"kubernetes.io/projected/880ca925-9f4c-4ae5-8b8f-68c734e86586-kube-api-access-thg2w\") pod \"tigera-operator-5bf8dfcb4-rkgkd\" (UID: \"880ca925-9f4c-4ae5-8b8f-68c734e86586\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-rkgkd"
Aug 13 01:32:48.231634 kubelet[2739]: I0813 01:32:48.231565 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/880ca925-9f4c-4ae5-8b8f-68c734e86586-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-rkgkd\" (UID: \"880ca925-9f4c-4ae5-8b8f-68c734e86586\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-rkgkd"
Aug 13 01:32:48.400755 kubelet[2739]: E0813 01:32:48.400658 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:32:48.402381 containerd[1544]: time="2025-08-13T01:32:48.402343446Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kfjpt,Uid:d56a237f-0b7a-4d9f-aab0-d196702f4ec6,Namespace:kube-system,Attempt:0,}"
Aug 13 01:32:48.423312 containerd[1544]: time="2025-08-13T01:32:48.423265676Z" level=info msg="connecting to shim 0a8761380b0b839098631d41a2aec1349397d8cf41bbf35d6a28aa6e4e40e81d" address="unix:///run/containerd/s/c90690b43db42079e6f20396b54ce2e40d0cbb616a9525bcb84d684e026ae7ac" namespace=k8s.io protocol=ttrpc version=3
Aug 13 01:32:48.452264 systemd[1]: Started cri-containerd-0a8761380b0b839098631d41a2aec1349397d8cf41bbf35d6a28aa6e4e40e81d.scope - libcontainer container 0a8761380b0b839098631d41a2aec1349397d8cf41bbf35d6a28aa6e4e40e81d.
Aug 13 01:32:48.478764 containerd[1544]: time="2025-08-13T01:32:48.478688794Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-kfjpt,Uid:d56a237f-0b7a-4d9f-aab0-d196702f4ec6,Namespace:kube-system,Attempt:0,} returns sandbox id \"0a8761380b0b839098631d41a2aec1349397d8cf41bbf35d6a28aa6e4e40e81d\""
Aug 13 01:32:48.479898 kubelet[2739]: E0813 01:32:48.479880 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:32:48.483191 containerd[1544]: time="2025-08-13T01:32:48.483012026Z" level=info msg="CreateContainer within sandbox \"0a8761380b0b839098631d41a2aec1349397d8cf41bbf35d6a28aa6e4e40e81d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}"
Aug 13 01:32:48.493278 containerd[1544]: time="2025-08-13T01:32:48.493258411Z" level=info msg="Container e9303afc77f4660bad5cd327e886b5f728daebd435ff30850f78ca8d2f80efbd: CDI devices from CRI Config.CDIDevices: []"
Aug 13 01:32:48.498371 containerd[1544]: time="2025-08-13T01:32:48.498347174Z" level=info msg="CreateContainer within sandbox \"0a8761380b0b839098631d41a2aec1349397d8cf41bbf35d6a28aa6e4e40e81d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"e9303afc77f4660bad5cd327e886b5f728daebd435ff30850f78ca8d2f80efbd\""
Aug 13 01:32:48.499086 containerd[1544]: time="2025-08-13T01:32:48.499013304Z" level=info msg="StartContainer for \"e9303afc77f4660bad5cd327e886b5f728daebd435ff30850f78ca8d2f80efbd\""
Aug 13 01:32:48.500663 containerd[1544]: time="2025-08-13T01:32:48.500644645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-rkgkd,Uid:880ca925-9f4c-4ae5-8b8f-68c734e86586,Namespace:tigera-operator,Attempt:0,}"
Aug 13 01:32:48.501821 containerd[1544]: time="2025-08-13T01:32:48.501797005Z" level=info msg="connecting to shim e9303afc77f4660bad5cd327e886b5f728daebd435ff30850f78ca8d2f80efbd" address="unix:///run/containerd/s/c90690b43db42079e6f20396b54ce2e40d0cbb616a9525bcb84d684e026ae7ac" protocol=ttrpc version=3
Aug 13 01:32:48.521867 containerd[1544]: time="2025-08-13T01:32:48.521821275Z" level=info msg="connecting to shim 668529f6fd73ada3cc9287699d05d5bba60cbe442981cb14c0b6de5f7d3dc4a0" address="unix:///run/containerd/s/585a1f3433d37c8eeb30dc1ec8d58526084c14c3e2ed50122e7924c0cf4e0394" namespace=k8s.io protocol=ttrpc version=3
Aug 13 01:32:48.522353 systemd[1]: Started cri-containerd-e9303afc77f4660bad5cd327e886b5f728daebd435ff30850f78ca8d2f80efbd.scope - libcontainer container e9303afc77f4660bad5cd327e886b5f728daebd435ff30850f78ca8d2f80efbd.
Aug 13 01:32:48.550258 systemd[1]: Started cri-containerd-668529f6fd73ada3cc9287699d05d5bba60cbe442981cb14c0b6de5f7d3dc4a0.scope - libcontainer container 668529f6fd73ada3cc9287699d05d5bba60cbe442981cb14c0b6de5f7d3dc4a0.
Aug 13 01:32:48.581510 containerd[1544]: time="2025-08-13T01:32:48.581467985Z" level=info msg="StartContainer for \"e9303afc77f4660bad5cd327e886b5f728daebd435ff30850f78ca8d2f80efbd\" returns successfully"
Aug 13 01:32:48.620119 containerd[1544]: time="2025-08-13T01:32:48.620077934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-rkgkd,Uid:880ca925-9f4c-4ae5-8b8f-68c734e86586,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"668529f6fd73ada3cc9287699d05d5bba60cbe442981cb14c0b6de5f7d3dc4a0\""
Aug 13 01:32:48.622795 containerd[1544]: time="2025-08-13T01:32:48.622653086Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\""
Aug 13 01:32:48.851906 kubelet[2739]: E0813 01:32:48.851861 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:32:49.402771 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4202517451.mount: Deactivated successfully.
Aug 13 01:32:50.310856 systemd[1]: Started sshd@7-172.234.27.175:22-101.126.90.52:35942.service - OpenSSH per-connection server daemon (101.126.90.52:35942).
Aug 13 01:32:50.555524 kubelet[2739]: E0813 01:32:50.555479 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:32:50.575444 kubelet[2739]: I0813 01:32:50.575115 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-kfjpt" podStartSLOduration=2.575094831 podStartE2EDuration="2.575094831s" podCreationTimestamp="2025-08-13 01:32:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:32:48.864541656 +0000 UTC m=+8.152858794" watchObservedRunningTime="2025-08-13 01:32:50.575094831 +0000 UTC m=+9.863411959"
Aug 13 01:32:50.857344 containerd[1544]: time="2025-08-13T01:32:50.856580122Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:32:50.858628 kubelet[2739]: E0813 01:32:50.856798 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:32:50.858698 containerd[1544]: time="2025-08-13T01:32:50.857946872Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543"
Aug 13 01:32:50.858698 containerd[1544]: time="2025-08-13T01:32:50.858549983Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:32:50.863568 containerd[1544]: time="2025-08-13T01:32:50.863515175Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:32:50.864615 containerd[1544]: time="2025-08-13T01:32:50.864577236Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.24188428s"
Aug 13 01:32:50.864615 containerd[1544]: time="2025-08-13T01:32:50.864609626Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\""
Aug 13 01:32:50.867703 containerd[1544]: time="2025-08-13T01:32:50.867664297Z" level=info msg="CreateContainer within sandbox \"668529f6fd73ada3cc9287699d05d5bba60cbe442981cb14c0b6de5f7d3dc4a0\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}"
Aug 13 01:32:50.879913 containerd[1544]: time="2025-08-13T01:32:50.879861133Z" level=info msg="Container df8d1a77736586c1f4aa872648e945c5f97799bd041785ac26d4703233746dff: CDI devices from CRI Config.CDIDevices: []"
Aug 13 01:32:50.884988 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1109924604.mount: Deactivated successfully.
Aug 13 01:32:50.892673 containerd[1544]: time="2025-08-13T01:32:50.892636170Z" level=info msg="CreateContainer within sandbox \"668529f6fd73ada3cc9287699d05d5bba60cbe442981cb14c0b6de5f7d3dc4a0\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"df8d1a77736586c1f4aa872648e945c5f97799bd041785ac26d4703233746dff\""
Aug 13 01:32:50.893851 containerd[1544]: time="2025-08-13T01:32:50.893652070Z" level=info msg="StartContainer for \"df8d1a77736586c1f4aa872648e945c5f97799bd041785ac26d4703233746dff\""
Aug 13 01:32:50.895172 containerd[1544]: time="2025-08-13T01:32:50.895083761Z" level=info msg="connecting to shim df8d1a77736586c1f4aa872648e945c5f97799bd041785ac26d4703233746dff" address="unix:///run/containerd/s/585a1f3433d37c8eeb30dc1ec8d58526084c14c3e2ed50122e7924c0cf4e0394" protocol=ttrpc version=3
Aug 13 01:32:50.937279 systemd[1]: Started cri-containerd-df8d1a77736586c1f4aa872648e945c5f97799bd041785ac26d4703233746dff.scope - libcontainer container df8d1a77736586c1f4aa872648e945c5f97799bd041785ac26d4703233746dff.
Aug 13 01:32:50.968120 containerd[1544]: time="2025-08-13T01:32:50.968060228Z" level=info msg="StartContainer for \"df8d1a77736586c1f4aa872648e945c5f97799bd041785ac26d4703233746dff\" returns successfully"
Aug 13 01:32:51.678062 sshd[3038]: Received disconnect from 101.126.90.52 port 35942:11: Bye Bye [preauth]
Aug 13 01:32:51.678062 sshd[3038]: Disconnected from authenticating user root 101.126.90.52 port 35942 [preauth]
Aug 13 01:32:51.680726 systemd[1]: sshd@7-172.234.27.175:22-101.126.90.52:35942.service: Deactivated successfully.
Aug 13 01:32:51.869413 kubelet[2739]: I0813 01:32:51.869351 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-rkgkd" podStartSLOduration=1.6249356659999998 podStartE2EDuration="3.869334018s" podCreationTimestamp="2025-08-13 01:32:48 +0000 UTC" firstStartedPulling="2025-08-13 01:32:48.621761855 +0000 UTC m=+7.910078973" lastFinishedPulling="2025-08-13 01:32:50.866160197 +0000 UTC m=+10.154477325" observedRunningTime="2025-08-13 01:32:51.869216138 +0000 UTC m=+11.157533266" watchObservedRunningTime="2025-08-13 01:32:51.869334018 +0000 UTC m=+11.157651136"
Aug 13 01:32:54.930375 kubelet[2739]: E0813 01:32:54.930329 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:32:55.867605 kubelet[2739]: E0813 01:32:55.867569 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:32:56.698042 sudo[1811]: pam_unix(sudo:session): session closed for user root
Aug 13 01:32:56.750368 sshd[1810]: Connection closed by 147.75.109.163 port 56876
Aug 13 01:32:56.750036 sshd-session[1808]: pam_unix(sshd:session): session closed for user core
Aug 13 01:32:56.757845 systemd[1]: sshd@6-172.234.27.175:22-147.75.109.163:56876.service: Deactivated successfully.
Aug 13 01:32:56.758442 systemd-logind[1529]: Session 7 logged out. Waiting for processes to exit.
Aug 13 01:32:56.763013 systemd[1]: session-7.scope: Deactivated successfully.
Aug 13 01:32:56.763973 systemd[1]: session-7.scope: Consumed 3.687s CPU time, 229.3M memory peak.
Aug 13 01:32:56.769771 systemd-logind[1529]: Removed session 7.
Aug 13 01:32:56.956357 update_engine[1530]: I20250813 01:32:56.956171 1530 update_attempter.cc:509] Updating boot flags...
Aug 13 01:32:59.543896 systemd[1]: Started sshd@8-172.234.27.175:22-103.186.1.197:50038.service - OpenSSH per-connection server daemon (103.186.1.197:50038).
Aug 13 01:32:59.792767 systemd[1]: Created slice kubepods-besteffort-pod04e37abf_632a_4774_95bd_24fd06bd7c5c.slice - libcontainer container kubepods-besteffort-pod04e37abf_632a_4774_95bd_24fd06bd7c5c.slice.
Aug 13 01:32:59.811159 kubelet[2739]: I0813 01:32:59.810508 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v6nbj\" (UniqueName: \"kubernetes.io/projected/04e37abf-632a-4774-95bd-24fd06bd7c5c-kube-api-access-v6nbj\") pod \"calico-typha-79464475b5-bbrtw\" (UID: \"04e37abf-632a-4774-95bd-24fd06bd7c5c\") " pod="calico-system/calico-typha-79464475b5-bbrtw"
Aug 13 01:32:59.811159 kubelet[2739]: I0813 01:32:59.810676 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/04e37abf-632a-4774-95bd-24fd06bd7c5c-tigera-ca-bundle\") pod \"calico-typha-79464475b5-bbrtw\" (UID: \"04e37abf-632a-4774-95bd-24fd06bd7c5c\") " pod="calico-system/calico-typha-79464475b5-bbrtw"
Aug 13 01:32:59.811159 kubelet[2739]: I0813 01:32:59.810697 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/04e37abf-632a-4774-95bd-24fd06bd7c5c-typha-certs\") pod \"calico-typha-79464475b5-bbrtw\" (UID: \"04e37abf-632a-4774-95bd-24fd06bd7c5c\") " pod="calico-system/calico-typha-79464475b5-bbrtw"
Aug 13 01:33:00.097189 kubelet[2739]: E0813 01:33:00.096676 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:33:00.099373 containerd[1544]: time="2025-08-13T01:33:00.098062183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-79464475b5-bbrtw,Uid:04e37abf-632a-4774-95bd-24fd06bd7c5c,Namespace:calico-system,Attempt:0,}"
Aug 13 01:33:00.122211 containerd[1544]: time="2025-08-13T01:33:00.121113601Z" level=info msg="connecting to shim 1303950867a319732051ad14e6c2e348c995301e491368e9b9c48e10541ac549" address="unix:///run/containerd/s/03806d760942d3fbd16d62d6bc6801ce2a89c57234fd455bef5a1abcd5a6ee10" namespace=k8s.io protocol=ttrpc version=3
Aug 13 01:33:00.148894 systemd[1]: Created slice kubepods-besteffort-pod7d03562e_9842_4425_9847_632615391bfb.slice - libcontainer container kubepods-besteffort-pod7d03562e_9842_4425_9847_632615391bfb.slice.
Aug 13 01:33:00.163382 systemd[1]: Started cri-containerd-1303950867a319732051ad14e6c2e348c995301e491368e9b9c48e10541ac549.scope - libcontainer container 1303950867a319732051ad14e6c2e348c995301e491368e9b9c48e10541ac549.
Aug 13 01:33:00.230408 containerd[1544]: time="2025-08-13T01:33:00.230359687Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-79464475b5-bbrtw,Uid:04e37abf-632a-4774-95bd-24fd06bd7c5c,Namespace:calico-system,Attempt:0,} returns sandbox id \"1303950867a319732051ad14e6c2e348c995301e491368e9b9c48e10541ac549\""
Aug 13 01:33:00.231302 kubelet[2739]: E0813 01:33:00.231282 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:33:00.233027 containerd[1544]: time="2025-08-13T01:33:00.232905999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\""
Aug 13 01:33:00.315826 kubelet[2739]: I0813 01:33:00.315794 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/7d03562e-9842-4425-9847-632615391bfb-policysync\") pod \"calico-node-5c47r\" (UID: \"7d03562e-9842-4425-9847-632615391bfb\") " pod="calico-system/calico-node-5c47r"
Aug 13 01:33:00.315826 kubelet[2739]: I0813 01:33:00.315825 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/7d03562e-9842-4425-9847-632615391bfb-cni-bin-dir\") pod \"calico-node-5c47r\" (UID: \"7d03562e-9842-4425-9847-632615391bfb\") " pod="calico-system/calico-node-5c47r"
Aug 13 01:33:00.316168 kubelet[2739]: I0813 01:33:00.315844 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7d03562e-9842-4425-9847-632615391bfb-lib-modules\") pod \"calico-node-5c47r\" (UID: \"7d03562e-9842-4425-9847-632615391bfb\") " pod="calico-system/calico-node-5c47r"
Aug 13 01:33:00.316168 kubelet[2739]: I0813 01:33:00.315862 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d03562e-9842-4425-9847-632615391bfb-tigera-ca-bundle\") pod \"calico-node-5c47r\" (UID: \"7d03562e-9842-4425-9847-632615391bfb\") " pod="calico-system/calico-node-5c47r"
Aug 13 01:33:00.316168 kubelet[2739]: I0813 01:33:00.315878 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7d03562e-9842-4425-9847-632615391bfb-xtables-lock\") pod \"calico-node-5c47r\" (UID: \"7d03562e-9842-4425-9847-632615391bfb\") " pod="calico-system/calico-node-5c47r"
Aug 13 01:33:00.316168 kubelet[2739]: I0813 01:33:00.315893 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/7d03562e-9842-4425-9847-632615391bfb-cni-net-dir\") pod \"calico-node-5c47r\" (UID: \"7d03562e-9842-4425-9847-632615391bfb\") " pod="calico-system/calico-node-5c47r"
Aug 13 01:33:00.316168 kubelet[2739]: I0813 01:33:00.315908 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/7d03562e-9842-4425-9847-632615391bfb-flexvol-driver-host\") pod \"calico-node-5c47r\" (UID: \"7d03562e-9842-4425-9847-632615391bfb\") " pod="calico-system/calico-node-5c47r"
Aug 13 01:33:00.316292 kubelet[2739]: I0813 01:33:00.315927 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/7d03562e-9842-4425-9847-632615391bfb-node-certs\") pod \"calico-node-5c47r\" (UID: \"7d03562e-9842-4425-9847-632615391bfb\") " pod="calico-system/calico-node-5c47r"
Aug 13 01:33:00.316292 kubelet[2739]: I0813 01:33:00.315943 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/7d03562e-9842-4425-9847-632615391bfb-cni-log-dir\") pod \"calico-node-5c47r\" (UID: \"7d03562e-9842-4425-9847-632615391bfb\") " pod="calico-system/calico-node-5c47r"
Aug 13 01:33:00.316292 kubelet[2739]: I0813 01:33:00.315985 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/7d03562e-9842-4425-9847-632615391bfb-var-run-calico\") pod \"calico-node-5c47r\" (UID: \"7d03562e-9842-4425-9847-632615391bfb\") " pod="calico-system/calico-node-5c47r"
Aug 13 01:33:00.316292 kubelet[2739]: I0813 01:33:00.316016 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8gp52\" (UniqueName: \"kubernetes.io/projected/7d03562e-9842-4425-9847-632615391bfb-kube-api-access-8gp52\") pod \"calico-node-5c47r\" (UID: \"7d03562e-9842-4425-9847-632615391bfb\") " pod="calico-system/calico-node-5c47r"
Aug 13 01:33:00.316292 kubelet[2739]: I0813 01:33:00.316040 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7d03562e-9842-4425-9847-632615391bfb-var-lib-calico\") pod \"calico-node-5c47r\" (UID: \"7d03562e-9842-4425-9847-632615391bfb\") " pod="calico-system/calico-node-5c47r"
Aug 13 01:33:00.424754 kubelet[2739]: E0813 01:33:00.424654 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:00.424754 kubelet[2739]: W0813 01:33:00.424673 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:00.425269 kubelet[2739]: E0813 01:33:00.425242 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:00.436302 kubelet[2739]: E0813 01:33:00.436124 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7bj49" podUID="4e5845c9-626c-4c83-900a-0da0bae2daed"
Aug 13 01:33:00.442308 kubelet[2739]: E0813 01:33:00.442269 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:00.442308 kubelet[2739]: W0813 01:33:00.442296 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:00.442395 kubelet[2739]: E0813 01:33:00.442322 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:00.456183 containerd[1544]: time="2025-08-13T01:33:00.456040791Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5c47r,Uid:7d03562e-9842-4425-9847-632615391bfb,Namespace:calico-system,Attempt:0,}"
Aug 13 01:33:00.490335 containerd[1544]: time="2025-08-13T01:33:00.490284772Z" level=info msg="connecting to shim c240561ffd890c1b5476094a7248023a31db54fc62397aa7089467e118977fc3" address="unix:///run/containerd/s/bb601809de608d4f6267fd6d2370633f52b680b0e7eaa9483a2b5d67d920826e" namespace=k8s.io protocol=ttrpc version=3
Aug 13 01:33:00.519062 kubelet[2739]: E0813 01:33:00.518861 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:00.519062 kubelet[2739]: W0813 01:33:00.518887 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:00.519062 kubelet[2739]: E0813 01:33:00.518912 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:00.520226 kubelet[2739]: E0813 01:33:00.520210 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:00.520363 kubelet[2739]: W0813 01:33:00.520293 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:00.520497 kubelet[2739]: E0813 01:33:00.520429 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:00.520740 kubelet[2739]: E0813 01:33:00.520700 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:00.520740 kubelet[2739]: W0813 01:33:00.520710 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:00.520740 kubelet[2739]: E0813 01:33:00.520718 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:00.521056 kubelet[2739]: E0813 01:33:00.521027 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:00.521148 kubelet[2739]: W0813 01:33:00.521103 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:00.521148 kubelet[2739]: E0813 01:33:00.521117 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:00.521488 kubelet[2739]: E0813 01:33:00.521429 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:00.521488 kubelet[2739]: W0813 01:33:00.521440 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:00.521488 kubelet[2739]: E0813 01:33:00.521447 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:00.522410 kubelet[2739]: E0813 01:33:00.521737 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:00.522806 kubelet[2739]: W0813 01:33:00.522465 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:00.522806 kubelet[2739]: E0813 01:33:00.522667 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:00.524038 kubelet[2739]: E0813 01:33:00.523820 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:00.524038 kubelet[2739]: W0813 01:33:00.523832 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:00.524038 kubelet[2739]: E0813 01:33:00.523843 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:00.526532 kubelet[2739]: E0813 01:33:00.526353 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:00.526532 kubelet[2739]: W0813 01:33:00.526369 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:00.526532 kubelet[2739]: E0813 01:33:00.526382 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:00.526742 kubelet[2739]: E0813 01:33:00.526665 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:00.526742 kubelet[2739]: W0813 01:33:00.526687 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:00.526742 kubelet[2739]: E0813 01:33:00.526707 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:00.526908 kubelet[2739]: E0813 01:33:00.526884 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:00.526908 kubelet[2739]: W0813 01:33:00.526896 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:00.526908 kubelet[2739]: E0813 01:33:00.526905 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:00.527620 kubelet[2739]: E0813 01:33:00.527054 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:00.527620 kubelet[2739]: W0813 01:33:00.527073 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:00.527620 kubelet[2739]: E0813 01:33:00.527081 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:00.529156 kubelet[2739]: E0813 01:33:00.527841 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:00.529156 kubelet[2739]: W0813 01:33:00.527855 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:00.529156 kubelet[2739]: E0813 01:33:00.527863 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:33:00.529156 kubelet[2739]: E0813 01:33:00.528019 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:33:00.529156 kubelet[2739]: W0813 01:33:00.528028 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:33:00.529156 kubelet[2739]: E0813 01:33:00.528036 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:33:00.529156 kubelet[2739]: E0813 01:33:00.528209 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:33:00.529156 kubelet[2739]: W0813 01:33:00.528216 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:33:00.529156 kubelet[2739]: E0813 01:33:00.528224 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:33:00.529506 kubelet[2739]: E0813 01:33:00.529285 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:33:00.529506 kubelet[2739]: W0813 01:33:00.529295 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:33:00.529506 kubelet[2739]: E0813 01:33:00.529304 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:33:00.529506 kubelet[2739]: E0813 01:33:00.529456 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:33:00.529506 kubelet[2739]: W0813 01:33:00.529463 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:33:00.529506 kubelet[2739]: E0813 01:33:00.529471 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:33:00.529647 kubelet[2739]: E0813 01:33:00.529629 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:33:00.529647 kubelet[2739]: W0813 01:33:00.529636 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:33:00.529647 kubelet[2739]: E0813 01:33:00.529644 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:33:00.530376 kubelet[2739]: E0813 01:33:00.529800 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:33:00.530376 kubelet[2739]: W0813 01:33:00.529812 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:33:00.530376 kubelet[2739]: E0813 01:33:00.529819 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:33:00.530376 kubelet[2739]: E0813 01:33:00.530232 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:33:00.530376 kubelet[2739]: W0813 01:33:00.530240 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:33:00.530376 kubelet[2739]: E0813 01:33:00.530248 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:33:00.530518 kubelet[2739]: E0813 01:33:00.530467 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:33:00.530518 kubelet[2739]: W0813 01:33:00.530475 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:33:00.530518 kubelet[2739]: E0813 01:33:00.530483 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:33:00.540258 systemd[1]: Started cri-containerd-c240561ffd890c1b5476094a7248023a31db54fc62397aa7089467e118977fc3.scope - libcontainer container c240561ffd890c1b5476094a7248023a31db54fc62397aa7089467e118977fc3. 
Aug 13 01:33:00.618954 kubelet[2739]: E0813 01:33:00.618913 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:33:00.618954 kubelet[2739]: W0813 01:33:00.618936 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:33:00.618954 kubelet[2739]: E0813 01:33:00.618951 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:33:00.619197 kubelet[2739]: I0813 01:33:00.619170 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9vtn6\" (UniqueName: \"kubernetes.io/projected/4e5845c9-626c-4c83-900a-0da0bae2daed-kube-api-access-9vtn6\") pod \"csi-node-driver-7bj49\" (UID: \"4e5845c9-626c-4c83-900a-0da0bae2daed\") " pod="calico-system/csi-node-driver-7bj49" Aug 13 01:33:00.620037 kubelet[2739]: E0813 01:33:00.619875 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:33:00.620037 kubelet[2739]: W0813 01:33:00.619892 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:33:00.620037 kubelet[2739]: E0813 01:33:00.619997 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:33:00.620037 kubelet[2739]: I0813 01:33:00.620020 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4e5845c9-626c-4c83-900a-0da0bae2daed-registration-dir\") pod \"csi-node-driver-7bj49\" (UID: \"4e5845c9-626c-4c83-900a-0da0bae2daed\") " pod="calico-system/csi-node-driver-7bj49" Aug 13 01:33:00.620333 kubelet[2739]: E0813 01:33:00.620304 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:33:00.620333 kubelet[2739]: W0813 01:33:00.620323 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:33:00.621335 kubelet[2739]: E0813 01:33:00.621311 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:33:00.622038 kubelet[2739]: E0813 01:33:00.621990 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:33:00.622089 kubelet[2739]: W0813 01:33:00.622051 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:33:00.622089 kubelet[2739]: E0813 01:33:00.622071 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:33:00.622867 kubelet[2739]: E0813 01:33:00.622421 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:33:00.622867 kubelet[2739]: W0813 01:33:00.622463 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:33:00.622867 kubelet[2739]: E0813 01:33:00.622481 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:33:00.622867 kubelet[2739]: I0813 01:33:00.622498 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4e5845c9-626c-4c83-900a-0da0bae2daed-varrun\") pod \"csi-node-driver-7bj49\" (UID: \"4e5845c9-626c-4c83-900a-0da0bae2daed\") " pod="calico-system/csi-node-driver-7bj49" Aug 13 01:33:00.623231 kubelet[2739]: E0813 01:33:00.623195 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:33:00.623291 kubelet[2739]: W0813 01:33:00.623224 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:33:00.623367 kubelet[2739]: E0813 01:33:00.623340 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:33:00.623407 kubelet[2739]: I0813 01:33:00.623379 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4e5845c9-626c-4c83-900a-0da0bae2daed-kubelet-dir\") pod \"csi-node-driver-7bj49\" (UID: \"4e5845c9-626c-4c83-900a-0da0bae2daed\") " pod="calico-system/csi-node-driver-7bj49" Aug 13 01:33:00.624504 kubelet[2739]: E0813 01:33:00.624473 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:33:00.624845 kubelet[2739]: W0813 01:33:00.624816 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:33:00.625067 kubelet[2739]: E0813 01:33:00.625036 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:33:00.625715 kubelet[2739]: E0813 01:33:00.625686 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:33:00.625758 kubelet[2739]: W0813 01:33:00.625732 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:33:00.626151 kubelet[2739]: E0813 01:33:00.625786 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:33:00.626151 kubelet[2739]: E0813 01:33:00.626107 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:33:00.626151 kubelet[2739]: W0813 01:33:00.626116 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:33:00.626454 kubelet[2739]: E0813 01:33:00.626426 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:33:00.626765 kubelet[2739]: E0813 01:33:00.626738 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:33:00.626801 kubelet[2739]: W0813 01:33:00.626767 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:33:00.627206 kubelet[2739]: I0813 01:33:00.627174 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4e5845c9-626c-4c83-900a-0da0bae2daed-socket-dir\") pod \"csi-node-driver-7bj49\" (UID: \"4e5845c9-626c-4c83-900a-0da0bae2daed\") " pod="calico-system/csi-node-driver-7bj49" Aug 13 01:33:00.627234 kubelet[2739]: E0813 01:33:00.627205 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:33:00.627354 kubelet[2739]: E0813 01:33:00.627289 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:33:00.627427 kubelet[2739]: W0813 01:33:00.627400 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:33:00.628149 kubelet[2739]: E0813 01:33:00.627456 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:33:00.628355 kubelet[2739]: E0813 01:33:00.628320 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:33:00.628355 kubelet[2739]: W0813 01:33:00.628351 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:33:00.628422 kubelet[2739]: E0813 01:33:00.628396 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:33:00.629165 kubelet[2739]: E0813 01:33:00.629105 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:33:00.629165 kubelet[2739]: W0813 01:33:00.629124 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:33:00.629165 kubelet[2739]: E0813 01:33:00.629146 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:33:00.629492 kubelet[2739]: E0813 01:33:00.629466 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:33:00.629492 kubelet[2739]: W0813 01:33:00.629484 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:33:00.629492 kubelet[2739]: E0813 01:33:00.629493 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:33:00.629778 kubelet[2739]: E0813 01:33:00.629754 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:33:00.629778 kubelet[2739]: W0813 01:33:00.629770 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:33:00.629778 kubelet[2739]: E0813 01:33:00.629779 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:33:00.681888 containerd[1544]: time="2025-08-13T01:33:00.681411430Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-5c47r,Uid:7d03562e-9842-4425-9847-632615391bfb,Namespace:calico-system,Attempt:0,} returns sandbox id \"c240561ffd890c1b5476094a7248023a31db54fc62397aa7089467e118977fc3\"" Aug 13 01:33:00.728834 kubelet[2739]: E0813 01:33:00.728300 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:33:00.728834 kubelet[2739]: W0813 01:33:00.728325 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:33:00.728834 kubelet[2739]: E0813 01:33:00.728368 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:33:00.728834 kubelet[2739]: E0813 01:33:00.728645 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:33:00.728834 kubelet[2739]: W0813 01:33:00.728654 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:33:00.728834 kubelet[2739]: E0813 01:33:00.728695 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:33:00.731985 kubelet[2739]: E0813 01:33:00.731941 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:33:00.731985 kubelet[2739]: W0813 01:33:00.731964 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:33:00.731985 kubelet[2739]: E0813 01:33:00.731975 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:33:00.733280 kubelet[2739]: E0813 01:33:00.733261 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:33:00.733280 kubelet[2739]: W0813 01:33:00.733276 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:33:00.733280 kubelet[2739]: E0813 01:33:00.733286 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:33:00.733965 kubelet[2739]: E0813 01:33:00.733477 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:33:00.733965 kubelet[2739]: W0813 01:33:00.733497 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:33:00.733965 kubelet[2739]: E0813 01:33:00.733520 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:33:00.735292 kubelet[2739]: E0813 01:33:00.735278 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:33:00.735365 kubelet[2739]: W0813 01:33:00.735353 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:33:00.735523 kubelet[2739]: E0813 01:33:00.735510 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:33:00.737522 kubelet[2739]: E0813 01:33:00.737490 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:33:00.737522 kubelet[2739]: W0813 01:33:00.737506 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:33:00.737522 kubelet[2739]: E0813 01:33:00.737517 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:33:00.737929 kubelet[2739]: E0813 01:33:00.737901 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:33:00.737929 kubelet[2739]: W0813 01:33:00.737916 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:33:00.738537 kubelet[2739]: E0813 01:33:00.738497 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:33:00.738703 kubelet[2739]: E0813 01:33:00.738678 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:33:00.738703 kubelet[2739]: W0813 01:33:00.738692 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:33:00.738975 kubelet[2739]: E0813 01:33:00.738944 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:33:00.739261 kubelet[2739]: E0813 01:33:00.739213 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:33:00.739261 kubelet[2739]: W0813 01:33:00.739237 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:33:00.739431 kubelet[2739]: E0813 01:33:00.739354 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:33:00.739636 kubelet[2739]: E0813 01:33:00.739625 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:33:00.739739 kubelet[2739]: W0813 01:33:00.739677 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:33:00.739739 kubelet[2739]: E0813 01:33:00.739704 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:33:00.740106 kubelet[2739]: E0813 01:33:00.740022 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:33:00.740106 kubelet[2739]: W0813 01:33:00.740055 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:33:00.740257 kubelet[2739]: E0813 01:33:00.740199 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:33:00.740503 kubelet[2739]: E0813 01:33:00.740469 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:33:00.740503 kubelet[2739]: W0813 01:33:00.740479 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:33:00.740656 kubelet[2739]: E0813 01:33:00.740631 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Aug 13 01:33:00.740831 kubelet[2739]: E0813 01:33:00.740815 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:00.740831 kubelet[2739]: W0813 01:33:00.740826 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:00.741051 kubelet[2739]: E0813 01:33:00.740918 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:00.741408 kubelet[2739]: E0813 01:33:00.741018 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:00.741408 kubelet[2739]: W0813 01:33:00.741405 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:00.741672 kubelet[2739]: E0813 01:33:00.741638 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:00.741773 kubelet[2739]: E0813 01:33:00.741749 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:00.741773 kubelet[2739]: W0813 01:33:00.741767 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:00.741851 kubelet[2739]: E0813 01:33:00.741836 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:00.742084 kubelet[2739]: E0813 01:33:00.742058 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:00.742084 kubelet[2739]: W0813 01:33:00.742075 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:00.742219 kubelet[2739]: E0813 01:33:00.742194 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:00.742451 kubelet[2739]: E0813 01:33:00.742428 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:00.742451 kubelet[2739]: W0813 01:33:00.742443 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:00.742507 kubelet[2739]: E0813 01:33:00.742477 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:00.742735 kubelet[2739]: E0813 01:33:00.742712 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:00.742735 kubelet[2739]: W0813 01:33:00.742729 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:00.742828 kubelet[2739]: E0813 01:33:00.742746 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:00.742956 kubelet[2739]: E0813 01:33:00.742938 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:00.742956 kubelet[2739]: W0813 01:33:00.742951 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:00.743050 kubelet[2739]: E0813 01:33:00.743036 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:00.743183 kubelet[2739]: E0813 01:33:00.743162 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:00.743183 kubelet[2739]: W0813 01:33:00.743177 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:00.743286 kubelet[2739]: E0813 01:33:00.743261 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:00.743503 kubelet[2739]: E0813 01:33:00.743435 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:00.743503 kubelet[2739]: W0813 01:33:00.743447 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:00.743503 kubelet[2739]: E0813 01:33:00.743462 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:00.743923 kubelet[2739]: E0813 01:33:00.743894 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:00.743962 kubelet[2739]: W0813 01:33:00.743944 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:00.743986 kubelet[2739]: E0813 01:33:00.743961 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:00.744276 kubelet[2739]: E0813 01:33:00.744248 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:00.744276 kubelet[2739]: W0813 01:33:00.744261 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:00.744349 kubelet[2739]: E0813 01:33:00.744294 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:00.744549 kubelet[2739]: E0813 01:33:00.744524 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:00.744549 kubelet[2739]: W0813 01:33:00.744541 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:00.744549 kubelet[2739]: E0813 01:33:00.744549 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:00.752777 kubelet[2739]: E0813 01:33:00.752746 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:00.752777 kubelet[2739]: W0813 01:33:00.752766 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:00.752777 kubelet[2739]: E0813 01:33:00.752777 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:01.007305 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2014033200.mount: Deactivated successfully.
Aug 13 01:33:01.300729 sshd[3157]: Received disconnect from 103.186.1.197 port 50038:11: Bye Bye [preauth]
Aug 13 01:33:01.300729 sshd[3157]: Disconnected from authenticating user root 103.186.1.197 port 50038 [preauth]
Aug 13 01:33:01.303753 systemd[1]: sshd@8-172.234.27.175:22-103.186.1.197:50038.service: Deactivated successfully.
Aug 13 01:33:01.598465 containerd[1544]: time="2025-08-13T01:33:01.598347741Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:33:01.599566 containerd[1544]: time="2025-08-13T01:33:01.599332398Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364"
Aug 13 01:33:01.600197 containerd[1544]: time="2025-08-13T01:33:01.600165816Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:33:01.601606 containerd[1544]: time="2025-08-13T01:33:01.601571636Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:33:01.602194 containerd[1544]: time="2025-08-13T01:33:01.602164788Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 1.36921369s"
Aug 13 01:33:01.602258 containerd[1544]: time="2025-08-13T01:33:01.602245427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\""
Aug 13 01:33:01.603317 containerd[1544]: time="2025-08-13T01:33:01.603288243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Aug 13 01:33:01.621447 containerd[1544]: time="2025-08-13T01:33:01.621392301Z" level=info msg="CreateContainer within sandbox \"1303950867a319732051ad14e6c2e348c995301e491368e9b9c48e10541ac549\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Aug 13 01:33:01.628091 containerd[1544]: time="2025-08-13T01:33:01.628028558Z" level=info msg="Container 2ecd7b6f28e681a31582a13f29c2f00ddf3e436ab8f1098d193e3c5053bf5a55: CDI devices from CRI Config.CDIDevices: []"
Aug 13 01:33:01.633613 containerd[1544]: time="2025-08-13T01:33:01.633573291Z" level=info msg="CreateContainer within sandbox \"1303950867a319732051ad14e6c2e348c995301e491368e9b9c48e10541ac549\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2ecd7b6f28e681a31582a13f29c2f00ddf3e436ab8f1098d193e3c5053bf5a55\""
Aug 13 01:33:01.634245 containerd[1544]: time="2025-08-13T01:33:01.634197492Z" level=info msg="StartContainer for \"2ecd7b6f28e681a31582a13f29c2f00ddf3e436ab8f1098d193e3c5053bf5a55\""
Aug 13 01:33:01.637956 containerd[1544]: time="2025-08-13T01:33:01.637843452Z" level=info msg="connecting to shim 2ecd7b6f28e681a31582a13f29c2f00ddf3e436ab8f1098d193e3c5053bf5a55" address="unix:///run/containerd/s/03806d760942d3fbd16d62d6bc6801ce2a89c57234fd455bef5a1abcd5a6ee10" protocol=ttrpc version=3
Aug 13 01:33:01.672470 systemd[1]: Started cri-containerd-2ecd7b6f28e681a31582a13f29c2f00ddf3e436ab8f1098d193e3c5053bf5a55.scope - libcontainer container 2ecd7b6f28e681a31582a13f29c2f00ddf3e436ab8f1098d193e3c5053bf5a55.
Aug 13 01:33:01.736700 containerd[1544]: time="2025-08-13T01:33:01.736657207Z" level=info msg="StartContainer for \"2ecd7b6f28e681a31582a13f29c2f00ddf3e436ab8f1098d193e3c5053bf5a55\" returns successfully"
Aug 13 01:33:01.889630 kubelet[2739]: E0813 01:33:01.889519 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:33:01.948354 kubelet[2739]: E0813 01:33:01.948328 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:01.948354 kubelet[2739]: W0813 01:33:01.948346 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:01.948466 kubelet[2739]: E0813 01:33:01.948364 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:01.948713 kubelet[2739]: E0813 01:33:01.948695 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:01.948713 kubelet[2739]: W0813 01:33:01.948709 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:01.948774 kubelet[2739]: E0813 01:33:01.948719 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:01.949166 kubelet[2739]: E0813 01:33:01.949112 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:01.949166 kubelet[2739]: W0813 01:33:01.949154 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:01.949166 kubelet[2739]: E0813 01:33:01.949164 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:01.950258 kubelet[2739]: E0813 01:33:01.950207 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:01.950258 kubelet[2739]: W0813 01:33:01.950223 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:01.950258 kubelet[2739]: E0813 01:33:01.950233 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:01.950570 kubelet[2739]: E0813 01:33:01.950545 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:01.950570 kubelet[2739]: W0813 01:33:01.950563 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:01.950616 kubelet[2739]: E0813 01:33:01.950573 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:01.950848 kubelet[2739]: E0813 01:33:01.950827 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:01.950848 kubelet[2739]: W0813 01:33:01.950843 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:01.950906 kubelet[2739]: E0813 01:33:01.950852 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:01.951088 kubelet[2739]: E0813 01:33:01.951071 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:01.951088 kubelet[2739]: W0813 01:33:01.951084 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:01.951154 kubelet[2739]: E0813 01:33:01.951094 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:01.951406 kubelet[2739]: E0813 01:33:01.951386 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:01.951406 kubelet[2739]: W0813 01:33:01.951399 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:01.952229 kubelet[2739]: E0813 01:33:01.952199 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:01.952678 kubelet[2739]: E0813 01:33:01.952650 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:01.952678 kubelet[2739]: W0813 01:33:01.952665 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:01.952678 kubelet[2739]: E0813 01:33:01.952674 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:01.952937 kubelet[2739]: E0813 01:33:01.952905 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:01.952937 kubelet[2739]: W0813 01:33:01.952920 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:01.952937 kubelet[2739]: E0813 01:33:01.952929 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:01.953224 kubelet[2739]: E0813 01:33:01.953200 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:01.953224 kubelet[2739]: W0813 01:33:01.953217 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:01.953224 kubelet[2739]: E0813 01:33:01.953225 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:01.953531 kubelet[2739]: E0813 01:33:01.953492 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:01.953531 kubelet[2739]: W0813 01:33:01.953531 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:01.953531 kubelet[2739]: E0813 01:33:01.953541 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:01.954353 kubelet[2739]: E0813 01:33:01.954321 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:01.954353 kubelet[2739]: W0813 01:33:01.954357 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:01.954353 kubelet[2739]: E0813 01:33:01.954367 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:01.954591 kubelet[2739]: E0813 01:33:01.954536 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:01.954591 kubelet[2739]: W0813 01:33:01.954550 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:01.954591 kubelet[2739]: E0813 01:33:01.954558 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:01.954762 kubelet[2739]: E0813 01:33:01.954729 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:01.954762 kubelet[2739]: W0813 01:33:01.954744 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:01.954762 kubelet[2739]: E0813 01:33:01.954751 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:02.048821 kubelet[2739]: E0813 01:33:02.048692 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:02.048821 kubelet[2739]: W0813 01:33:02.048732 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:02.048821 kubelet[2739]: E0813 01:33:02.048747 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:02.049089 kubelet[2739]: E0813 01:33:02.048998 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:02.049089 kubelet[2739]: W0813 01:33:02.049006 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:02.049253 kubelet[2739]: E0813 01:33:02.049026 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:02.049287 kubelet[2739]: E0813 01:33:02.049276 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:02.049287 kubelet[2739]: W0813 01:33:02.049284 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:02.049337 kubelet[2739]: E0813 01:33:02.049314 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:02.049616 kubelet[2739]: E0813 01:33:02.049557 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:02.049616 kubelet[2739]: W0813 01:33:02.049570 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:02.049733 kubelet[2739]: E0813 01:33:02.049587 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:02.049870 kubelet[2739]: E0813 01:33:02.049787 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:02.049870 kubelet[2739]: W0813 01:33:02.049795 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:02.049870 kubelet[2739]: E0813 01:33:02.049811 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:02.050056 kubelet[2739]: E0813 01:33:02.050029 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:02.050056 kubelet[2739]: W0813 01:33:02.050047 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:02.050154 kubelet[2739]: E0813 01:33:02.050070 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:02.050286 kubelet[2739]: E0813 01:33:02.050253 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:02.050318 kubelet[2739]: W0813 01:33:02.050297 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:02.050390 kubelet[2739]: E0813 01:33:02.050368 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:02.050556 kubelet[2739]: E0813 01:33:02.050535 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:02.050556 kubelet[2739]: W0813 01:33:02.050550 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:02.050771 kubelet[2739]: E0813 01:33:02.050635 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:02.050874 kubelet[2739]: E0813 01:33:02.050855 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:02.050910 kubelet[2739]: W0813 01:33:02.050884 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:02.050910 kubelet[2739]: E0813 01:33:02.050902 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:02.051320 kubelet[2739]: E0813 01:33:02.051301 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:02.051320 kubelet[2739]: W0813 01:33:02.051314 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:02.051725 kubelet[2739]: E0813 01:33:02.051598 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:02.052071 kubelet[2739]: E0813 01:33:02.052044 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:02.052071 kubelet[2739]: W0813 01:33:02.052063 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:02.052337 kubelet[2739]: E0813 01:33:02.052194 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:02.052509 kubelet[2739]: E0813 01:33:02.052467 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:02.052509 kubelet[2739]: W0813 01:33:02.052480 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:02.052679 kubelet[2739]: E0813 01:33:02.052594 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:02.053008 kubelet[2739]: E0813 01:33:02.052978 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:02.053146 kubelet[2739]: W0813 01:33:02.053012 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:02.053146 kubelet[2739]: E0813 01:33:02.053092 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:02.053382 kubelet[2739]: E0813 01:33:02.053363 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:02.053382 kubelet[2739]: W0813 01:33:02.053375 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:02.053531 kubelet[2739]: E0813 01:33:02.053509 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:02.054058 kubelet[2739]: E0813 01:33:02.054026 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:02.054058 kubelet[2739]: W0813 01:33:02.054042 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:02.054115 kubelet[2739]: E0813 01:33:02.054063 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:02.054306 kubelet[2739]: E0813 01:33:02.054284 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:02.054306 kubelet[2739]: W0813 01:33:02.054297 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:02.054353 kubelet[2739]: E0813 01:33:02.054305 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:02.054625 kubelet[2739]: E0813 01:33:02.054606 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:02.054625 kubelet[2739]: W0813 01:33:02.054619 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:02.054682 kubelet[2739]: E0813 01:33:02.054628 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:02.055519 kubelet[2739]: E0813 01:33:02.055412 2739 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:33:02.055519 kubelet[2739]: W0813 01:33:02.055476 2739 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:33:02.055519 kubelet[2739]: E0813 01:33:02.055487 2739 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:33:02.415188 containerd[1544]: time="2025-08-13T01:33:02.415104224Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:33:02.416037 containerd[1544]: time="2025-08-13T01:33:02.416003093Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956"
Aug 13 01:33:02.418179 containerd[1544]: time="2025-08-13T01:33:02.416968800Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:33:02.419033 containerd[1544]: time="2025-08-13T01:33:02.418985664Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:33:02.419670 containerd[1544]: time="2025-08-13T01:33:02.419493867Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 816.177415ms"
Aug 13 01:33:02.419670 containerd[1544]: time="2025-08-13T01:33:02.419535686Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\""
Aug 13 01:33:02.423040 containerd[1544]: time="2025-08-13T01:33:02.423013611Z" level=info msg="CreateContainer within sandbox \"c240561ffd890c1b5476094a7248023a31db54fc62397aa7089467e118977fc3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Aug 13 01:33:02.431327 containerd[1544]: time="2025-08-13T01:33:02.430294227Z" level=info msg="Container 4b3a30b7a287826ac1295bcea5cc787ba5fe5d980f667908848f0efe25748812: CDI devices from CRI Config.CDIDevices: []"
Aug 13 01:33:02.441634 containerd[1544]: time="2025-08-13T01:33:02.441603949Z" level=info msg="CreateContainer within sandbox \"c240561ffd890c1b5476094a7248023a31db54fc62397aa7089467e118977fc3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"4b3a30b7a287826ac1295bcea5cc787ba5fe5d980f667908848f0efe25748812\""
Aug 13 01:33:02.443505 containerd[1544]: time="2025-08-13T01:33:02.443462575Z" level=info msg="StartContainer for \"4b3a30b7a287826ac1295bcea5cc787ba5fe5d980f667908848f0efe25748812\""
Aug 13 01:33:02.446289 containerd[1544]: time="2025-08-13T01:33:02.446219150Z" level=info msg="connecting to shim 4b3a30b7a287826ac1295bcea5cc787ba5fe5d980f667908848f0efe25748812" address="unix:///run/containerd/s/bb601809de608d4f6267fd6d2370633f52b680b0e7eaa9483a2b5d67d920826e" protocol=ttrpc version=3
Aug 13 01:33:02.476264 systemd[1]: Started cri-containerd-4b3a30b7a287826ac1295bcea5cc787ba5fe5d980f667908848f0efe25748812.scope - libcontainer container 
4b3a30b7a287826ac1295bcea5cc787ba5fe5d980f667908848f0efe25748812. Aug 13 01:33:02.523159 containerd[1544]: time="2025-08-13T01:33:02.523076420Z" level=info msg="StartContainer for \"4b3a30b7a287826ac1295bcea5cc787ba5fe5d980f667908848f0efe25748812\" returns successfully" Aug 13 01:33:02.537869 systemd[1]: cri-containerd-4b3a30b7a287826ac1295bcea5cc787ba5fe5d980f667908848f0efe25748812.scope: Deactivated successfully. Aug 13 01:33:02.542914 containerd[1544]: time="2025-08-13T01:33:02.542887992Z" level=info msg="TaskExit event in podsandbox handler container_id:\"4b3a30b7a287826ac1295bcea5cc787ba5fe5d980f667908848f0efe25748812\" id:\"4b3a30b7a287826ac1295bcea5cc787ba5fe5d980f667908848f0efe25748812\" pid:3436 exited_at:{seconds:1755048782 nanos:542446568}" Aug 13 01:33:02.543103 containerd[1544]: time="2025-08-13T01:33:02.543008901Z" level=info msg="received exit event container_id:\"4b3a30b7a287826ac1295bcea5cc787ba5fe5d980f667908848f0efe25748812\" id:\"4b3a30b7a287826ac1295bcea5cc787ba5fe5d980f667908848f0efe25748812\" pid:3436 exited_at:{seconds:1755048782 nanos:542446568}" Aug 13 01:33:02.568830 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4b3a30b7a287826ac1295bcea5cc787ba5fe5d980f667908848f0efe25748812-rootfs.mount: Deactivated successfully. 
Aug 13 01:33:02.797919 kubelet[2739]: E0813 01:33:02.797889 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7bj49" podUID="4e5845c9-626c-4c83-900a-0da0bae2daed"
Aug 13 01:33:02.896431 kubelet[2739]: I0813 01:33:02.896298 2739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 13 01:33:02.898642 kubelet[2739]: E0813 01:33:02.896804 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:33:02.898692 containerd[1544]: time="2025-08-13T01:33:02.898163891Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\""
Aug 13 01:33:02.934288 kubelet[2739]: I0813 01:33:02.934216 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-79464475b5-bbrtw" podStartSLOduration=2.563567094 podStartE2EDuration="3.934193243s" podCreationTimestamp="2025-08-13 01:32:59 +0000 UTC" firstStartedPulling="2025-08-13 01:33:00.232523595 +0000 UTC m=+19.520840713" lastFinishedPulling="2025-08-13 01:33:01.603149744 +0000 UTC m=+20.891466862" observedRunningTime="2025-08-13 01:33:01.898722743 +0000 UTC m=+21.187039871" watchObservedRunningTime="2025-08-13 01:33:02.934193243 +0000 UTC m=+22.222510361"
Aug 13 01:33:04.798785 kubelet[2739]: E0813 01:33:04.798502 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-7bj49" podUID="4e5845c9-626c-4c83-900a-0da0bae2daed"
Aug 13 01:33:05.073539 containerd[1544]: time="2025-08-13T01:33:05.072738076Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:33:05.073915 containerd[1544]: time="2025-08-13T01:33:05.073895984Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221"
Aug 13 01:33:05.074415 containerd[1544]: time="2025-08-13T01:33:05.074372419Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:33:05.076398 containerd[1544]: time="2025-08-13T01:33:05.076352248Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:33:05.077346 containerd[1544]: time="2025-08-13T01:33:05.077324367Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 2.179094756s"
Aug 13 01:33:05.077615 containerd[1544]: time="2025-08-13T01:33:05.077409186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\""
Aug 13 01:33:05.080402 containerd[1544]: time="2025-08-13T01:33:05.080354525Z" level=info msg="CreateContainer within sandbox \"c240561ffd890c1b5476094a7248023a31db54fc62397aa7089467e118977fc3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Aug 13 01:33:05.089419 containerd[1544]: time="2025-08-13T01:33:05.089391759Z" level=info msg="Container a65e6689d6a03dc3711ceb587b55b5c1c1eebf060c9380a11700607c63d24ebe: CDI devices from CRI Config.CDIDevices: []"
Aug 13 01:33:05.097403 containerd[1544]: time="2025-08-13T01:33:05.097357974Z" level=info msg="CreateContainer within sandbox \"c240561ffd890c1b5476094a7248023a31db54fc62397aa7089467e118977fc3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"a65e6689d6a03dc3711ceb587b55b5c1c1eebf060c9380a11700607c63d24ebe\""
Aug 13 01:33:05.098077 containerd[1544]: time="2025-08-13T01:33:05.097850180Z" level=info msg="StartContainer for \"a65e6689d6a03dc3711ceb587b55b5c1c1eebf060c9380a11700607c63d24ebe\""
Aug 13 01:33:05.099292 containerd[1544]: time="2025-08-13T01:33:05.099235364Z" level=info msg="connecting to shim a65e6689d6a03dc3711ceb587b55b5c1c1eebf060c9380a11700607c63d24ebe" address="unix:///run/containerd/s/bb601809de608d4f6267fd6d2370633f52b680b0e7eaa9483a2b5d67d920826e" protocol=ttrpc version=3
Aug 13 01:33:05.132328 systemd[1]: Started cri-containerd-a65e6689d6a03dc3711ceb587b55b5c1c1eebf060c9380a11700607c63d24ebe.scope - libcontainer container a65e6689d6a03dc3711ceb587b55b5c1c1eebf060c9380a11700607c63d24ebe.
Aug 13 01:33:05.188383 containerd[1544]: time="2025-08-13T01:33:05.188283998Z" level=info msg="StartContainer for \"a65e6689d6a03dc3711ceb587b55b5c1c1eebf060c9380a11700607c63d24ebe\" returns successfully"
Aug 13 01:33:05.718788 containerd[1544]: time="2025-08-13T01:33:05.718737960Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 01:33:05.721444 systemd[1]: cri-containerd-a65e6689d6a03dc3711ceb587b55b5c1c1eebf060c9380a11700607c63d24ebe.scope: Deactivated successfully.
Aug 13 01:33:05.721763 systemd[1]: cri-containerd-a65e6689d6a03dc3711ceb587b55b5c1c1eebf060c9380a11700607c63d24ebe.scope: Consumed 541ms CPU time, 193M memory peak, 171.2M written to disk.
Aug 13 01:33:05.723025 containerd[1544]: time="2025-08-13T01:33:05.722936605Z" level=info msg="received exit event container_id:\"a65e6689d6a03dc3711ceb587b55b5c1c1eebf060c9380a11700607c63d24ebe\" id:\"a65e6689d6a03dc3711ceb587b55b5c1c1eebf060c9380a11700607c63d24ebe\" pid:3491 exited_at:{seconds:1755048785 nanos:722416341}"
Aug 13 01:33:05.723327 containerd[1544]: time="2025-08-13T01:33:05.723300251Z" level=info msg="TaskExit event in podsandbox handler container_id:\"a65e6689d6a03dc3711ceb587b55b5c1c1eebf060c9380a11700607c63d24ebe\" id:\"a65e6689d6a03dc3711ceb587b55b5c1c1eebf060c9380a11700607c63d24ebe\" pid:3491 exited_at:{seconds:1755048785 nanos:722416341}"
Aug 13 01:33:05.749259 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a65e6689d6a03dc3711ceb587b55b5c1c1eebf060c9380a11700607c63d24ebe-rootfs.mount: Deactivated successfully.
Aug 13 01:33:05.799458 kubelet[2739]: I0813 01:33:05.799303 2739 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Aug 13 01:33:05.826902 kubelet[2739]: W0813 01:33:05.826795 2739 reflector.go:561] object-"kube-system"/"coredns": failed to list *v1.ConfigMap: configmaps "coredns" is forbidden: User "system:node:172-234-27-175" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172-234-27-175' and this object
Aug 13 01:33:05.827959 kubelet[2739]: E0813 01:33:05.827532 2739 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"coredns\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"coredns\" is forbidden: User \"system:node:172-234-27-175\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-234-27-175' and this object" logger="UnhandledError"
Aug 13 01:33:05.835151 systemd[1]: Created slice kubepods-burstable-pod3bd5a2e2_42ee_4e27_a412_724b2f0527b4.slice - libcontainer container kubepods-burstable-pod3bd5a2e2_42ee_4e27_a412_724b2f0527b4.slice.
Aug 13 01:33:05.856243 systemd[1]: Created slice kubepods-besteffort-pod72291e38_fab0_4133_a546_c5eb46eabd9b.slice - libcontainer container kubepods-besteffort-pod72291e38_fab0_4133_a546_c5eb46eabd9b.slice.
Aug 13 01:33:05.865695 systemd[1]: Created slice kubepods-burstable-pod9cb65184_4613_43ed_9fa1_0cf23f1e0e56.slice - libcontainer container kubepods-burstable-pod9cb65184_4613_43ed_9fa1_0cf23f1e0e56.slice.
Aug 13 01:33:05.875574 systemd[1]: Created slice kubepods-besteffort-podcc1cb31e_7164_4ad5_8b25_a666dc952f89.slice - libcontainer container kubepods-besteffort-podcc1cb31e_7164_4ad5_8b25_a666dc952f89.slice.
Aug 13 01:33:05.892752 systemd[1]: Created slice kubepods-besteffort-pod56caad57_6b4a_4069_b011_1059db183012.slice - libcontainer container kubepods-besteffort-pod56caad57_6b4a_4069_b011_1059db183012.slice.
Aug 13 01:33:05.906858 systemd[1]: Created slice kubepods-besteffort-podef690b37_51cc_4d0e_a97b_36e63996a0ea.slice - libcontainer container kubepods-besteffort-podef690b37_51cc_4d0e_a97b_36e63996a0ea.slice.
Aug 13 01:33:05.916920 systemd[1]: Created slice kubepods-besteffort-pod801dfb54_f1cf_4e8c_867a_f37238d094a8.slice - libcontainer container kubepods-besteffort-pod801dfb54_f1cf_4e8c_867a_f37238d094a8.slice.
Aug 13 01:33:05.922003 containerd[1544]: time="2025-08-13T01:33:05.921791991Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\""
Aug 13 01:33:05.984971 kubelet[2739]: I0813 01:33:05.984481 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wbddt\" (UniqueName: \"kubernetes.io/projected/72291e38-fab0-4133-a546-c5eb46eabd9b-kube-api-access-wbddt\") pod \"calico-apiserver-7b69676bd6-774cg\" (UID: \"72291e38-fab0-4133-a546-c5eb46eabd9b\") " pod="calico-apiserver/calico-apiserver-7b69676bd6-774cg"
Aug 13 01:33:05.984971 kubelet[2739]: I0813 01:33:05.984526 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hpgd5\" (UniqueName: \"kubernetes.io/projected/cc1cb31e-7164-4ad5-8b25-a666dc952f89-kube-api-access-hpgd5\") pod \"whisker-549d8f9755-gqj56\" (UID: \"cc1cb31e-7164-4ad5-8b25-a666dc952f89\") " pod="calico-system/whisker-549d8f9755-gqj56"
Aug 13 01:33:05.984971 kubelet[2739]: I0813 01:33:05.984543 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc1cb31e-7164-4ad5-8b25-a666dc952f89-whisker-ca-bundle\") pod \"whisker-549d8f9755-gqj56\" (UID: \"cc1cb31e-7164-4ad5-8b25-a666dc952f89\") " pod="calico-system/whisker-549d8f9755-gqj56"
Aug 13 01:33:05.984971 kubelet[2739]: I0813 01:33:05.984559 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xgr86\" (UniqueName: \"kubernetes.io/projected/3bd5a2e2-42ee-4e27-a412-724b2f0527b4-kube-api-access-xgr86\") pod \"coredns-7c65d6cfc9-994jv\" (UID: \"3bd5a2e2-42ee-4e27-a412-724b2f0527b4\") " pod="kube-system/coredns-7c65d6cfc9-994jv"
Aug 13 01:33:05.984971 kubelet[2739]: I0813 01:33:05.984575 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef690b37-51cc-4d0e-a97b-36e63996a0ea-config\") pod \"goldmane-58fd7646b9-f4lx2\" (UID: \"ef690b37-51cc-4d0e-a97b-36e63996a0ea\") " pod="calico-system/goldmane-58fd7646b9-f4lx2"
Aug 13 01:33:05.985505 kubelet[2739]: I0813 01:33:05.984780 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d94st\" (UniqueName: \"kubernetes.io/projected/801dfb54-f1cf-4e8c-867a-f37238d094a8-kube-api-access-d94st\") pod \"calico-apiserver-7b69676bd6-xrwvn\" (UID: \"801dfb54-f1cf-4e8c-867a-f37238d094a8\") " pod="calico-apiserver/calico-apiserver-7b69676bd6-xrwvn"
Aug 13 01:33:05.985505 kubelet[2739]: I0813 01:33:05.984797 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef690b37-51cc-4d0e-a97b-36e63996a0ea-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-f4lx2\" (UID: \"ef690b37-51cc-4d0e-a97b-36e63996a0ea\") " pod="calico-system/goldmane-58fd7646b9-f4lx2"
Aug 13 01:33:05.985505 kubelet[2739]: I0813 01:33:05.984811 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/72291e38-fab0-4133-a546-c5eb46eabd9b-calico-apiserver-certs\") pod \"calico-apiserver-7b69676bd6-774cg\" (UID: \"72291e38-fab0-4133-a546-c5eb46eabd9b\") " pod="calico-apiserver/calico-apiserver-7b69676bd6-774cg"
Aug 13 01:33:05.985505 kubelet[2739]: I0813 01:33:05.984828 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3bd5a2e2-42ee-4e27-a412-724b2f0527b4-config-volume\") pod \"coredns-7c65d6cfc9-994jv\" (UID: \"3bd5a2e2-42ee-4e27-a412-724b2f0527b4\") " pod="kube-system/coredns-7c65d6cfc9-994jv"
Aug 13 01:33:05.985505 kubelet[2739]: I0813 01:33:05.984902 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mfzxc\" (UniqueName: \"kubernetes.io/projected/9cb65184-4613-43ed-9fa1-0cf23f1e0e56-kube-api-access-mfzxc\") pod \"coredns-7c65d6cfc9-mx5v9\" (UID: \"9cb65184-4613-43ed-9fa1-0cf23f1e0e56\") " pod="kube-system/coredns-7c65d6cfc9-mx5v9"
Aug 13 01:33:05.985698 kubelet[2739]: I0813 01:33:05.985182 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/801dfb54-f1cf-4e8c-867a-f37238d094a8-calico-apiserver-certs\") pod \"calico-apiserver-7b69676bd6-xrwvn\" (UID: \"801dfb54-f1cf-4e8c-867a-f37238d094a8\") " pod="calico-apiserver/calico-apiserver-7b69676bd6-xrwvn"
Aug 13 01:33:05.985698 kubelet[2739]: I0813 01:33:05.985204 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ef690b37-51cc-4d0e-a97b-36e63996a0ea-goldmane-key-pair\") pod \"goldmane-58fd7646b9-f4lx2\" (UID: \"ef690b37-51cc-4d0e-a97b-36e63996a0ea\") " pod="calico-system/goldmane-58fd7646b9-f4lx2"
Aug 13 01:33:05.985698 kubelet[2739]: I0813 01:33:05.985247 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cc1cb31e-7164-4ad5-8b25-a666dc952f89-whisker-backend-key-pair\") pod \"whisker-549d8f9755-gqj56\" (UID: \"cc1cb31e-7164-4ad5-8b25-a666dc952f89\") " pod="calico-system/whisker-549d8f9755-gqj56"
Aug 13 01:33:05.985698 kubelet[2739]: I0813 01:33:05.985269 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9cb65184-4613-43ed-9fa1-0cf23f1e0e56-config-volume\") pod \"coredns-7c65d6cfc9-mx5v9\" (UID: \"9cb65184-4613-43ed-9fa1-0cf23f1e0e56\") " pod="kube-system/coredns-7c65d6cfc9-mx5v9"
Aug 13 01:33:05.985698 kubelet[2739]: I0813 01:33:05.985291 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwl76\" (UniqueName: \"kubernetes.io/projected/56caad57-6b4a-4069-b011-1059db183012-kube-api-access-zwl76\") pod \"calico-kube-controllers-85fbc76f96-d5vf4\" (UID: \"56caad57-6b4a-4069-b011-1059db183012\") " pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4"
Aug 13 01:33:05.985872 kubelet[2739]: I0813 01:33:05.985428 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnhll\" (UniqueName: \"kubernetes.io/projected/ef690b37-51cc-4d0e-a97b-36e63996a0ea-kube-api-access-dnhll\") pod \"goldmane-58fd7646b9-f4lx2\" (UID: \"ef690b37-51cc-4d0e-a97b-36e63996a0ea\") " pod="calico-system/goldmane-58fd7646b9-f4lx2"
Aug 13 01:33:05.985872 kubelet[2739]: I0813 01:33:05.985447 2739 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/56caad57-6b4a-4069-b011-1059db183012-tigera-ca-bundle\") pod \"calico-kube-controllers-85fbc76f96-d5vf4\" (UID: \"56caad57-6b4a-4069-b011-1059db183012\") " pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4"
Aug 13 01:33:06.162160 containerd[1544]: time="2025-08-13T01:33:06.162106618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b69676bd6-774cg,Uid:72291e38-fab0-4133-a546-c5eb46eabd9b,Namespace:calico-apiserver,Attempt:0,}"
Aug 13 01:33:06.187166 containerd[1544]: time="2025-08-13T01:33:06.186884593Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-549d8f9755-gqj56,Uid:cc1cb31e-7164-4ad5-8b25-a666dc952f89,Namespace:calico-system,Attempt:0,}"
Aug 13 01:33:06.204320 containerd[1544]: time="2025-08-13T01:33:06.204283879Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85fbc76f96-d5vf4,Uid:56caad57-6b4a-4069-b011-1059db183012,Namespace:calico-system,Attempt:0,}"
Aug 13 01:33:06.213589 containerd[1544]: time="2025-08-13T01:33:06.213288330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-f4lx2,Uid:ef690b37-51cc-4d0e-a97b-36e63996a0ea,Namespace:calico-system,Attempt:0,}"
Aug 13 01:33:06.220533 containerd[1544]: time="2025-08-13T01:33:06.220499909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b69676bd6-xrwvn,Uid:801dfb54-f1cf-4e8c-867a-f37238d094a8,Namespace:calico-apiserver,Attempt:0,}"
Aug 13 01:33:06.316869 containerd[1544]: time="2025-08-13T01:33:06.316721433Z" level=error msg="Failed to destroy network for sandbox \"23605794b94034b720807b6d2019738d6084ebbe5b214d0fd37dbc100e2ab19e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:33:06.318005 containerd[1544]: time="2025-08-13T01:33:06.317965640Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b69676bd6-774cg,Uid:72291e38-fab0-4133-a546-c5eb46eabd9b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"23605794b94034b720807b6d2019738d6084ebbe5b214d0fd37dbc100e2ab19e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:33:06.318323 kubelet[2739]: E0813 01:33:06.318278 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23605794b94034b720807b6d2019738d6084ebbe5b214d0fd37dbc100e2ab19e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:33:06.318474 kubelet[2739]: E0813 01:33:06.318446 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23605794b94034b720807b6d2019738d6084ebbe5b214d0fd37dbc100e2ab19e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b69676bd6-774cg"
Aug 13 01:33:06.318510 kubelet[2739]: E0813 01:33:06.318473 2739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23605794b94034b720807b6d2019738d6084ebbe5b214d0fd37dbc100e2ab19e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b69676bd6-774cg"
Aug 13 01:33:06.318571 kubelet[2739]: E0813 01:33:06.318536 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b69676bd6-774cg_calico-apiserver(72291e38-fab0-4133-a546-c5eb46eabd9b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b69676bd6-774cg_calico-apiserver(72291e38-fab0-4133-a546-c5eb46eabd9b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"23605794b94034b720807b6d2019738d6084ebbe5b214d0fd37dbc100e2ab19e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b69676bd6-774cg" podUID="72291e38-fab0-4133-a546-c5eb46eabd9b"
Aug 13 01:33:06.337069 containerd[1544]: time="2025-08-13T01:33:06.337029931Z" level=error msg="Failed to destroy network for sandbox \"fb32ae98a57a40054332b970eee85096ceb10f8fa35e33876d2477f7174fa5ea\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:33:06.337349 containerd[1544]: time="2025-08-13T01:33:06.337328338Z" level=error msg="Failed to destroy network for sandbox \"dc29315efa752da02d9c203b93d4f2152b435d3a54929ef7091ccb0fc18adad3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:33:06.338723 containerd[1544]: time="2025-08-13T01:33:06.338696765Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-549d8f9755-gqj56,Uid:cc1cb31e-7164-4ad5-8b25-a666dc952f89,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb32ae98a57a40054332b970eee85096ceb10f8fa35e33876d2477f7174fa5ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:33:06.339300 kubelet[2739]: E0813 01:33:06.339239 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb32ae98a57a40054332b970eee85096ceb10f8fa35e33876d2477f7174fa5ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:33:06.339466 kubelet[2739]: E0813 01:33:06.339419 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb32ae98a57a40054332b970eee85096ceb10f8fa35e33876d2477f7174fa5ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-549d8f9755-gqj56"
Aug 13 01:33:06.339466 kubelet[2739]: E0813 01:33:06.339447 2739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb32ae98a57a40054332b970eee85096ceb10f8fa35e33876d2477f7174fa5ea\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-549d8f9755-gqj56"
Aug 13 01:33:06.339561 containerd[1544]: time="2025-08-13T01:33:06.339410657Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85fbc76f96-d5vf4,Uid:56caad57-6b4a-4069-b011-1059db183012,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc29315efa752da02d9c203b93d4f2152b435d3a54929ef7091ccb0fc18adad3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:33:06.340298 kubelet[2739]: E0813 01:33:06.339775 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-549d8f9755-gqj56_calico-system(cc1cb31e-7164-4ad5-8b25-a666dc952f89)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-549d8f9755-gqj56_calico-system(cc1cb31e-7164-4ad5-8b25-a666dc952f89)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fb32ae98a57a40054332b970eee85096ceb10f8fa35e33876d2477f7174fa5ea\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-549d8f9755-gqj56" podUID="cc1cb31e-7164-4ad5-8b25-a666dc952f89"
Aug 13 01:33:06.340298 kubelet[2739]: E0813 01:33:06.340241 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc29315efa752da02d9c203b93d4f2152b435d3a54929ef7091ccb0fc18adad3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:33:06.340298 kubelet[2739]: E0813 01:33:06.340264 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc29315efa752da02d9c203b93d4f2152b435d3a54929ef7091ccb0fc18adad3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4"
Aug 13 01:33:06.340454 kubelet[2739]: E0813 01:33:06.340277 2739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc29315efa752da02d9c203b93d4f2152b435d3a54929ef7091ccb0fc18adad3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4"
Aug 13 01:33:06.340454 kubelet[2739]: E0813 01:33:06.340301 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-85fbc76f96-d5vf4_calico-system(56caad57-6b4a-4069-b011-1059db183012)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85fbc76f96-d5vf4_calico-system(56caad57-6b4a-4069-b011-1059db183012)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc29315efa752da02d9c203b93d4f2152b435d3a54929ef7091ccb0fc18adad3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" podUID="56caad57-6b4a-4069-b011-1059db183012"
Aug 13 01:33:06.347243 containerd[1544]: time="2025-08-13T01:33:06.347106681Z" level=error msg="Failed to destroy network for sandbox \"b47e34a09eb4b263b67b4ac0d250a85075f8e6c4e77e3b3fa16fe7c4d2982eac\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:33:06.348184 containerd[1544]: time="2025-08-13T01:33:06.348147730Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b69676bd6-xrwvn,Uid:801dfb54-f1cf-4e8c-867a-f37238d094a8,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b47e34a09eb4b263b67b4ac0d250a85075f8e6c4e77e3b3fa16fe7c4d2982eac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:33:06.349148 kubelet[2739]: E0813 01:33:06.348954 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b47e34a09eb4b263b67b4ac0d250a85075f8e6c4e77e3b3fa16fe7c4d2982eac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:33:06.349148 kubelet[2739]: E0813 01:33:06.349106 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"b47e34a09eb4b263b67b4ac0d250a85075f8e6c4e77e3b3fa16fe7c4d2982eac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b69676bd6-xrwvn" Aug 13 01:33:06.349287 kubelet[2739]: E0813 01:33:06.349123 2739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b47e34a09eb4b263b67b4ac0d250a85075f8e6c4e77e3b3fa16fe7c4d2982eac\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b69676bd6-xrwvn" Aug 13 01:33:06.349325 kubelet[2739]: E0813 01:33:06.349305 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b69676bd6-xrwvn_calico-apiserver(801dfb54-f1cf-4e8c-867a-f37238d094a8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b69676bd6-xrwvn_calico-apiserver(801dfb54-f1cf-4e8c-867a-f37238d094a8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b47e34a09eb4b263b67b4ac0d250a85075f8e6c4e77e3b3fa16fe7c4d2982eac\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b69676bd6-xrwvn" podUID="801dfb54-f1cf-4e8c-867a-f37238d094a8" Aug 13 01:33:06.366030 containerd[1544]: time="2025-08-13T01:33:06.365983623Z" level=error msg="Failed to destroy network for sandbox \"cb0d1821c9082a0727a2f9028dc0e9e2c33631237a7c8d20a565bc66c11195a2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:06.366955 containerd[1544]: time="2025-08-13T01:33:06.366925644Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-f4lx2,Uid:ef690b37-51cc-4d0e-a97b-36e63996a0ea,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb0d1821c9082a0727a2f9028dc0e9e2c33631237a7c8d20a565bc66c11195a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:06.367420 kubelet[2739]: E0813 01:33:06.367074 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb0d1821c9082a0727a2f9028dc0e9e2c33631237a7c8d20a565bc66c11195a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:06.367420 kubelet[2739]: E0813 01:33:06.367107 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb0d1821c9082a0727a2f9028dc0e9e2c33631237a7c8d20a565bc66c11195a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-f4lx2" Aug 13 01:33:06.367420 kubelet[2739]: E0813 01:33:06.367122 2739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb0d1821c9082a0727a2f9028dc0e9e2c33631237a7c8d20a565bc66c11195a2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-f4lx2" Aug 13 01:33:06.367559 kubelet[2739]: E0813 01:33:06.367174 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-f4lx2_calico-system(ef690b37-51cc-4d0e-a97b-36e63996a0ea)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-f4lx2_calico-system(ef690b37-51cc-4d0e-a97b-36e63996a0ea)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cb0d1821c9082a0727a2f9028dc0e9e2c33631237a7c8d20a565bc66c11195a2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-f4lx2" podUID="ef690b37-51cc-4d0e-a97b-36e63996a0ea" Aug 13 01:33:06.806328 systemd[1]: Created slice kubepods-besteffort-pod4e5845c9_626c_4c83_900a_0da0bae2daed.slice - libcontainer container kubepods-besteffort-pod4e5845c9_626c_4c83_900a_0da0bae2daed.slice. 
Aug 13 01:33:06.810444 containerd[1544]: time="2025-08-13T01:33:06.810206801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7bj49,Uid:4e5845c9-626c-4c83-900a-0da0bae2daed,Namespace:calico-system,Attempt:0,}" Aug 13 01:33:06.889473 containerd[1544]: time="2025-08-13T01:33:06.889351165Z" level=error msg="Failed to destroy network for sandbox \"60c0186fed47b7870a827e0c60bb1c3e46b4504304809d38d26fdfdd370f5277\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:06.891675 containerd[1544]: time="2025-08-13T01:33:06.891573203Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7bj49,Uid:4e5845c9-626c-4c83-900a-0da0bae2daed,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"60c0186fed47b7870a827e0c60bb1c3e46b4504304809d38d26fdfdd370f5277\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:06.892529 kubelet[2739]: E0813 01:33:06.892322 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60c0186fed47b7870a827e0c60bb1c3e46b4504304809d38d26fdfdd370f5277\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:06.893590 kubelet[2739]: E0813 01:33:06.892557 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60c0186fed47b7870a827e0c60bb1c3e46b4504304809d38d26fdfdd370f5277\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7bj49" Aug 13 01:33:06.893590 kubelet[2739]: E0813 01:33:06.892583 2739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60c0186fed47b7870a827e0c60bb1c3e46b4504304809d38d26fdfdd370f5277\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7bj49" Aug 13 01:33:06.893590 kubelet[2739]: E0813 01:33:06.892999 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7bj49_calico-system(4e5845c9-626c-4c83-900a-0da0bae2daed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7bj49_calico-system(4e5845c9-626c-4c83-900a-0da0bae2daed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"60c0186fed47b7870a827e0c60bb1c3e46b4504304809d38d26fdfdd370f5277\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7bj49" podUID="4e5845c9-626c-4c83-900a-0da0bae2daed" Aug 13 01:33:07.044235 kubelet[2739]: E0813 01:33:07.044160 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 01:33:07.045735 containerd[1544]: time="2025-08-13T01:33:07.045657951Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-994jv,Uid:3bd5a2e2-42ee-4e27-a412-724b2f0527b4,Namespace:kube-system,Attempt:0,}" Aug 13 01:33:07.075257 kubelet[2739]: E0813 01:33:07.074434 2739 dns.go:153] 
"Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 01:33:07.077202 containerd[1544]: time="2025-08-13T01:33:07.076985501Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mx5v9,Uid:9cb65184-4613-43ed-9fa1-0cf23f1e0e56,Namespace:kube-system,Attempt:0,}" Aug 13 01:33:07.145496 containerd[1544]: time="2025-08-13T01:33:07.145454915Z" level=error msg="Failed to destroy network for sandbox \"8a1e10c65391bcf982aa554ed920f3ae11c993d70832afe077f7034e8ec991b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:07.149621 containerd[1544]: time="2025-08-13T01:33:07.149587557Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-994jv,Uid:3bd5a2e2-42ee-4e27-a412-724b2f0527b4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a1e10c65391bcf982aa554ed920f3ae11c993d70832afe077f7034e8ec991b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:07.150176 kubelet[2739]: E0813 01:33:07.149908 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a1e10c65391bcf982aa554ed920f3ae11c993d70832afe077f7034e8ec991b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:07.152079 kubelet[2739]: E0813 01:33:07.150308 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"8a1e10c65391bcf982aa554ed920f3ae11c993d70832afe077f7034e8ec991b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-994jv" Aug 13 01:33:07.152079 kubelet[2739]: E0813 01:33:07.150333 2739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8a1e10c65391bcf982aa554ed920f3ae11c993d70832afe077f7034e8ec991b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-994jv" Aug 13 01:33:07.152079 kubelet[2739]: E0813 01:33:07.150375 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-994jv_kube-system(3bd5a2e2-42ee-4e27-a412-724b2f0527b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-994jv_kube-system(3bd5a2e2-42ee-4e27-a412-724b2f0527b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8a1e10c65391bcf982aa554ed920f3ae11c993d70832afe077f7034e8ec991b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-994jv" podUID="3bd5a2e2-42ee-4e27-a412-724b2f0527b4" Aug 13 01:33:07.151057 systemd[1]: run-netns-cni\x2dd51e68d0\x2d66fc\x2dfeac\x2d511c\x2db05fc6bf8a5a.mount: Deactivated successfully. 
Aug 13 01:33:07.184271 containerd[1544]: time="2025-08-13T01:33:07.184221765Z" level=error msg="Failed to destroy network for sandbox \"84be3bde82d7c5f4eccb7e4dd5bf72234726b4b94d416b5f1f2b07872c136158\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:07.189888 containerd[1544]: time="2025-08-13T01:33:07.188979740Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mx5v9,Uid:9cb65184-4613-43ed-9fa1-0cf23f1e0e56,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"84be3bde82d7c5f4eccb7e4dd5bf72234726b4b94d416b5f1f2b07872c136158\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:07.189980 kubelet[2739]: E0813 01:33:07.189283 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84be3bde82d7c5f4eccb7e4dd5bf72234726b4b94d416b5f1f2b07872c136158\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:07.189980 kubelet[2739]: E0813 01:33:07.189364 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84be3bde82d7c5f4eccb7e4dd5bf72234726b4b94d416b5f1f2b07872c136158\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mx5v9" Aug 13 01:33:07.189980 kubelet[2739]: E0813 01:33:07.189404 2739 kuberuntime_manager.go:1170] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84be3bde82d7c5f4eccb7e4dd5bf72234726b4b94d416b5f1f2b07872c136158\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mx5v9" Aug 13 01:33:07.190102 kubelet[2739]: E0813 01:33:07.189441 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-mx5v9_kube-system(9cb65184-4613-43ed-9fa1-0cf23f1e0e56)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-mx5v9_kube-system(9cb65184-4613-43ed-9fa1-0cf23f1e0e56)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"84be3bde82d7c5f4eccb7e4dd5bf72234726b4b94d416b5f1f2b07872c136158\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-mx5v9" podUID="9cb65184-4613-43ed-9fa1-0cf23f1e0e56" Aug 13 01:33:07.190299 systemd[1]: run-netns-cni\x2d7c53ac7c\x2dd7e3\x2db5ef\x2d31d9\x2d8a4a88d04c6e.mount: Deactivated successfully. Aug 13 01:33:08.430050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1273000832.mount: Deactivated successfully. 
Aug 13 01:33:08.430798 containerd[1544]: time="2025-08-13T01:33:08.430122314Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1273000832: write /var/lib/containerd/tmpmounts/containerd-mount1273000832/usr/bin/calico-node: no space left on device" Aug 13 01:33:08.431089 containerd[1544]: time="2025-08-13T01:33:08.430808397Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 01:33:08.432006 kubelet[2739]: E0813 01:33:08.431968 2739 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1273000832: write /var/lib/containerd/tmpmounts/containerd-mount1273000832/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 01:33:08.432462 kubelet[2739]: E0813 01:33:08.432020 2739 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1273000832: write /var/lib/containerd/tmpmounts/containerd-mount1273000832/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 01:33:08.432868 kubelet[2739]: E0813 01:33:08.432796 2739 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSGOLDMANESERVER,Value:goldmane.calico-system.svc:7443,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSFLUSHINTERVAL,Value:15,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:first-found,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.9
6.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8gp52,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 
},Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-5c47r_calico-system(7d03562e-9842-4425-9847-632615391bfb): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1273000832: write /var/lib/containerd/tmpmounts/containerd-mount1273000832/usr/bin/calico-node: no space left on device" logger="UnhandledError" Aug 13 01:33:08.434407 kubelet[2739]: E0813 01:33:08.434340 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1273000832: write /var/lib/containerd/tmpmounts/containerd-mount1273000832/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-5c47r" podUID="7d03562e-9842-4425-9847-632615391bfb" Aug 13 01:33:08.932056 kubelet[2739]: E0813 01:33:08.931758 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\"\"" pod="calico-system/calico-node-5c47r" podUID="7d03562e-9842-4425-9847-632615391bfb" Aug 13 01:33:10.968002 kubelet[2739]: I0813 01:33:10.967966 2739 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:33:10.968002 kubelet[2739]: I0813 01:33:10.968007 2739 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:33:10.969894 kubelet[2739]: I0813 01:33:10.969872 2739 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:33:10.978186 kubelet[2739]: I0813 01:33:10.978169 2739 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:33:10.978261 kubelet[2739]: I0813 01:33:10.978242 2739 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" 
pods=["calico-system/goldmane-58fd7646b9-f4lx2","calico-system/whisker-549d8f9755-gqj56","calico-apiserver/calico-apiserver-7b69676bd6-774cg","calico-apiserver/calico-apiserver-7b69676bd6-xrwvn","calico-system/calico-kube-controllers-85fbc76f96-d5vf4","kube-system/coredns-7c65d6cfc9-994jv","kube-system/coredns-7c65d6cfc9-mx5v9","calico-system/calico-node-5c47r","calico-system/csi-node-driver-7bj49","tigera-operator/tigera-operator-5bf8dfcb4-rkgkd","calico-system/calico-typha-79464475b5-bbrtw","kube-system/kube-controller-manager-172-234-27-175","kube-system/kube-proxy-kfjpt","kube-system/kube-apiserver-172-234-27-175","kube-system/kube-scheduler-172-234-27-175"] Aug 13 01:33:10.982332 kubelet[2739]: I0813 01:33:10.982313 2739 eviction_manager.go:616] "Eviction manager: pod is evicted successfully" pod="calico-system/goldmane-58fd7646b9-f4lx2" Aug 13 01:33:10.982332 kubelet[2739]: I0813 01:33:10.982330 2739 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-system/goldmane-58fd7646b9-f4lx2"] Aug 13 01:33:10.999874 kubelet[2739]: I0813 01:33:10.999841 2739 kubelet.go:2306] "Pod admission denied" podUID="f055fa28-f4ff-4c81-98ea-9fb634ae9f54" pod="calico-system/goldmane-58fd7646b9-jv854" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:11.021519 kubelet[2739]: I0813 01:33:11.021476 2739 kubelet.go:2306] "Pod admission denied" podUID="9e3997c8-825a-4aa2-83e0-a9f19b5bd2e3" pod="calico-system/goldmane-58fd7646b9-ltmzb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:11.042200 kubelet[2739]: I0813 01:33:11.042170 2739 kubelet.go:2306] "Pod admission denied" podUID="803a50f5-c4be-443e-aa8e-fb9f891ba91f" pod="calico-system/goldmane-58fd7646b9-btzvt" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:33:11.059715 kubelet[2739]: I0813 01:33:11.059681 2739 kubelet.go:2306] "Pod admission denied" podUID="115c57b6-eb2a-4a61-8756-a0605ca47183" pod="calico-system/goldmane-58fd7646b9-blpgf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:11.097886 kubelet[2739]: I0813 01:33:11.097822 2739 kubelet.go:2306] "Pod admission denied" podUID="05a1a694-1f49-40cd-bfd8-466e121b5173" pod="calico-system/goldmane-58fd7646b9-pxscq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:11.119329 kubelet[2739]: I0813 01:33:11.118954 2739 kubelet.go:2306] "Pod admission denied" podUID="13b29759-4bb3-4c2a-94c7-7d51313912bd" pod="calico-system/goldmane-58fd7646b9-lz9wc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:11.121297 kubelet[2739]: I0813 01:33:11.119583 2739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef690b37-51cc-4d0e-a97b-36e63996a0ea-config\") pod \"ef690b37-51cc-4d0e-a97b-36e63996a0ea\" (UID: \"ef690b37-51cc-4d0e-a97b-36e63996a0ea\") " Aug 13 01:33:11.121297 kubelet[2739]: I0813 01:33:11.120256 2739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnhll\" (UniqueName: \"kubernetes.io/projected/ef690b37-51cc-4d0e-a97b-36e63996a0ea-kube-api-access-dnhll\") pod \"ef690b37-51cc-4d0e-a97b-36e63996a0ea\" (UID: \"ef690b37-51cc-4d0e-a97b-36e63996a0ea\") " Aug 13 01:33:11.121297 kubelet[2739]: I0813 01:33:11.120289 2739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ef690b37-51cc-4d0e-a97b-36e63996a0ea-goldmane-key-pair\") pod \"ef690b37-51cc-4d0e-a97b-36e63996a0ea\" (UID: \"ef690b37-51cc-4d0e-a97b-36e63996a0ea\") " Aug 13 01:33:11.121297 kubelet[2739]: I0813 01:33:11.120310 2739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef690b37-51cc-4d0e-a97b-36e63996a0ea-goldmane-ca-bundle\") pod \"ef690b37-51cc-4d0e-a97b-36e63996a0ea\" (UID: \"ef690b37-51cc-4d0e-a97b-36e63996a0ea\") " Aug 13 01:33:11.121297 kubelet[2739]: I0813 01:33:11.119937 2739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef690b37-51cc-4d0e-a97b-36e63996a0ea-config" (OuterVolumeSpecName: "config") pod "ef690b37-51cc-4d0e-a97b-36e63996a0ea" (UID: "ef690b37-51cc-4d0e-a97b-36e63996a0ea"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 01:33:11.121297 kubelet[2739]: I0813 01:33:11.120787 2739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ef690b37-51cc-4d0e-a97b-36e63996a0ea-goldmane-ca-bundle" (OuterVolumeSpecName: "goldmane-ca-bundle") pod "ef690b37-51cc-4d0e-a97b-36e63996a0ea" (UID: "ef690b37-51cc-4d0e-a97b-36e63996a0ea"). InnerVolumeSpecName "goldmane-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 01:33:11.124988 kubelet[2739]: I0813 01:33:11.124966 2739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ef690b37-51cc-4d0e-a97b-36e63996a0ea-kube-api-access-dnhll" (OuterVolumeSpecName: "kube-api-access-dnhll") pod "ef690b37-51cc-4d0e-a97b-36e63996a0ea" (UID: "ef690b37-51cc-4d0e-a97b-36e63996a0ea"). InnerVolumeSpecName "kube-api-access-dnhll". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 01:33:11.127096 systemd[1]: var-lib-kubelet-pods-ef690b37\x2d51cc\x2d4d0e\x2da97b\x2d36e63996a0ea-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2ddnhll.mount: Deactivated successfully. 
Aug 13 01:33:11.130329 kubelet[2739]: I0813 01:33:11.130209 2739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ef690b37-51cc-4d0e-a97b-36e63996a0ea-goldmane-key-pair" (OuterVolumeSpecName: "goldmane-key-pair") pod "ef690b37-51cc-4d0e-a97b-36e63996a0ea" (UID: "ef690b37-51cc-4d0e-a97b-36e63996a0ea"). InnerVolumeSpecName "goldmane-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 01:33:11.132439 systemd[1]: var-lib-kubelet-pods-ef690b37\x2d51cc\x2d4d0e\x2da97b\x2d36e63996a0ea-volumes-kubernetes.io\x7esecret-goldmane\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 01:33:11.154949 kubelet[2739]: I0813 01:33:11.154402 2739 kubelet.go:2306] "Pod admission denied" podUID="c266d73c-9101-4f68-b47f-49d96eeacb65" pod="calico-system/goldmane-58fd7646b9-rb7xs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:11.191165 kubelet[2739]: I0813 01:33:11.191039 2739 kubelet.go:2306] "Pod admission denied" podUID="98a06395-c960-4a14-a946-12cd03d90a87" pod="calico-system/goldmane-58fd7646b9-22bjt" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:33:11.221499 kubelet[2739]: I0813 01:33:11.221345 2739 reconciler_common.go:293] "Volume detached for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/ef690b37-51cc-4d0e-a97b-36e63996a0ea-goldmane-ca-bundle\") on node \"172-234-27-175\" DevicePath \"\"" Aug 13 01:33:11.221499 kubelet[2739]: I0813 01:33:11.221395 2739 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ef690b37-51cc-4d0e-a97b-36e63996a0ea-config\") on node \"172-234-27-175\" DevicePath \"\"" Aug 13 01:33:11.221499 kubelet[2739]: I0813 01:33:11.221405 2739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnhll\" (UniqueName: \"kubernetes.io/projected/ef690b37-51cc-4d0e-a97b-36e63996a0ea-kube-api-access-dnhll\") on node \"172-234-27-175\" DevicePath \"\"" Aug 13 01:33:11.221499 kubelet[2739]: I0813 01:33:11.221413 2739 reconciler_common.go:293] "Volume detached for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/ef690b37-51cc-4d0e-a97b-36e63996a0ea-goldmane-key-pair\") on node \"172-234-27-175\" DevicePath \"\"" Aug 13 01:33:11.228916 kubelet[2739]: I0813 01:33:11.228691 2739 kubelet.go:2306] "Pod admission denied" podUID="18c03e19-697a-4598-8828-dbc56099b3dd" pod="calico-system/goldmane-58fd7646b9-k6bb2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:11.349996 kubelet[2739]: I0813 01:33:11.349955 2739 kubelet.go:2306] "Pod admission denied" podUID="9d822ed2-dff4-4c35-bd79-fb911dae3ba3" pod="calico-system/goldmane-58fd7646b9-f28fh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:11.941275 systemd[1]: Removed slice kubepods-besteffort-podef690b37_51cc_4d0e_a97b_36e63996a0ea.slice - libcontainer container kubepods-besteffort-podef690b37_51cc_4d0e_a97b_36e63996a0ea.slice. 
Aug 13 01:33:11.983066 kubelet[2739]: I0813 01:33:11.983020 2739 eviction_manager.go:447] "Eviction manager: pods successfully cleaned up" pods=["calico-system/goldmane-58fd7646b9-f4lx2"] Aug 13 01:33:11.993054 kubelet[2739]: I0813 01:33:11.993033 2739 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:33:11.993054 kubelet[2739]: I0813 01:33:11.993060 2739 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:33:11.995230 kubelet[2739]: I0813 01:33:11.995115 2739 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:33:12.005906 kubelet[2739]: I0813 01:33:12.005871 2739 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:33:12.005959 kubelet[2739]: I0813 01:33:12.005940 2739 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-apiserver/calico-apiserver-7b69676bd6-xrwvn","calico-apiserver/calico-apiserver-7b69676bd6-774cg","calico-system/whisker-549d8f9755-gqj56","kube-system/coredns-7c65d6cfc9-994jv","kube-system/coredns-7c65d6cfc9-mx5v9","calico-system/calico-kube-controllers-85fbc76f96-d5vf4","calico-system/csi-node-driver-7bj49","calico-system/calico-node-5c47r","tigera-operator/tigera-operator-5bf8dfcb4-rkgkd","calico-system/calico-typha-79464475b5-bbrtw","kube-system/kube-controller-manager-172-234-27-175","kube-system/kube-proxy-kfjpt","kube-system/kube-apiserver-172-234-27-175","kube-system/kube-scheduler-172-234-27-175"] Aug 13 01:33:12.011956 kubelet[2739]: I0813 01:33:12.011892 2739 eviction_manager.go:616] "Eviction manager: pod is evicted successfully" pod="calico-apiserver/calico-apiserver-7b69676bd6-xrwvn" Aug 13 01:33:12.011956 kubelet[2739]: I0813 01:33:12.011910 2739 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-apiserver/calico-apiserver-7b69676bd6-xrwvn"] Aug 13 01:33:12.127089 kubelet[2739]: I0813 
01:33:12.126711 2739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/801dfb54-f1cf-4e8c-867a-f37238d094a8-calico-apiserver-certs\") pod \"801dfb54-f1cf-4e8c-867a-f37238d094a8\" (UID: \"801dfb54-f1cf-4e8c-867a-f37238d094a8\") " Aug 13 01:33:12.127089 kubelet[2739]: I0813 01:33:12.126748 2739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d94st\" (UniqueName: \"kubernetes.io/projected/801dfb54-f1cf-4e8c-867a-f37238d094a8-kube-api-access-d94st\") pod \"801dfb54-f1cf-4e8c-867a-f37238d094a8\" (UID: \"801dfb54-f1cf-4e8c-867a-f37238d094a8\") " Aug 13 01:33:12.129788 kubelet[2739]: I0813 01:33:12.129765 2739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/801dfb54-f1cf-4e8c-867a-f37238d094a8-kube-api-access-d94st" (OuterVolumeSpecName: "kube-api-access-d94st") pod "801dfb54-f1cf-4e8c-867a-f37238d094a8" (UID: "801dfb54-f1cf-4e8c-867a-f37238d094a8"). InnerVolumeSpecName "kube-api-access-d94st". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 01:33:12.132310 kubelet[2739]: I0813 01:33:12.132291 2739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/801dfb54-f1cf-4e8c-867a-f37238d094a8-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "801dfb54-f1cf-4e8c-867a-f37238d094a8" (UID: "801dfb54-f1cf-4e8c-867a-f37238d094a8"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 01:33:12.133002 systemd[1]: var-lib-kubelet-pods-801dfb54\x2df1cf\x2d4e8c\x2d867a\x2df37238d094a8-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dd94st.mount: Deactivated successfully. 
Aug 13 01:33:12.133398 systemd[1]: var-lib-kubelet-pods-801dfb54\x2df1cf\x2d4e8c\x2d867a\x2df37238d094a8-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Aug 13 01:33:12.227245 kubelet[2739]: I0813 01:33:12.227215 2739 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/801dfb54-f1cf-4e8c-867a-f37238d094a8-calico-apiserver-certs\") on node \"172-234-27-175\" DevicePath \"\"" Aug 13 01:33:12.227245 kubelet[2739]: I0813 01:33:12.227237 2739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d94st\" (UniqueName: \"kubernetes.io/projected/801dfb54-f1cf-4e8c-867a-f37238d094a8-kube-api-access-d94st\") on node \"172-234-27-175\" DevicePath \"\"" Aug 13 01:33:12.805450 systemd[1]: Removed slice kubepods-besteffort-pod801dfb54_f1cf_4e8c_867a_f37238d094a8.slice - libcontainer container kubepods-besteffort-pod801dfb54_f1cf_4e8c_867a_f37238d094a8.slice. Aug 13 01:33:13.012714 kubelet[2739]: I0813 01:33:13.012667 2739 eviction_manager.go:447] "Eviction manager: pods successfully cleaned up" pods=["calico-apiserver/calico-apiserver-7b69676bd6-xrwvn"] Aug 13 01:33:15.687476 kubelet[2739]: I0813 01:33:15.687201 2739 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 01:33:15.687865 kubelet[2739]: E0813 01:33:15.687546 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 01:33:15.943760 kubelet[2739]: E0813 01:33:15.943633 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 01:33:17.797936 containerd[1544]: time="2025-08-13T01:33:17.797892028Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-549d8f9755-gqj56,Uid:cc1cb31e-7164-4ad5-8b25-a666dc952f89,Namespace:calico-system,Attempt:0,}" Aug 13 01:33:17.798339 containerd[1544]: time="2025-08-13T01:33:17.797892198Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b69676bd6-774cg,Uid:72291e38-fab0-4133-a546-c5eb46eabd9b,Namespace:calico-apiserver,Attempt:0,}" Aug 13 01:33:17.857805 containerd[1544]: time="2025-08-13T01:33:17.857753070Z" level=error msg="Failed to destroy network for sandbox \"7d4a0f2a98903974924bee34ddb032916adfa7818b6fbee0f14c0911e39edf43\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:17.860340 containerd[1544]: time="2025-08-13T01:33:17.860243299Z" level=error msg="Failed to destroy network for sandbox \"796427131d76c460a09c7fcc77da4096880d1a1bb1b8cd074e70a22ac24995bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:17.861359 containerd[1544]: time="2025-08-13T01:33:17.861330424Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-549d8f9755-gqj56,Uid:cc1cb31e-7164-4ad5-8b25-a666dc952f89,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"796427131d76c460a09c7fcc77da4096880d1a1bb1b8cd074e70a22ac24995bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:17.862222 systemd[1]: run-netns-cni\x2df5976eb7\x2df37b\x2d6894\x2d1a33\x2d5c38ebf33840.mount: Deactivated successfully. 
Aug 13 01:33:17.862457 kubelet[2739]: E0813 01:33:17.862303 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"796427131d76c460a09c7fcc77da4096880d1a1bb1b8cd074e70a22ac24995bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:17.862457 kubelet[2739]: E0813 01:33:17.862354 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"796427131d76c460a09c7fcc77da4096880d1a1bb1b8cd074e70a22ac24995bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-549d8f9755-gqj56" Aug 13 01:33:17.862457 kubelet[2739]: E0813 01:33:17.862373 2739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"796427131d76c460a09c7fcc77da4096880d1a1bb1b8cd074e70a22ac24995bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-549d8f9755-gqj56" Aug 13 01:33:17.862457 kubelet[2739]: E0813 01:33:17.862408 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-549d8f9755-gqj56_calico-system(cc1cb31e-7164-4ad5-8b25-a666dc952f89)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-549d8f9755-gqj56_calico-system(cc1cb31e-7164-4ad5-8b25-a666dc952f89)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"796427131d76c460a09c7fcc77da4096880d1a1bb1b8cd074e70a22ac24995bf\\\": plugin type=\\\"calico\\\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-549d8f9755-gqj56" podUID="cc1cb31e-7164-4ad5-8b25-a666dc952f89" Aug 13 01:33:17.864213 containerd[1544]: time="2025-08-13T01:33:17.863186886Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-7b69676bd6-774cg,Uid:72291e38-fab0-4133-a546-c5eb46eabd9b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d4a0f2a98903974924bee34ddb032916adfa7818b6fbee0f14c0911e39edf43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:17.864275 kubelet[2739]: E0813 01:33:17.863356 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d4a0f2a98903974924bee34ddb032916adfa7818b6fbee0f14c0911e39edf43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:17.864275 kubelet[2739]: E0813 01:33:17.863383 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d4a0f2a98903974924bee34ddb032916adfa7818b6fbee0f14c0911e39edf43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b69676bd6-774cg" Aug 13 01:33:17.864275 kubelet[2739]: E0813 01:33:17.863399 2739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"7d4a0f2a98903974924bee34ddb032916adfa7818b6fbee0f14c0911e39edf43\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-7b69676bd6-774cg" Aug 13 01:33:17.864275 kubelet[2739]: E0813 01:33:17.863424 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-7b69676bd6-774cg_calico-apiserver(72291e38-fab0-4133-a546-c5eb46eabd9b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-7b69676bd6-774cg_calico-apiserver(72291e38-fab0-4133-a546-c5eb46eabd9b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d4a0f2a98903974924bee34ddb032916adfa7818b6fbee0f14c0911e39edf43\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-7b69676bd6-774cg" podUID="72291e38-fab0-4133-a546-c5eb46eabd9b" Aug 13 01:33:17.866607 systemd[1]: run-netns-cni\x2d8ebd563d\x2da04e\x2dd1e0\x2db2f5\x2d24ccbbde5d68.mount: Deactivated successfully. 
Aug 13 01:33:18.798975 containerd[1544]: time="2025-08-13T01:33:18.798938238Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85fbc76f96-d5vf4,Uid:56caad57-6b4a-4069-b011-1059db183012,Namespace:calico-system,Attempt:0,}" Aug 13 01:33:18.799458 containerd[1544]: time="2025-08-13T01:33:18.799303558Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7bj49,Uid:4e5845c9-626c-4c83-900a-0da0bae2daed,Namespace:calico-system,Attempt:0,}" Aug 13 01:33:18.859173 containerd[1544]: time="2025-08-13T01:33:18.857901005Z" level=error msg="Failed to destroy network for sandbox \"a123d7942227f915bdc0cad3882137f72cdae04d36caec399f4fde76f3be6bf6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:18.860332 containerd[1544]: time="2025-08-13T01:33:18.860239905Z" level=error msg="Failed to destroy network for sandbox \"82167a3ec251d127649d39651cb861717d5143d1cd0d8132a5dca27323c308ec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:18.861068 systemd[1]: run-netns-cni\x2d7aa42100\x2d58e1\x2d2a40\x2dddc8\x2d28f90ff94aaa.mount: Deactivated successfully. 
Aug 13 01:33:18.864434 containerd[1544]: time="2025-08-13T01:33:18.864122598Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85fbc76f96-d5vf4,Uid:56caad57-6b4a-4069-b011-1059db183012,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a123d7942227f915bdc0cad3882137f72cdae04d36caec399f4fde76f3be6bf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:18.865422 containerd[1544]: time="2025-08-13T01:33:18.865257913Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7bj49,Uid:4e5845c9-626c-4c83-900a-0da0bae2daed,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"82167a3ec251d127649d39651cb861717d5143d1cd0d8132a5dca27323c308ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:18.865479 kubelet[2739]: E0813 01:33:18.865245 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a123d7942227f915bdc0cad3882137f72cdae04d36caec399f4fde76f3be6bf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:18.865479 kubelet[2739]: E0813 01:33:18.865307 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a123d7942227f915bdc0cad3882137f72cdae04d36caec399f4fde76f3be6bf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" Aug 13 01:33:18.865479 kubelet[2739]: E0813 01:33:18.865325 2739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a123d7942227f915bdc0cad3882137f72cdae04d36caec399f4fde76f3be6bf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" Aug 13 01:33:18.865479 kubelet[2739]: E0813 01:33:18.865363 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-85fbc76f96-d5vf4_calico-system(56caad57-6b4a-4069-b011-1059db183012)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85fbc76f96-d5vf4_calico-system(56caad57-6b4a-4069-b011-1059db183012)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a123d7942227f915bdc0cad3882137f72cdae04d36caec399f4fde76f3be6bf6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" podUID="56caad57-6b4a-4069-b011-1059db183012" Aug 13 01:33:18.865826 kubelet[2739]: E0813 01:33:18.865577 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82167a3ec251d127649d39651cb861717d5143d1cd0d8132a5dca27323c308ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:18.865826 kubelet[2739]: E0813 01:33:18.865596 2739 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82167a3ec251d127649d39651cb861717d5143d1cd0d8132a5dca27323c308ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7bj49" Aug 13 01:33:18.865826 kubelet[2739]: E0813 01:33:18.865608 2739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82167a3ec251d127649d39651cb861717d5143d1cd0d8132a5dca27323c308ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7bj49" Aug 13 01:33:18.865826 kubelet[2739]: E0813 01:33:18.865627 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7bj49_calico-system(4e5845c9-626c-4c83-900a-0da0bae2daed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7bj49_calico-system(4e5845c9-626c-4c83-900a-0da0bae2daed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"82167a3ec251d127649d39651cb861717d5143d1cd0d8132a5dca27323c308ec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7bj49" podUID="4e5845c9-626c-4c83-900a-0da0bae2daed" Aug 13 01:33:18.867114 systemd[1]: run-netns-cni\x2df32e61c5\x2d066e\x2d4636\x2d790a\x2d46d398cc2921.mount: Deactivated successfully. 
Aug 13 01:33:19.797284 kubelet[2739]: E0813 01:33:19.797252 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 01:33:19.797893 containerd[1544]: time="2025-08-13T01:33:19.797833635Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-994jv,Uid:3bd5a2e2-42ee-4e27-a412-724b2f0527b4,Namespace:kube-system,Attempt:0,}" Aug 13 01:33:19.847890 containerd[1544]: time="2025-08-13T01:33:19.845646802Z" level=error msg="Failed to destroy network for sandbox \"faa6b4c5a622851476c48809d6c576e3ccc2bbd3b4e4803cec474e3e010d85aa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:19.847612 systemd[1]: run-netns-cni\x2dea462aea\x2def24\x2d8390\x2d6dab\x2d31b3019dd61f.mount: Deactivated successfully. 
Aug 13 01:33:19.850023 containerd[1544]: time="2025-08-13T01:33:19.849995945Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-994jv,Uid:3bd5a2e2-42ee-4e27-a412-724b2f0527b4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"faa6b4c5a622851476c48809d6c576e3ccc2bbd3b4e4803cec474e3e010d85aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:19.850252 kubelet[2739]: E0813 01:33:19.850214 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"faa6b4c5a622851476c48809d6c576e3ccc2bbd3b4e4803cec474e3e010d85aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:19.850252 kubelet[2739]: E0813 01:33:19.850273 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"faa6b4c5a622851476c48809d6c576e3ccc2bbd3b4e4803cec474e3e010d85aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-994jv" Aug 13 01:33:19.850392 kubelet[2739]: E0813 01:33:19.850294 2739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"faa6b4c5a622851476c48809d6c576e3ccc2bbd3b4e4803cec474e3e010d85aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7c65d6cfc9-994jv" Aug 13 01:33:19.850392 kubelet[2739]: E0813 01:33:19.850329 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-994jv_kube-system(3bd5a2e2-42ee-4e27-a412-724b2f0527b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-994jv_kube-system(3bd5a2e2-42ee-4e27-a412-724b2f0527b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"faa6b4c5a622851476c48809d6c576e3ccc2bbd3b4e4803cec474e3e010d85aa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-994jv" podUID="3bd5a2e2-42ee-4e27-a412-724b2f0527b4" Aug 13 01:33:20.799031 kubelet[2739]: E0813 01:33:20.798120 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 01:33:20.799441 containerd[1544]: time="2025-08-13T01:33:20.799329634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mx5v9,Uid:9cb65184-4613-43ed-9fa1-0cf23f1e0e56,Namespace:kube-system,Attempt:0,}" Aug 13 01:33:20.845857 containerd[1544]: time="2025-08-13T01:33:20.845793480Z" level=error msg="Failed to destroy network for sandbox \"20f68286bf10a7c08a9c517d69ef9f8552cd4e483e60bb08db20c3c106eb6e0a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:20.847830 systemd[1]: run-netns-cni\x2d1aa48bd9\x2d9f8e\x2dc1fe\x2d9dd4\x2d74ec3b2ed643.mount: Deactivated successfully. 
Aug 13 01:33:20.849265 containerd[1544]: time="2025-08-13T01:33:20.849218448Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mx5v9,Uid:9cb65184-4613-43ed-9fa1-0cf23f1e0e56,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"20f68286bf10a7c08a9c517d69ef9f8552cd4e483e60bb08db20c3c106eb6e0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:20.849740 kubelet[2739]: E0813 01:33:20.849447 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20f68286bf10a7c08a9c517d69ef9f8552cd4e483e60bb08db20c3c106eb6e0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:20.849740 kubelet[2739]: E0813 01:33:20.849500 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20f68286bf10a7c08a9c517d69ef9f8552cd4e483e60bb08db20c3c106eb6e0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mx5v9" Aug 13 01:33:20.849740 kubelet[2739]: E0813 01:33:20.849520 2739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20f68286bf10a7c08a9c517d69ef9f8552cd4e483e60bb08db20c3c106eb6e0a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7c65d6cfc9-mx5v9" Aug 13 01:33:20.849740 kubelet[2739]: E0813 01:33:20.849565 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-mx5v9_kube-system(9cb65184-4613-43ed-9fa1-0cf23f1e0e56)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-mx5v9_kube-system(9cb65184-4613-43ed-9fa1-0cf23f1e0e56)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"20f68286bf10a7c08a9c517d69ef9f8552cd4e483e60bb08db20c3c106eb6e0a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-mx5v9" podUID="9cb65184-4613-43ed-9fa1-0cf23f1e0e56" Aug 13 01:33:22.800460 containerd[1544]: time="2025-08-13T01:33:22.800389229Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 01:33:23.042268 kubelet[2739]: I0813 01:33:23.042212 2739 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:33:23.042268 kubelet[2739]: I0813 01:33:23.042266 2739 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:33:23.045554 kubelet[2739]: I0813 01:33:23.045471 2739 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:33:23.061324 kubelet[2739]: I0813 01:33:23.059185 2739 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:33:23.061479 kubelet[2739]: I0813 01:33:23.061332 2739 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" 
pods=["calico-apiserver/calico-apiserver-7b69676bd6-774cg","calico-system/whisker-549d8f9755-gqj56","kube-system/coredns-7c65d6cfc9-994jv","kube-system/coredns-7c65d6cfc9-mx5v9","calico-system/calico-kube-controllers-85fbc76f96-d5vf4","calico-system/calico-node-5c47r","calico-system/csi-node-driver-7bj49","tigera-operator/tigera-operator-5bf8dfcb4-rkgkd","calico-system/calico-typha-79464475b5-bbrtw","kube-system/kube-controller-manager-172-234-27-175","kube-system/kube-proxy-kfjpt","kube-system/kube-apiserver-172-234-27-175","kube-system/kube-scheduler-172-234-27-175"] Aug 13 01:33:23.067151 kubelet[2739]: I0813 01:33:23.067094 2739 eviction_manager.go:616] "Eviction manager: pod is evicted successfully" pod="calico-apiserver/calico-apiserver-7b69676bd6-774cg" Aug 13 01:33:23.068000 kubelet[2739]: I0813 01:33:23.067339 2739 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-apiserver/calico-apiserver-7b69676bd6-774cg"] Aug 13 01:33:23.193449 kubelet[2739]: I0813 01:33:23.192894 2739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/72291e38-fab0-4133-a546-c5eb46eabd9b-calico-apiserver-certs\") pod \"72291e38-fab0-4133-a546-c5eb46eabd9b\" (UID: \"72291e38-fab0-4133-a546-c5eb46eabd9b\") " Aug 13 01:33:23.193449 kubelet[2739]: I0813 01:33:23.192946 2739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbddt\" (UniqueName: \"kubernetes.io/projected/72291e38-fab0-4133-a546-c5eb46eabd9b-kube-api-access-wbddt\") pod \"72291e38-fab0-4133-a546-c5eb46eabd9b\" (UID: \"72291e38-fab0-4133-a546-c5eb46eabd9b\") " Aug 13 01:33:23.201170 kubelet[2739]: I0813 01:33:23.199094 2739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/72291e38-fab0-4133-a546-c5eb46eabd9b-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod 
"72291e38-fab0-4133-a546-c5eb46eabd9b" (UID: "72291e38-fab0-4133-a546-c5eb46eabd9b"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 01:33:23.199866 systemd[1]: var-lib-kubelet-pods-72291e38\x2dfab0\x2d4133\x2da546\x2dc5eb46eabd9b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwbddt.mount: Deactivated successfully. Aug 13 01:33:23.200020 systemd[1]: var-lib-kubelet-pods-72291e38\x2dfab0\x2d4133\x2da546\x2dc5eb46eabd9b-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Aug 13 01:33:23.203296 kubelet[2739]: I0813 01:33:23.203262 2739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/72291e38-fab0-4133-a546-c5eb46eabd9b-kube-api-access-wbddt" (OuterVolumeSpecName: "kube-api-access-wbddt") pod "72291e38-fab0-4133-a546-c5eb46eabd9b" (UID: "72291e38-fab0-4133-a546-c5eb46eabd9b"). InnerVolumeSpecName "kube-api-access-wbddt". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 01:33:23.293343 kubelet[2739]: I0813 01:33:23.293303 2739 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/72291e38-fab0-4133-a546-c5eb46eabd9b-calico-apiserver-certs\") on node \"172-234-27-175\" DevicePath \"\"" Aug 13 01:33:23.293343 kubelet[2739]: I0813 01:33:23.293332 2739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wbddt\" (UniqueName: \"kubernetes.io/projected/72291e38-fab0-4133-a546-c5eb46eabd9b-kube-api-access-wbddt\") on node \"172-234-27-175\" DevicePath \"\"" Aug 13 01:33:23.963772 systemd[1]: Removed slice kubepods-besteffort-pod72291e38_fab0_4133_a546_c5eb46eabd9b.slice - libcontainer container kubepods-besteffort-pod72291e38_fab0_4133_a546_c5eb46eabd9b.slice. 
Aug 13 01:33:24.068243 kubelet[2739]: I0813 01:33:24.068196 2739 eviction_manager.go:447] "Eviction manager: pods successfully cleaned up" pods=["calico-apiserver/calico-apiserver-7b69676bd6-774cg"] Aug 13 01:33:24.091081 kubelet[2739]: I0813 01:33:24.089648 2739 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:33:24.091081 kubelet[2739]: I0813 01:33:24.089691 2739 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:33:24.095403 kubelet[2739]: I0813 01:33:24.095257 2739 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:33:24.116483 kubelet[2739]: I0813 01:33:24.116454 2739 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:33:24.116776 kubelet[2739]: I0813 01:33:24.116748 2739 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/whisker-549d8f9755-gqj56","calico-system/calico-kube-controllers-85fbc76f96-d5vf4","kube-system/coredns-7c65d6cfc9-994jv","kube-system/coredns-7c65d6cfc9-mx5v9","calico-system/calico-node-5c47r","calico-system/csi-node-driver-7bj49","tigera-operator/tigera-operator-5bf8dfcb4-rkgkd","calico-system/calico-typha-79464475b5-bbrtw","kube-system/kube-controller-manager-172-234-27-175","kube-system/kube-proxy-kfjpt","kube-system/kube-apiserver-172-234-27-175","kube-system/kube-scheduler-172-234-27-175"] Aug 13 01:33:24.123505 kubelet[2739]: I0813 01:33:24.123464 2739 eviction_manager.go:616] "Eviction manager: pod is evicted successfully" pod="calico-system/whisker-549d8f9755-gqj56" Aug 13 01:33:24.123734 kubelet[2739]: I0813 01:33:24.123499 2739 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-system/whisker-549d8f9755-gqj56"] Aug 13 01:33:24.300439 kubelet[2739]: I0813 01:33:24.300029 2739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/cc1cb31e-7164-4ad5-8b25-a666dc952f89-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "cc1cb31e-7164-4ad5-8b25-a666dc952f89" (UID: "cc1cb31e-7164-4ad5-8b25-a666dc952f89"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 01:33:24.301280 kubelet[2739]: I0813 01:33:24.299326 2739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc1cb31e-7164-4ad5-8b25-a666dc952f89-whisker-ca-bundle\") pod \"cc1cb31e-7164-4ad5-8b25-a666dc952f89\" (UID: \"cc1cb31e-7164-4ad5-8b25-a666dc952f89\") " Aug 13 01:33:24.301280 kubelet[2739]: I0813 01:33:24.300820 2739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hpgd5\" (UniqueName: \"kubernetes.io/projected/cc1cb31e-7164-4ad5-8b25-a666dc952f89-kube-api-access-hpgd5\") pod \"cc1cb31e-7164-4ad5-8b25-a666dc952f89\" (UID: \"cc1cb31e-7164-4ad5-8b25-a666dc952f89\") " Aug 13 01:33:24.301280 kubelet[2739]: I0813 01:33:24.300857 2739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cc1cb31e-7164-4ad5-8b25-a666dc952f89-whisker-backend-key-pair\") pod \"cc1cb31e-7164-4ad5-8b25-a666dc952f89\" (UID: \"cc1cb31e-7164-4ad5-8b25-a666dc952f89\") " Aug 13 01:33:24.301280 kubelet[2739]: I0813 01:33:24.300960 2739 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cc1cb31e-7164-4ad5-8b25-a666dc952f89-whisker-ca-bundle\") on node \"172-234-27-175\" DevicePath \"\"" Aug 13 01:33:24.307747 systemd[1]: var-lib-kubelet-pods-cc1cb31e\x2d7164\x2d4ad5\x2d8b25\x2da666dc952f89-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhpgd5.mount: Deactivated successfully. 
Aug 13 01:33:24.309321 kubelet[2739]: I0813 01:33:24.309266 2739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc1cb31e-7164-4ad5-8b25-a666dc952f89-kube-api-access-hpgd5" (OuterVolumeSpecName: "kube-api-access-hpgd5") pod "cc1cb31e-7164-4ad5-8b25-a666dc952f89" (UID: "cc1cb31e-7164-4ad5-8b25-a666dc952f89"). InnerVolumeSpecName "kube-api-access-hpgd5". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 01:33:24.312071 systemd[1]: var-lib-kubelet-pods-cc1cb31e\x2d7164\x2d4ad5\x2d8b25\x2da666dc952f89-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 01:33:24.312489 kubelet[2739]: I0813 01:33:24.312070 2739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cc1cb31e-7164-4ad5-8b25-a666dc952f89-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "cc1cb31e-7164-4ad5-8b25-a666dc952f89" (UID: "cc1cb31e-7164-4ad5-8b25-a666dc952f89"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 01:33:24.401306 kubelet[2739]: I0813 01:33:24.401253 2739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hpgd5\" (UniqueName: \"kubernetes.io/projected/cc1cb31e-7164-4ad5-8b25-a666dc952f89-kube-api-access-hpgd5\") on node \"172-234-27-175\" DevicePath \"\"" Aug 13 01:33:24.401306 kubelet[2739]: I0813 01:33:24.401300 2739 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/cc1cb31e-7164-4ad5-8b25-a666dc952f89-whisker-backend-key-pair\") on node \"172-234-27-175\" DevicePath \"\"" Aug 13 01:33:24.809747 systemd[1]: Removed slice kubepods-besteffort-podcc1cb31e_7164_4ad5_8b25_a666dc952f89.slice - libcontainer container kubepods-besteffort-podcc1cb31e_7164_4ad5_8b25_a666dc952f89.slice. 
Aug 13 01:33:25.124359 kubelet[2739]: I0813 01:33:25.124198 2739 eviction_manager.go:447] "Eviction manager: pods successfully cleaned up" pods=["calico-system/whisker-549d8f9755-gqj56"] Aug 13 01:33:25.145898 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2862764648.mount: Deactivated successfully. Aug 13 01:33:25.152264 containerd[1544]: time="2025-08-13T01:33:25.152227838Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2862764648: write /var/lib/containerd/tmpmounts/containerd-mount2862764648/usr/bin/calico-node: no space left on device" Aug 13 01:33:25.152727 containerd[1544]: time="2025-08-13T01:33:25.152624476Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 01:33:25.153774 kubelet[2739]: I0813 01:33:25.153740 2739 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:33:25.153774 kubelet[2739]: I0813 01:33:25.153773 2739 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:33:25.154689 kubelet[2739]: E0813 01:33:25.154646 2739 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2862764648: write /var/lib/containerd/tmpmounts/containerd-mount2862764648/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 01:33:25.154756 kubelet[2739]: E0813 01:33:25.154687 2739 
kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2862764648: write /var/lib/containerd/tmpmounts/containerd-mount2862764648/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 01:33:25.155212 kubelet[2739]: E0813 01:33:25.154857 2739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_FLOW
LOGSGOLDMANESERVER,Value:goldmane.calico-system.svc:7443,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSFLUSHINTERVAL,Value:15,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:first-found,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/ca
lico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8gp52,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-5c47r_calico-system(7d03562e-9842-4425-9847-632615391bfb): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer 
sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2862764648: write /var/lib/containerd/tmpmounts/containerd-mount2862764648/usr/bin/calico-node: no space left on device" logger="UnhandledError" Aug 13 01:33:25.156287 kubelet[2739]: E0813 01:33:25.156250 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2862764648: write /var/lib/containerd/tmpmounts/containerd-mount2862764648/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-5c47r" podUID="7d03562e-9842-4425-9847-632615391bfb" Aug 13 01:33:25.159101 kubelet[2739]: I0813 01:33:25.159070 2739 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:33:25.179183 kubelet[2739]: I0813 01:33:25.178423 2739 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:33:25.179183 kubelet[2739]: I0813 01:33:25.178652 2739 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7c65d6cfc9-994jv","kube-system/coredns-7c65d6cfc9-mx5v9","calico-system/calico-kube-controllers-85fbc76f96-d5vf4","calico-system/calico-node-5c47r","calico-system/csi-node-driver-7bj49","tigera-operator/tigera-operator-5bf8dfcb4-rkgkd","calico-system/calico-typha-79464475b5-bbrtw","kube-system/kube-controller-manager-172-234-27-175","kube-system/kube-proxy-kfjpt","kube-system/kube-apiserver-172-234-27-175","kube-system/kube-scheduler-172-234-27-175"] Aug 13 01:33:25.179183 kubelet[2739]: E0813 01:33:25.178681 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" 
pod="kube-system/coredns-7c65d6cfc9-994jv" Aug 13 01:33:25.179183 kubelet[2739]: E0813 01:33:25.178710 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-mx5v9" Aug 13 01:33:25.179183 kubelet[2739]: E0813 01:33:25.178718 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" Aug 13 01:33:25.179183 kubelet[2739]: E0813 01:33:25.178726 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-5c47r" Aug 13 01:33:25.179183 kubelet[2739]: E0813 01:33:25.178732 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-7bj49" Aug 13 01:33:25.179591 containerd[1544]: time="2025-08-13T01:33:25.179571158Z" level=info msg="StopContainer for \"df8d1a77736586c1f4aa872648e945c5f97799bd041785ac26d4703233746dff\" with timeout 2 (s)" Aug 13 01:33:25.179963 containerd[1544]: time="2025-08-13T01:33:25.179947007Z" level=info msg="Stop container \"df8d1a77736586c1f4aa872648e945c5f97799bd041785ac26d4703233746dff\" with signal terminated" Aug 13 01:33:25.191269 containerd[1544]: time="2025-08-13T01:33:25.191156587Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device" Aug 13 01:33:25.191469 containerd[1544]: time="2025-08-13T01:33:25.191239027Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write 
/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device" Aug 13 01:33:25.191643 containerd[1544]: time="2025-08-13T01:33:25.191623967Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device" Aug 13 01:33:25.191757 containerd[1544]: time="2025-08-13T01:33:25.191737917Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device" Aug 13 01:33:25.191884 containerd[1544]: time="2025-08-13T01:33:25.191842376Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device" Aug 13 01:33:25.191994 containerd[1544]: time="2025-08-13T01:33:25.191966736Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device" Aug 13 01:33:25.192151 containerd[1544]: time="2025-08-13T01:33:25.192050726Z" level=error msg="Fail to write \"stderr\" log to log file 
\"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device" Aug 13 01:33:25.192151 containerd[1544]: time="2025-08-13T01:33:25.192103616Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device" Aug 13 01:33:25.192268 containerd[1544]: time="2025-08-13T01:33:25.192249445Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device" Aug 13 01:33:25.192410 containerd[1544]: time="2025-08-13T01:33:25.192339635Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device" Aug 13 01:33:25.192410 containerd[1544]: time="2025-08-13T01:33:25.192365475Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device" Aug 13 
01:33:25.192622 containerd[1544]: time="2025-08-13T01:33:25.192603984Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device" Aug 13 01:33:25.192831 containerd[1544]: time="2025-08-13T01:33:25.192809133Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device" Aug 13 01:33:25.193151 containerd[1544]: time="2025-08-13T01:33:25.193031453Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device" Aug 13 01:33:25.193778 containerd[1544]: time="2025-08-13T01:33:25.193588222Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device" Aug 13 01:33:25.193778 containerd[1544]: time="2025-08-13T01:33:25.193630252Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write 
/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device" Aug 13 01:33:25.193778 containerd[1544]: time="2025-08-13T01:33:25.193655732Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device" Aug 13 01:33:25.193778 containerd[1544]: time="2025-08-13T01:33:25.193678332Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device" Aug 13 01:33:25.193778 containerd[1544]: time="2025-08-13T01:33:25.193699792Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device" Aug 13 01:33:25.194979 containerd[1544]: time="2025-08-13T01:33:25.194490029Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device" Aug 13 01:33:25.194979 containerd[1544]: time="2025-08-13T01:33:25.194539089Z" level=error msg="Fail to write \"stderr\" log to log file 
\"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device" Aug 13 01:33:25.194979 containerd[1544]: time="2025-08-13T01:33:25.194589679Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device" Aug 13 01:33:25.194979 containerd[1544]: time="2025-08-13T01:33:25.194621249Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device" Aug 13 01:33:25.194979 containerd[1544]: time="2025-08-13T01:33:25.194664919Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device" Aug 13 01:33:25.195245 containerd[1544]: time="2025-08-13T01:33:25.194693619Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device" Aug 13 
01:33:25.195245 containerd[1544]: time="2025-08-13T01:33:25.194717759Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.195245 containerd[1544]: time="2025-08-13T01:33:25.194742159Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.195245 containerd[1544]: time="2025-08-13T01:33:25.194765358Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.195245 containerd[1544]: time="2025-08-13T01:33:25.194794668Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.195345 containerd[1544]: time="2025-08-13T01:33:25.194818048Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.195345 containerd[1544]: time="2025-08-13T01:33:25.194843868Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.195345 containerd[1544]: time="2025-08-13T01:33:25.194867508Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.195345 containerd[1544]: time="2025-08-13T01:33:25.194890608Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.195721 containerd[1544]: time="2025-08-13T01:33:25.195697647Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.195801 containerd[1544]: time="2025-08-13T01:33:25.195785186Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.195878 containerd[1544]: time="2025-08-13T01:33:25.195862386Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.197376 containerd[1544]: time="2025-08-13T01:33:25.197290392Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.197376 containerd[1544]: time="2025-08-13T01:33:25.197347762Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.197492 containerd[1544]: time="2025-08-13T01:33:25.197474962Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.197563 containerd[1544]: time="2025-08-13T01:33:25.197545922Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.197666 containerd[1544]: time="2025-08-13T01:33:25.197649242Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.197783 containerd[1544]: time="2025-08-13T01:33:25.197734871Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.197875 containerd[1544]: time="2025-08-13T01:33:25.197845911Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.197988 containerd[1544]: time="2025-08-13T01:33:25.197943571Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.197988 containerd[1544]: time="2025-08-13T01:33:25.197972511Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.198237 containerd[1544]: time="2025-08-13T01:33:25.198203840Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.198373 containerd[1544]: time="2025-08-13T01:33:25.198343930Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.198651 containerd[1544]: time="2025-08-13T01:33:25.198620499Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.199045 containerd[1544]: time="2025-08-13T01:33:25.198962528Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.199220 containerd[1544]: time="2025-08-13T01:33:25.199023617Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.199431 containerd[1544]: time="2025-08-13T01:33:25.199322106Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.199944 containerd[1544]: time="2025-08-13T01:33:25.199777416Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.200067 containerd[1544]: time="2025-08-13T01:33:25.200048365Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.201287 containerd[1544]: time="2025-08-13T01:33:25.201266502Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.201526 containerd[1544]: time="2025-08-13T01:33:25.201495282Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.201620 containerd[1544]: time="2025-08-13T01:33:25.201604182Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.201811 containerd[1544]: time="2025-08-13T01:33:25.201793771Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.202408 containerd[1544]: time="2025-08-13T01:33:25.202348629Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.202522 containerd[1544]: time="2025-08-13T01:33:25.202504589Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.202768 containerd[1544]: time="2025-08-13T01:33:25.202738198Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.203344 containerd[1544]: time="2025-08-13T01:33:25.203291816Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.203565 containerd[1544]: time="2025-08-13T01:33:25.203534977Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.204311 containerd[1544]: time="2025-08-13T01:33:25.204276454Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.204420 containerd[1544]: time="2025-08-13T01:33:25.204402994Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.204525 containerd[1544]: time="2025-08-13T01:33:25.204497254Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.204736 containerd[1544]: time="2025-08-13T01:33:25.204718063Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.205071 containerd[1544]: time="2025-08-13T01:33:25.205051142Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.205278 containerd[1544]: time="2025-08-13T01:33:25.205194622Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.205661 containerd[1544]: time="2025-08-13T01:33:25.205387041Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.205927 containerd[1544]: time="2025-08-13T01:33:25.205829251Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.205927 containerd[1544]: time="2025-08-13T01:33:25.205894440Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.206219 containerd[1544]: time="2025-08-13T01:33:25.206193469Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.206623 containerd[1544]: time="2025-08-13T01:33:25.206588178Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.206967 containerd[1544]: time="2025-08-13T01:33:25.206709528Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.207072 containerd[1544]: time="2025-08-13T01:33:25.207052357Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.207244 containerd[1544]: time="2025-08-13T01:33:25.207205076Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.207441 containerd[1544]: time="2025-08-13T01:33:25.207406377Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.207759 containerd[1544]: time="2025-08-13T01:33:25.207728666Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.208290 containerd[1544]: time="2025-08-13T01:33:25.208227624Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.208510 containerd[1544]: time="2025-08-13T01:33:25.208468374Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.208888 containerd[1544]: time="2025-08-13T01:33:25.208850662Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.209326 containerd[1544]: time="2025-08-13T01:33:25.209286571Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.210064 containerd[1544]: time="2025-08-13T01:33:25.210035820Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.210204 containerd[1544]: time="2025-08-13T01:33:25.210182839Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.210287 containerd[1544]: time="2025-08-13T01:33:25.210269769Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.210368 containerd[1544]: time="2025-08-13T01:33:25.210351219Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.210439 containerd[1544]: time="2025-08-13T01:33:25.210423929Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.210551 containerd[1544]: time="2025-08-13T01:33:25.210521328Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.210633 containerd[1544]: time="2025-08-13T01:33:25.210617068Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.210703 containerd[1544]: time="2025-08-13T01:33:25.210688758Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.257593 containerd[1544]: time="2025-08-13T01:33:25.257560998Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.258710 containerd[1544]: time="2025-08-13T01:33:25.258498325Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.258710 containerd[1544]: time="2025-08-13T01:33:25.258634415Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.258710 containerd[1544]: time="2025-08-13T01:33:25.258666545Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.258710 containerd[1544]: time="2025-08-13T01:33:25.258691465Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.258950 containerd[1544]: time="2025-08-13T01:33:25.258915304Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.259643 containerd[1544]: time="2025-08-13T01:33:25.259582433Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.260917 containerd[1544]: time="2025-08-13T01:33:25.260829889Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.260917 containerd[1544]: time="2025-08-13T01:33:25.260886739Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.261062 containerd[1544]: time="2025-08-13T01:33:25.261044309Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log\"" error="write /var/log/pods/tigera-operator_tigera-operator-5bf8dfcb4-rkgkd_880ca925-9f4c-4ae5-8b8f-68c734e86586/tigera-operator/0.log: no space left on device"
Aug 13 01:33:25.265414 systemd[1]: cri-containerd-df8d1a77736586c1f4aa872648e945c5f97799bd041785ac26d4703233746dff.scope: Deactivated successfully.
Aug 13 01:33:25.265750 systemd[1]: cri-containerd-df8d1a77736586c1f4aa872648e945c5f97799bd041785ac26d4703233746dff.scope: Consumed 3.731s CPU time, 92.2M memory peak.
Aug 13 01:33:25.269695 containerd[1544]: time="2025-08-13T01:33:25.269656487Z" level=info msg="TaskExit event in podsandbox handler container_id:\"df8d1a77736586c1f4aa872648e945c5f97799bd041785ac26d4703233746dff\" id:\"df8d1a77736586c1f4aa872648e945c5f97799bd041785ac26d4703233746dff\" pid:3059 exited_at:{seconds:1755048805 nanos:267842982}"
Aug 13 01:33:25.269823 containerd[1544]: time="2025-08-13T01:33:25.269799937Z" level=info msg="received exit event container_id:\"df8d1a77736586c1f4aa872648e945c5f97799bd041785ac26d4703233746dff\" id:\"df8d1a77736586c1f4aa872648e945c5f97799bd041785ac26d4703233746dff\" pid:3059 exited_at:{seconds:1755048805 nanos:267842982}"
Aug 13 01:33:25.291501 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-df8d1a77736586c1f4aa872648e945c5f97799bd041785ac26d4703233746dff-rootfs.mount: Deactivated successfully.
Aug 13 01:33:25.297559 containerd[1544]: time="2025-08-13T01:33:25.297535056Z" level=info msg="StopContainer for \"df8d1a77736586c1f4aa872648e945c5f97799bd041785ac26d4703233746dff\" returns successfully"
Aug 13 01:33:25.298332 containerd[1544]: time="2025-08-13T01:33:25.298308463Z" level=info msg="StopPodSandbox for \"668529f6fd73ada3cc9287699d05d5bba60cbe442981cb14c0b6de5f7d3dc4a0\""
Aug 13 01:33:25.298496 containerd[1544]: time="2025-08-13T01:33:25.298359003Z" level=info msg="Container to stop \"df8d1a77736586c1f4aa872648e945c5f97799bd041785ac26d4703233746dff\" must be in running or unknown state, current state \"CONTAINER_EXITED\""
Aug 13 01:33:25.305758 systemd[1]: cri-containerd-668529f6fd73ada3cc9287699d05d5bba60cbe442981cb14c0b6de5f7d3dc4a0.scope: Deactivated successfully.
Aug 13 01:33:25.307745 containerd[1544]: time="2025-08-13T01:33:25.306579922Z" level=info msg="TaskExit event in podsandbox handler container_id:\"668529f6fd73ada3cc9287699d05d5bba60cbe442981cb14c0b6de5f7d3dc4a0\" id:\"668529f6fd73ada3cc9287699d05d5bba60cbe442981cb14c0b6de5f7d3dc4a0\" pid:2879 exit_status:137 exited_at:{seconds:1755048805 nanos:305965764}"
Aug 13 01:33:25.338491 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-668529f6fd73ada3cc9287699d05d5bba60cbe442981cb14c0b6de5f7d3dc4a0-rootfs.mount: Deactivated successfully.
Aug 13 01:33:25.340874 containerd[1544]: time="2025-08-13T01:33:25.340702215Z" level=info msg="shim disconnected" id=668529f6fd73ada3cc9287699d05d5bba60cbe442981cb14c0b6de5f7d3dc4a0 namespace=k8s.io
Aug 13 01:33:25.340874 containerd[1544]: time="2025-08-13T01:33:25.340789384Z" level=warning msg="cleaning up after shim disconnected" id=668529f6fd73ada3cc9287699d05d5bba60cbe442981cb14c0b6de5f7d3dc4a0 namespace=k8s.io
Aug 13 01:33:25.340874 containerd[1544]: time="2025-08-13T01:33:25.340798044Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Aug 13 01:33:25.352450 containerd[1544]: time="2025-08-13T01:33:25.352182176Z" level=info msg="received exit event sandbox_id:\"668529f6fd73ada3cc9287699d05d5bba60cbe442981cb14c0b6de5f7d3dc4a0\" exit_status:137 exited_at:{seconds:1755048805 nanos:305965764}"
Aug 13 01:33:25.354117 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-668529f6fd73ada3cc9287699d05d5bba60cbe442981cb14c0b6de5f7d3dc4a0-shm.mount: Deactivated successfully.
Aug 13 01:33:25.354390 containerd[1544]: time="2025-08-13T01:33:25.354369390Z" level=info msg="TearDown network for sandbox \"668529f6fd73ada3cc9287699d05d5bba60cbe442981cb14c0b6de5f7d3dc4a0\" successfully"
Aug 13 01:33:25.354653 containerd[1544]: time="2025-08-13T01:33:25.354469010Z" level=info msg="StopPodSandbox for \"668529f6fd73ada3cc9287699d05d5bba60cbe442981cb14c0b6de5f7d3dc4a0\" returns successfully"
Aug 13 01:33:25.370571 kubelet[2739]: I0813 01:33:25.369378 2739 eviction_manager.go:616] "Eviction manager: pod is evicted successfully" pod="tigera-operator/tigera-operator-5bf8dfcb4-rkgkd"
Aug 13 01:33:25.370571 kubelet[2739]: I0813 01:33:25.369400 2739 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["tigera-operator/tigera-operator-5bf8dfcb4-rkgkd"]
Aug 13 01:33:25.401757 kubelet[2739]: I0813 01:33:25.401657 2739 kubelet.go:2306] "Pod admission denied" podUID="5da6816b-1674-48ae-a5fa-a8147f9961e1" pod="tigera-operator/tigera-operator-5bf8dfcb4-8slbp" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:25.434763 kubelet[2739]: I0813 01:33:25.434721 2739 kubelet.go:2306] "Pod admission denied" podUID="7cf4ea00-89b9-4a31-a39b-dce414dd9476" pod="tigera-operator/tigera-operator-5bf8dfcb4-pdfww" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:25.462792 kubelet[2739]: I0813 01:33:25.462741 2739 kubelet.go:2306] "Pod admission denied" podUID="8a8117f5-4ca5-46db-b6c9-fd8806609c20" pod="tigera-operator/tigera-operator-5bf8dfcb4-zk6kr" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:25.484671 kubelet[2739]: I0813 01:33:25.484512 2739 kubelet.go:2306] "Pod admission denied" podUID="b94ba56e-04ec-4bb4-acf0-4b86e046d63b" pod="tigera-operator/tigera-operator-5bf8dfcb4-cdc9p" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:25.503579 kubelet[2739]: I0813 01:33:25.503538 2739 kubelet.go:2306] "Pod admission denied" podUID="90324f44-b921-4adf-9684-a73922918e23" pod="tigera-operator/tigera-operator-5bf8dfcb4-5fclw" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:25.512473 kubelet[2739]: I0813 01:33:25.512441 2739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/880ca925-9f4c-4ae5-8b8f-68c734e86586-var-lib-calico\") pod \"880ca925-9f4c-4ae5-8b8f-68c734e86586\" (UID: \"880ca925-9f4c-4ae5-8b8f-68c734e86586\") "
Aug 13 01:33:25.512541 kubelet[2739]: I0813 01:33:25.512485 2739 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-thg2w\" (UniqueName: \"kubernetes.io/projected/880ca925-9f4c-4ae5-8b8f-68c734e86586-kube-api-access-thg2w\") pod \"880ca925-9f4c-4ae5-8b8f-68c734e86586\" (UID: \"880ca925-9f4c-4ae5-8b8f-68c734e86586\") "
Aug 13 01:33:25.513194 kubelet[2739]: I0813 01:33:25.513123 2739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/880ca925-9f4c-4ae5-8b8f-68c734e86586-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "880ca925-9f4c-4ae5-8b8f-68c734e86586" (UID: "880ca925-9f4c-4ae5-8b8f-68c734e86586"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Aug 13 01:33:25.518192 kubelet[2739]: I0813 01:33:25.518106 2739 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/880ca925-9f4c-4ae5-8b8f-68c734e86586-kube-api-access-thg2w" (OuterVolumeSpecName: "kube-api-access-thg2w") pod "880ca925-9f4c-4ae5-8b8f-68c734e86586" (UID: "880ca925-9f4c-4ae5-8b8f-68c734e86586"). InnerVolumeSpecName "kube-api-access-thg2w". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 01:33:25.520029 systemd[1]: var-lib-kubelet-pods-880ca925\x2d9f4c\x2d4ae5\x2d8b8f\x2d68c734e86586-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dthg2w.mount: Deactivated successfully.
Aug 13 01:33:25.524656 kubelet[2739]: I0813 01:33:25.524614 2739 kubelet.go:2306] "Pod admission denied" podUID="9e1bc236-dbaa-4308-9c55-c84cedf63fc2" pod="tigera-operator/tigera-operator-5bf8dfcb4-lpxp5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:25.547598 kubelet[2739]: I0813 01:33:25.547566 2739 kubelet.go:2306] "Pod admission denied" podUID="f5cb9b10-1a71-47cc-afc7-58295457bfda" pod="tigera-operator/tigera-operator-5bf8dfcb4-fx2p5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:25.559313 kubelet[2739]: I0813 01:33:25.559152 2739 kubelet.go:2306] "Pod admission denied" podUID="84a51c9c-13d9-4663-9a5b-0ed900a00237" pod="tigera-operator/tigera-operator-5bf8dfcb4-qvdm2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:25.594267 kubelet[2739]: I0813 01:33:25.594248 2739 kubelet.go:2306] "Pod admission denied" podUID="ac1fcfc7-c46c-4c86-8028-b0fbf1ef9a96" pod="tigera-operator/tigera-operator-5bf8dfcb4-5566k" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:25.612829 kubelet[2739]: I0813 01:33:25.612775 2739 reconciler_common.go:293] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/880ca925-9f4c-4ae5-8b8f-68c734e86586-var-lib-calico\") on node \"172-234-27-175\" DevicePath \"\"" Aug 13 01:33:25.612829 kubelet[2739]: I0813 01:33:25.612802 2739 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-thg2w\" (UniqueName: \"kubernetes.io/projected/880ca925-9f4c-4ae5-8b8f-68c734e86586-kube-api-access-thg2w\") on node \"172-234-27-175\" DevicePath \"\"" Aug 13 01:33:25.750426 kubelet[2739]: I0813 01:33:25.749939 2739 kubelet.go:2306] "Pod admission denied" podUID="45ec5419-1006-44f5-8e72-49942c7af8a2" pod="tigera-operator/tigera-operator-5bf8dfcb4-ctmbt" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:33:25.896263 kubelet[2739]: I0813 01:33:25.896217 2739 kubelet.go:2306] "Pod admission denied" podUID="fe726f69-85c9-4461-8826-62a0ef6b136a" pod="tigera-operator/tigera-operator-5bf8dfcb4-tzmms" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:25.964903 kubelet[2739]: I0813 01:33:25.964670 2739 scope.go:117] "RemoveContainer" containerID="df8d1a77736586c1f4aa872648e945c5f97799bd041785ac26d4703233746dff" Aug 13 01:33:25.967422 containerd[1544]: time="2025-08-13T01:33:25.967390149Z" level=info msg="RemoveContainer for \"df8d1a77736586c1f4aa872648e945c5f97799bd041785ac26d4703233746dff\"" Aug 13 01:33:25.973120 containerd[1544]: time="2025-08-13T01:33:25.973077725Z" level=info msg="RemoveContainer for \"df8d1a77736586c1f4aa872648e945c5f97799bd041785ac26d4703233746dff\" returns successfully" Aug 13 01:33:25.973618 systemd[1]: Removed slice kubepods-besteffort-pod880ca925_9f4c_4ae5_8b8f_68c734e86586.slice - libcontainer container kubepods-besteffort-pod880ca925_9f4c_4ae5_8b8f_68c734e86586.slice. Aug 13 01:33:25.974289 kubelet[2739]: I0813 01:33:25.974051 2739 scope.go:117] "RemoveContainer" containerID="df8d1a77736586c1f4aa872648e945c5f97799bd041785ac26d4703233746dff" Aug 13 01:33:25.973716 systemd[1]: kubepods-besteffort-pod880ca925_9f4c_4ae5_8b8f_68c734e86586.slice: Consumed 3.761s CPU time, 92.4M memory peak. 
Aug 13 01:33:25.975157 containerd[1544]: time="2025-08-13T01:33:25.975088989Z" level=error msg="ContainerStatus for \"df8d1a77736586c1f4aa872648e945c5f97799bd041785ac26d4703233746dff\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"df8d1a77736586c1f4aa872648e945c5f97799bd041785ac26d4703233746dff\": not found" Aug 13 01:33:25.975531 kubelet[2739]: E0813 01:33:25.975405 2739 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"df8d1a77736586c1f4aa872648e945c5f97799bd041785ac26d4703233746dff\": not found" containerID="df8d1a77736586c1f4aa872648e945c5f97799bd041785ac26d4703233746dff" Aug 13 01:33:25.975531 kubelet[2739]: I0813 01:33:25.975436 2739 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"df8d1a77736586c1f4aa872648e945c5f97799bd041785ac26d4703233746dff"} err="failed to get container status \"df8d1a77736586c1f4aa872648e945c5f97799bd041785ac26d4703233746dff\": rpc error: code = NotFound desc = an error occurred when try to find container \"df8d1a77736586c1f4aa872648e945c5f97799bd041785ac26d4703233746dff\": not found" Aug 13 01:33:26.048598 kubelet[2739]: I0813 01:33:26.048182 2739 kubelet.go:2306] "Pod admission denied" podUID="99f651df-7163-4f85-8114-14a98ccd8d6f" pod="tigera-operator/tigera-operator-5bf8dfcb4-nlq92" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:26.197939 kubelet[2739]: I0813 01:33:26.197905 2739 kubelet.go:2306] "Pod admission denied" podUID="c9bb3ed9-a957-4386-85fc-72fa8b76b496" pod="tigera-operator/tigera-operator-5bf8dfcb4-tm8hc" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:33:26.347549 kubelet[2739]: I0813 01:33:26.346370 2739 kubelet.go:2306] "Pod admission denied" podUID="0bf61960-2e06-430b-8f5a-849cd180b361" pod="tigera-operator/tigera-operator-5bf8dfcb4-bl9gr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:26.369912 kubelet[2739]: I0813 01:33:26.369867 2739 eviction_manager.go:447] "Eviction manager: pods successfully cleaned up" pods=["tigera-operator/tigera-operator-5bf8dfcb4-rkgkd"] Aug 13 01:33:26.496238 kubelet[2739]: I0813 01:33:26.496203 2739 kubelet.go:2306] "Pod admission denied" podUID="e8a08b66-a344-4042-a2ae-8338d63d2ae1" pod="tigera-operator/tigera-operator-5bf8dfcb4-dw7bf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:26.648752 kubelet[2739]: I0813 01:33:26.648589 2739 kubelet.go:2306] "Pod admission denied" podUID="53e0c329-5763-4433-bba9-ac0072747fbe" pod="tigera-operator/tigera-operator-5bf8dfcb4-d9z5w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:26.801690 kubelet[2739]: I0813 01:33:26.801615 2739 kubelet.go:2306] "Pod admission denied" podUID="4f72cc74-a549-4b35-b1e1-8ea8cca951d3" pod="tigera-operator/tigera-operator-5bf8dfcb4-6m49g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:26.950553 kubelet[2739]: I0813 01:33:26.950233 2739 kubelet.go:2306] "Pod admission denied" podUID="f6259ddd-8c8c-471f-a027-e06dd6fd0bc3" pod="tigera-operator/tigera-operator-5bf8dfcb4-fk4s8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:27.096501 kubelet[2739]: I0813 01:33:27.096432 2739 kubelet.go:2306] "Pod admission denied" podUID="7b7320ab-6a3f-4e31-91cb-9a62bbc5c136" pod="tigera-operator/tigera-operator-5bf8dfcb4-2w65p" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:33:27.247046 kubelet[2739]: I0813 01:33:27.247008 2739 kubelet.go:2306] "Pod admission denied" podUID="2029006e-7408-42fd-b816-68340081b963" pod="tigera-operator/tigera-operator-5bf8dfcb4-p9jrs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:27.397689 kubelet[2739]: I0813 01:33:27.397635 2739 kubelet.go:2306] "Pod admission denied" podUID="0b42cdce-14df-47ad-92f0-c4009fcbd14c" pod="tigera-operator/tigera-operator-5bf8dfcb4-kcp88" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:27.500339 kubelet[2739]: I0813 01:33:27.500182 2739 kubelet.go:2306] "Pod admission denied" podUID="597f8dfa-124f-4bb7-a007-51ff96d8de6f" pod="tigera-operator/tigera-operator-5bf8dfcb4-vsthc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:27.649484 kubelet[2739]: I0813 01:33:27.649429 2739 kubelet.go:2306] "Pod admission denied" podUID="6b9f5a47-a07f-48be-a4fe-f19db80007a4" pod="tigera-operator/tigera-operator-5bf8dfcb4-8f4nm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:27.800838 kubelet[2739]: I0813 01:33:27.798871 2739 kubelet.go:2306] "Pod admission denied" podUID="f72004e1-f026-4090-a080-8f2f9d46cab6" pod="tigera-operator/tigera-operator-5bf8dfcb4-9zhp8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:27.951151 kubelet[2739]: I0813 01:33:27.950639 2739 kubelet.go:2306] "Pod admission denied" podUID="199deb52-3dc8-4d6a-bba7-f5f523b1944c" pod="tigera-operator/tigera-operator-5bf8dfcb4-c7mdr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:28.150913 kubelet[2739]: I0813 01:33:28.149675 2739 kubelet.go:2306] "Pod admission denied" podUID="d7b57acc-145a-44b2-b586-18d76a7be62b" pod="tigera-operator/tigera-operator-5bf8dfcb4-bx4wd" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:33:28.248907 kubelet[2739]: I0813 01:33:28.248848 2739 kubelet.go:2306] "Pod admission denied" podUID="0254bf80-62e1-4262-b103-76f84a686a95" pod="tigera-operator/tigera-operator-5bf8dfcb4-xr2b7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:28.348036 kubelet[2739]: I0813 01:33:28.347996 2739 kubelet.go:2306] "Pod admission denied" podUID="fbc60520-ad17-4866-af37-067570e10c10" pod="tigera-operator/tigera-operator-5bf8dfcb4-66kfg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:28.448671 kubelet[2739]: I0813 01:33:28.447494 2739 kubelet.go:2306] "Pod admission denied" podUID="ae8aa244-ba8d-4a5d-bbf3-81ebe7253d02" pod="tigera-operator/tigera-operator-5bf8dfcb4-qgnzt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:28.549056 kubelet[2739]: I0813 01:33:28.548999 2739 kubelet.go:2306] "Pod admission denied" podUID="c6a86cbe-5eef-4823-ad11-f9d980ce169f" pod="tigera-operator/tigera-operator-5bf8dfcb4-vnf6j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:28.647098 kubelet[2739]: I0813 01:33:28.647034 2739 kubelet.go:2306] "Pod admission denied" podUID="6bd8b9fa-b03a-4194-b7c3-78d16db016e4" pod="tigera-operator/tigera-operator-5bf8dfcb4-jq8tn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:28.746173 kubelet[2739]: I0813 01:33:28.745787 2739 kubelet.go:2306] "Pod admission denied" podUID="81199701-5ce6-4ac5-ab0d-e34eb5928c94" pod="tigera-operator/tigera-operator-5bf8dfcb4-5zcg6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:28.852996 kubelet[2739]: I0813 01:33:28.852936 2739 kubelet.go:2306] "Pod admission denied" podUID="8c50cb66-acea-4e57-af4a-a1ec28dab787" pod="tigera-operator/tigera-operator-5bf8dfcb4-lmtzd" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:33:28.949953 kubelet[2739]: I0813 01:33:28.949899 2739 kubelet.go:2306] "Pod admission denied" podUID="9d525e67-a5a2-498e-b164-7495b065824a" pod="tigera-operator/tigera-operator-5bf8dfcb4-qp7pm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:29.151457 kubelet[2739]: I0813 01:33:29.151301 2739 kubelet.go:2306] "Pod admission denied" podUID="6e3f7adf-d2fc-4e68-949a-ce7e22892f70" pod="tigera-operator/tigera-operator-5bf8dfcb4-mbbl2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:29.250369 kubelet[2739]: I0813 01:33:29.249754 2739 kubelet.go:2306] "Pod admission denied" podUID="8515d098-edf8-4271-91db-1b0332275148" pod="tigera-operator/tigera-operator-5bf8dfcb4-6gn7f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:29.351083 kubelet[2739]: I0813 01:33:29.351015 2739 kubelet.go:2306] "Pod admission denied" podUID="8b3e29d5-07c8-488e-8b60-987385c24e75" pod="tigera-operator/tigera-operator-5bf8dfcb4-d6qvv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:29.547345 kubelet[2739]: I0813 01:33:29.547281 2739 kubelet.go:2306] "Pod admission denied" podUID="de842fc4-bfa1-44ff-a3f1-d50607816032" pod="tigera-operator/tigera-operator-5bf8dfcb4-6q7xd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:29.648267 kubelet[2739]: I0813 01:33:29.648212 2739 kubelet.go:2306] "Pod admission denied" podUID="2b738871-d646-4d8d-9e21-9f4f48d629c6" pod="tigera-operator/tigera-operator-5bf8dfcb4-jz5ch" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:29.698245 kubelet[2739]: I0813 01:33:29.698205 2739 kubelet.go:2306] "Pod admission denied" podUID="66f80678-f96d-4227-920a-7a6c87441365" pod="tigera-operator/tigera-operator-5bf8dfcb4-jnb87" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:33:29.799650 kubelet[2739]: I0813 01:33:29.799491 2739 kubelet.go:2306] "Pod admission denied" podUID="029d0669-4aa0-4eaa-9f76-ab243f05228c" pod="tigera-operator/tigera-operator-5bf8dfcb4-l6prh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:29.900245 kubelet[2739]: I0813 01:33:29.900183 2739 kubelet.go:2306] "Pod admission denied" podUID="14fef199-22b4-4f5d-b248-3cef836d248d" pod="tigera-operator/tigera-operator-5bf8dfcb4-mhch5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:29.996821 kubelet[2739]: I0813 01:33:29.996780 2739 kubelet.go:2306] "Pod admission denied" podUID="397df52b-8ffc-4973-81d9-ef6e94b2a97d" pod="tigera-operator/tigera-operator-5bf8dfcb4-5zxmw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:30.102437 kubelet[2739]: I0813 01:33:30.101787 2739 kubelet.go:2306] "Pod admission denied" podUID="f19a6593-7ff7-4809-905a-83c38d093a14" pod="tigera-operator/tigera-operator-5bf8dfcb4-gk6qv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:30.152239 kubelet[2739]: I0813 01:33:30.152158 2739 kubelet.go:2306] "Pod admission denied" podUID="535b9faa-1ec5-4fca-becf-0e25091f8d7c" pod="tigera-operator/tigera-operator-5bf8dfcb4-t9ghm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:30.245780 kubelet[2739]: I0813 01:33:30.245632 2739 kubelet.go:2306] "Pod admission denied" podUID="6add3ced-a49d-4b31-ad9f-8561d59d7372" pod="tigera-operator/tigera-operator-5bf8dfcb4-gxb47" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:30.347219 kubelet[2739]: I0813 01:33:30.347174 2739 kubelet.go:2306] "Pod admission denied" podUID="4f4d1fc2-a80e-4be4-9cd1-0a495d655c55" pod="tigera-operator/tigera-operator-5bf8dfcb4-zlmqc" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:33:30.448152 kubelet[2739]: I0813 01:33:30.447665 2739 kubelet.go:2306] "Pod admission denied" podUID="d1b3da7f-e985-4080-bbff-535ca13f0602" pod="tigera-operator/tigera-operator-5bf8dfcb4-bn5ls" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:30.546036 kubelet[2739]: I0813 01:33:30.546007 2739 kubelet.go:2306] "Pod admission denied" podUID="ba87b620-1249-4ef7-b62a-167050a8b916" pod="tigera-operator/tigera-operator-5bf8dfcb4-gl2p4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:30.647498 kubelet[2739]: I0813 01:33:30.647462 2739 kubelet.go:2306] "Pod admission denied" podUID="d463793c-449f-4a7a-9227-5189d1ef2c2b" pod="tigera-operator/tigera-operator-5bf8dfcb4-ft74b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:30.752775 kubelet[2739]: I0813 01:33:30.752713 2739 kubelet.go:2306] "Pod admission denied" podUID="c989921b-ee5d-4377-8239-de7184da107d" pod="tigera-operator/tigera-operator-5bf8dfcb4-9gz5x" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:30.802605 kubelet[2739]: I0813 01:33:30.802569 2739 kubelet.go:2306] "Pod admission denied" podUID="129d7b96-2063-48e5-928f-f1de5e8c8dd6" pod="tigera-operator/tigera-operator-5bf8dfcb4-dh7k8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:30.898414 kubelet[2739]: I0813 01:33:30.898355 2739 kubelet.go:2306] "Pod admission denied" podUID="c8a165bd-bd5d-490e-8e15-a7c8c688b905" pod="tigera-operator/tigera-operator-5bf8dfcb4-87zjq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:31.099071 kubelet[2739]: I0813 01:33:31.098953 2739 kubelet.go:2306] "Pod admission denied" podUID="858c954c-e87e-4921-9f09-d2434b5b6d63" pod="tigera-operator/tigera-operator-5bf8dfcb4-s872c" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:33:31.200486 kubelet[2739]: I0813 01:33:31.200360 2739 kubelet.go:2306] "Pod admission denied" podUID="5fae467c-247d-4e4a-a7f8-d410f329a9a1" pod="tigera-operator/tigera-operator-5bf8dfcb4-6b5tk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:31.327730 kubelet[2739]: I0813 01:33:31.327655 2739 kubelet.go:2306] "Pod admission denied" podUID="defe851e-940b-44dc-b772-e03901235321" pod="tigera-operator/tigera-operator-5bf8dfcb4-fdm9v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:31.498827 kubelet[2739]: I0813 01:33:31.498783 2739 kubelet.go:2306] "Pod admission denied" podUID="482ff61e-41f6-4828-bed8-0ec4f66852a9" pod="tigera-operator/tigera-operator-5bf8dfcb4-5gh4g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:31.602308 kubelet[2739]: I0813 01:33:31.602259 2739 kubelet.go:2306] "Pod admission denied" podUID="b92836b0-33ac-4a9d-b93b-fcf34944c6c3" pod="tigera-operator/tigera-operator-5bf8dfcb4-26rhf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:31.698307 kubelet[2739]: I0813 01:33:31.698278 2739 kubelet.go:2306] "Pod admission denied" podUID="e3bfdbc5-efad-48c3-95c8-3f0186803e8f" pod="tigera-operator/tigera-operator-5bf8dfcb4-66gz6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:31.797858 containerd[1544]: time="2025-08-13T01:33:31.797628617Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85fbc76f96-d5vf4,Uid:56caad57-6b4a-4069-b011-1059db183012,Namespace:calico-system,Attempt:0,}" Aug 13 01:33:31.799165 kubelet[2739]: I0813 01:33:31.798429 2739 kubelet.go:2306] "Pod admission denied" podUID="94e2858c-1826-4293-8d37-3bfa8207d856" pod="tigera-operator/tigera-operator-5bf8dfcb4-p6sfd" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:33:31.853066 containerd[1544]: time="2025-08-13T01:33:31.852992710Z" level=error msg="Failed to destroy network for sandbox \"cae77c20c4667fe6ee56157be628a56cae70b5db225f25bd37f095a93fa6d4d9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:31.856335 systemd[1]: run-netns-cni\x2dd833bf4d\x2d4e33\x2d6020\x2d707a\x2d24a369aba3b9.mount: Deactivated successfully. Aug 13 01:33:31.857931 containerd[1544]: time="2025-08-13T01:33:31.857595103Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85fbc76f96-d5vf4,Uid:56caad57-6b4a-4069-b011-1059db183012,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cae77c20c4667fe6ee56157be628a56cae70b5db225f25bd37f095a93fa6d4d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:31.859731 kubelet[2739]: E0813 01:33:31.859390 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cae77c20c4667fe6ee56157be628a56cae70b5db225f25bd37f095a93fa6d4d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:31.859731 kubelet[2739]: E0813 01:33:31.859440 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cae77c20c4667fe6ee56157be628a56cae70b5db225f25bd37f095a93fa6d4d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" Aug 13 01:33:31.859731 kubelet[2739]: E0813 01:33:31.859480 2739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cae77c20c4667fe6ee56157be628a56cae70b5db225f25bd37f095a93fa6d4d9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" Aug 13 01:33:31.859731 kubelet[2739]: E0813 01:33:31.859560 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-85fbc76f96-d5vf4_calico-system(56caad57-6b4a-4069-b011-1059db183012)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85fbc76f96-d5vf4_calico-system(56caad57-6b4a-4069-b011-1059db183012)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cae77c20c4667fe6ee56157be628a56cae70b5db225f25bd37f095a93fa6d4d9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" podUID="56caad57-6b4a-4069-b011-1059db183012" Aug 13 01:33:31.897951 kubelet[2739]: I0813 01:33:31.897914 2739 kubelet.go:2306] "Pod admission denied" podUID="4f874c16-7af2-4ac8-ad2f-29e388b4c681" pod="tigera-operator/tigera-operator-5bf8dfcb4-t2c65" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:32.098211 kubelet[2739]: I0813 01:33:32.097841 2739 kubelet.go:2306] "Pod admission denied" podUID="5c45f83d-fd5a-4203-b048-4f6a84914c76" pod="tigera-operator/tigera-operator-5bf8dfcb4-krjv6" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:33:32.196403 kubelet[2739]: I0813 01:33:32.196370 2739 kubelet.go:2306] "Pod admission denied" podUID="821f9811-a07f-4cb3-914f-35aaad1b96a0" pod="tigera-operator/tigera-operator-5bf8dfcb4-r4z45" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:32.296516 kubelet[2739]: I0813 01:33:32.296478 2739 kubelet.go:2306] "Pod admission denied" podUID="abd4ef9f-6914-4541-8a5e-04cbfbaaee77" pod="tigera-operator/tigera-operator-5bf8dfcb4-2t2zt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:32.398059 kubelet[2739]: I0813 01:33:32.397782 2739 kubelet.go:2306] "Pod admission denied" podUID="a4677a36-8ac7-485c-97bc-eb41cd6baf16" pod="tigera-operator/tigera-operator-5bf8dfcb4-9t22k" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:32.447891 kubelet[2739]: I0813 01:33:32.447731 2739 kubelet.go:2306] "Pod admission denied" podUID="4b346f48-0b19-45b0-a60f-adbac69d165b" pod="tigera-operator/tigera-operator-5bf8dfcb4-qr4t9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:32.550072 kubelet[2739]: I0813 01:33:32.550017 2739 kubelet.go:2306] "Pod admission denied" podUID="ec1dd38e-9eb0-4903-8fb1-8b21808a3b3f" pod="tigera-operator/tigera-operator-5bf8dfcb4-m7f9b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:32.650769 kubelet[2739]: I0813 01:33:32.649681 2739 kubelet.go:2306] "Pod admission denied" podUID="fc698b63-2676-4dc0-b4f2-ed2de9f4e530" pod="tigera-operator/tigera-operator-5bf8dfcb4-6nhf9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:32.749488 kubelet[2739]: I0813 01:33:32.749453 2739 kubelet.go:2306] "Pod admission denied" podUID="64a53a53-43a1-40df-96c7-e9ec5534b0aa" pod="tigera-operator/tigera-operator-5bf8dfcb4-gfvr6" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:33:32.947186 kubelet[2739]: I0813 01:33:32.947036 2739 kubelet.go:2306] "Pod admission denied" podUID="39c5a299-c956-45da-bc38-0fcc3043a8b0" pod="tigera-operator/tigera-operator-5bf8dfcb4-9npnl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:33.049864 kubelet[2739]: I0813 01:33:33.049598 2739 kubelet.go:2306] "Pod admission denied" podUID="f01d3991-5401-4851-8e75-852f49780013" pod="tigera-operator/tigera-operator-5bf8dfcb4-s2gss" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:33.150197 kubelet[2739]: I0813 01:33:33.150153 2739 kubelet.go:2306] "Pod admission denied" podUID="84455b6b-ab41-4aab-8692-80dd6847f024" pod="tigera-operator/tigera-operator-5bf8dfcb4-b4vj5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:33.247918 kubelet[2739]: I0813 01:33:33.247869 2739 kubelet.go:2306] "Pod admission denied" podUID="0b869a32-ad1b-4b20-ac0e-13dccbe2418a" pod="tigera-operator/tigera-operator-5bf8dfcb4-j474v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:33.349047 kubelet[2739]: I0813 01:33:33.348835 2739 kubelet.go:2306] "Pod admission denied" podUID="62c50781-1d2c-4e92-ab1b-29a1094278a8" pod="tigera-operator/tigera-operator-5bf8dfcb4-88kt7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:33.556018 kubelet[2739]: I0813 01:33:33.555705 2739 kubelet.go:2306] "Pod admission denied" podUID="f3b4b3f4-4fef-4f71-8426-a6c19b3e88e1" pod="tigera-operator/tigera-operator-5bf8dfcb4-kp6bl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:33.648121 kubelet[2739]: I0813 01:33:33.648083 2739 kubelet.go:2306] "Pod admission denied" podUID="1a184ae0-7585-4e50-ad4a-167d2d1daefe" pod="tigera-operator/tigera-operator-5bf8dfcb4-5kqqd" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:33:33.699430 kubelet[2739]: I0813 01:33:33.699358 2739 kubelet.go:2306] "Pod admission denied" podUID="3bf54f85-2991-45b6-b188-7454c6a56bbb" pod="tigera-operator/tigera-operator-5bf8dfcb4-qxnqq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:33.798121 kubelet[2739]: I0813 01:33:33.798050 2739 kubelet.go:2306] "Pod admission denied" podUID="1e3f4887-44ca-4dd4-a6a2-22147abe2c10" pod="tigera-operator/tigera-operator-5bf8dfcb4-c8fwq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:33.800827 kubelet[2739]: E0813 01:33:33.799466 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 01:33:33.801407 containerd[1544]: time="2025-08-13T01:33:33.801376376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-994jv,Uid:3bd5a2e2-42ee-4e27-a412-724b2f0527b4,Namespace:kube-system,Attempt:0,}" Aug 13 01:33:33.802425 containerd[1544]: time="2025-08-13T01:33:33.802169094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7bj49,Uid:4e5845c9-626c-4c83-900a-0da0bae2daed,Namespace:calico-system,Attempt:0,}" Aug 13 01:33:33.881177 containerd[1544]: time="2025-08-13T01:33:33.878875042Z" level=error msg="Failed to destroy network for sandbox \"232fde868021789102f912ee64d40c6deb4c4051aa7509e831634b037cfcc1cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:33.880945 systemd[1]: run-netns-cni\x2d2fd7051b\x2dae8e\x2dc984\x2d8a06\x2de0e9c3dc521c.mount: Deactivated successfully. 
Aug 13 01:33:33.882744 containerd[1544]: time="2025-08-13T01:33:33.882695297Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7bj49,Uid:4e5845c9-626c-4c83-900a-0da0bae2daed,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"232fde868021789102f912ee64d40c6deb4c4051aa7509e831634b037cfcc1cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:33.884120 kubelet[2739]: E0813 01:33:33.883059 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"232fde868021789102f912ee64d40c6deb4c4051aa7509e831634b037cfcc1cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:33.884120 kubelet[2739]: E0813 01:33:33.884186 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"232fde868021789102f912ee64d40c6deb4c4051aa7509e831634b037cfcc1cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7bj49" Aug 13 01:33:33.884120 kubelet[2739]: E0813 01:33:33.884251 2739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"232fde868021789102f912ee64d40c6deb4c4051aa7509e831634b037cfcc1cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7bj49" 
Aug 13 01:33:33.884120 kubelet[2739]: E0813 01:33:33.884295 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7bj49_calico-system(4e5845c9-626c-4c83-900a-0da0bae2daed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7bj49_calico-system(4e5845c9-626c-4c83-900a-0da0bae2daed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"232fde868021789102f912ee64d40c6deb4c4051aa7509e831634b037cfcc1cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7bj49" podUID="4e5845c9-626c-4c83-900a-0da0bae2daed" Aug 13 01:33:33.884828 containerd[1544]: time="2025-08-13T01:33:33.884796784Z" level=error msg="Failed to destroy network for sandbox \"40115aee3f4100c45de87ab0b849c2ec001b50ea9f9d89f89bac2df4876f196f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:33.887521 containerd[1544]: time="2025-08-13T01:33:33.887363781Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-994jv,Uid:3bd5a2e2-42ee-4e27-a412-724b2f0527b4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"40115aee3f4100c45de87ab0b849c2ec001b50ea9f9d89f89bac2df4876f196f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:33.887862 kubelet[2739]: E0813 01:33:33.887762 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"40115aee3f4100c45de87ab0b849c2ec001b50ea9f9d89f89bac2df4876f196f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:33.887915 kubelet[2739]: E0813 01:33:33.887897 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40115aee3f4100c45de87ab0b849c2ec001b50ea9f9d89f89bac2df4876f196f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-994jv" Aug 13 01:33:33.887949 kubelet[2739]: E0813 01:33:33.887915 2739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40115aee3f4100c45de87ab0b849c2ec001b50ea9f9d89f89bac2df4876f196f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-994jv" Aug 13 01:33:33.889261 kubelet[2739]: E0813 01:33:33.888069 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-994jv_kube-system(3bd5a2e2-42ee-4e27-a412-724b2f0527b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-994jv_kube-system(3bd5a2e2-42ee-4e27-a412-724b2f0527b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"40115aee3f4100c45de87ab0b849c2ec001b50ea9f9d89f89bac2df4876f196f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-994jv" 
podUID="3bd5a2e2-42ee-4e27-a412-724b2f0527b4" Aug 13 01:33:33.888308 systemd[1]: run-netns-cni\x2ddddcc8ed\x2d696b\x2d29d0\x2d3a09\x2d09c3a4cd493c.mount: Deactivated successfully. Aug 13 01:33:33.902305 kubelet[2739]: I0813 01:33:33.902254 2739 kubelet.go:2306] "Pod admission denied" podUID="0b58bdd2-c014-46fa-91e4-24293c87933b" pod="tigera-operator/tigera-operator-5bf8dfcb4-4h5tz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:33.995876 kubelet[2739]: I0813 01:33:33.995836 2739 kubelet.go:2306] "Pod admission denied" podUID="173b10c8-fa0b-4a30-8cc3-044288f61f79" pod="tigera-operator/tigera-operator-5bf8dfcb4-gqjkm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:34.201373 kubelet[2739]: I0813 01:33:34.200542 2739 kubelet.go:2306] "Pod admission denied" podUID="342d0f38-b153-48cd-86cc-dac63b0d9c9a" pod="tigera-operator/tigera-operator-5bf8dfcb4-5zg5m" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:34.298117 kubelet[2739]: I0813 01:33:34.298083 2739 kubelet.go:2306] "Pod admission denied" podUID="977a51a4-9f7d-4b79-a421-2f6763f34f12" pod="tigera-operator/tigera-operator-5bf8dfcb4-6dk9d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:34.398422 kubelet[2739]: I0813 01:33:34.398383 2739 kubelet.go:2306] "Pod admission denied" podUID="d4035f1f-d282-4a7a-959f-aa0ed67b12eb" pod="tigera-operator/tigera-operator-5bf8dfcb4-mgx69" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:34.497331 kubelet[2739]: I0813 01:33:34.497293 2739 kubelet.go:2306] "Pod admission denied" podUID="e9f12963-ec69-4838-9d79-6c96ccc2e4cc" pod="tigera-operator/tigera-operator-5bf8dfcb4-7fsfl" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:33:34.598926 kubelet[2739]: I0813 01:33:34.598873 2739 kubelet.go:2306] "Pod admission denied" podUID="b9fa6cca-6ac2-4059-a354-35ff6ea98504" pod="tigera-operator/tigera-operator-5bf8dfcb4-js8dp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:34.800533 kubelet[2739]: E0813 01:33:34.799442 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 01:33:34.804662 containerd[1544]: time="2025-08-13T01:33:34.800258061Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mx5v9,Uid:9cb65184-4613-43ed-9fa1-0cf23f1e0e56,Namespace:kube-system,Attempt:0,}" Aug 13 01:33:34.804917 kubelet[2739]: I0813 01:33:34.803151 2739 kubelet.go:2306] "Pod admission denied" podUID="85b43390-ade7-4a2e-b18a-4007bcc5b4cf" pod="tigera-operator/tigera-operator-5bf8dfcb4-ch2sk" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:33:34.868555 containerd[1544]: time="2025-08-13T01:33:34.868495269Z" level=error msg="Failed to destroy network for sandbox \"11a1381ecf174ded28cd33fb69f5250389f143898f0069b372859f9b2de50a2c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:34.870478 containerd[1544]: time="2025-08-13T01:33:34.870382616Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mx5v9,Uid:9cb65184-4613-43ed-9fa1-0cf23f1e0e56,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"11a1381ecf174ded28cd33fb69f5250389f143898f0069b372859f9b2de50a2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:34.871427 kubelet[2739]: E0813 01:33:34.871382 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11a1381ecf174ded28cd33fb69f5250389f143898f0069b372859f9b2de50a2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:34.871485 kubelet[2739]: E0813 01:33:34.871445 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11a1381ecf174ded28cd33fb69f5250389f143898f0069b372859f9b2de50a2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mx5v9" Aug 13 01:33:34.871485 kubelet[2739]: E0813 01:33:34.871467 2739 kuberuntime_manager.go:1170] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"11a1381ecf174ded28cd33fb69f5250389f143898f0069b372859f9b2de50a2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mx5v9" Aug 13 01:33:34.871540 kubelet[2739]: E0813 01:33:34.871506 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-mx5v9_kube-system(9cb65184-4613-43ed-9fa1-0cf23f1e0e56)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-mx5v9_kube-system(9cb65184-4613-43ed-9fa1-0cf23f1e0e56)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"11a1381ecf174ded28cd33fb69f5250389f143898f0069b372859f9b2de50a2c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-mx5v9" podUID="9cb65184-4613-43ed-9fa1-0cf23f1e0e56" Aug 13 01:33:34.873740 systemd[1]: run-netns-cni\x2d5fdf2608\x2d978e\x2d8358\x2d3004\x2d51b6c24bef82.mount: Deactivated successfully. Aug 13 01:33:34.899585 kubelet[2739]: I0813 01:33:34.899543 2739 kubelet.go:2306] "Pod admission denied" podUID="5ca0b860-9847-4242-b95a-045af7010e11" pod="tigera-operator/tigera-operator-5bf8dfcb4-8256q" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:34.948734 kubelet[2739]: I0813 01:33:34.948683 2739 kubelet.go:2306] "Pod admission denied" podUID="7851749c-8b88-416e-8592-d6b188738df4" pod="tigera-operator/tigera-operator-5bf8dfcb4-x2q9r" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:33:35.053202 kubelet[2739]: I0813 01:33:35.052952 2739 kubelet.go:2306] "Pod admission denied" podUID="4072154e-0896-4355-bda5-2f185fe174f6" pod="tigera-operator/tigera-operator-5bf8dfcb4-bfpzs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:35.149028 kubelet[2739]: I0813 01:33:35.148975 2739 kubelet.go:2306] "Pod admission denied" podUID="8f40ed1b-4d62-4ac1-913c-3b1714fce5ce" pod="tigera-operator/tigera-operator-5bf8dfcb4-jndct" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:35.249569 kubelet[2739]: I0813 01:33:35.249380 2739 kubelet.go:2306] "Pod admission denied" podUID="d7298a27-0ef1-462e-9de6-bf408628f44d" pod="tigera-operator/tigera-operator-5bf8dfcb4-8kgf5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:35.352481 kubelet[2739]: I0813 01:33:35.351407 2739 kubelet.go:2306] "Pod admission denied" podUID="d57e28ff-444b-4795-94e3-505272ffcee5" pod="tigera-operator/tigera-operator-5bf8dfcb4-kr2zk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:35.452428 kubelet[2739]: I0813 01:33:35.452381 2739 kubelet.go:2306] "Pod admission denied" podUID="8f92105d-d49f-412c-97d9-d1fd062c125e" pod="tigera-operator/tigera-operator-5bf8dfcb4-4hwt6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:35.550742 kubelet[2739]: I0813 01:33:35.550699 2739 kubelet.go:2306] "Pod admission denied" podUID="7fee0622-abda-4834-bf33-5c78c26419ba" pod="tigera-operator/tigera-operator-5bf8dfcb4-ds5kt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:35.599342 kubelet[2739]: I0813 01:33:35.599164 2739 kubelet.go:2306] "Pod admission denied" podUID="987f53b4-b9c4-475b-aec1-5655a5f4a764" pod="tigera-operator/tigera-operator-5bf8dfcb4-jprkg" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:33:35.700610 kubelet[2739]: I0813 01:33:35.700492 2739 kubelet.go:2306] "Pod admission denied" podUID="dc63e63d-c33c-46fe-a2fa-aefeb17d39d5" pod="tigera-operator/tigera-operator-5bf8dfcb4-kx7k7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:35.901419 kubelet[2739]: I0813 01:33:35.901378 2739 kubelet.go:2306] "Pod admission denied" podUID="13bfb5ad-45ea-46ef-b0a5-9840c3bd8bca" pod="tigera-operator/tigera-operator-5bf8dfcb4-pj6pw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:35.999043 kubelet[2739]: I0813 01:33:35.999008 2739 kubelet.go:2306] "Pod admission denied" podUID="193a2008-3fe8-4469-a63b-c0e161520e77" pod="tigera-operator/tigera-operator-5bf8dfcb4-v5xsr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:36.104613 kubelet[2739]: I0813 01:33:36.104573 2739 kubelet.go:2306] "Pod admission denied" podUID="389c3ee0-2151-4d7d-9945-1e46c4b25a93" pod="tigera-operator/tigera-operator-5bf8dfcb4-2xmxk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:36.197958 kubelet[2739]: I0813 01:33:36.197908 2739 kubelet.go:2306] "Pod admission denied" podUID="7e9e865f-210b-42d5-8327-d393981c0a40" pod="tigera-operator/tigera-operator-5bf8dfcb4-vcgqp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:36.247883 kubelet[2739]: I0813 01:33:36.247854 2739 kubelet.go:2306] "Pod admission denied" podUID="c2b481d8-7503-4d22-9b32-007899943b60" pod="tigera-operator/tigera-operator-5bf8dfcb4-qfp67" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:36.352472 kubelet[2739]: I0813 01:33:36.352345 2739 kubelet.go:2306] "Pod admission denied" podUID="2f0d7eb7-e687-4fb7-86bc-cd8829eec8df" pod="tigera-operator/tigera-operator-5bf8dfcb4-qnx22" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:33:36.398466 kubelet[2739]: I0813 01:33:36.398439 2739 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:33:36.398607 kubelet[2739]: I0813 01:33:36.398588 2739 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:33:36.401148 containerd[1544]: time="2025-08-13T01:33:36.400959799Z" level=info msg="StopPodSandbox for \"668529f6fd73ada3cc9287699d05d5bba60cbe442981cb14c0b6de5f7d3dc4a0\"" Aug 13 01:33:36.402108 containerd[1544]: time="2025-08-13T01:33:36.401536489Z" level=info msg="TearDown network for sandbox \"668529f6fd73ada3cc9287699d05d5bba60cbe442981cb14c0b6de5f7d3dc4a0\" successfully" Aug 13 01:33:36.402108 containerd[1544]: time="2025-08-13T01:33:36.401551769Z" level=info msg="StopPodSandbox for \"668529f6fd73ada3cc9287699d05d5bba60cbe442981cb14c0b6de5f7d3dc4a0\" returns successfully" Aug 13 01:33:36.402419 containerd[1544]: time="2025-08-13T01:33:36.402378638Z" level=info msg="RemovePodSandbox for \"668529f6fd73ada3cc9287699d05d5bba60cbe442981cb14c0b6de5f7d3dc4a0\"" Aug 13 01:33:36.402419 containerd[1544]: time="2025-08-13T01:33:36.402400678Z" level=info msg="Forcibly stopping sandbox \"668529f6fd73ada3cc9287699d05d5bba60cbe442981cb14c0b6de5f7d3dc4a0\"" Aug 13 01:33:36.402466 containerd[1544]: time="2025-08-13T01:33:36.402450798Z" level=info msg="TearDown network for sandbox \"668529f6fd73ada3cc9287699d05d5bba60cbe442981cb14c0b6de5f7d3dc4a0\" successfully" Aug 13 01:33:36.403888 containerd[1544]: time="2025-08-13T01:33:36.403861047Z" level=info msg="Ensure that sandbox 668529f6fd73ada3cc9287699d05d5bba60cbe442981cb14c0b6de5f7d3dc4a0 in task-service has been cleanup successfully" Aug 13 01:33:36.406227 containerd[1544]: time="2025-08-13T01:33:36.406184804Z" level=info msg="RemovePodSandbox \"668529f6fd73ada3cc9287699d05d5bba60cbe442981cb14c0b6de5f7d3dc4a0\" returns successfully" Aug 13 01:33:36.407000 kubelet[2739]: I0813 01:33:36.406592 2739 image_gc_manager.go:431] 
"Attempting to delete unused images" Aug 13 01:33:36.414636 kubelet[2739]: I0813 01:33:36.414622 2739 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:33:36.414706 kubelet[2739]: I0813 01:33:36.414688 2739 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7c65d6cfc9-994jv","kube-system/coredns-7c65d6cfc9-mx5v9","calico-system/calico-kube-controllers-85fbc76f96-d5vf4","calico-system/calico-node-5c47r","calico-system/csi-node-driver-7bj49","calico-system/calico-typha-79464475b5-bbrtw","kube-system/kube-controller-manager-172-234-27-175","kube-system/kube-proxy-kfjpt","kube-system/kube-apiserver-172-234-27-175","kube-system/kube-scheduler-172-234-27-175"] Aug 13 01:33:36.414786 kubelet[2739]: E0813 01:33:36.414714 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-994jv" Aug 13 01:33:36.414786 kubelet[2739]: E0813 01:33:36.414723 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-mx5v9" Aug 13 01:33:36.414786 kubelet[2739]: E0813 01:33:36.414730 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" Aug 13 01:33:36.414786 kubelet[2739]: E0813 01:33:36.414736 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-5c47r" Aug 13 01:33:36.414786 kubelet[2739]: E0813 01:33:36.414742 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-7bj49" Aug 13 01:33:36.414786 kubelet[2739]: E0813 01:33:36.414753 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-79464475b5-bbrtw" Aug 13 01:33:36.414786 kubelet[2739]: E0813 01:33:36.414761 2739 eviction_manager.go:598] "Eviction manager: cannot 
evict a critical pod" pod="kube-system/kube-controller-manager-172-234-27-175" Aug 13 01:33:36.414786 kubelet[2739]: E0813 01:33:36.414769 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-kfjpt" Aug 13 01:33:36.414786 kubelet[2739]: E0813 01:33:36.414777 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-27-175" Aug 13 01:33:36.414786 kubelet[2739]: E0813 01:33:36.414784 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-27-175" Aug 13 01:33:36.414966 kubelet[2739]: I0813 01:33:36.414793 2739 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:33:36.447120 kubelet[2739]: I0813 01:33:36.447088 2739 kubelet.go:2306] "Pod admission denied" podUID="12d760a3-4e01-4be2-bbd5-c543775d892b" pod="tigera-operator/tigera-operator-5bf8dfcb4-v2mqz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:36.500588 kubelet[2739]: I0813 01:33:36.500528 2739 kubelet.go:2306] "Pod admission denied" podUID="7184e35e-9fc7-4630-814e-ebb2cba9eed7" pod="tigera-operator/tigera-operator-5bf8dfcb4-xhwj7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:36.609244 kubelet[2739]: I0813 01:33:36.608432 2739 kubelet.go:2306] "Pod admission denied" podUID="059a9247-f1bc-4783-9392-42d6b58ec472" pod="tigera-operator/tigera-operator-5bf8dfcb4-brfcv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:36.700122 kubelet[2739]: I0813 01:33:36.700055 2739 kubelet.go:2306] "Pod admission denied" podUID="dde178d0-6726-410f-adc6-e9fd5c9a6aea" pod="tigera-operator/tigera-operator-5bf8dfcb4-hqckb" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:33:36.749423 kubelet[2739]: I0813 01:33:36.749343 2739 kubelet.go:2306] "Pod admission denied" podUID="2956054c-061e-4da2-b414-a9fc18161c4c" pod="tigera-operator/tigera-operator-5bf8dfcb4-wq55g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:36.849851 kubelet[2739]: I0813 01:33:36.849808 2739 kubelet.go:2306] "Pod admission denied" podUID="23b46fb9-c3b4-401f-bb96-749659b905d1" pod="tigera-operator/tigera-operator-5bf8dfcb4-5pkph" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:36.950610 kubelet[2739]: I0813 01:33:36.950502 2739 kubelet.go:2306] "Pod admission denied" podUID="c6483d46-13cf-4388-9ca7-6d9f41643509" pod="tigera-operator/tigera-operator-5bf8dfcb4-jcmfh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:37.051977 kubelet[2739]: I0813 01:33:37.051897 2739 kubelet.go:2306] "Pod admission denied" podUID="32c7c70c-fafb-46de-9764-4a1c6cfab9db" pod="tigera-operator/tigera-operator-5bf8dfcb4-wdc4w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:37.153206 kubelet[2739]: I0813 01:33:37.152113 2739 kubelet.go:2306] "Pod admission denied" podUID="d943e585-c982-477f-85be-4139f6a081b0" pod="tigera-operator/tigera-operator-5bf8dfcb4-6l799" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:37.250258 kubelet[2739]: I0813 01:33:37.250211 2739 kubelet.go:2306] "Pod admission denied" podUID="c4bc4d1e-fead-406e-a89f-6367d644d4d9" pod="tigera-operator/tigera-operator-5bf8dfcb4-rvc4h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:37.350207 kubelet[2739]: I0813 01:33:37.350061 2739 kubelet.go:2306] "Pod admission denied" podUID="bbbcf809-e926-469a-b5fd-c9fa4af79613" pod="tigera-operator/tigera-operator-5bf8dfcb4-fkfq8" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:33:37.447087 kubelet[2739]: I0813 01:33:37.447047 2739 kubelet.go:2306] "Pod admission denied" podUID="c755ed83-11b4-4249-99e8-d91b89b6f011" pod="tigera-operator/tigera-operator-5bf8dfcb4-n5nqf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:37.648872 kubelet[2739]: I0813 01:33:37.648507 2739 kubelet.go:2306] "Pod admission denied" podUID="c17f4478-0cf3-42ce-9b54-2971a51316dc" pod="tigera-operator/tigera-operator-5bf8dfcb4-9n6mr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:37.749428 kubelet[2739]: I0813 01:33:37.749382 2739 kubelet.go:2306] "Pod admission denied" podUID="e1baa8ba-ee77-4590-b814-1409c9f4b780" pod="tigera-operator/tigera-operator-5bf8dfcb4-tkwvt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:37.801813 kubelet[2739]: E0813 01:33:37.801645 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\"\"" pod="calico-system/calico-node-5c47r" podUID="7d03562e-9842-4425-9847-632615391bfb" Aug 13 01:33:37.849660 kubelet[2739]: I0813 01:33:37.849625 2739 kubelet.go:2306] "Pod admission denied" podUID="9cf6ed80-2a52-4303-a380-8ff29f0cad72" pod="tigera-operator/tigera-operator-5bf8dfcb4-zxllh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:38.049745 kubelet[2739]: I0813 01:33:38.049695 2739 kubelet.go:2306] "Pod admission denied" podUID="474d9011-a575-46a8-ac34-354f7d1cfde9" pod="tigera-operator/tigera-operator-5bf8dfcb4-jt9v7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:38.148629 kubelet[2739]: I0813 01:33:38.148559 2739 kubelet.go:2306] "Pod admission denied" podUID="1fea0e52-fdb3-471b-9b6d-54b7270a4290" pod="tigera-operator/tigera-operator-5bf8dfcb4-2ddmg" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:33:38.251495 kubelet[2739]: I0813 01:33:38.251452 2739 kubelet.go:2306] "Pod admission denied" podUID="c81de273-0ece-49c7-bd5a-70a210321670" pod="tigera-operator/tigera-operator-5bf8dfcb4-89v5f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:38.349797 kubelet[2739]: I0813 01:33:38.349707 2739 kubelet.go:2306] "Pod admission denied" podUID="ab89d750-ce1f-494f-8c84-62343caea774" pod="tigera-operator/tigera-operator-5bf8dfcb4-qj4fz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:38.396756 kubelet[2739]: I0813 01:33:38.396720 2739 kubelet.go:2306] "Pod admission denied" podUID="db5e7775-bbfb-41f0-89f8-765c307e559f" pod="tigera-operator/tigera-operator-5bf8dfcb4-vcx9m" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:38.501385 kubelet[2739]: I0813 01:33:38.501345 2739 kubelet.go:2306] "Pod admission denied" podUID="3ff20669-1928-43a7-93cc-035d1d0c75df" pod="tigera-operator/tigera-operator-5bf8dfcb4-twgtl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:38.699791 kubelet[2739]: I0813 01:33:38.699444 2739 kubelet.go:2306] "Pod admission denied" podUID="0e5970fa-4518-44e2-b9e6-ff24cf864e13" pod="tigera-operator/tigera-operator-5bf8dfcb4-slwgk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:38.803048 kubelet[2739]: I0813 01:33:38.803014 2739 kubelet.go:2306] "Pod admission denied" podUID="32e1c199-81db-495e-af34-9da4d876534b" pod="tigera-operator/tigera-operator-5bf8dfcb4-fh5nc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:38.847570 kubelet[2739]: I0813 01:33:38.847509 2739 kubelet.go:2306] "Pod admission denied" podUID="3c992767-448e-4f13-958d-e7af35211599" pod="tigera-operator/tigera-operator-5bf8dfcb4-pzhrt" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:33:38.949576 kubelet[2739]: I0813 01:33:38.949540 2739 kubelet.go:2306] "Pod admission denied" podUID="99e1ad1a-db05-4b0c-97ed-2f2284a77a18" pod="tigera-operator/tigera-operator-5bf8dfcb4-6tv6s" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:39.051602 kubelet[2739]: I0813 01:33:39.051557 2739 kubelet.go:2306] "Pod admission denied" podUID="f9237b17-e512-40ed-851c-8ac5e247bf63" pod="tigera-operator/tigera-operator-5bf8dfcb4-wxfsh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:39.152021 kubelet[2739]: I0813 01:33:39.151991 2739 kubelet.go:2306] "Pod admission denied" podUID="68de4532-c40a-4c68-91ee-732e6bf48cd7" pod="tigera-operator/tigera-operator-5bf8dfcb4-ktctw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:39.350315 kubelet[2739]: I0813 01:33:39.350198 2739 kubelet.go:2306] "Pod admission denied" podUID="23b8111c-0bc7-431e-b7a5-431347eb148c" pod="tigera-operator/tigera-operator-5bf8dfcb4-bxt7f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:39.450542 kubelet[2739]: I0813 01:33:39.450448 2739 kubelet.go:2306] "Pod admission denied" podUID="c4cdd2f3-a526-47dd-b88b-b2b5f91a9941" pod="tigera-operator/tigera-operator-5bf8dfcb4-6fgcs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:39.552588 kubelet[2739]: I0813 01:33:39.552533 2739 kubelet.go:2306] "Pod admission denied" podUID="c80228a9-d6b8-40d4-9e5b-4fe9457c6d8e" pod="tigera-operator/tigera-operator-5bf8dfcb4-bkws8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:39.749974 kubelet[2739]: I0813 01:33:39.749930 2739 kubelet.go:2306] "Pod admission denied" podUID="a9e0de14-3d19-4a8d-8e17-a282260e9932" pod="tigera-operator/tigera-operator-5bf8dfcb4-dwds4" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:33:39.853064 kubelet[2739]: I0813 01:33:39.853016 2739 kubelet.go:2306] "Pod admission denied" podUID="01dced75-d84e-4d86-b285-0d51b1e6651f" pod="tigera-operator/tigera-operator-5bf8dfcb4-6nxr7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:39.899499 kubelet[2739]: I0813 01:33:39.899465 2739 kubelet.go:2306] "Pod admission denied" podUID="0137d7b2-8183-450f-8041-b8c9b53c19d5" pod="tigera-operator/tigera-operator-5bf8dfcb4-kfw8c" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:39.999687 kubelet[2739]: I0813 01:33:39.999626 2739 kubelet.go:2306] "Pod admission denied" podUID="c235d61d-249c-4386-9a34-82985b1965b1" pod="tigera-operator/tigera-operator-5bf8dfcb4-vbjbp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:40.101264 kubelet[2739]: I0813 01:33:40.101125 2739 kubelet.go:2306] "Pod admission denied" podUID="1c332a90-e349-4d9a-87c2-657e55eedcb7" pod="tigera-operator/tigera-operator-5bf8dfcb4-4j7xw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:40.200984 kubelet[2739]: I0813 01:33:40.200950 2739 kubelet.go:2306] "Pod admission denied" podUID="7f6824ea-63ee-45be-93b7-cd2124d1556a" pod="tigera-operator/tigera-operator-5bf8dfcb4-7bgcm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:40.301189 kubelet[2739]: I0813 01:33:40.301147 2739 kubelet.go:2306] "Pod admission denied" podUID="b65c4956-f8af-4270-ae55-5b570bf9a447" pod="tigera-operator/tigera-operator-5bf8dfcb4-m5dx9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:40.400498 kubelet[2739]: I0813 01:33:40.400375 2739 kubelet.go:2306] "Pod admission denied" podUID="c1fde52e-0934-4348-bf04-29c9f428c9dc" pod="tigera-operator/tigera-operator-5bf8dfcb4-qmmr6" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:33:40.499046 kubelet[2739]: I0813 01:33:40.499010 2739 kubelet.go:2306] "Pod admission denied" podUID="88f18961-a1e3-440d-a1a6-a2575e3f7bf3" pod="tigera-operator/tigera-operator-5bf8dfcb4-m5jt2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:40.597944 kubelet[2739]: I0813 01:33:40.597893 2739 kubelet.go:2306] "Pod admission denied" podUID="ef69ff16-4a79-48bf-97fd-d89800316d2a" pod="tigera-operator/tigera-operator-5bf8dfcb4-h2p89" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:40.700117 kubelet[2739]: I0813 01:33:40.699981 2739 kubelet.go:2306] "Pod admission denied" podUID="351f839f-e462-408f-b8df-d31e04e19332" pod="tigera-operator/tigera-operator-5bf8dfcb4-4pgzd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:40.748493 kubelet[2739]: I0813 01:33:40.748338 2739 kubelet.go:2306] "Pod admission denied" podUID="9348f888-1a40-4032-ac3d-f2cfb9fde2a7" pod="tigera-operator/tigera-operator-5bf8dfcb4-p6qfb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:40.848053 kubelet[2739]: I0813 01:33:40.848018 2739 kubelet.go:2306] "Pod admission denied" podUID="618f8d2d-8f1b-4eb4-9750-97a0021f16ee" pod="tigera-operator/tigera-operator-5bf8dfcb4-854n6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:40.948328 kubelet[2739]: I0813 01:33:40.948285 2739 kubelet.go:2306] "Pod admission denied" podUID="453fce2c-5656-4290-adda-d648bf7d1878" pod="tigera-operator/tigera-operator-5bf8dfcb4-kzs99" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:41.048750 kubelet[2739]: I0813 01:33:41.048713 2739 kubelet.go:2306] "Pod admission denied" podUID="a8c82037-34b5-4788-a6b8-a54713781856" pod="tigera-operator/tigera-operator-5bf8dfcb4-gxppw" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:33:41.152869 kubelet[2739]: I0813 01:33:41.152825 2739 kubelet.go:2306] "Pod admission denied" podUID="38b7b27b-3b81-4355-b0cf-318b1a092275" pod="tigera-operator/tigera-operator-5bf8dfcb4-fz5n2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:41.251256 kubelet[2739]: I0813 01:33:41.251194 2739 kubelet.go:2306] "Pod admission denied" podUID="3b44562b-d790-4df6-b013-a2b69ccce536" pod="tigera-operator/tigera-operator-5bf8dfcb4-2c4ct" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:41.352830 kubelet[2739]: I0813 01:33:41.352709 2739 kubelet.go:2306] "Pod admission denied" podUID="3ba8dc9e-4ccf-4d67-a5f7-c1fb56baeae2" pod="tigera-operator/tigera-operator-5bf8dfcb4-hrpzg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:41.400662 kubelet[2739]: I0813 01:33:41.400631 2739 kubelet.go:2306] "Pod admission denied" podUID="ec45701d-7c8a-4c62-879e-d21f33ed6b1a" pod="tigera-operator/tigera-operator-5bf8dfcb4-9698l" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:41.502296 kubelet[2739]: I0813 01:33:41.502256 2739 kubelet.go:2306] "Pod admission denied" podUID="34dda8a9-125d-49e3-b5dd-83f7388917b0" pod="tigera-operator/tigera-operator-5bf8dfcb4-b9bms" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:41.599874 kubelet[2739]: I0813 01:33:41.599640 2739 kubelet.go:2306] "Pod admission denied" podUID="78c5248e-6bb3-46ac-9432-2a9e7fbf608e" pod="tigera-operator/tigera-operator-5bf8dfcb4-lwtrx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:41.700944 kubelet[2739]: I0813 01:33:41.700848 2739 kubelet.go:2306] "Pod admission denied" podUID="dbc0e0f6-56c3-4faa-a320-eb0ac59c1137" pod="tigera-operator/tigera-operator-5bf8dfcb4-rfwn6" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:33:41.802521 kubelet[2739]: I0813 01:33:41.802299 2739 kubelet.go:2306] "Pod admission denied" podUID="3379ef10-b2b2-4013-b2c4-d8c62f0cbbcd" pod="tigera-operator/tigera-operator-5bf8dfcb4-2zghq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:41.846671 kubelet[2739]: I0813 01:33:41.846646 2739 kubelet.go:2306] "Pod admission denied" podUID="b8a358d6-5dc4-40a1-8693-b5ea350eeea3" pod="tigera-operator/tigera-operator-5bf8dfcb4-9dtjj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:41.949064 kubelet[2739]: I0813 01:33:41.949023 2739 kubelet.go:2306] "Pod admission denied" podUID="123dea6d-3166-4e01-9a7b-4ad18fed37f4" pod="tigera-operator/tigera-operator-5bf8dfcb4-5djhg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:42.051544 kubelet[2739]: I0813 01:33:42.051510 2739 kubelet.go:2306] "Pod admission denied" podUID="9d3ffdc3-0094-4074-9b61-760a9ecf9718" pod="tigera-operator/tigera-operator-5bf8dfcb4-bjqzq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:42.151666 kubelet[2739]: I0813 01:33:42.151627 2739 kubelet.go:2306] "Pod admission denied" podUID="27b0ed76-ee52-4f66-bc94-d20865268243" pod="tigera-operator/tigera-operator-5bf8dfcb4-n4d9b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:42.250335 kubelet[2739]: I0813 01:33:42.250285 2739 kubelet.go:2306] "Pod admission denied" podUID="51e0a852-95a3-49b3-8f80-a68201768320" pod="tigera-operator/tigera-operator-5bf8dfcb4-nnbcr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:42.351370 kubelet[2739]: I0813 01:33:42.351245 2739 kubelet.go:2306] "Pod admission denied" podUID="2b18aa13-f9b4-463b-95ba-09e2916e0029" pod="tigera-operator/tigera-operator-5bf8dfcb4-7864f" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:33:42.451773 kubelet[2739]: I0813 01:33:42.451731 2739 kubelet.go:2306] "Pod admission denied" podUID="8bce16b9-e120-42f0-ade6-bd5eae0960ab" pod="tigera-operator/tigera-operator-5bf8dfcb4-l7p9h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:42.503360 kubelet[2739]: I0813 01:33:42.503321 2739 kubelet.go:2306] "Pod admission denied" podUID="898a5dc8-08c0-4337-995b-5ada312a39d7" pod="tigera-operator/tigera-operator-5bf8dfcb4-dstbf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:42.607032 kubelet[2739]: I0813 01:33:42.605842 2739 kubelet.go:2306] "Pod admission denied" podUID="5aa79b71-7589-45ce-bde8-f5b322fd3c67" pod="tigera-operator/tigera-operator-5bf8dfcb4-lqd4k" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:42.704357 kubelet[2739]: I0813 01:33:42.704305 2739 kubelet.go:2306] "Pod admission denied" podUID="ff1d5548-211d-4618-9cde-d1dce9c9bc0b" pod="tigera-operator/tigera-operator-5bf8dfcb4-zwwmp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:42.800298 kubelet[2739]: I0813 01:33:42.800260 2739 kubelet.go:2306] "Pod admission denied" podUID="516cc572-dded-438b-a462-775c61480ed4" pod="tigera-operator/tigera-operator-5bf8dfcb4-87h4m" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:43.002479 kubelet[2739]: I0813 01:33:43.002439 2739 kubelet.go:2306] "Pod admission denied" podUID="093a9e10-5c47-4ecb-99d8-094ea5b03e11" pod="tigera-operator/tigera-operator-5bf8dfcb4-mlfjm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:43.099108 kubelet[2739]: I0813 01:33:43.098771 2739 kubelet.go:2306] "Pod admission denied" podUID="2d265e2a-25c4-4ce3-86de-0d8e129c53d9" pod="tigera-operator/tigera-operator-5bf8dfcb4-htzr4" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:33:43.203944 kubelet[2739]: I0813 01:33:43.203892 2739 kubelet.go:2306] "Pod admission denied" podUID="f990c26f-8123-415b-b563-5fc160cc54e9" pod="tigera-operator/tigera-operator-5bf8dfcb4-pgbb5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:43.303333 kubelet[2739]: I0813 01:33:43.303214 2739 kubelet.go:2306] "Pod admission denied" podUID="cb024de6-2ac9-4453-be8c-d8ea28bac121" pod="tigera-operator/tigera-operator-5bf8dfcb4-6rld5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:43.351836 kubelet[2739]: I0813 01:33:43.351255 2739 kubelet.go:2306] "Pod admission denied" podUID="f26d3d75-845f-412e-be0a-de783192cf02" pod="tigera-operator/tigera-operator-5bf8dfcb4-x7trb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:43.453928 kubelet[2739]: I0813 01:33:43.453877 2739 kubelet.go:2306] "Pod admission denied" podUID="1b2294b8-2003-431b-9ba9-b1aa6c0cb8be" pod="tigera-operator/tigera-operator-5bf8dfcb4-k8r68" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:43.550708 kubelet[2739]: I0813 01:33:43.550657 2739 kubelet.go:2306] "Pod admission denied" podUID="45cb5130-9236-429a-9aa2-8a3e40832e1a" pod="tigera-operator/tigera-operator-5bf8dfcb4-nnrzd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:43.650172 kubelet[2739]: I0813 01:33:43.649919 2739 kubelet.go:2306] "Pod admission denied" podUID="7516563a-667a-431f-ad0e-b025bdad9faa" pod="tigera-operator/tigera-operator-5bf8dfcb4-jlj8t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:43.750040 kubelet[2739]: I0813 01:33:43.750008 2739 kubelet.go:2306] "Pod admission denied" podUID="46e7a937-e735-4afa-a312-c88e27410664" pod="tigera-operator/tigera-operator-5bf8dfcb4-gjwpv" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:33:43.798154 containerd[1544]: time="2025-08-13T01:33:43.798095279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85fbc76f96-d5vf4,Uid:56caad57-6b4a-4069-b011-1059db183012,Namespace:calico-system,Attempt:0,}" Aug 13 01:33:43.854523 kubelet[2739]: I0813 01:33:43.854471 2739 kubelet.go:2306] "Pod admission denied" podUID="c8a35a19-e9c8-4f22-8984-0f82f12f7214" pod="tigera-operator/tigera-operator-5bf8dfcb4-fvbkd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:43.854688 containerd[1544]: time="2025-08-13T01:33:43.854600434Z" level=error msg="Failed to destroy network for sandbox \"b5333f9375b58a1999720d0dc0cfd8527348c73ae50c2098a8040f08a9fc1ac7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:43.858877 systemd[1]: run-netns-cni\x2db7f1e940\x2d623d\x2d18f1\x2d1e76\x2db44576629e7e.mount: Deactivated successfully. 
Aug 13 01:33:43.862082 containerd[1544]: time="2025-08-13T01:33:43.862018970Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85fbc76f96-d5vf4,Uid:56caad57-6b4a-4069-b011-1059db183012,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5333f9375b58a1999720d0dc0cfd8527348c73ae50c2098a8040f08a9fc1ac7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:33:43.863328 kubelet[2739]: E0813 01:33:43.863291 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5333f9375b58a1999720d0dc0cfd8527348c73ae50c2098a8040f08a9fc1ac7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:33:43.863399 kubelet[2739]: E0813 01:33:43.863343 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5333f9375b58a1999720d0dc0cfd8527348c73ae50c2098a8040f08a9fc1ac7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4"
Aug 13 01:33:43.863399 kubelet[2739]: E0813 01:33:43.863363 2739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b5333f9375b58a1999720d0dc0cfd8527348c73ae50c2098a8040f08a9fc1ac7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4"
Aug 13 01:33:43.863496 kubelet[2739]: E0813 01:33:43.863396 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-85fbc76f96-d5vf4_calico-system(56caad57-6b4a-4069-b011-1059db183012)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85fbc76f96-d5vf4_calico-system(56caad57-6b4a-4069-b011-1059db183012)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b5333f9375b58a1999720d0dc0cfd8527348c73ae50c2098a8040f08a9fc1ac7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" podUID="56caad57-6b4a-4069-b011-1059db183012"
Aug 13 01:33:43.950606 kubelet[2739]: I0813 01:33:43.950493 2739 kubelet.go:2306] "Pod admission denied" podUID="2282c13f-2a98-4eb5-ae94-2bb1f5a00b55" pod="tigera-operator/tigera-operator-5bf8dfcb4-pp6b5" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:44.050735 kubelet[2739]: I0813 01:33:44.050687 2739 kubelet.go:2306] "Pod admission denied" podUID="2079d1ad-9b50-4169-b400-d8d3cc599b59" pod="tigera-operator/tigera-operator-5bf8dfcb4-l88fn" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:44.250561 kubelet[2739]: I0813 01:33:44.250509 2739 kubelet.go:2306] "Pod admission denied" podUID="d0b702e8-3307-48df-986d-0b6c198a7f7e" pod="tigera-operator/tigera-operator-5bf8dfcb4-5jbln" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:44.354757 kubelet[2739]: I0813 01:33:44.354706 2739 kubelet.go:2306] "Pod admission denied" podUID="611d9e3d-660a-4b85-be15-30d327c8c8a4" pod="tigera-operator/tigera-operator-5bf8dfcb4-52jnc" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:44.455556 kubelet[2739]: I0813 01:33:44.455281 2739 kubelet.go:2306] "Pod admission denied" podUID="c15262d3-267a-4aa6-a8b5-ef9cdcbfd521" pod="tigera-operator/tigera-operator-5bf8dfcb4-98qjg" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:44.558236 kubelet[2739]: I0813 01:33:44.557071 2739 kubelet.go:2306] "Pod admission denied" podUID="1463b71d-646b-4801-926c-b813cd49a1f4" pod="tigera-operator/tigera-operator-5bf8dfcb4-5cgwz" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:44.650242 kubelet[2739]: I0813 01:33:44.650195 2739 kubelet.go:2306] "Pod admission denied" podUID="0fb6c4c0-5316-4129-8f5d-1c5a929a0518" pod="tigera-operator/tigera-operator-5bf8dfcb4-wtw5k" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:44.749224 kubelet[2739]: I0813 01:33:44.749171 2739 kubelet.go:2306] "Pod admission denied" podUID="3b1c98a1-b5a4-4a33-99d4-fcaeb9c186d6" pod="tigera-operator/tigera-operator-5bf8dfcb4-4zjcv" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:44.851187 kubelet[2739]: I0813 01:33:44.850613 2739 kubelet.go:2306] "Pod admission denied" podUID="55694b1c-c3bf-4faf-935d-ce10f85cc3ca" pod="tigera-operator/tigera-operator-5bf8dfcb4-6mpvm" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:44.949468 kubelet[2739]: I0813 01:33:44.949439 2739 kubelet.go:2306] "Pod admission denied" podUID="0d1576c8-60ac-460b-9ffc-d56686a7c44f" pod="tigera-operator/tigera-operator-5bf8dfcb4-bb82p" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:45.049708 kubelet[2739]: I0813 01:33:45.049663 2739 kubelet.go:2306] "Pod admission denied" podUID="630ca9ae-6ad2-454e-9ea7-ec8d8585b89d" pod="tigera-operator/tigera-operator-5bf8dfcb4-7hnwv" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:45.153363 kubelet[2739]: I0813 01:33:45.152894 2739 kubelet.go:2306] "Pod admission denied" podUID="945d2b8f-2a13-4f0e-9936-4382fc46a8f5" pod="tigera-operator/tigera-operator-5bf8dfcb4-lj455" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:45.248552 kubelet[2739]: I0813 01:33:45.248507 2739 kubelet.go:2306] "Pod admission denied" podUID="af473e37-2f5d-4aa2-b67b-337e7a170ed3" pod="tigera-operator/tigera-operator-5bf8dfcb4-sp8bn" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:45.454510 kubelet[2739]: I0813 01:33:45.454182 2739 kubelet.go:2306] "Pod admission denied" podUID="d718593d-b660-4777-b76f-ed28c427c1ad" pod="tigera-operator/tigera-operator-5bf8dfcb4-tj65j" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:45.549480 kubelet[2739]: I0813 01:33:45.549445 2739 kubelet.go:2306] "Pod admission denied" podUID="82f93e6e-245f-404e-8dd8-c6b3a6e4ee30" pod="tigera-operator/tigera-operator-5bf8dfcb4-pspnl" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:45.597630 kubelet[2739]: I0813 01:33:45.597604 2739 kubelet.go:2306] "Pod admission denied" podUID="90256d8a-f71b-4607-ab8f-265346988885" pod="tigera-operator/tigera-operator-5bf8dfcb4-9jjpx" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:45.700740 kubelet[2739]: I0813 01:33:45.700699 2739 kubelet.go:2306] "Pod admission denied" podUID="4bf19d47-667e-4cc3-b64a-0f21ea862f1e" pod="tigera-operator/tigera-operator-5bf8dfcb4-cqtmd" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:45.798579 kubelet[2739]: I0813 01:33:45.798550 2739 kubelet.go:2306] "Pod admission denied" podUID="5fcea9ca-16f1-475a-a00b-4e0a78c3708a" pod="tigera-operator/tigera-operator-5bf8dfcb4-hgs24" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:45.905518 kubelet[2739]: I0813 01:33:45.905428 2739 kubelet.go:2306] "Pod admission denied" podUID="d0645d4f-6a57-4b59-b984-3ef3cd577ad9" pod="tigera-operator/tigera-operator-5bf8dfcb4-m6vtq" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:46.001895 kubelet[2739]: I0813 01:33:46.001498 2739 kubelet.go:2306] "Pod admission denied" podUID="dcdab9a3-3955-45d9-8559-e4bfc02c574d" pod="tigera-operator/tigera-operator-5bf8dfcb4-tgj5s" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:46.106286 kubelet[2739]: I0813 01:33:46.105435 2739 kubelet.go:2306] "Pod admission denied" podUID="a9dea1af-2f9c-485e-9775-3ddc89793649" pod="tigera-operator/tigera-operator-5bf8dfcb4-92lpl" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:46.201766 kubelet[2739]: I0813 01:33:46.201726 2739 kubelet.go:2306] "Pod admission denied" podUID="8afb4444-10e9-4fa7-bddd-6f805bcc4808" pod="tigera-operator/tigera-operator-5bf8dfcb4-w8kv9" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:46.301312 kubelet[2739]: I0813 01:33:46.301264 2739 kubelet.go:2306] "Pod admission denied" podUID="0f2ea829-514b-4136-b174-11cba70492b2" pod="tigera-operator/tigera-operator-5bf8dfcb4-2qrn8" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:46.400778 kubelet[2739]: I0813 01:33:46.399967 2739 kubelet.go:2306] "Pod admission denied" podUID="bed63e9a-eef3-4d79-b6f0-f209751ef6db" pod="tigera-operator/tigera-operator-5bf8dfcb4-hmrpd" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:46.431644 kubelet[2739]: I0813 01:33:46.430849 2739 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:33:46.431644 kubelet[2739]: I0813 01:33:46.430879 2739 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:33:46.432460 kubelet[2739]: I0813 01:33:46.432439 2739 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:33:46.444873 kubelet[2739]: I0813 01:33:46.444845 2739 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:33:46.445349 kubelet[2739]: I0813 01:33:46.445217 2739 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-85fbc76f96-d5vf4","kube-system/coredns-7c65d6cfc9-mx5v9","kube-system/coredns-7c65d6cfc9-994jv","calico-system/csi-node-driver-7bj49","calico-system/calico-node-5c47r","calico-system/calico-typha-79464475b5-bbrtw","kube-system/kube-controller-manager-172-234-27-175","kube-system/kube-proxy-kfjpt","kube-system/kube-apiserver-172-234-27-175","kube-system/kube-scheduler-172-234-27-175"]
Aug 13 01:33:46.445349 kubelet[2739]: E0813 01:33:46.445247 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4"
Aug 13 01:33:46.445349 kubelet[2739]: E0813 01:33:46.445257 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-mx5v9"
Aug 13 01:33:46.445349 kubelet[2739]: E0813 01:33:46.445264 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-994jv"
Aug 13 01:33:46.445349 kubelet[2739]: E0813 01:33:46.445270 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-7bj49"
Aug 13 01:33:46.445349 kubelet[2739]: E0813 01:33:46.445277 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-5c47r"
Aug 13 01:33:46.445349 kubelet[2739]: E0813 01:33:46.445287 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-79464475b5-bbrtw"
Aug 13 01:33:46.445349 kubelet[2739]: E0813 01:33:46.445294 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-27-175"
Aug 13 01:33:46.445349 kubelet[2739]: E0813 01:33:46.445303 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-kfjpt"
Aug 13 01:33:46.445349 kubelet[2739]: E0813 01:33:46.445310 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-27-175"
Aug 13 01:33:46.445349 kubelet[2739]: E0813 01:33:46.445318 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-27-175"
Aug 13 01:33:46.445349 kubelet[2739]: I0813 01:33:46.445326 2739 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:33:46.502571 kubelet[2739]: I0813 01:33:46.502543 2739 kubelet.go:2306] "Pod admission denied" podUID="118641f4-f121-465a-b8f6-188177babe3f" pod="tigera-operator/tigera-operator-5bf8dfcb4-vgcjr" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:46.601111 kubelet[2739]: I0813 01:33:46.601061 2739 kubelet.go:2306] "Pod admission denied" podUID="42731f27-bd22-43eb-bc52-e57633fd1db3" pod="tigera-operator/tigera-operator-5bf8dfcb4-5pdgv" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:46.650116 kubelet[2739]: I0813 01:33:46.650066 2739 kubelet.go:2306] "Pod admission denied" podUID="5b280b4c-6082-4f15-aa24-19b55ab2e263" pod="tigera-operator/tigera-operator-5bf8dfcb4-fkp8s" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:46.751836 kubelet[2739]: I0813 01:33:46.751789 2739 kubelet.go:2306] "Pod admission denied" podUID="58fdda87-df73-4759-ab80-d879e0c33419" pod="tigera-operator/tigera-operator-5bf8dfcb4-849ll" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:46.800152 containerd[1544]: time="2025-08-13T01:33:46.800036344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7bj49,Uid:4e5845c9-626c-4c83-900a-0da0bae2daed,Namespace:calico-system,Attempt:0,}"
Aug 13 01:33:46.870336 kubelet[2739]: I0813 01:33:46.870286 2739 kubelet.go:2306] "Pod admission denied" podUID="314e4e21-ed22-43d8-941e-83e5fa88f942" pod="tigera-operator/tigera-operator-5bf8dfcb4-zd2n8" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:46.873175 containerd[1544]: time="2025-08-13T01:33:46.873081603Z" level=error msg="Failed to destroy network for sandbox \"227790a756ff66a563ebdaba0ce304223daaa23dea6be50ed5b35425d38fa73c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:33:46.875735 systemd[1]: run-netns-cni\x2dc57dbb18\x2d29e7\x2d670e\x2d39cc\x2da3e8a7d4a1e0.mount: Deactivated successfully.
Aug 13 01:33:46.879607 containerd[1544]: time="2025-08-13T01:33:46.879517381Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7bj49,Uid:4e5845c9-626c-4c83-900a-0da0bae2daed,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"227790a756ff66a563ebdaba0ce304223daaa23dea6be50ed5b35425d38fa73c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:33:46.880089 kubelet[2739]: E0813 01:33:46.879843 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"227790a756ff66a563ebdaba0ce304223daaa23dea6be50ed5b35425d38fa73c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:33:46.880225 kubelet[2739]: E0813 01:33:46.880110 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"227790a756ff66a563ebdaba0ce304223daaa23dea6be50ed5b35425d38fa73c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7bj49"
Aug 13 01:33:46.880225 kubelet[2739]: E0813 01:33:46.880183 2739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"227790a756ff66a563ebdaba0ce304223daaa23dea6be50ed5b35425d38fa73c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7bj49"
Aug 13 01:33:46.882690 kubelet[2739]: E0813 01:33:46.882580 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7bj49_calico-system(4e5845c9-626c-4c83-900a-0da0bae2daed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7bj49_calico-system(4e5845c9-626c-4c83-900a-0da0bae2daed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"227790a756ff66a563ebdaba0ce304223daaa23dea6be50ed5b35425d38fa73c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7bj49" podUID="4e5845c9-626c-4c83-900a-0da0bae2daed" Aug 13 01:33:46.916015 kubelet[2739]: I0813 01:33:46.915727 2739 kubelet.go:2306] "Pod admission denied" podUID="eeb7fe3c-bf0e-45b1-b095-1f0c0a3f0a69" pod="tigera-operator/tigera-operator-5bf8dfcb4-vmcc5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:47.005561 kubelet[2739]: I0813 01:33:47.005399 2739 kubelet.go:2306] "Pod admission denied" podUID="84c260ab-fd8d-4139-b078-6c20202d1301" pod="tigera-operator/tigera-operator-5bf8dfcb4-26sc6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:47.209960 kubelet[2739]: I0813 01:33:47.209901 2739 kubelet.go:2306] "Pod admission denied" podUID="53d77d24-d6c8-4c2d-b839-05f842ec14c2" pod="tigera-operator/tigera-operator-5bf8dfcb4-b4bl8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:47.300853 kubelet[2739]: I0813 01:33:47.300721 2739 kubelet.go:2306] "Pod admission denied" podUID="ab4bd10b-38bd-42b4-a497-aa2eb1a6972b" pod="tigera-operator/tigera-operator-5bf8dfcb4-45z8l" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:33:47.405242 kubelet[2739]: I0813 01:33:47.405181 2739 kubelet.go:2306] "Pod admission denied" podUID="2f15b048-e9b2-4697-a8a4-6b4d85f865b7" pod="tigera-operator/tigera-operator-5bf8dfcb4-846d2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:47.501751 kubelet[2739]: I0813 01:33:47.501701 2739 kubelet.go:2306] "Pod admission denied" podUID="bca2a2a5-87d6-43c0-9a38-8271f5b0cde2" pod="tigera-operator/tigera-operator-5bf8dfcb4-hgngw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:47.604568 kubelet[2739]: I0813 01:33:47.604384 2739 kubelet.go:2306] "Pod admission denied" podUID="7f836e2f-cbd3-48f5-9c4f-d31878a86ebd" pod="tigera-operator/tigera-operator-5bf8dfcb4-8z786" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:47.704461 kubelet[2739]: I0813 01:33:47.704415 2739 kubelet.go:2306] "Pod admission denied" podUID="01d16abf-9191-4e27-a9f6-dd667838729f" pod="tigera-operator/tigera-operator-5bf8dfcb4-cnwvb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:47.757071 kubelet[2739]: I0813 01:33:47.756362 2739 kubelet.go:2306] "Pod admission denied" podUID="0a27348c-cb00-4240-9ed3-8f82e8a26b87" pod="tigera-operator/tigera-operator-5bf8dfcb4-qmpqp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:47.855653 kubelet[2739]: I0813 01:33:47.855333 2739 kubelet.go:2306] "Pod admission denied" podUID="b9cdb864-f855-4022-86e5-1f44a8313639" pod="tigera-operator/tigera-operator-5bf8dfcb4-qcjjv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:47.954156 kubelet[2739]: I0813 01:33:47.954083 2739 kubelet.go:2306] "Pod admission denied" podUID="2a9eac63-5790-4c51-a79c-8aba2fa3236b" pod="tigera-operator/tigera-operator-5bf8dfcb4-5sw65" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:33:48.055327 kubelet[2739]: I0813 01:33:48.055079 2739 kubelet.go:2306] "Pod admission denied" podUID="b6013ace-239f-4056-848e-645cd024f762" pod="tigera-operator/tigera-operator-5bf8dfcb4-fk9gl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:48.163467 kubelet[2739]: I0813 01:33:48.163340 2739 kubelet.go:2306] "Pod admission denied" podUID="d417c99e-a525-44a5-90c2-0855f76a4883" pod="tigera-operator/tigera-operator-5bf8dfcb4-zrtmn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:48.252769 kubelet[2739]: I0813 01:33:48.252720 2739 kubelet.go:2306] "Pod admission denied" podUID="8f8d6a6b-39bb-4eb6-98c0-2030ac1b6687" pod="tigera-operator/tigera-operator-5bf8dfcb4-mq2f7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:48.352463 kubelet[2739]: I0813 01:33:48.352423 2739 kubelet.go:2306] "Pod admission denied" podUID="85068966-df0a-4786-8eed-4c638a5bab40" pod="tigera-operator/tigera-operator-5bf8dfcb4-4xpbl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:48.403683 kubelet[2739]: I0813 01:33:48.403628 2739 kubelet.go:2306] "Pod admission denied" podUID="da66654c-a7cc-4c3a-a8a0-a3005710624e" pod="tigera-operator/tigera-operator-5bf8dfcb4-bt7z8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:48.512899 kubelet[2739]: I0813 01:33:48.512850 2739 kubelet.go:2306] "Pod admission denied" podUID="486f155b-2e97-4bf0-a171-a9b1ff61a620" pod="tigera-operator/tigera-operator-5bf8dfcb4-nb6jp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:48.606275 kubelet[2739]: I0813 01:33:48.606200 2739 kubelet.go:2306] "Pod admission denied" podUID="d83f2640-a117-4729-af16-ca33be729de8" pod="tigera-operator/tigera-operator-5bf8dfcb4-7sbsz" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:33:48.703710 kubelet[2739]: I0813 01:33:48.703665 2739 kubelet.go:2306] "Pod admission denied" podUID="7d9bf5fe-5a0d-4c93-ab4a-4dce4c170a0c" pod="tigera-operator/tigera-operator-5bf8dfcb4-stx7d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:48.798073 kubelet[2739]: E0813 01:33:48.797409 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 01:33:48.798458 containerd[1544]: time="2025-08-13T01:33:48.798425470Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-994jv,Uid:3bd5a2e2-42ee-4e27-a412-724b2f0527b4,Namespace:kube-system,Attempt:0,}" Aug 13 01:33:48.855279 containerd[1544]: time="2025-08-13T01:33:48.855210409Z" level=error msg="Failed to destroy network for sandbox \"7f253d561bc1933607deadfa5fcc7cd25640ca56abd477c7a2da1fb2b11eff94\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:48.857470 systemd[1]: run-netns-cni\x2d5ae29095\x2df3c9\x2d3ec1\x2d9557\x2df97e50b84dc1.mount: Deactivated successfully. 
Aug 13 01:33:48.859205 containerd[1544]: time="2025-08-13T01:33:48.859156528Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-994jv,Uid:3bd5a2e2-42ee-4e27-a412-724b2f0527b4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f253d561bc1933607deadfa5fcc7cd25640ca56abd477c7a2da1fb2b11eff94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:33:48.859514 kubelet[2739]: E0813 01:33:48.859444 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f253d561bc1933607deadfa5fcc7cd25640ca56abd477c7a2da1fb2b11eff94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:33:48.859514 kubelet[2739]: E0813 01:33:48.859509 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f253d561bc1933607deadfa5fcc7cd25640ca56abd477c7a2da1fb2b11eff94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-994jv"
Aug 13 01:33:48.859717 kubelet[2739]: E0813 01:33:48.859529 2739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f253d561bc1933607deadfa5fcc7cd25640ca56abd477c7a2da1fb2b11eff94\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-994jv"
Aug 13 01:33:48.859717 kubelet[2739]: E0813 01:33:48.859576 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-994jv_kube-system(3bd5a2e2-42ee-4e27-a412-724b2f0527b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-994jv_kube-system(3bd5a2e2-42ee-4e27-a412-724b2f0527b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7f253d561bc1933607deadfa5fcc7cd25640ca56abd477c7a2da1fb2b11eff94\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-994jv" podUID="3bd5a2e2-42ee-4e27-a412-724b2f0527b4"
Aug 13 01:33:48.903534 kubelet[2739]: I0813 01:33:48.903478 2739 kubelet.go:2306] "Pod admission denied" podUID="ac560b47-c0f6-4c71-b0b2-485ba3fb165f" pod="tigera-operator/tigera-operator-5bf8dfcb4-5hrrw" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:49.002948 kubelet[2739]: I0813 01:33:49.002908 2739 kubelet.go:2306] "Pod admission denied" podUID="4690bb42-a04b-471e-a7aa-76a3e9033517" pod="tigera-operator/tigera-operator-5bf8dfcb4-94b5c" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:49.099251 kubelet[2739]: I0813 01:33:49.099158 2739 kubelet.go:2306] "Pod admission denied" podUID="1b08df47-a2be-41a7-ba98-46d05a4c6d13" pod="tigera-operator/tigera-operator-5bf8dfcb4-c7ld8" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:49.202697 kubelet[2739]: I0813 01:33:49.202654 2739 kubelet.go:2306] "Pod admission denied" podUID="3adbf6bd-7cd9-453c-a02c-0e8c28aa6169" pod="tigera-operator/tigera-operator-5bf8dfcb4-hw2bn" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:49.299874 kubelet[2739]: I0813 01:33:49.299825 2739 kubelet.go:2306] "Pod admission denied" podUID="bf3c8d7d-ac1d-4fa3-a412-1b2cf250838d" pod="tigera-operator/tigera-operator-5bf8dfcb4-x8b92" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:49.507734 kubelet[2739]: I0813 01:33:49.507004 2739 kubelet.go:2306] "Pod admission denied" podUID="c2666ad2-d1b5-4756-8d86-1783c5d8ee94" pod="tigera-operator/tigera-operator-5bf8dfcb4-6wtkn" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:49.600192 kubelet[2739]: I0813 01:33:49.600125 2739 kubelet.go:2306] "Pod admission denied" podUID="10fa09da-81a8-4b3c-bdfc-705a450d0f6e" pod="tigera-operator/tigera-operator-5bf8dfcb4-tzhhw" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:49.705988 kubelet[2739]: I0813 01:33:49.705912 2739 kubelet.go:2306] "Pod admission denied" podUID="c89d85e5-d2d5-41d0-9740-609f28e6c71f" pod="tigera-operator/tigera-operator-5bf8dfcb4-ssfg8" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:49.798617 kubelet[2739]: E0813 01:33:49.798509 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:33:49.799598 kubelet[2739]: E0813 01:33:49.799545 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:33:49.800541 containerd[1544]: time="2025-08-13T01:33:49.800407381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mx5v9,Uid:9cb65184-4613-43ed-9fa1-0cf23f1e0e56,Namespace:kube-system,Attempt:0,}"
Aug 13 01:33:49.802748 kubelet[2739]: I0813 01:33:49.802575 2739 kubelet.go:2306] "Pod admission denied" podUID="7ff8562f-449f-4e93-af4b-1efc0c878d6b" pod="tigera-operator/tigera-operator-5bf8dfcb4-6njb4" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:49.870265 containerd[1544]: time="2025-08-13T01:33:49.870216649Z" level=error msg="Failed to destroy network for sandbox \"25def13a18b8c5a15a933c7891e49d0f45e0efefc1f155695641da04c0363893\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:33:49.872568 systemd[1]: run-netns-cni\x2d86b9141b\x2d245c\x2d368c\x2d12a9\x2d9a5159083e64.mount: Deactivated successfully.
Aug 13 01:33:49.873791 containerd[1544]: time="2025-08-13T01:33:49.873711079Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mx5v9,Uid:9cb65184-4613-43ed-9fa1-0cf23f1e0e56,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"25def13a18b8c5a15a933c7891e49d0f45e0efefc1f155695641da04c0363893\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:33:49.874350 kubelet[2739]: E0813 01:33:49.874305 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25def13a18b8c5a15a933c7891e49d0f45e0efefc1f155695641da04c0363893\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:33:49.874417 kubelet[2739]: E0813 01:33:49.874400 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25def13a18b8c5a15a933c7891e49d0f45e0efefc1f155695641da04c0363893\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mx5v9"
Aug 13 01:33:49.874463 kubelet[2739]: E0813 01:33:49.874424 2739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"25def13a18b8c5a15a933c7891e49d0f45e0efefc1f155695641da04c0363893\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mx5v9"
Aug 13 01:33:49.874557 kubelet[2739]: E0813 01:33:49.874508 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-mx5v9_kube-system(9cb65184-4613-43ed-9fa1-0cf23f1e0e56)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-mx5v9_kube-system(9cb65184-4613-43ed-9fa1-0cf23f1e0e56)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"25def13a18b8c5a15a933c7891e49d0f45e0efefc1f155695641da04c0363893\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-mx5v9" podUID="9cb65184-4613-43ed-9fa1-0cf23f1e0e56"
Aug 13 01:33:49.902963 kubelet[2739]: I0813 01:33:49.902923 2739 kubelet.go:2306] "Pod admission denied" podUID="a3883e52-d7ef-47cb-b744-27c2a3df7b69" pod="tigera-operator/tigera-operator-5bf8dfcb4-j2wnl" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:50.002104 kubelet[2739]: I0813 01:33:50.002053 2739 kubelet.go:2306] "Pod admission denied" podUID="946eb2d9-265f-4f86-8419-021a82fbf708" pod="tigera-operator/tigera-operator-5bf8dfcb4-6rh8x" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:50.106781 kubelet[2739]: I0813 01:33:50.106671 2739 kubelet.go:2306] "Pod admission denied" podUID="a5311e46-7b1b-4339-9568-63255c13cc29" pod="tigera-operator/tigera-operator-5bf8dfcb4-wz7sn" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:50.200825 kubelet[2739]: I0813 01:33:50.200773 2739 kubelet.go:2306] "Pod admission denied" podUID="c5d565f8-4ae9-475f-8711-a59a06c9f883" pod="tigera-operator/tigera-operator-5bf8dfcb4-h6szz" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:50.302097 kubelet[2739]: I0813 01:33:50.302055 2739 kubelet.go:2306] "Pod admission denied" podUID="633a5843-13b4-4f2e-aeb1-4c1d15c8e1d3" pod="tigera-operator/tigera-operator-5bf8dfcb4-vzcwl" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:50.401260 kubelet[2739]: I0813 01:33:50.400823 2739 kubelet.go:2306] "Pod admission denied" podUID="6e3a3797-0d18-40bc-a2e6-51f942903bc1" pod="tigera-operator/tigera-operator-5bf8dfcb4-67462" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:50.500401 kubelet[2739]: I0813 01:33:50.500344 2739 kubelet.go:2306] "Pod admission denied" podUID="bf390235-2ded-425e-b9d3-1414565763ee" pod="tigera-operator/tigera-operator-5bf8dfcb4-x6g6w" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:50.609211 kubelet[2739]: I0813 01:33:50.609149 2739 kubelet.go:2306] "Pod admission denied" podUID="0c05f663-b6bd-4b92-a37d-8311a6391d21" pod="tigera-operator/tigera-operator-5bf8dfcb4-q4hpn" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:50.701035 kubelet[2739]: I0813 01:33:50.700607 2739 kubelet.go:2306] "Pod admission denied" podUID="491f6607-8316-4711-9bb5-6c43f492ad2e" pod="tigera-operator/tigera-operator-5bf8dfcb4-8dcmq" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:50.905768 kubelet[2739]: I0813 01:33:50.905728 2739 kubelet.go:2306] "Pod admission denied" podUID="43a8d1c9-64dc-4a7c-a3c7-c3ab7c57afb5" pod="tigera-operator/tigera-operator-5bf8dfcb4-qckcm" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:51.002835 kubelet[2739]: I0813 01:33:51.002782 2739 kubelet.go:2306] "Pod admission denied" podUID="0166e3ca-b867-45cc-850c-00bc009ab697" pod="tigera-operator/tigera-operator-5bf8dfcb4-w78q4" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:51.056809 kubelet[2739]: I0813 01:33:51.056756 2739 kubelet.go:2306] "Pod admission denied" podUID="09676cdd-bb43-4259-8b7b-c3eb0fca7a62" pod="tigera-operator/tigera-operator-5bf8dfcb4-gzjwb" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:51.153610 kubelet[2739]: I0813 01:33:51.153320 2739 kubelet.go:2306] "Pod admission denied" podUID="23866356-307e-4d3a-ab41-d89168e1cef5" pod="tigera-operator/tigera-operator-5bf8dfcb4-ppmdb" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:51.251722 kubelet[2739]: I0813 01:33:51.251676 2739 kubelet.go:2306] "Pod admission denied" podUID="5c30821f-4a31-4ec5-84eb-535975f02968" pod="tigera-operator/tigera-operator-5bf8dfcb4-wljvw" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:51.354674 kubelet[2739]: I0813 01:33:51.354532 2739 kubelet.go:2306] "Pod admission denied" podUID="cfcebb43-4309-465c-9e0b-42c90dfb87f2" pod="tigera-operator/tigera-operator-5bf8dfcb4-df6tn" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:51.453462 kubelet[2739]: I0813 01:33:51.453397 2739 kubelet.go:2306] "Pod admission denied" podUID="5f563894-4290-42ca-b6f8-c0baf35aa4bb" pod="tigera-operator/tigera-operator-5bf8dfcb4-kgqcb" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:51.501902 kubelet[2739]: I0813 01:33:51.501858 2739 kubelet.go:2306] "Pod admission denied" podUID="3224ca8d-cbf8-419f-85f7-7ea63790e019" pod="tigera-operator/tigera-operator-5bf8dfcb4-4hwc4" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:51.602053 kubelet[2739]: I0813 01:33:51.601999 2739 kubelet.go:2306] "Pod admission denied" podUID="35aaa3ef-8d8d-4aee-8591-041c53aa723c" pod="tigera-operator/tigera-operator-5bf8dfcb4-pl6rn" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:51.710143 kubelet[2739]: I0813 01:33:51.709686 2739 kubelet.go:2306] "Pod admission denied" podUID="dc55fc90-a19f-40a4-be92-408c4a2e2515" pod="tigera-operator/tigera-operator-5bf8dfcb4-mfjfh" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:51.803786 containerd[1544]: time="2025-08-13T01:33:51.802705183Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\""
Aug 13 01:33:51.811325 kubelet[2739]: I0813 01:33:51.811297 2739 kubelet.go:2306] "Pod admission denied" podUID="19ed9214-e10b-4d93-bf48-27d6413163e8" pod="tigera-operator/tigera-operator-5bf8dfcb4-fwhkk" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:51.902535 kubelet[2739]: I0813 01:33:51.902484 2739 kubelet.go:2306] "Pod admission denied" podUID="2504cb44-2fc6-4abc-8acf-c768586909aa" pod="tigera-operator/tigera-operator-5bf8dfcb4-4bkbz" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:51.953258 kubelet[2739]: I0813 01:33:51.953211 2739 kubelet.go:2306] "Pod admission denied" podUID="81a24a59-abb3-445e-8a93-bccb944a7c07" pod="tigera-operator/tigera-operator-5bf8dfcb4-g6bmq" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:52.050212 kubelet[2739]: I0813 01:33:52.050163 2739 kubelet.go:2306] "Pod admission denied" podUID="ed14298c-212d-4ce7-9459-47dc938f6be8" pod="tigera-operator/tigera-operator-5bf8dfcb4-wkr59" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:52.152092 kubelet[2739]: I0813 01:33:52.152038 2739 kubelet.go:2306] "Pod admission denied" podUID="6dbd499b-6c54-4be5-8ad2-d127a67a30ce" pod="tigera-operator/tigera-operator-5bf8dfcb4-24mdk" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:52.258444 kubelet[2739]: I0813 01:33:52.258401 2739 kubelet.go:2306] "Pod admission denied" podUID="cb2187d2-8929-4511-a2cd-c893c171a408" pod="tigera-operator/tigera-operator-5bf8dfcb4-cnzvf" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:52.355957 kubelet[2739]: I0813 01:33:52.355252 2739 kubelet.go:2306] "Pod admission denied" podUID="ee7fbb4c-a060-491a-b716-bda908c33d3c" pod="tigera-operator/tigera-operator-5bf8dfcb4-c2qhb" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:52.402104 kubelet[2739]: I0813 01:33:52.402055 2739 kubelet.go:2306] "Pod admission denied" podUID="9604fd0a-d75c-452d-80d0-796e7cdb3ddc" pod="tigera-operator/tigera-operator-5bf8dfcb4-rcqb5" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:52.510263 kubelet[2739]: I0813 01:33:52.509934 2739 kubelet.go:2306] "Pod admission denied" podUID="cd914615-f47a-40c9-82a7-92f15567bc1c" pod="tigera-operator/tigera-operator-5bf8dfcb4-qjlbc" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:52.604703 kubelet[2739]: I0813 01:33:52.604576 2739 kubelet.go:2306] "Pod admission denied" podUID="3d3a08d2-74dd-4307-81f7-fc63f8cc015d" pod="tigera-operator/tigera-operator-5bf8dfcb4-cxkws" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:52.711271 kubelet[2739]: I0813 01:33:52.710614 2739 kubelet.go:2306] "Pod admission denied" podUID="17a8fb75-e12d-4172-a2f2-17de34e89391" pod="tigera-operator/tigera-operator-5bf8dfcb4-8k62z" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:52.813265 kubelet[2739]: I0813 01:33:52.813230 2739 kubelet.go:2306] "Pod admission denied" podUID="1169f070-9bf2-489e-8857-b83574dc5162" pod="tigera-operator/tigera-operator-5bf8dfcb4-zr78x" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:52.932240 kubelet[2739]: I0813 01:33:52.932106 2739 kubelet.go:2306] "Pod admission denied" podUID="d47cfd59-7296-436a-afef-d0122b3e17c2" pod="tigera-operator/tigera-operator-5bf8dfcb4-vczrw" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:53.014671 kubelet[2739]: I0813 01:33:53.014631 2739 kubelet.go:2306] "Pod admission denied" podUID="18d1b13d-628e-43d7-9421-89428602dfe9" pod="tigera-operator/tigera-operator-5bf8dfcb4-tw62v" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:53.108156 kubelet[2739]: I0813 01:33:53.108065 2739 kubelet.go:2306] "Pod admission denied" podUID="730d5683-fcbe-4220-abe6-259d8341a2d3" pod="tigera-operator/tigera-operator-5bf8dfcb4-d8b4q" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:53.306425 kubelet[2739]: I0813 01:33:53.305637 2739 kubelet.go:2306] "Pod admission denied" podUID="0046add0-7047-4ec2-b225-b7341721c42f" pod="tigera-operator/tigera-operator-5bf8dfcb4-xbp7f" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:53.406096 kubelet[2739]: I0813 01:33:53.406052 2739 kubelet.go:2306] "Pod admission denied" podUID="fa4911b9-bd4e-474b-8b4b-ed44da515cc9" pod="tigera-operator/tigera-operator-5bf8dfcb4-c2rnp" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:53.509672 kubelet[2739]: I0813 01:33:53.509618 2739 kubelet.go:2306] "Pod admission denied" podUID="a915b23e-6e03-486d-8331-f1645362efe1" pod="tigera-operator/tigera-operator-5bf8dfcb4-qjx7q" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:53.714325 kubelet[2739]: I0813 01:33:53.713735 2739 kubelet.go:2306] "Pod admission denied" podUID="83731d33-e937-4f95-bd71-a3cb0fdbd805" pod="tigera-operator/tigera-operator-5bf8dfcb4-lxsfs" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:53.814088 kubelet[2739]: I0813 01:33:53.814050 2739 kubelet.go:2306] "Pod admission denied" podUID="6eb6534f-2637-4e97-83bf-550c8190b0aa" pod="tigera-operator/tigera-operator-5bf8dfcb4-fpshw" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:53.912830 kubelet[2739]: I0813 01:33:53.912184 2739 kubelet.go:2306] "Pod admission denied" podUID="6c4a8185-9f81-4580-9e26-10586ca67db6" pod="tigera-operator/tigera-operator-5bf8dfcb4-n4zjs" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:54.011778 kubelet[2739]: I0813 01:33:54.011701 2739 kubelet.go:2306] "Pod admission denied" podUID="e77f6030-41d7-4da6-abbb-f02c5c914b6d" pod="tigera-operator/tigera-operator-5bf8dfcb4-jm8cg" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:54.110284 kubelet[2739]: I0813 01:33:54.109666 2739 kubelet.go:2306] "Pod admission denied" podUID="3395f31f-e3af-478d-b0a6-b374880e2666" pod="tigera-operator/tigera-operator-5bf8dfcb4-9dcxp" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:54.224066 kubelet[2739]: I0813 01:33:54.224014 2739 kubelet.go:2306] "Pod admission denied" podUID="669ce05e-d658-4be6-a448-fcc989ddd128" pod="tigera-operator/tigera-operator-5bf8dfcb4-j7rzc" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:54.309775 kubelet[2739]: I0813 01:33:54.309340 2739 kubelet.go:2306] "Pod admission denied" podUID="95fe5bc2-1ff1-47b8-8512-566904c61d0d" pod="tigera-operator/tigera-operator-5bf8dfcb4-442w9" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:54.415835 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1252382039.mount: Deactivated successfully.
Aug 13 01:33:54.417872 containerd[1544]: time="2025-08-13T01:33:54.417825741Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1252382039: write /var/lib/containerd/tmpmounts/containerd-mount1252382039/usr/bin/calico-node: no space left on device"
Aug 13 01:33:54.418838 containerd[1544]: time="2025-08-13T01:33:54.418246571Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163"
Aug 13 01:33:54.418898 kubelet[2739]: E0813 01:33:54.418522 2739 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1252382039: write /var/lib/containerd/tmpmounts/containerd-mount1252382039/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2"
Aug 13 01:33:54.418898 kubelet[2739]: E0813 01:33:54.418564 2739 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1252382039: write /var/lib/containerd/tmpmounts/containerd-mount1252382039/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2"
Aug 13 01:33:54.419418 kubelet[2739]: E0813 01:33:54.418738 2739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSGOLDMANESERVER,Value:goldmane.calico-system.svc:7443,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSFLUSHINTERVAL,Value:15,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:first-found,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-8gp52,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-5c47r_calico-system(7d03562e-9842-4425-9847-632615391bfb): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1252382039: write /var/lib/containerd/tmpmounts/containerd-mount1252382039/usr/bin/calico-node: no space left on device" logger="UnhandledError"
Aug 13 01:33:54.420625 kubelet[2739]: E0813 01:33:54.420530 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1252382039: write /var/lib/containerd/tmpmounts/containerd-mount1252382039/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-5c47r" podUID="7d03562e-9842-4425-9847-632615391bfb"
Aug 13 01:33:54.504075 kubelet[2739]: I0813 01:33:54.504002 2739 kubelet.go:2306] "Pod admission denied" podUID="06c4eac3-cc6c-41dc-9a03-521203634a78" pod="tigera-operator/tigera-operator-5bf8dfcb4-92kkg" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:54.603945 kubelet[2739]: I0813 01:33:54.603827 2739 kubelet.go:2306] "Pod admission denied" podUID="fdee324c-5984-49b9-b14e-676e55605019" pod="tigera-operator/tigera-operator-5bf8dfcb4-dj4z2" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:54.713346 kubelet[2739]: I0813 01:33:54.713298 2739 kubelet.go:2306] "Pod admission denied" podUID="988d1da6-1e1a-482e-acdf-3c55e276fdf6" pod="tigera-operator/tigera-operator-5bf8dfcb4-466ht" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:54.802420 kubelet[2739]: I0813 01:33:54.802386 2739 kubelet.go:2306] "Pod admission denied" podUID="5f710de4-34ed-4a43-9ab4-96efec66c9e0" pod="tigera-operator/tigera-operator-5bf8dfcb4-ht4sz" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:54.903775 kubelet[2739]: I0813 01:33:54.903638 2739 kubelet.go:2306] "Pod admission denied" podUID="d27916b2-8be9-4902-9d5c-58dd32bc1ece" pod="tigera-operator/tigera-operator-5bf8dfcb4-wmmsj" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:55.002627 kubelet[2739]: I0813 01:33:55.002572 2739 kubelet.go:2306] "Pod admission denied" podUID="0fdc2439-d6d6-49eb-a78e-63b0059f49cb" pod="tigera-operator/tigera-operator-5bf8dfcb4-h42hx" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:55.115162 kubelet[2739]: I0813 01:33:55.114374 2739 kubelet.go:2306] "Pod admission denied" podUID="40879afa-bbae-4bb9-969b-25a0958caba9" pod="tigera-operator/tigera-operator-5bf8dfcb4-sms22" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:55.305202 kubelet[2739]: I0813 01:33:55.305115 2739 kubelet.go:2306] "Pod admission denied" podUID="ba1396f0-ab27-4e62-8dc8-f584cbc0507a" pod="tigera-operator/tigera-operator-5bf8dfcb4-wtf48" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:55.404257 kubelet[2739]: I0813 01:33:55.404192 2739 kubelet.go:2306] "Pod admission denied" podUID="c5b83730-83b9-4f18-9ab4-020f780c0599" pod="tigera-operator/tigera-operator-5bf8dfcb4-s7ns7" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:55.503342 kubelet[2739]: I0813 01:33:55.503282 2739 kubelet.go:2306] "Pod admission denied" podUID="32837b46-ddda-4826-aed4-48b76f601439" pod="tigera-operator/tigera-operator-5bf8dfcb4-lsftn" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:55.602961 kubelet[2739]: I0813 01:33:55.602852 2739 kubelet.go:2306] "Pod admission denied" podUID="65c57714-bdf1-4490-a075-fe4d0e1156cf" pod="tigera-operator/tigera-operator-5bf8dfcb4-bvlq4" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:55.702154 kubelet[2739]: I0813 01:33:55.702098 2739 kubelet.go:2306] "Pod admission denied" podUID="e6f64dd7-66b7-45d5-8e8b-f0129381f50a" pod="tigera-operator/tigera-operator-5bf8dfcb4-ctx4c" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:55.798848 containerd[1544]: time="2025-08-13T01:33:55.798495743Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85fbc76f96-d5vf4,Uid:56caad57-6b4a-4069-b011-1059db183012,Namespace:calico-system,Attempt:0,}"
Aug 13 01:33:55.807967 kubelet[2739]: I0813 01:33:55.807717 2739 kubelet.go:2306] "Pod admission denied" podUID="dbe835e9-0d9d-4b1c-8a37-fb778be9a011" pod="tigera-operator/tigera-operator-5bf8dfcb4-l52cc" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:33:55.861002 containerd[1544]: time="2025-08-13T01:33:55.860891868Z" level=error msg="Failed to destroy network for sandbox \"36b4f0af639bc43e19017671357ec88221adfb07ce8e6c3b86f451596762b3e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:33:55.864945 systemd[1]: run-netns-cni\x2d41d0f4d8\x2d4367\x2dc1df\x2d09c6\x2d313c45d0ecf1.mount: Deactivated successfully.
Aug 13 01:33:55.866363 containerd[1544]: time="2025-08-13T01:33:55.866288217Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85fbc76f96-d5vf4,Uid:56caad57-6b4a-4069-b011-1059db183012,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"36b4f0af639bc43e19017671357ec88221adfb07ce8e6c3b86f451596762b3e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:55.866653 kubelet[2739]: E0813 01:33:55.866526 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36b4f0af639bc43e19017671357ec88221adfb07ce8e6c3b86f451596762b3e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:55.866653 kubelet[2739]: E0813 01:33:55.866605 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36b4f0af639bc43e19017671357ec88221adfb07ce8e6c3b86f451596762b3e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" Aug 13 01:33:55.866653 kubelet[2739]: E0813 01:33:55.866629 2739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36b4f0af639bc43e19017671357ec88221adfb07ce8e6c3b86f451596762b3e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" Aug 13 01:33:55.866746 kubelet[2739]: E0813 01:33:55.866693 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-85fbc76f96-d5vf4_calico-system(56caad57-6b4a-4069-b011-1059db183012)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85fbc76f96-d5vf4_calico-system(56caad57-6b4a-4069-b011-1059db183012)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"36b4f0af639bc43e19017671357ec88221adfb07ce8e6c3b86f451596762b3e3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" podUID="56caad57-6b4a-4069-b011-1059db183012" Aug 13 01:33:55.911955 kubelet[2739]: I0813 01:33:55.911720 2739 kubelet.go:2306] "Pod admission denied" podUID="9acc2fc5-e82d-40c1-96f0-e01fe6f62e21" pod="tigera-operator/tigera-operator-5bf8dfcb4-qgth7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:56.003014 kubelet[2739]: I0813 01:33:56.002970 2739 kubelet.go:2306] "Pod admission denied" podUID="4ead5a58-10ff-4ab1-9bed-817eb079a768" pod="tigera-operator/tigera-operator-5bf8dfcb4-fhcdb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:56.050267 kubelet[2739]: I0813 01:33:56.050232 2739 kubelet.go:2306] "Pod admission denied" podUID="90518ee9-942d-4011-b7cd-9fd44b7a95e9" pod="tigera-operator/tigera-operator-5bf8dfcb4-zxf66" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:33:56.155561 kubelet[2739]: I0813 01:33:56.154949 2739 kubelet.go:2306] "Pod admission denied" podUID="071a6f25-bb2d-43aa-84d6-68f168c15a6e" pod="tigera-operator/tigera-operator-5bf8dfcb4-f89lz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:56.354452 kubelet[2739]: I0813 01:33:56.354397 2739 kubelet.go:2306] "Pod admission denied" podUID="3bcebc22-786e-4770-b7e8-11e0d73a14b4" pod="tigera-operator/tigera-operator-5bf8dfcb4-2lh42" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:56.455597 kubelet[2739]: I0813 01:33:56.455285 2739 kubelet.go:2306] "Pod admission denied" podUID="e2d7caec-ac6c-44c4-a2f0-0bb763c8c769" pod="tigera-operator/tigera-operator-5bf8dfcb4-lq6st" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:56.470342 kubelet[2739]: I0813 01:33:56.470320 2739 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:33:56.470434 kubelet[2739]: I0813 01:33:56.470347 2739 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:33:56.473414 kubelet[2739]: I0813 01:33:56.473339 2739 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:33:56.483277 kubelet[2739]: I0813 01:33:56.483239 2739 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:33:56.483418 kubelet[2739]: I0813 01:33:56.483299 2739 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7c65d6cfc9-994jv","kube-system/coredns-7c65d6cfc9-mx5v9","calico-system/calico-kube-controllers-85fbc76f96-d5vf4","calico-system/calico-node-5c47r","calico-system/csi-node-driver-7bj49","calico-system/calico-typha-79464475b5-bbrtw","kube-system/kube-controller-manager-172-234-27-175","kube-system/kube-proxy-kfjpt","kube-system/kube-apiserver-172-234-27-175","kube-system/kube-scheduler-172-234-27-175"] Aug 13 
01:33:56.483418 kubelet[2739]: E0813 01:33:56.483326 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-994jv" Aug 13 01:33:56.483418 kubelet[2739]: E0813 01:33:56.483336 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-mx5v9" Aug 13 01:33:56.483418 kubelet[2739]: E0813 01:33:56.483342 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" Aug 13 01:33:56.483418 kubelet[2739]: E0813 01:33:56.483349 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-5c47r" Aug 13 01:33:56.483418 kubelet[2739]: E0813 01:33:56.483355 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-7bj49" Aug 13 01:33:56.483418 kubelet[2739]: E0813 01:33:56.483365 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-79464475b5-bbrtw" Aug 13 01:33:56.483418 kubelet[2739]: E0813 01:33:56.483374 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-27-175" Aug 13 01:33:56.483418 kubelet[2739]: E0813 01:33:56.483382 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-kfjpt" Aug 13 01:33:56.483418 kubelet[2739]: E0813 01:33:56.483389 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-27-175" Aug 13 01:33:56.483418 kubelet[2739]: E0813 01:33:56.483397 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-27-175" Aug 13 01:33:56.483418 kubelet[2739]: I0813 01:33:56.483406 2739 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 
01:33:56.555209 kubelet[2739]: I0813 01:33:56.555160 2739 kubelet.go:2306] "Pod admission denied" podUID="1209d61e-7649-48d0-9381-2b8e05aac74d" pod="tigera-operator/tigera-operator-5bf8dfcb4-d86v8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:56.755872 kubelet[2739]: I0813 01:33:56.755806 2739 kubelet.go:2306] "Pod admission denied" podUID="4da2ecc6-8e1c-4987-b500-cc8944bad430" pod="tigera-operator/tigera-operator-5bf8dfcb4-lwdqj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:56.852960 kubelet[2739]: I0813 01:33:56.852898 2739 kubelet.go:2306] "Pod admission denied" podUID="44b0d037-87be-43c1-aaa9-5d279e037442" pod="tigera-operator/tigera-operator-5bf8dfcb4-8rmjq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:56.952456 kubelet[2739]: I0813 01:33:56.952396 2739 kubelet.go:2306] "Pod admission denied" podUID="d8c9eeb4-5c3d-4c89-9169-8a6c585a79f7" pod="tigera-operator/tigera-operator-5bf8dfcb4-2s5fp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:57.162439 kubelet[2739]: I0813 01:33:57.162192 2739 kubelet.go:2306] "Pod admission denied" podUID="4f9f6cdf-b25e-465a-97b0-488ed9e3edc7" pod="tigera-operator/tigera-operator-5bf8dfcb4-nlgvw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:57.251571 kubelet[2739]: I0813 01:33:57.251509 2739 kubelet.go:2306] "Pod admission denied" podUID="951cc5de-5a0e-4500-854f-68959a444e8e" pod="tigera-operator/tigera-operator-5bf8dfcb4-dqhqx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:57.351815 kubelet[2739]: I0813 01:33:57.351767 2739 kubelet.go:2306] "Pod admission denied" podUID="7c44bbf7-a9fc-41bf-9876-47f6fc529abb" pod="tigera-operator/tigera-operator-5bf8dfcb4-l252l" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:33:57.553005 kubelet[2739]: I0813 01:33:57.552952 2739 kubelet.go:2306] "Pod admission denied" podUID="5a552bd2-b792-439a-8560-853b9a35c1ff" pod="tigera-operator/tigera-operator-5bf8dfcb4-ghd2w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:57.665184 kubelet[2739]: I0813 01:33:57.663980 2739 kubelet.go:2306] "Pod admission denied" podUID="71f83303-7c89-4457-bb72-b82f67f3e95f" pod="tigera-operator/tigera-operator-5bf8dfcb4-bmc7w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:57.751874 kubelet[2739]: I0813 01:33:57.751831 2739 kubelet.go:2306] "Pod admission denied" podUID="5b11a11f-2543-4c80-b552-ab957f09b8de" pod="tigera-operator/tigera-operator-5bf8dfcb4-7tj85" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:57.798205 kubelet[2739]: E0813 01:33:57.797742 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 01:33:57.798471 containerd[1544]: time="2025-08-13T01:33:57.798425727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7bj49,Uid:4e5845c9-626c-4c83-900a-0da0bae2daed,Namespace:calico-system,Attempt:0,}" Aug 13 01:33:57.859202 kubelet[2739]: I0813 01:33:57.858664 2739 kubelet.go:2306] "Pod admission denied" podUID="94172f5a-517a-4fc3-b44b-c76caaaa66ba" pod="tigera-operator/tigera-operator-5bf8dfcb4-zmff9" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:33:57.864948 containerd[1544]: time="2025-08-13T01:33:57.864853943Z" level=error msg="Failed to destroy network for sandbox \"701fc0543c7b31906ac9daf81c17b772b4c78ed6ec7116393733f520a039588b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:57.866219 containerd[1544]: time="2025-08-13T01:33:57.866191654Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7bj49,Uid:4e5845c9-626c-4c83-900a-0da0bae2daed,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"701fc0543c7b31906ac9daf81c17b772b4c78ed6ec7116393733f520a039588b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:57.867274 kubelet[2739]: E0813 01:33:57.867237 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"701fc0543c7b31906ac9daf81c17b772b4c78ed6ec7116393733f520a039588b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:33:57.867542 kubelet[2739]: E0813 01:33:57.867488 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"701fc0543c7b31906ac9daf81c17b772b4c78ed6ec7116393733f520a039588b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7bj49" Aug 13 01:33:57.868148 kubelet[2739]: E0813 01:33:57.868016 2739 kuberuntime_manager.go:1170] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"701fc0543c7b31906ac9daf81c17b772b4c78ed6ec7116393733f520a039588b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7bj49" Aug 13 01:33:57.868790 systemd[1]: run-netns-cni\x2d85f138e5\x2d8ab4\x2de014\x2da371\x2db8b46bff165b.mount: Deactivated successfully. Aug 13 01:33:57.869927 kubelet[2739]: E0813 01:33:57.869475 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7bj49_calico-system(4e5845c9-626c-4c83-900a-0da0bae2daed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7bj49_calico-system(4e5845c9-626c-4c83-900a-0da0bae2daed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"701fc0543c7b31906ac9daf81c17b772b4c78ed6ec7116393733f520a039588b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7bj49" podUID="4e5845c9-626c-4c83-900a-0da0bae2daed" Aug 13 01:33:57.953836 kubelet[2739]: I0813 01:33:57.953799 2739 kubelet.go:2306] "Pod admission denied" podUID="03d14e72-d8c1-4bf7-b0fc-6af2b874f7b4" pod="tigera-operator/tigera-operator-5bf8dfcb4-kh6mh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:58.054374 kubelet[2739]: I0813 01:33:58.054316 2739 kubelet.go:2306] "Pod admission denied" podUID="317dce09-1d69-4a72-9d76-7df1a864b577" pod="tigera-operator/tigera-operator-5bf8dfcb4-2zjrr" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:33:58.158799 kubelet[2739]: I0813 01:33:58.156783 2739 kubelet.go:2306] "Pod admission denied" podUID="c5397ad9-282e-4197-879c-ec6ff3760e92" pod="tigera-operator/tigera-operator-5bf8dfcb4-wnqrk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:58.252031 kubelet[2739]: I0813 01:33:58.251970 2739 kubelet.go:2306] "Pod admission denied" podUID="14bf78ab-f28b-4297-9422-22fe327ff515" pod="tigera-operator/tigera-operator-5bf8dfcb4-rh9gt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:58.367380 kubelet[2739]: I0813 01:33:58.367330 2739 kubelet.go:2306] "Pod admission denied" podUID="f2584fee-2f6b-4cce-abc9-246c405c199f" pod="tigera-operator/tigera-operator-5bf8dfcb4-dvllh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:58.555737 kubelet[2739]: I0813 01:33:58.555684 2739 kubelet.go:2306] "Pod admission denied" podUID="f2b81733-1ce2-47c7-982d-6a504a3f2b3f" pod="tigera-operator/tigera-operator-5bf8dfcb4-prgds" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:58.655829 kubelet[2739]: I0813 01:33:58.655615 2739 kubelet.go:2306] "Pod admission denied" podUID="1fb4d173-fe0b-453a-a564-eb8a54f93db4" pod="tigera-operator/tigera-operator-5bf8dfcb4-6qjff" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:58.752704 kubelet[2739]: I0813 01:33:58.752660 2739 kubelet.go:2306] "Pod admission denied" podUID="bd6db256-6c67-42da-9ff3-9fe82fda9bf9" pod="tigera-operator/tigera-operator-5bf8dfcb4-78qp6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:58.854632 kubelet[2739]: I0813 01:33:58.854513 2739 kubelet.go:2306] "Pod admission denied" podUID="e0fb8581-d345-4e1c-806b-42605303028b" pod="tigera-operator/tigera-operator-5bf8dfcb4-xhrcl" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:33:58.956072 kubelet[2739]: I0813 01:33:58.956018 2739 kubelet.go:2306] "Pod admission denied" podUID="1d4e526d-4bf8-4ecd-884d-e11b8b8fec03" pod="tigera-operator/tigera-operator-5bf8dfcb4-vjsvf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:59.053963 kubelet[2739]: I0813 01:33:59.053917 2739 kubelet.go:2306] "Pod admission denied" podUID="812bc3ed-eff6-4e8a-9abf-b128ffde9b7f" pod="tigera-operator/tigera-operator-5bf8dfcb4-nxvgq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:59.167204 kubelet[2739]: I0813 01:33:59.166516 2739 kubelet.go:2306] "Pod admission denied" podUID="490e5d43-2329-486f-bf6d-cc679a4c1b3b" pod="tigera-operator/tigera-operator-5bf8dfcb4-98dwq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:59.255137 kubelet[2739]: I0813 01:33:59.255098 2739 kubelet.go:2306] "Pod admission denied" podUID="3441a8c6-ef0e-44dd-8174-1d9959270cc3" pod="tigera-operator/tigera-operator-5bf8dfcb4-ktn5k" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:59.356711 kubelet[2739]: I0813 01:33:59.356661 2739 kubelet.go:2306] "Pod admission denied" podUID="9f2309bd-b3f4-491c-bc2c-dc9b5376a3d6" pod="tigera-operator/tigera-operator-5bf8dfcb4-d2jks" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:59.557980 kubelet[2739]: I0813 01:33:59.557915 2739 kubelet.go:2306] "Pod admission denied" podUID="6b00a5cc-92dc-45b5-8db2-460e9f37d0ad" pod="tigera-operator/tigera-operator-5bf8dfcb4-s5zb5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:59.655673 kubelet[2739]: I0813 01:33:59.655605 2739 kubelet.go:2306] "Pod admission denied" podUID="4cad400a-3386-46f4-a73b-6391a130bea7" pod="tigera-operator/tigera-operator-5bf8dfcb4-ffvl5" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:33:59.755666 kubelet[2739]: I0813 01:33:59.755633 2739 kubelet.go:2306] "Pod admission denied" podUID="de8404e6-7d1f-42e9-b2ee-ea386e73ac77" pod="tigera-operator/tigera-operator-5bf8dfcb4-hp59d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:33:59.962914 kubelet[2739]: I0813 01:33:59.962807 2739 kubelet.go:2306] "Pod admission denied" podUID="e35a3459-a6db-4076-a38f-6061e192c315" pod="tigera-operator/tigera-operator-5bf8dfcb4-7wp2n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:00.070852 kubelet[2739]: I0813 01:34:00.069891 2739 kubelet.go:2306] "Pod admission denied" podUID="73fa4d2c-6574-47f3-8a6b-722297c95f1e" pod="tigera-operator/tigera-operator-5bf8dfcb4-6qgjg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:00.153243 kubelet[2739]: I0813 01:34:00.153193 2739 kubelet.go:2306] "Pod admission denied" podUID="9cba9092-67c1-42a2-a7ce-286445f7dbce" pod="tigera-operator/tigera-operator-5bf8dfcb4-zq8tb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:00.355368 kubelet[2739]: I0813 01:34:00.355325 2739 kubelet.go:2306] "Pod admission denied" podUID="63f7c480-29f2-4543-a249-bdee699378b0" pod="tigera-operator/tigera-operator-5bf8dfcb4-vx9gf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:00.454092 kubelet[2739]: I0813 01:34:00.454036 2739 kubelet.go:2306] "Pod admission denied" podUID="dab28fa2-5187-47a2-b17d-ae5d4e800359" pod="tigera-operator/tigera-operator-5bf8dfcb4-vsl8s" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:00.556717 kubelet[2739]: I0813 01:34:00.556657 2739 kubelet.go:2306] "Pod admission denied" podUID="7085e9c9-b492-409c-9f98-827c3886c650" pod="tigera-operator/tigera-operator-5bf8dfcb4-pttm9" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:00.655632 kubelet[2739]: I0813 01:34:00.655525 2739 kubelet.go:2306] "Pod admission denied" podUID="9f97ccd5-b389-4b7d-b7ce-a1124731ca15" pod="tigera-operator/tigera-operator-5bf8dfcb4-wlkqx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:00.703224 kubelet[2739]: I0813 01:34:00.703174 2739 kubelet.go:2306] "Pod admission denied" podUID="d844ddff-8172-4a94-bfd6-6fa4921474c4" pod="tigera-operator/tigera-operator-5bf8dfcb4-kzdhl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:00.818704 kubelet[2739]: I0813 01:34:00.818653 2739 kubelet.go:2306] "Pod admission denied" podUID="5cf8aaba-0340-46b3-8eb1-a19d01238d6c" pod="tigera-operator/tigera-operator-5bf8dfcb4-fnnwl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:01.004047 kubelet[2739]: I0813 01:34:01.004004 2739 kubelet.go:2306] "Pod admission denied" podUID="1f8edc21-8180-4d66-a14b-9e6f97d3423d" pod="tigera-operator/tigera-operator-5bf8dfcb4-2vw8j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:01.104002 kubelet[2739]: I0813 01:34:01.103909 2739 kubelet.go:2306] "Pod admission denied" podUID="f3b7aba4-c373-4c25-9343-9852f09122f5" pod="tigera-operator/tigera-operator-5bf8dfcb4-jtb4j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:01.204577 kubelet[2739]: I0813 01:34:01.204515 2739 kubelet.go:2306] "Pod admission denied" podUID="36480a28-b695-4f4f-a873-fc147cec581c" pod="tigera-operator/tigera-operator-5bf8dfcb4-sx64g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:01.305469 kubelet[2739]: I0813 01:34:01.304760 2739 kubelet.go:2306] "Pod admission denied" podUID="bf6cf46a-db81-4a86-b56c-1f1f76a759ee" pod="tigera-operator/tigera-operator-5bf8dfcb4-dtvhb" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:01.405237 kubelet[2739]: I0813 01:34:01.405188 2739 kubelet.go:2306] "Pod admission denied" podUID="9499585d-cfd7-44a9-97b8-17ef2c75a121" pod="tigera-operator/tigera-operator-5bf8dfcb4-zh22d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:01.503425 kubelet[2739]: I0813 01:34:01.503366 2739 kubelet.go:2306] "Pod admission denied" podUID="dbd874c8-2d47-44ae-8088-7070d1ba8c42" pod="tigera-operator/tigera-operator-5bf8dfcb4-c2p4j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:01.613216 kubelet[2739]: I0813 01:34:01.613056 2739 kubelet.go:2306] "Pod admission denied" podUID="aa32356b-1843-4033-95d1-146a2b582cc1" pod="tigera-operator/tigera-operator-5bf8dfcb4-l95hr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:01.800809 kubelet[2739]: E0813 01:34:01.797892 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 01:34:01.808143 kubelet[2739]: I0813 01:34:01.808089 2739 kubelet.go:2306] "Pod admission denied" podUID="a9f99553-750d-4526-a96f-37233b727ba9" pod="tigera-operator/tigera-operator-5bf8dfcb4-958pk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:01.906948 kubelet[2739]: I0813 01:34:01.906415 2739 kubelet.go:2306] "Pod admission denied" podUID="068522f4-4c37-4e4e-8902-3ba63040a4bf" pod="tigera-operator/tigera-operator-5bf8dfcb4-n6h8c" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:01.957587 kubelet[2739]: I0813 01:34:01.957451 2739 kubelet.go:2306] "Pod admission denied" podUID="84244509-8034-4ef2-89d3-51ee7a7a6b36" pod="tigera-operator/tigera-operator-5bf8dfcb4-g75pf" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:02.056157 kubelet[2739]: I0813 01:34:02.056098 2739 kubelet.go:2306] "Pod admission denied" podUID="5f80daae-6a2d-451e-b421-8e8aa66c0ac8" pod="tigera-operator/tigera-operator-5bf8dfcb4-mznbk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:02.154539 kubelet[2739]: I0813 01:34:02.154494 2739 kubelet.go:2306] "Pod admission denied" podUID="45d96390-398b-4771-b928-fea77a1d5b47" pod="tigera-operator/tigera-operator-5bf8dfcb4-qd7qq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:02.208299 kubelet[2739]: I0813 01:34:02.207989 2739 kubelet.go:2306] "Pod admission denied" podUID="aa81849b-e771-46f8-921d-619da9d101e4" pod="tigera-operator/tigera-operator-5bf8dfcb4-ktdl7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:02.302714 kubelet[2739]: I0813 01:34:02.302672 2739 kubelet.go:2306] "Pod admission denied" podUID="da76563b-2885-403b-beb6-8c5757052470" pod="tigera-operator/tigera-operator-5bf8dfcb4-8tjhr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:02.405418 kubelet[2739]: I0813 01:34:02.405358 2739 kubelet.go:2306] "Pod admission denied" podUID="6e19b079-ebad-4edd-b03b-7bd84c25ec75" pod="tigera-operator/tigera-operator-5bf8dfcb4-8bqln" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:02.510322 kubelet[2739]: I0813 01:34:02.510288 2739 kubelet.go:2306] "Pod admission denied" podUID="3a457265-c02e-4252-a942-aaee78b2222c" pod="tigera-operator/tigera-operator-5bf8dfcb4-wq6nq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:02.604614 kubelet[2739]: I0813 01:34:02.604564 2739 kubelet.go:2306] "Pod admission denied" podUID="0e8144c9-b101-47b0-8ef0-62bd464b84fc" pod="tigera-operator/tigera-operator-5bf8dfcb4-mgdw7" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:02.704001 kubelet[2739]: I0813 01:34:02.703954 2739 kubelet.go:2306] "Pod admission denied" podUID="33092372-be0b-4b0a-bda1-80cf9b42870d" pod="tigera-operator/tigera-operator-5bf8dfcb4-xgvmp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:02.800206 kubelet[2739]: E0813 01:34:02.800103 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 01:34:02.801090 containerd[1544]: time="2025-08-13T01:34:02.801002983Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mx5v9,Uid:9cb65184-4613-43ed-9fa1-0cf23f1e0e56,Namespace:kube-system,Attempt:0,}" Aug 13 01:34:02.806197 kubelet[2739]: I0813 01:34:02.806058 2739 kubelet.go:2306] "Pod admission denied" podUID="ecdc13c8-3b2a-4a69-918d-540f2a639987" pod="tigera-operator/tigera-operator-5bf8dfcb4-mnjqn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:02.860740 kubelet[2739]: I0813 01:34:02.860694 2739 kubelet.go:2306] "Pod admission denied" podUID="7837d5ca-01ce-4437-b3d9-44b337e41710" pod="tigera-operator/tigera-operator-5bf8dfcb4-74bxp" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:02.891461 containerd[1544]: time="2025-08-13T01:34:02.891418185Z" level=error msg="Failed to destroy network for sandbox \"436a33b713a825a97eeb6c4b14e02b9ae76bf4de901225a4e251e2e26069d680\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:34:02.895694 containerd[1544]: time="2025-08-13T01:34:02.895106170Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mx5v9,Uid:9cb65184-4613-43ed-9fa1-0cf23f1e0e56,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"436a33b713a825a97eeb6c4b14e02b9ae76bf4de901225a4e251e2e26069d680\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:34:02.895430 systemd[1]: run-netns-cni\x2d5d4acc06\x2d6637\x2dbe11\x2d5b19\x2d4f3651c6e92b.mount: Deactivated successfully. 
Aug 13 01:34:02.896299 kubelet[2739]: E0813 01:34:02.895300 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"436a33b713a825a97eeb6c4b14e02b9ae76bf4de901225a4e251e2e26069d680\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:34:02.896299 kubelet[2739]: E0813 01:34:02.895340 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"436a33b713a825a97eeb6c4b14e02b9ae76bf4de901225a4e251e2e26069d680\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mx5v9" Aug 13 01:34:02.896299 kubelet[2739]: E0813 01:34:02.895356 2739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"436a33b713a825a97eeb6c4b14e02b9ae76bf4de901225a4e251e2e26069d680\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mx5v9" Aug 13 01:34:02.896299 kubelet[2739]: E0813 01:34:02.895391 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-mx5v9_kube-system(9cb65184-4613-43ed-9fa1-0cf23f1e0e56)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-mx5v9_kube-system(9cb65184-4613-43ed-9fa1-0cf23f1e0e56)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"436a33b713a825a97eeb6c4b14e02b9ae76bf4de901225a4e251e2e26069d680\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-mx5v9" podUID="9cb65184-4613-43ed-9fa1-0cf23f1e0e56" Aug 13 01:34:02.954367 kubelet[2739]: I0813 01:34:02.954333 2739 kubelet.go:2306] "Pod admission denied" podUID="f5bfdfb2-57ba-4f8a-8579-490d2d5def61" pod="tigera-operator/tigera-operator-5bf8dfcb4-tcl6l" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:03.158653 kubelet[2739]: I0813 01:34:03.157534 2739 kubelet.go:2306] "Pod admission denied" podUID="609104a5-7f30-4f2e-9806-e203a9a479b7" pod="tigera-operator/tigera-operator-5bf8dfcb4-fvz9v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:03.265063 kubelet[2739]: I0813 01:34:03.264999 2739 kubelet.go:2306] "Pod admission denied" podUID="ce5fec7b-ccfe-4ca2-8f80-db127099945d" pod="tigera-operator/tigera-operator-5bf8dfcb4-spchb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:03.355892 kubelet[2739]: I0813 01:34:03.355851 2739 kubelet.go:2306] "Pod admission denied" podUID="4431c44a-0eb9-4ab7-8628-18f823b60f98" pod="tigera-operator/tigera-operator-5bf8dfcb4-7s752" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:03.453152 kubelet[2739]: I0813 01:34:03.452316 2739 kubelet.go:2306] "Pod admission denied" podUID="ea9c0b37-0310-4525-82c8-2d3557b5fdd4" pod="tigera-operator/tigera-operator-5bf8dfcb4-5z7v4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:03.558050 kubelet[2739]: I0813 01:34:03.557998 2739 kubelet.go:2306] "Pod admission denied" podUID="e23219d0-4b5a-4af2-9cf1-b2d38bf4d456" pod="tigera-operator/tigera-operator-5bf8dfcb4-67glw" reason="Evicted" message="The node had condition: [DiskPressure]. 
"
Aug 13 01:34:03.655255 kubelet[2739]: I0813 01:34:03.655195 2739 kubelet.go:2306] "Pod admission denied" podUID="9587aa3c-5471-4856-a7b9-b86a6884f431" pod="tigera-operator/tigera-operator-5bf8dfcb4-d99hx" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:03.754823 kubelet[2739]: I0813 01:34:03.754761 2739 kubelet.go:2306] "Pod admission denied" podUID="79ded755-efd6-491b-ad80-b9335179715a" pod="tigera-operator/tigera-operator-5bf8dfcb4-zvwwn" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:03.797912 kubelet[2739]: E0813 01:34:03.797750 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:34:03.798383 containerd[1544]: time="2025-08-13T01:34:03.798324426Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-994jv,Uid:3bd5a2e2-42ee-4e27-a412-724b2f0527b4,Namespace:kube-system,Attempt:0,}"
Aug 13 01:34:03.860857 kubelet[2739]: I0813 01:34:03.859733 2739 kubelet.go:2306] "Pod admission denied" podUID="e8876584-c82f-43db-b31d-e86286b897d1" pod="tigera-operator/tigera-operator-5bf8dfcb4-zg264" reason="Evicted" message="The node had condition: [DiskPressure]. 
"
Aug 13 01:34:03.865986 containerd[1544]: time="2025-08-13T01:34:03.865938440Z" level=error msg="Failed to destroy network for sandbox \"e2e7fe7e36f6a2baf485d9ff2756ac176a4a3521fc3b7cab8d3d1d3faa4f8f91\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:34:03.869901 containerd[1544]: time="2025-08-13T01:34:03.868886742Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-994jv,Uid:3bd5a2e2-42ee-4e27-a412-724b2f0527b4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2e7fe7e36f6a2baf485d9ff2756ac176a4a3521fc3b7cab8d3d1d3faa4f8f91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:34:03.872259 kubelet[2739]: E0813 01:34:03.869729 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2e7fe7e36f6a2baf485d9ff2756ac176a4a3521fc3b7cab8d3d1d3faa4f8f91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:34:03.872259 kubelet[2739]: E0813 01:34:03.869769 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2e7fe7e36f6a2baf485d9ff2756ac176a4a3521fc3b7cab8d3d1d3faa4f8f91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-994jv"
Aug 13 01:34:03.872259 kubelet[2739]: E0813 01:34:03.869786 2739 kuberuntime_manager.go:1170] 
"CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e2e7fe7e36f6a2baf485d9ff2756ac176a4a3521fc3b7cab8d3d1d3faa4f8f91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-994jv"
Aug 13 01:34:03.872259 kubelet[2739]: E0813 01:34:03.869824 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-994jv_kube-system(3bd5a2e2-42ee-4e27-a412-724b2f0527b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-994jv_kube-system(3bd5a2e2-42ee-4e27-a412-724b2f0527b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e2e7fe7e36f6a2baf485d9ff2756ac176a4a3521fc3b7cab8d3d1d3faa4f8f91\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-994jv" podUID="3bd5a2e2-42ee-4e27-a412-724b2f0527b4"
Aug 13 01:34:03.872083 systemd[1]: run-netns-cni\x2d0fe53525\x2d368f\x2d6cb2\x2d8e77\x2d015df89bb8e7.mount: Deactivated successfully.
Aug 13 01:34:03.969196 kubelet[2739]: I0813 01:34:03.969141 2739 kubelet.go:2306] "Pod admission denied" podUID="94125e0c-0d86-4654-a51f-9d4a7c474ad5" pod="tigera-operator/tigera-operator-5bf8dfcb4-mcdvd" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:04.157913 kubelet[2739]: I0813 01:34:04.157801 2739 kubelet.go:2306] "Pod admission denied" podUID="c5b0ca89-6db6-44b1-83c6-373290fcdf9d" pod="tigera-operator/tigera-operator-5bf8dfcb4-fvjnr" reason="Evicted" message="The node had condition: [DiskPressure]. 
"
Aug 13 01:34:04.253163 kubelet[2739]: I0813 01:34:04.253066 2739 kubelet.go:2306] "Pod admission denied" podUID="1286b90d-adfb-416a-93e3-d84eea173b12" pod="tigera-operator/tigera-operator-5bf8dfcb4-59b82" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:04.357770 kubelet[2739]: I0813 01:34:04.357709 2739 kubelet.go:2306] "Pod admission denied" podUID="f525ff9c-0e5c-438a-b97d-3c0f5c68628a" pod="tigera-operator/tigera-operator-5bf8dfcb4-c728h" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:04.452082 kubelet[2739]: I0813 01:34:04.451747 2739 kubelet.go:2306] "Pod admission denied" podUID="0d39af84-7437-4aa3-9c2f-80f7daa0eff1" pod="tigera-operator/tigera-operator-5bf8dfcb4-84546" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:04.553872 kubelet[2739]: I0813 01:34:04.553830 2739 kubelet.go:2306] "Pod admission denied" podUID="280a3e75-2004-4adf-9af9-d14c0cc4d90a" pod="tigera-operator/tigera-operator-5bf8dfcb4-lzs9f" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:04.759994 kubelet[2739]: I0813 01:34:04.759956 2739 kubelet.go:2306] "Pod admission denied" podUID="85e976d2-31f0-44fa-b131-6893055b2901" pod="tigera-operator/tigera-operator-5bf8dfcb4-xb2vx" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:04.852524 kubelet[2739]: I0813 01:34:04.852487 2739 kubelet.go:2306] "Pod admission denied" podUID="a664c978-acac-46cc-9497-0067f4f52a0a" pod="tigera-operator/tigera-operator-5bf8dfcb4-7z98z" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:04.954174 kubelet[2739]: I0813 01:34:04.954115 2739 kubelet.go:2306] "Pod admission denied" podUID="77805429-2340-434b-88e5-95e2422d2601" pod="tigera-operator/tigera-operator-5bf8dfcb4-555tz" reason="Evicted" message="The node had condition: [DiskPressure]. 
"
Aug 13 01:34:05.066037 kubelet[2739]: I0813 01:34:05.065725 2739 kubelet.go:2306] "Pod admission denied" podUID="4136a4f6-d1f0-4fc8-961a-6ecb6dc7385d" pod="tigera-operator/tigera-operator-5bf8dfcb4-8kw2r" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:05.153312 kubelet[2739]: I0813 01:34:05.153256 2739 kubelet.go:2306] "Pod admission denied" podUID="23badea5-b702-4f54-8aaf-af62c2b37c2b" pod="tigera-operator/tigera-operator-5bf8dfcb4-zfgrl" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:05.356879 kubelet[2739]: I0813 01:34:05.356765 2739 kubelet.go:2306] "Pod admission denied" podUID="3498e063-0ddc-41ba-83c5-d55328b7a82a" pod="tigera-operator/tigera-operator-5bf8dfcb4-vzb4g" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:05.453853 kubelet[2739]: I0813 01:34:05.453816 2739 kubelet.go:2306] "Pod admission denied" podUID="6b49b0d0-8040-44ab-9e63-89296406c23d" pod="tigera-operator/tigera-operator-5bf8dfcb4-578m2" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:05.555796 kubelet[2739]: I0813 01:34:05.555740 2739 kubelet.go:2306] "Pod admission denied" podUID="a76bd404-b19c-4b88-a10a-ed98a1d3f88c" pod="tigera-operator/tigera-operator-5bf8dfcb4-p52nw" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:05.656452 kubelet[2739]: I0813 01:34:05.656332 2739 kubelet.go:2306] "Pod admission denied" podUID="62914460-93f0-4f16-9455-f8ecac43aa1c" pod="tigera-operator/tigera-operator-5bf8dfcb4-svrwf" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:05.702916 kubelet[2739]: I0813 01:34:05.702870 2739 kubelet.go:2306] "Pod admission denied" podUID="ba4aa379-f4ca-4e67-b2f5-850802688665" pod="tigera-operator/tigera-operator-5bf8dfcb4-jz567" reason="Evicted" message="The node had condition: [DiskPressure]. 
"
Aug 13 01:34:05.812906 kubelet[2739]: I0813 01:34:05.812407 2739 kubelet.go:2306] "Pod admission denied" podUID="fcd0d098-d084-4779-91b3-6547c90244a1" pod="tigera-operator/tigera-operator-5bf8dfcb4-xgjt5" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:05.904250 kubelet[2739]: I0813 01:34:05.904192 2739 kubelet.go:2306] "Pod admission denied" podUID="cb0aac5d-5c2b-4236-891a-9d58dd1fb757" pod="tigera-operator/tigera-operator-5bf8dfcb4-8jgm2" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:06.004358 kubelet[2739]: I0813 01:34:06.004325 2739 kubelet.go:2306] "Pod admission denied" podUID="e6a6cc44-7cd1-487a-b7a9-3fbb3829267d" pod="tigera-operator/tigera-operator-5bf8dfcb4-xmhbm" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:06.105834 kubelet[2739]: I0813 01:34:06.105787 2739 kubelet.go:2306] "Pod admission denied" podUID="50cf57a6-bd2e-48dc-93aa-bd86955905e9" pod="tigera-operator/tigera-operator-5bf8dfcb4-8cfrd" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:06.155234 kubelet[2739]: I0813 01:34:06.155196 2739 kubelet.go:2306] "Pod admission denied" podUID="fba00e6f-3585-4b9b-aac6-ed9b79b29bc6" pod="tigera-operator/tigera-operator-5bf8dfcb4-hgn8f" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:06.252704 kubelet[2739]: I0813 01:34:06.252655 2739 kubelet.go:2306] "Pod admission denied" podUID="b2afc8d8-0a43-4df6-96e8-32ee5403261e" pod="tigera-operator/tigera-operator-5bf8dfcb4-4kjkp" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:06.360532 kubelet[2739]: I0813 01:34:06.360373 2739 kubelet.go:2306] "Pod admission denied" podUID="5170597c-9215-4313-9596-bd4b68cf98cf" pod="tigera-operator/tigera-operator-5bf8dfcb4-9jj4d" reason="Evicted" message="The node had condition: [DiskPressure]. 
"
Aug 13 01:34:06.469445 kubelet[2739]: I0813 01:34:06.468748 2739 kubelet.go:2306] "Pod admission denied" podUID="8bf077fd-5ff5-4d6b-b72c-da9ef37fd1c1" pod="tigera-operator/tigera-operator-5bf8dfcb4-5lhx9" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:06.501955 kubelet[2739]: I0813 01:34:06.501922 2739 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:34:06.501955 kubelet[2739]: I0813 01:34:06.501956 2739 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:34:06.503792 kubelet[2739]: I0813 01:34:06.503744 2739 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:34:06.517021 kubelet[2739]: I0813 01:34:06.516987 2739 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:34:06.517084 kubelet[2739]: I0813 01:34:06.517053 2739 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-85fbc76f96-d5vf4","kube-system/coredns-7c65d6cfc9-994jv","kube-system/coredns-7c65d6cfc9-mx5v9","calico-system/csi-node-driver-7bj49","calico-system/calico-node-5c47r","calico-system/calico-typha-79464475b5-bbrtw","kube-system/kube-controller-manager-172-234-27-175","kube-system/kube-proxy-kfjpt","kube-system/kube-apiserver-172-234-27-175","kube-system/kube-scheduler-172-234-27-175"]
Aug 13 01:34:06.517084 kubelet[2739]: E0813 01:34:06.517076 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4"
Aug 13 01:34:06.517181 kubelet[2739]: E0813 01:34:06.517086 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-994jv"
Aug 13 01:34:06.517181 kubelet[2739]: E0813 01:34:06.517094 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-mx5v9"
Aug 13 
01:34:06.517181 kubelet[2739]: E0813 01:34:06.517100 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-7bj49"
Aug 13 01:34:06.517181 kubelet[2739]: E0813 01:34:06.517107 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-5c47r"
Aug 13 01:34:06.517181 kubelet[2739]: E0813 01:34:06.517117 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-79464475b5-bbrtw"
Aug 13 01:34:06.517181 kubelet[2739]: E0813 01:34:06.517125 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-27-175"
Aug 13 01:34:06.517181 kubelet[2739]: E0813 01:34:06.517172 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-kfjpt"
Aug 13 01:34:06.517181 kubelet[2739]: E0813 01:34:06.517182 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-27-175"
Aug 13 01:34:06.517181 kubelet[2739]: E0813 01:34:06.517191 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-27-175"
Aug 13 01:34:06.517515 kubelet[2739]: I0813 01:34:06.517200 2739 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:34:06.554455 kubelet[2739]: I0813 01:34:06.554410 2739 kubelet.go:2306] "Pod admission denied" podUID="915426cf-5394-48a8-923f-b0acd4638192" pod="tigera-operator/tigera-operator-5bf8dfcb4-d9k6j" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:06.654048 kubelet[2739]: I0813 01:34:06.653627 2739 kubelet.go:2306] "Pod admission denied" podUID="2d4a3c68-6464-4284-9384-5cf54025f1fe" pod="tigera-operator/tigera-operator-5bf8dfcb4-h4pd7" reason="Evicted" message="The node had condition: [DiskPressure]. 
"
Aug 13 01:34:06.866061 kubelet[2739]: I0813 01:34:06.865786 2739 kubelet.go:2306] "Pod admission denied" podUID="1f087119-1bba-451d-bd73-322e1550de9b" pod="tigera-operator/tigera-operator-5bf8dfcb4-9p6w5" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:06.955300 kubelet[2739]: I0813 01:34:06.954727 2739 kubelet.go:2306] "Pod admission denied" podUID="cf556d11-19f8-4c9c-87fd-4ef3729b23e8" pod="tigera-operator/tigera-operator-5bf8dfcb4-dhf58" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:07.055606 kubelet[2739]: I0813 01:34:07.055554 2739 kubelet.go:2306] "Pod admission denied" podUID="4312bdbf-ed83-4b19-a476-a86245a25e3e" pod="tigera-operator/tigera-operator-5bf8dfcb4-bxl6g" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:07.156939 kubelet[2739]: I0813 01:34:07.156895 2739 kubelet.go:2306] "Pod admission denied" podUID="f7270b74-574b-4e87-a227-23c9415b45cf" pod="tigera-operator/tigera-operator-5bf8dfcb4-wvg47" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:07.257164 kubelet[2739]: I0813 01:34:07.257110 2739 kubelet.go:2306] "Pod admission denied" podUID="c7c464f4-e4a4-4a14-a0c3-0bd96ffe248e" pod="tigera-operator/tigera-operator-5bf8dfcb4-gkdpb" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:07.455876 kubelet[2739]: I0813 01:34:07.455827 2739 kubelet.go:2306] "Pod admission denied" podUID="245f2b7f-dca2-4e83-9551-76ef6f351e7e" pod="tigera-operator/tigera-operator-5bf8dfcb4-lk6p7" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:07.562217 kubelet[2739]: I0813 01:34:07.561419 2739 kubelet.go:2306] "Pod admission denied" podUID="f6ba7a41-748b-456f-aba9-d4cd709889d2" pod="tigera-operator/tigera-operator-5bf8dfcb4-ff2lm" reason="Evicted" message="The node had condition: [DiskPressure]. 
"
Aug 13 01:34:07.602570 kubelet[2739]: I0813 01:34:07.602506 2739 kubelet.go:2306] "Pod admission denied" podUID="3c5c334a-4d4f-4abc-96f5-2b1877f72c38" pod="tigera-operator/tigera-operator-5bf8dfcb4-jsb8m" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:07.702612 kubelet[2739]: I0813 01:34:07.702568 2739 kubelet.go:2306] "Pod admission denied" podUID="ac5b6385-17a7-4667-87d9-94a6d27270c3" pod="tigera-operator/tigera-operator-5bf8dfcb4-qjl6n" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:07.804261 kubelet[2739]: I0813 01:34:07.804192 2739 kubelet.go:2306] "Pod admission denied" podUID="82c43bc3-ed61-4296-aedc-164cfea54e24" pod="tigera-operator/tigera-operator-5bf8dfcb4-xdtfn" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:07.852232 kubelet[2739]: I0813 01:34:07.852013 2739 kubelet.go:2306] "Pod admission denied" podUID="0241d8ff-aef2-4a7a-a317-bddf391228a9" pod="tigera-operator/tigera-operator-5bf8dfcb4-4cwrw" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:07.952756 kubelet[2739]: I0813 01:34:07.952717 2739 kubelet.go:2306] "Pod admission denied" podUID="11f98d71-c20f-4d42-a22b-777e980bdd49" pod="tigera-operator/tigera-operator-5bf8dfcb4-8kcd6" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:08.171635 kubelet[2739]: I0813 01:34:08.168745 2739 kubelet.go:2306] "Pod admission denied" podUID="2a4ec508-78a7-48bc-b934-24397b5b4ff5" pod="tigera-operator/tigera-operator-5bf8dfcb4-4cgmm" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:08.257337 kubelet[2739]: I0813 01:34:08.257277 2739 kubelet.go:2306] "Pod admission denied" podUID="2cc67bcf-872d-4e19-9af8-e1cdda552f82" pod="tigera-operator/tigera-operator-5bf8dfcb4-pl9vn" reason="Evicted" message="The node had condition: [DiskPressure]. 
"
Aug 13 01:34:08.305938 kubelet[2739]: I0813 01:34:08.305905 2739 kubelet.go:2306] "Pod admission denied" podUID="f69f54f9-0311-4521-abc5-42ef2da1ec42" pod="tigera-operator/tigera-operator-5bf8dfcb4-pl5kz" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:08.405221 kubelet[2739]: I0813 01:34:08.405179 2739 kubelet.go:2306] "Pod admission denied" podUID="a90c01f1-d2c3-40e4-b24f-aa7ddde72c74" pod="tigera-operator/tigera-operator-5bf8dfcb4-gghx2" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:08.605526 kubelet[2739]: I0813 01:34:08.605475 2739 kubelet.go:2306] "Pod admission denied" podUID="9efdcf31-438e-4e65-a5f7-df9fcd1ca119" pod="tigera-operator/tigera-operator-5bf8dfcb4-kw6cd" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:08.704978 kubelet[2739]: I0813 01:34:08.704935 2739 kubelet.go:2306] "Pod admission denied" podUID="b1f026d0-1063-4e2f-9837-e95926810ffd" pod="tigera-operator/tigera-operator-5bf8dfcb4-d5hhk" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:08.800070 containerd[1544]: time="2025-08-13T01:34:08.800031101Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85fbc76f96-d5vf4,Uid:56caad57-6b4a-4069-b011-1059db183012,Namespace:calico-system,Attempt:0,}"
Aug 13 01:34:08.805309 kubelet[2739]: I0813 01:34:08.805252 2739 kubelet.go:2306] "Pod admission denied" podUID="85262150-b08f-47b3-b5c6-a6f303e9d046" pod="tigera-operator/tigera-operator-5bf8dfcb4-gf7kd" reason="Evicted" message="The node had condition: [DiskPressure]. 
"
Aug 13 01:34:08.861145 containerd[1544]: time="2025-08-13T01:34:08.861023902Z" level=error msg="Failed to destroy network for sandbox \"2a6971813a87e34142a74a6afb7ffec6f5d926c7844bcfc6229048d98906ab65\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:34:08.865019 containerd[1544]: time="2025-08-13T01:34:08.862413527Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85fbc76f96-d5vf4,Uid:56caad57-6b4a-4069-b011-1059db183012,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a6971813a87e34142a74a6afb7ffec6f5d926c7844bcfc6229048d98906ab65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:34:08.864049 systemd[1]: run-netns-cni\x2d1f258f18\x2d38d7\x2d9c25\x2d6b32\x2d6992cdb4d626.mount: Deactivated successfully.
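The eviction-manager entries earlier in this log show why the node stays wedged: the kubelet ranks pods for ephemeral-storage reclaim, but every ranked pod is critical, so each one is skipped and the pass ends with "unable to evict any pods from the node". A minimal sketch of that filtering outcome (illustrative only, not kubelet source; `evict_pass` and `RANKED_PODS` are hypothetical names, with the pod list taken from the log above):

```python
# Illustrative sketch of the eviction pass seen in the log above.
# Assumption: every ranked pod on this node is critical (static or
# system-priority), which is what the repeated
# "Eviction manager: cannot evict a critical pod" lines indicate.

RANKED_PODS = [
    "calico-system/calico-kube-controllers-85fbc76f96-d5vf4",
    "kube-system/coredns-7c65d6cfc9-994jv",
    "kube-system/coredns-7c65d6cfc9-mx5v9",
    "calico-system/csi-node-driver-7bj49",
    "calico-system/calico-node-5c47r",
    "calico-system/calico-typha-79464475b5-bbrtw",
    "kube-system/kube-controller-manager-172-234-27-175",
    "kube-system/kube-proxy-kfjpt",
    "kube-system/kube-apiserver-172-234-27-175",
    "kube-system/kube-scheduler-172-234-27-175",
]

def evict_pass(ranked, is_critical):
    """Return the first evictable pod, or None when all are critical."""
    for pod in ranked:
        if is_critical(pod):
            # corresponds to: "Eviction manager: cannot evict a critical pod"
            continue
        return pod
    # corresponds to: "Eviction manager: unable to evict any pods from the node"
    return None

# In this log every ranked pod is critical, so nothing is reclaimed:
print(evict_pass(RANKED_PODS, lambda pod: True))  # prints None
```

Because nothing can be evicted, DiskPressure persists and the operator's replacement pods keep being denied admission, which is the loop the surrounding entries record.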
Aug 13 01:34:08.865526 kubelet[2739]: E0813 01:34:08.865489 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a6971813a87e34142a74a6afb7ffec6f5d926c7844bcfc6229048d98906ab65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:34:08.865641 kubelet[2739]: E0813 01:34:08.865552 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a6971813a87e34142a74a6afb7ffec6f5d926c7844bcfc6229048d98906ab65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4"
Aug 13 01:34:08.865641 kubelet[2739]: E0813 01:34:08.865574 2739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a6971813a87e34142a74a6afb7ffec6f5d926c7844bcfc6229048d98906ab65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4"
Aug 13 01:34:08.865641 kubelet[2739]: E0813 01:34:08.865620 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-85fbc76f96-d5vf4_calico-system(56caad57-6b4a-4069-b011-1059db183012)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85fbc76f96-d5vf4_calico-system(56caad57-6b4a-4069-b011-1059db183012)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"2a6971813a87e34142a74a6afb7ffec6f5d926c7844bcfc6229048d98906ab65\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" podUID="56caad57-6b4a-4069-b011-1059db183012"
Aug 13 01:34:09.021146 kubelet[2739]: I0813 01:34:09.020822 2739 kubelet.go:2306] "Pod admission denied" podUID="163eb7f0-84e1-452f-b4ea-81ef336ec0a3" pod="tigera-operator/tigera-operator-5bf8dfcb4-8zhwj" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:09.105023 kubelet[2739]: I0813 01:34:09.104988 2739 kubelet.go:2306] "Pod admission denied" podUID="fa91b979-1af8-441c-84e1-0b45efe4eb53" pod="tigera-operator/tigera-operator-5bf8dfcb4-26m92" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:09.206115 kubelet[2739]: I0813 01:34:09.205628 2739 kubelet.go:2306] "Pod admission denied" podUID="91c6b559-5370-484c-8e83-dedf3eeda4c9" pod="tigera-operator/tigera-operator-5bf8dfcb4-s2r2g" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:09.405462 kubelet[2739]: I0813 01:34:09.405399 2739 kubelet.go:2306] "Pod admission denied" podUID="78baef56-e556-40ee-bcef-3c073eddad39" pod="tigera-operator/tigera-operator-5bf8dfcb4-cmznd" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:09.505692 kubelet[2739]: I0813 01:34:09.505625 2739 kubelet.go:2306] "Pod admission denied" podUID="34697e79-68f6-4e65-ab3c-416db4a66981" pod="tigera-operator/tigera-operator-5bf8dfcb4-jv27j" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:09.557153 kubelet[2739]: I0813 01:34:09.557088 2739 kubelet.go:2306] "Pod admission denied" podUID="88b2dec2-591b-494e-9fac-d4a4357566ec" pod="tigera-operator/tigera-operator-5bf8dfcb4-2pszc" reason="Evicted" message="The node had condition: [DiskPressure]. 
"
Aug 13 01:34:09.653299 kubelet[2739]: I0813 01:34:09.653250 2739 kubelet.go:2306] "Pod admission denied" podUID="84e1b223-d457-4149-b2fb-ea6b7409b54d" pod="tigera-operator/tigera-operator-5bf8dfcb4-d4cv9" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:09.798470 kubelet[2739]: E0813 01:34:09.798364 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\"\"" pod="calico-system/calico-node-5c47r" podUID="7d03562e-9842-4425-9847-632615391bfb"
Aug 13 01:34:09.853493 kubelet[2739]: I0813 01:34:09.853449 2739 kubelet.go:2306] "Pod admission denied" podUID="a4399b55-c21e-445c-9063-4ca2c3ffb96b" pod="tigera-operator/tigera-operator-5bf8dfcb4-nl88d" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:09.955878 kubelet[2739]: I0813 01:34:09.955814 2739 kubelet.go:2306] "Pod admission denied" podUID="77ab8a94-fae7-41eb-bf6c-fa869a20bdb9" pod="tigera-operator/tigera-operator-5bf8dfcb4-4qsjt" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:10.054434 kubelet[2739]: I0813 01:34:10.054194 2739 kubelet.go:2306] "Pod admission denied" podUID="5b115b76-11a1-46b9-a298-6f61eea020bd" pod="tigera-operator/tigera-operator-5bf8dfcb4-9rr7h" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:10.158617 kubelet[2739]: I0813 01:34:10.158575 2739 kubelet.go:2306] "Pod admission denied" podUID="8db464cf-10d3-4953-a0d1-bff03421b9f9" pod="tigera-operator/tigera-operator-5bf8dfcb4-rnf6d" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:10.256580 kubelet[2739]: I0813 01:34:10.256543 2739 kubelet.go:2306] "Pod admission denied" podUID="db20074d-da06-4606-9f72-4b737906299e" pod="tigera-operator/tigera-operator-5bf8dfcb4-dtkxj" reason="Evicted" message="The node had condition: [DiskPressure]. 
"
Aug 13 01:34:10.353101 kubelet[2739]: I0813 01:34:10.352985 2739 kubelet.go:2306] "Pod admission denied" podUID="7aa210ff-397f-4d65-957c-66319fb72c57" pod="tigera-operator/tigera-operator-5bf8dfcb4-tjvx6" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:10.455089 kubelet[2739]: I0813 01:34:10.455027 2739 kubelet.go:2306] "Pod admission denied" podUID="597acbdb-266d-409d-886d-892970f4af80" pod="tigera-operator/tigera-operator-5bf8dfcb4-lkkt9" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:10.656922 kubelet[2739]: I0813 01:34:10.656453 2739 kubelet.go:2306] "Pod admission denied" podUID="2da5eac2-645f-451c-8ed7-3daa7ece6fe1" pod="tigera-operator/tigera-operator-5bf8dfcb4-6qsd6" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:10.760117 kubelet[2739]: I0813 01:34:10.760040 2739 kubelet.go:2306] "Pod admission denied" podUID="e36bca37-9adc-45b9-91c5-c248948d1350" pod="tigera-operator/tigera-operator-5bf8dfcb4-4vh44" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:10.859645 kubelet[2739]: I0813 01:34:10.859596 2739 kubelet.go:2306] "Pod admission denied" podUID="8f3582cf-c80f-4ce3-b96e-47ad409a4f7c" pod="tigera-operator/tigera-operator-5bf8dfcb4-lzdq9" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:10.956383 kubelet[2739]: I0813 01:34:10.956260 2739 kubelet.go:2306] "Pod admission denied" podUID="bf7797b2-489e-44d4-9cf8-debc21ba6b3d" pod="tigera-operator/tigera-operator-5bf8dfcb4-g8pnt" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:11.053867 kubelet[2739]: I0813 01:34:11.053804 2739 kubelet.go:2306] "Pod admission denied" podUID="1fadbf0b-0a8f-473f-8d2b-2d8909fb7bb1" pod="tigera-operator/tigera-operator-5bf8dfcb4-jcv4p" reason="Evicted" message="The node had condition: [DiskPressure]. 
"
Aug 13 01:34:11.270567 kubelet[2739]: I0813 01:34:11.270018 2739 kubelet.go:2306] "Pod admission denied" podUID="d05c93f9-245a-47d4-aa7b-09a653cb900a" pod="tigera-operator/tigera-operator-5bf8dfcb4-8dkdh" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:11.355032 kubelet[2739]: I0813 01:34:11.354975 2739 kubelet.go:2306] "Pod admission denied" podUID="62db2268-7e87-4a35-b31e-f2447efca213" pod="tigera-operator/tigera-operator-5bf8dfcb4-rb2c9" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:11.454821 kubelet[2739]: I0813 01:34:11.454784 2739 kubelet.go:2306] "Pod admission denied" podUID="5c6ee3bb-5032-4b6f-89cb-cc88357fc58a" pod="tigera-operator/tigera-operator-5bf8dfcb4-v48kr" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:11.564077 kubelet[2739]: I0813 01:34:11.563346 2739 kubelet.go:2306] "Pod admission denied" podUID="975b675d-5e20-4bae-b12c-d2ca878f101b" pod="tigera-operator/tigera-operator-5bf8dfcb4-w5mtq" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:11.603870 kubelet[2739]: I0813 01:34:11.603813 2739 kubelet.go:2306] "Pod admission denied" podUID="768af907-0a1e-4f89-8209-17cd4cac542b" pod="tigera-operator/tigera-operator-5bf8dfcb4-7c4rw" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:11.709023 kubelet[2739]: I0813 01:34:11.708970 2739 kubelet.go:2306] "Pod admission denied" podUID="6d38a954-68b0-4323-9809-98d53141d3b9" pod="tigera-operator/tigera-operator-5bf8dfcb4-mfntm" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:11.919747 kubelet[2739]: I0813 01:34:11.918562 2739 kubelet.go:2306] "Pod admission denied" podUID="17f167f0-236d-4701-b937-98a40cb81119" pod="tigera-operator/tigera-operator-5bf8dfcb4-89hkt" reason="Evicted" message="The node had condition: [DiskPressure]. 
"
Aug 13 01:34:12.006521 kubelet[2739]: I0813 01:34:12.006461 2739 kubelet.go:2306] "Pod admission denied" podUID="afc2cab2-db2b-4845-b8db-77e010644454" pod="tigera-operator/tigera-operator-5bf8dfcb4-krhbg" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:12.106254 kubelet[2739]: I0813 01:34:12.106200 2739 kubelet.go:2306] "Pod admission denied" podUID="f015e7df-be43-43c5-9921-99332f3c2a8d" pod="tigera-operator/tigera-operator-5bf8dfcb4-wq564" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:12.217577 kubelet[2739]: I0813 01:34:12.217341 2739 kubelet.go:2306] "Pod admission denied" podUID="e9dc2a67-f5d4-41e6-b12d-87dd821accf5" pod="tigera-operator/tigera-operator-5bf8dfcb4-wvwsb" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:12.255510 kubelet[2739]: I0813 01:34:12.255477 2739 kubelet.go:2306] "Pod admission denied" podUID="14b1c956-7e59-443d-8ec5-88844067e77c" pod="tigera-operator/tigera-operator-5bf8dfcb4-4tmzs" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:12.358799 kubelet[2739]: I0813 01:34:12.358724 2739 kubelet.go:2306] "Pod admission denied" podUID="3d7ee0ca-82b4-4eab-94b4-916c2a224ec1" pod="tigera-operator/tigera-operator-5bf8dfcb4-l5m74" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:12.468689 kubelet[2739]: I0813 01:34:12.468565 2739 kubelet.go:2306] "Pod admission denied" podUID="06e29778-5e73-4119-b036-c0273e926d45" pod="tigera-operator/tigera-operator-5bf8dfcb4-9jpfx" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:12.503713 kubelet[2739]: I0813 01:34:12.503654 2739 kubelet.go:2306] "Pod admission denied" podUID="25ee9f8d-fde0-48f2-9cad-86b7eacc4631" pod="tigera-operator/tigera-operator-5bf8dfcb4-c7fvw" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:12.610688 kubelet[2739]: I0813 01:34:12.610631 2739 kubelet.go:2306] "Pod admission denied" podUID="f48f9732-a13a-4b27-8159-3a75de417a06" pod="tigera-operator/tigera-operator-5bf8dfcb4-lfj5h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:12.716824 kubelet[2739]: I0813 01:34:12.716775 2739 kubelet.go:2306] "Pod admission denied" podUID="e3a460ae-64b9-4148-971a-bfc06b3223bb" pod="tigera-operator/tigera-operator-5bf8dfcb4-x9x64" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:12.755011 kubelet[2739]: I0813 01:34:12.754640 2739 kubelet.go:2306] "Pod admission denied" podUID="049d33a6-17f2-4012-9163-8851d1566153" pod="tigera-operator/tigera-operator-5bf8dfcb4-k4vzk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:12.798091 containerd[1544]: time="2025-08-13T01:34:12.797842129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7bj49,Uid:4e5845c9-626c-4c83-900a-0da0bae2daed,Namespace:calico-system,Attempt:0,}" Aug 13 01:34:12.855733 containerd[1544]: time="2025-08-13T01:34:12.855665562Z" level=error msg="Failed to destroy network for sandbox \"447ec2e06193b19b0522c3a42162b859706186ab7e778e5dd841e3fc256b9f00\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:34:12.860742 containerd[1544]: time="2025-08-13T01:34:12.860657008Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7bj49,Uid:4e5845c9-626c-4c83-900a-0da0bae2daed,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"447ec2e06193b19b0522c3a42162b859706186ab7e778e5dd841e3fc256b9f00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Aug 13 01:34:12.860951 kubelet[2739]: E0813 01:34:12.860908 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"447ec2e06193b19b0522c3a42162b859706186ab7e778e5dd841e3fc256b9f00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:34:12.861015 kubelet[2739]: E0813 01:34:12.860968 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"447ec2e06193b19b0522c3a42162b859706186ab7e778e5dd841e3fc256b9f00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7bj49" Aug 13 01:34:12.861015 kubelet[2739]: E0813 01:34:12.860988 2739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"447ec2e06193b19b0522c3a42162b859706186ab7e778e5dd841e3fc256b9f00\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7bj49" Aug 13 01:34:12.861072 kubelet[2739]: E0813 01:34:12.861033 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7bj49_calico-system(4e5845c9-626c-4c83-900a-0da0bae2daed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7bj49_calico-system(4e5845c9-626c-4c83-900a-0da0bae2daed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"447ec2e06193b19b0522c3a42162b859706186ab7e778e5dd841e3fc256b9f00\\\": plugin type=\\\"calico\\\" 
failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7bj49" podUID="4e5845c9-626c-4c83-900a-0da0bae2daed" Aug 13 01:34:12.861251 systemd[1]: run-netns-cni\x2d7d7bdb12\x2d60f4\x2deff7\x2d7977\x2d25f455825813.mount: Deactivated successfully. Aug 13 01:34:12.867276 kubelet[2739]: I0813 01:34:12.867220 2739 kubelet.go:2306] "Pod admission denied" podUID="20fe5855-731e-4fa6-b789-ea053af35524" pod="tigera-operator/tigera-operator-5bf8dfcb4-2v4wk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:12.969653 kubelet[2739]: I0813 01:34:12.968587 2739 kubelet.go:2306] "Pod admission denied" podUID="8b17ae77-bfec-494c-8a0e-c58d75408b1f" pod="tigera-operator/tigera-operator-5bf8dfcb4-fw4pz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:13.055348 kubelet[2739]: I0813 01:34:13.054886 2739 kubelet.go:2306] "Pod admission denied" podUID="32c73ecb-9af3-47cf-990f-56da246a017a" pod="tigera-operator/tigera-operator-5bf8dfcb4-ks65s" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:13.154792 kubelet[2739]: I0813 01:34:13.154750 2739 kubelet.go:2306] "Pod admission denied" podUID="b8e4b844-dfd5-4999-9d85-8a6aebb38d53" pod="tigera-operator/tigera-operator-5bf8dfcb4-j2zzf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:13.261732 kubelet[2739]: I0813 01:34:13.261608 2739 kubelet.go:2306] "Pod admission denied" podUID="91a2f386-5548-4407-99c8-61cea6967827" pod="tigera-operator/tigera-operator-5bf8dfcb4-f7pbq" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:13.355329 kubelet[2739]: I0813 01:34:13.354483 2739 kubelet.go:2306] "Pod admission denied" podUID="5198e46f-b961-4cfe-a364-b1321952ae97" pod="tigera-operator/tigera-operator-5bf8dfcb4-zh9hw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:13.454401 kubelet[2739]: I0813 01:34:13.454352 2739 kubelet.go:2306] "Pod admission denied" podUID="00d90636-7e0f-4a62-b420-4a4568c4537b" pod="tigera-operator/tigera-operator-5bf8dfcb4-6zh4n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:13.670579 kubelet[2739]: I0813 01:34:13.670097 2739 kubelet.go:2306] "Pod admission denied" podUID="55da4c1b-98d3-4885-b47f-1c328c49f8cb" pod="tigera-operator/tigera-operator-5bf8dfcb4-9bqzw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:13.756598 kubelet[2739]: I0813 01:34:13.756565 2739 kubelet.go:2306] "Pod admission denied" podUID="71b9f74f-fd53-4ea8-9652-3d4fafa1d6fc" pod="tigera-operator/tigera-operator-5bf8dfcb4-2p4c7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:13.855474 kubelet[2739]: I0813 01:34:13.855434 2739 kubelet.go:2306] "Pod admission denied" podUID="a1676d8e-683f-44ac-8a3d-15734dacf8d7" pod="tigera-operator/tigera-operator-5bf8dfcb4-9w6gk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:13.961170 kubelet[2739]: I0813 01:34:13.960076 2739 kubelet.go:2306] "Pod admission denied" podUID="22f684d7-2fd9-4062-9fc7-7b30092e6889" pod="tigera-operator/tigera-operator-5bf8dfcb4-p487x" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:14.002397 kubelet[2739]: I0813 01:34:14.002361 2739 kubelet.go:2306] "Pod admission denied" podUID="de4ee2bd-f8e8-4f18-83eb-a12772dd9b47" pod="tigera-operator/tigera-operator-5bf8dfcb4-hwzqm" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:14.106732 kubelet[2739]: I0813 01:34:14.106683 2739 kubelet.go:2306] "Pod admission denied" podUID="84744b75-3007-4e87-bac4-667fa80df06e" pod="tigera-operator/tigera-operator-5bf8dfcb4-4x4mx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:14.218369 kubelet[2739]: I0813 01:34:14.216728 2739 kubelet.go:2306] "Pod admission denied" podUID="1bf2ab8f-a3b1-43ff-821a-6f0c352e0194" pod="tigera-operator/tigera-operator-5bf8dfcb4-h4cb6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:14.306816 kubelet[2739]: I0813 01:34:14.306175 2739 kubelet.go:2306] "Pod admission denied" podUID="61ba960a-7aa1-4adf-86ed-335fc3e1bc16" pod="tigera-operator/tigera-operator-5bf8dfcb4-6mkw2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:14.507271 kubelet[2739]: I0813 01:34:14.506760 2739 kubelet.go:2306] "Pod admission denied" podUID="96465a5f-caa3-43dc-83e5-db3c5595cadb" pod="tigera-operator/tigera-operator-5bf8dfcb4-zm6qm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:14.616149 kubelet[2739]: I0813 01:34:14.615117 2739 kubelet.go:2306] "Pod admission denied" podUID="e6d51305-6453-454e-8472-51bf5f52e228" pod="tigera-operator/tigera-operator-5bf8dfcb4-svfw6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:14.707483 kubelet[2739]: I0813 01:34:14.707429 2739 kubelet.go:2306] "Pod admission denied" podUID="a0c79837-abf6-48ea-a8c9-f61b92ada49c" pod="tigera-operator/tigera-operator-5bf8dfcb4-4r99x" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:14.806342 kubelet[2739]: I0813 01:34:14.805586 2739 kubelet.go:2306] "Pod admission denied" podUID="65b5abd4-a721-4a49-9a48-aee73e963bdf" pod="tigera-operator/tigera-operator-5bf8dfcb4-gtctk" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:14.917954 kubelet[2739]: I0813 01:34:14.917849 2739 kubelet.go:2306] "Pod admission denied" podUID="ef3f79c3-3a96-49b2-94c6-93431f489330" pod="tigera-operator/tigera-operator-5bf8dfcb4-tlqxw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:15.006150 kubelet[2739]: I0813 01:34:15.006077 2739 kubelet.go:2306] "Pod admission denied" podUID="4681695e-60c8-46a0-819d-e0067b0dd475" pod="tigera-operator/tigera-operator-5bf8dfcb4-6864h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:15.106570 kubelet[2739]: I0813 01:34:15.105855 2739 kubelet.go:2306] "Pod admission denied" podUID="bf5e4b44-8813-4e49-acdf-7d07f3671d90" pod="tigera-operator/tigera-operator-5bf8dfcb4-bc56c" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:15.315155 kubelet[2739]: I0813 01:34:15.313271 2739 kubelet.go:2306] "Pod admission denied" podUID="73b2a2ae-2c7b-45da-8798-306a35ceb227" pod="tigera-operator/tigera-operator-5bf8dfcb4-dqtwp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:15.406614 kubelet[2739]: I0813 01:34:15.405974 2739 kubelet.go:2306] "Pod admission denied" podUID="3f6c302d-270a-439e-bc0f-9ead4ba9aa49" pod="tigera-operator/tigera-operator-5bf8dfcb4-pv4lg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:15.506599 kubelet[2739]: I0813 01:34:15.506543 2739 kubelet.go:2306] "Pod admission denied" podUID="3e639867-ec32-4ddf-8705-26baa7c95e09" pod="tigera-operator/tigera-operator-5bf8dfcb4-f5x8g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:15.721509 kubelet[2739]: I0813 01:34:15.721476 2739 kubelet.go:2306] "Pod admission denied" podUID="cc2f6389-ed6f-4366-a63d-cdec919935cd" pod="tigera-operator/tigera-operator-5bf8dfcb4-c5rkh" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:15.798503 kubelet[2739]: E0813 01:34:15.798463 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 01:34:15.800487 containerd[1544]: time="2025-08-13T01:34:15.800445965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mx5v9,Uid:9cb65184-4613-43ed-9fa1-0cf23f1e0e56,Namespace:kube-system,Attempt:0,}" Aug 13 01:34:15.809357 kubelet[2739]: I0813 01:34:15.809294 2739 kubelet.go:2306] "Pod admission denied" podUID="0bc9aaa8-f205-479c-8dc3-f89bfe6bb59e" pod="tigera-operator/tigera-operator-5bf8dfcb4-ndlt4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:15.862688 containerd[1544]: time="2025-08-13T01:34:15.862622319Z" level=error msg="Failed to destroy network for sandbox \"61536e9347a0098a933a62f611ddf6a8a415d973675c0fafefe69dfc5b23272a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:34:15.864888 systemd[1]: run-netns-cni\x2dc2a1f1ca\x2d6d04\x2deb7e\x2d3e71\x2d43886c3065cf.mount: Deactivated successfully. 
Aug 13 01:34:15.867984 containerd[1544]: time="2025-08-13T01:34:15.867882574Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mx5v9,Uid:9cb65184-4613-43ed-9fa1-0cf23f1e0e56,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"61536e9347a0098a933a62f611ddf6a8a415d973675c0fafefe69dfc5b23272a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:34:15.868242 kubelet[2739]: E0813 01:34:15.868204 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61536e9347a0098a933a62f611ddf6a8a415d973675c0fafefe69dfc5b23272a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:34:15.868327 kubelet[2739]: E0813 01:34:15.868260 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61536e9347a0098a933a62f611ddf6a8a415d973675c0fafefe69dfc5b23272a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mx5v9"
Aug 13 01:34:15.868327 kubelet[2739]: E0813 01:34:15.868282 2739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61536e9347a0098a933a62f611ddf6a8a415d973675c0fafefe69dfc5b23272a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mx5v9"
Aug 13 01:34:15.868389 kubelet[2739]: E0813 01:34:15.868320 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-mx5v9_kube-system(9cb65184-4613-43ed-9fa1-0cf23f1e0e56)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-mx5v9_kube-system(9cb65184-4613-43ed-9fa1-0cf23f1e0e56)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"61536e9347a0098a933a62f611ddf6a8a415d973675c0fafefe69dfc5b23272a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-mx5v9" podUID="9cb65184-4613-43ed-9fa1-0cf23f1e0e56"
Aug 13 01:34:15.909069 kubelet[2739]: I0813 01:34:15.909019 2739 kubelet.go:2306] "Pod admission denied" podUID="075df1df-a23b-4502-b69f-181734db8f3d" pod="tigera-operator/tigera-operator-5bf8dfcb4-l8s4j" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:16.017381 kubelet[2739]: I0813 01:34:16.016671 2739 kubelet.go:2306] "Pod admission denied" podUID="80919b60-0982-4878-af90-2bc7d5e3ae67" pod="tigera-operator/tigera-operator-5bf8dfcb4-ks27h" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:16.107611 kubelet[2739]: I0813 01:34:16.107566 2739 kubelet.go:2306] "Pod admission denied" podUID="83ada3d9-5715-41da-abfb-712ab82e3eee" pod="tigera-operator/tigera-operator-5bf8dfcb4-ndpk2" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:16.207831 kubelet[2739]: I0813 01:34:16.207747 2739 kubelet.go:2306] "Pod admission denied" podUID="9622ed0c-1777-423e-9005-b5b5289cbf36" pod="tigera-operator/tigera-operator-5bf8dfcb4-cskdf" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:16.321207 kubelet[2739]: I0813 01:34:16.319238 2739 kubelet.go:2306] "Pod admission denied" podUID="883df505-8c6b-46fc-acf8-450b01fd46bf" pod="tigera-operator/tigera-operator-5bf8dfcb4-tkmbk" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:16.510886 kubelet[2739]: I0813 01:34:16.510823 2739 kubelet.go:2306] "Pod admission denied" podUID="3d9ab1fd-d388-49b7-b913-58e6685308ac" pod="tigera-operator/tigera-operator-5bf8dfcb4-cwqzb" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:16.534907 kubelet[2739]: I0813 01:34:16.534868 2739 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:34:16.534907 kubelet[2739]: I0813 01:34:16.534904 2739 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:34:16.536212 kubelet[2739]: I0813 01:34:16.536180 2739 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:34:16.551032 kubelet[2739]: I0813 01:34:16.551013 2739 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:34:16.551124 kubelet[2739]: I0813 01:34:16.551069 2739 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7c65d6cfc9-mx5v9","calico-system/calico-kube-controllers-85fbc76f96-d5vf4","kube-system/coredns-7c65d6cfc9-994jv","calico-system/csi-node-driver-7bj49","calico-system/calico-node-5c47r","calico-system/calico-typha-79464475b5-bbrtw","kube-system/kube-controller-manager-172-234-27-175","kube-system/kube-proxy-kfjpt","kube-system/kube-apiserver-172-234-27-175","kube-system/kube-scheduler-172-234-27-175"]
Aug 13 01:34:16.551124 kubelet[2739]: E0813 01:34:16.551094 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-mx5v9"
Aug 13 01:34:16.551124 kubelet[2739]: E0813 01:34:16.551104 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4"
Aug 13 01:34:16.551124 kubelet[2739]: E0813 01:34:16.551111 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-994jv"
Aug 13 01:34:16.551124 kubelet[2739]: E0813 01:34:16.551117 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-7bj49"
Aug 13 01:34:16.551124 kubelet[2739]: E0813 01:34:16.551123 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-5c47r"
Aug 13 01:34:16.551328 kubelet[2739]: E0813 01:34:16.551150 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-79464475b5-bbrtw"
Aug 13 01:34:16.551328 kubelet[2739]: E0813 01:34:16.551158 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-27-175"
Aug 13 01:34:16.551328 kubelet[2739]: E0813 01:34:16.551168 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-kfjpt"
Aug 13 01:34:16.551328 kubelet[2739]: E0813 01:34:16.551176 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-27-175"
Aug 13 01:34:16.551328 kubelet[2739]: E0813 01:34:16.551185 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-27-175"
Aug 13 01:34:16.551328 kubelet[2739]: I0813 01:34:16.551193 2739 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:34:16.608717 kubelet[2739]: I0813 01:34:16.608597 2739 kubelet.go:2306] "Pod admission denied" podUID="6d6fcb51-a475-4a31-8f21-4e6c4ba9012f" pod="tigera-operator/tigera-operator-5bf8dfcb4-fldrn" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:16.718570 kubelet[2739]: I0813 01:34:16.718514 2739 kubelet.go:2306] "Pod admission denied" podUID="ccd250f6-b8c9-4f21-a5a3-50369bf02a2b" pod="tigera-operator/tigera-operator-5bf8dfcb4-w45r8" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:16.910895 kubelet[2739]: I0813 01:34:16.910476 2739 kubelet.go:2306] "Pod admission denied" podUID="0f983c0d-9ea4-40a9-a9f6-539436cd892c" pod="tigera-operator/tigera-operator-5bf8dfcb4-tgpqb" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:17.008301 kubelet[2739]: I0813 01:34:17.008239 2739 kubelet.go:2306] "Pod admission denied" podUID="c2709977-f23e-49a2-a880-b6a7ee24d46f" pod="tigera-operator/tigera-operator-5bf8dfcb4-tfjgt" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:17.120678 kubelet[2739]: I0813 01:34:17.119586 2739 kubelet.go:2306] "Pod admission denied" podUID="32b64c7d-cbf3-4c9d-a8ea-ee38e90f5de2" pod="tigera-operator/tigera-operator-5bf8dfcb4-4lz8f" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:17.308144 kubelet[2739]: I0813 01:34:17.308092 2739 kubelet.go:2306] "Pod admission denied" podUID="15401f69-7486-4ba0-9314-f2a7bcd2ed23" pod="tigera-operator/tigera-operator-5bf8dfcb4-mb7dq" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:17.405348 kubelet[2739]: I0813 01:34:17.405305 2739 kubelet.go:2306] "Pod admission denied" podUID="bb71c06c-29cc-4859-895f-4fca2e6f8d81" pod="tigera-operator/tigera-operator-5bf8dfcb4-vxn6w" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:17.515069 kubelet[2739]: I0813 01:34:17.514580 2739 kubelet.go:2306] "Pod admission denied" podUID="776edf46-5d47-4bb3-8f52-3184f30c3985" pod="tigera-operator/tigera-operator-5bf8dfcb4-l4z6l" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:17.607965 kubelet[2739]: I0813 01:34:17.607854 2739 kubelet.go:2306] "Pod admission denied" podUID="33756148-9e07-4c90-82b9-0d36feb8c86c" pod="tigera-operator/tigera-operator-5bf8dfcb4-qjv5k" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:17.708149 kubelet[2739]: I0813 01:34:17.708092 2739 kubelet.go:2306] "Pod admission denied" podUID="d8780eb8-90c9-491a-8278-8ab6c9a0d93e" pod="tigera-operator/tigera-operator-5bf8dfcb4-sbgp9" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:17.932011 kubelet[2739]: I0813 01:34:17.931797 2739 kubelet.go:2306] "Pod admission denied" podUID="8a46dfbc-9a74-432e-8dc2-77ce5c0078e2" pod="tigera-operator/tigera-operator-5bf8dfcb4-ndhcd" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:18.022886 kubelet[2739]: I0813 01:34:18.022826 2739 kubelet.go:2306] "Pod admission denied" podUID="3e7df57a-51b1-4dc8-a0a3-752f452ee952" pod="tigera-operator/tigera-operator-5bf8dfcb4-lrplc" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:18.108382 kubelet[2739]: I0813 01:34:18.108325 2739 kubelet.go:2306] "Pod admission denied" podUID="2155c2c8-9510-4470-a7db-901fc4003326" pod="tigera-operator/tigera-operator-5bf8dfcb4-xfgsc" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:18.211955 kubelet[2739]: I0813 01:34:18.211846 2739 kubelet.go:2306] "Pod admission denied" podUID="0fd5f52b-e9e2-4a81-ad9c-b6525f66fc4c" pod="tigera-operator/tigera-operator-5bf8dfcb4-m29fq" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:18.263845 kubelet[2739]: I0813 01:34:18.263806 2739 kubelet.go:2306] "Pod admission denied" podUID="47a9ed2c-b67e-41da-9042-d34fb300e8b3" pod="tigera-operator/tigera-operator-5bf8dfcb4-nppw4" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:18.358150 kubelet[2739]: I0813 01:34:18.357457 2739 kubelet.go:2306] "Pod admission denied" podUID="2c71bf1a-72b2-4a00-a145-0c0fd87fac24" pod="tigera-operator/tigera-operator-5bf8dfcb4-b44ff" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:18.458063 kubelet[2739]: I0813 01:34:18.458020 2739 kubelet.go:2306] "Pod admission denied" podUID="c2ad5f59-6b01-453a-8423-6da48bb15a22" pod="tigera-operator/tigera-operator-5bf8dfcb4-t5spm" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:18.557061 kubelet[2739]: I0813 01:34:18.557006 2739 kubelet.go:2306] "Pod admission denied" podUID="b4cd2782-4ca1-4a7c-8f22-0a100ae554df" pod="tigera-operator/tigera-operator-5bf8dfcb4-7cjhb" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:18.669775 kubelet[2739]: I0813 01:34:18.669714 2739 kubelet.go:2306] "Pod admission denied" podUID="ae19756e-f76c-4f66-bcc0-131743f73c30" pod="tigera-operator/tigera-operator-5bf8dfcb4-z5jxc" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:18.705863 kubelet[2739]: I0813 01:34:18.705812 2739 kubelet.go:2306] "Pod admission denied" podUID="992fe92b-bf3d-4674-86d0-f328a53c5fdb" pod="tigera-operator/tigera-operator-5bf8dfcb4-kgtlm" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:18.800452 kubelet[2739]: E0813 01:34:18.800343 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:34:18.801440 kubelet[2739]: E0813 01:34:18.801349 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:34:18.803252 containerd[1544]: time="2025-08-13T01:34:18.802803104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-994jv,Uid:3bd5a2e2-42ee-4e27-a412-724b2f0527b4,Namespace:kube-system,Attempt:0,}"
Aug 13 01:34:18.811437 kubelet[2739]: I0813 01:34:18.810654 2739 kubelet.go:2306] "Pod admission denied" podUID="d999c06d-3eac-486c-980d-e505d17dfc99" pod="tigera-operator/tigera-operator-5bf8dfcb4-xjcbm" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:18.890445 containerd[1544]: time="2025-08-13T01:34:18.890199436Z" level=error msg="Failed to destroy network for sandbox \"eea59858d7f71401dc298f8ca1ff6d022d90725a09b453d10a77e75a2f90a036\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:34:18.893862 systemd[1]: run-netns-cni\x2dd6a07dc0\x2da158\x2d07f1\x2d58ac\x2d6166fe5aaec1.mount: Deactivated successfully.
Aug 13 01:34:18.896647 containerd[1544]: time="2025-08-13T01:34:18.896608685Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-994jv,Uid:3bd5a2e2-42ee-4e27-a412-724b2f0527b4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eea59858d7f71401dc298f8ca1ff6d022d90725a09b453d10a77e75a2f90a036\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:34:18.898481 kubelet[2739]: E0813 01:34:18.897838 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eea59858d7f71401dc298f8ca1ff6d022d90725a09b453d10a77e75a2f90a036\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:34:18.900790 kubelet[2739]: E0813 01:34:18.900720 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eea59858d7f71401dc298f8ca1ff6d022d90725a09b453d10a77e75a2f90a036\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-994jv"
Aug 13 01:34:18.900790 kubelet[2739]: E0813 01:34:18.900788 2739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eea59858d7f71401dc298f8ca1ff6d022d90725a09b453d10a77e75a2f90a036\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-994jv"
Aug 13 01:34:18.900936 kubelet[2739]: E0813 01:34:18.900847 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-994jv_kube-system(3bd5a2e2-42ee-4e27-a412-724b2f0527b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-994jv_kube-system(3bd5a2e2-42ee-4e27-a412-724b2f0527b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eea59858d7f71401dc298f8ca1ff6d022d90725a09b453d10a77e75a2f90a036\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-994jv" podUID="3bd5a2e2-42ee-4e27-a412-724b2f0527b4"
Aug 13 01:34:18.914805 kubelet[2739]: I0813 01:34:18.914754 2739 kubelet.go:2306] "Pod admission denied" podUID="a5f2931a-df93-4d17-992b-33b44c4d6db4" pod="tigera-operator/tigera-operator-5bf8dfcb4-9gx8v" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:19.008999 kubelet[2739]: I0813 01:34:19.008943 2739 kubelet.go:2306] "Pod admission denied" podUID="4012dd40-b005-4bcc-8415-02a67cc33ea3" pod="tigera-operator/tigera-operator-5bf8dfcb4-j9sg7" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:19.109040 kubelet[2739]: I0813 01:34:19.108063 2739 kubelet.go:2306] "Pod admission denied" podUID="223c79a4-da08-49a2-b3ac-c6e31d4dd8ec" pod="tigera-operator/tigera-operator-5bf8dfcb4-scn68" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:19.156957 kubelet[2739]: I0813 01:34:19.156907 2739 kubelet.go:2306] "Pod admission denied" podUID="e36395ce-7318-4f39-ba6b-a36f76062473" pod="tigera-operator/tigera-operator-5bf8dfcb4-xct26" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:19.255287 kubelet[2739]: I0813 01:34:19.255229 2739 kubelet.go:2306] "Pod admission denied" podUID="4d39ff27-ad93-44d7-a846-26cdc9c7fa41" pod="tigera-operator/tigera-operator-5bf8dfcb4-xnxmq" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:19.455118 kubelet[2739]: I0813 01:34:19.454997 2739 kubelet.go:2306] "Pod admission denied" podUID="3ced11b1-5127-48be-9de4-91b7d00aacce" pod="tigera-operator/tigera-operator-5bf8dfcb4-vnt24" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:19.555697 kubelet[2739]: I0813 01:34:19.555641 2739 kubelet.go:2306] "Pod admission denied" podUID="421806e7-59f7-46e4-bbc5-1eeeeb2329ac" pod="tigera-operator/tigera-operator-5bf8dfcb4-rdrbv" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:19.655551 kubelet[2739]: I0813 01:34:19.655505 2739 kubelet.go:2306] "Pod admission denied" podUID="1b0a218d-538c-4de5-8277-4815a63979c1" pod="tigera-operator/tigera-operator-5bf8dfcb4-kcz7l" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:19.756089 kubelet[2739]: I0813 01:34:19.756037 2739 kubelet.go:2306] "Pod admission denied" podUID="1eca0352-4539-4ca9-9819-13c60ff5ee2f" pod="tigera-operator/tigera-operator-5bf8dfcb4-8zn5j" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:19.857326 kubelet[2739]: I0813 01:34:19.857277 2739 kubelet.go:2306] "Pod admission denied" podUID="9041e9e9-efe6-4c5b-820c-27cccbe93ad6" pod="tigera-operator/tigera-operator-5bf8dfcb4-2j9s9" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:20.057702 kubelet[2739]: I0813 01:34:20.057574 2739 kubelet.go:2306] "Pod admission denied" podUID="c3dee944-602a-4f3d-a9f8-60ae51298152" pod="tigera-operator/tigera-operator-5bf8dfcb4-z2qsb" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:20.158641 kubelet[2739]: I0813 01:34:20.158225 2739 kubelet.go:2306] "Pod admission denied" podUID="721ae0f2-5994-4b4e-93cb-3f9a12c4bf58" pod="tigera-operator/tigera-operator-5bf8dfcb4-9xkfd" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:20.253637 kubelet[2739]: I0813 01:34:20.253512 2739 kubelet.go:2306] "Pod admission denied" podUID="15772ac2-7a4c-480b-bec0-c979d762573a" pod="tigera-operator/tigera-operator-5bf8dfcb4-svz7x" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:20.458077 kubelet[2739]: I0813 01:34:20.457710 2739 kubelet.go:2306] "Pod admission denied" podUID="dd5e8e20-4aba-43ba-aa54-3e057a443c10" pod="tigera-operator/tigera-operator-5bf8dfcb4-x75bt" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:20.560682 kubelet[2739]: I0813 01:34:20.560640 2739 kubelet.go:2306] "Pod admission denied" podUID="ea2c9baa-c06e-443a-82a9-bec93a0c0987" pod="tigera-operator/tigera-operator-5bf8dfcb4-ct47r" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:20.659339 kubelet[2739]: I0813 01:34:20.659249 2739 kubelet.go:2306] "Pod admission denied" podUID="1ac1527f-bc2e-41f9-8d37-2a6db3138403" pod="tigera-operator/tigera-operator-5bf8dfcb4-tvg7g" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:20.758084 kubelet[2739]: I0813 01:34:20.758027 2739 kubelet.go:2306] "Pod admission denied" podUID="83f4addf-3cd8-4916-b268-6115382256e1" pod="tigera-operator/tigera-operator-5bf8dfcb4-kmhdl" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:20.861783 kubelet[2739]: I0813 01:34:20.861737 2739 kubelet.go:2306] "Pod admission denied" podUID="7b43033c-8589-467d-9d1e-eca15873ed9c" pod="tigera-operator/tigera-operator-5bf8dfcb4-l762m" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:20.959225 kubelet[2739]: I0813 01:34:20.959159 2739 kubelet.go:2306] "Pod admission denied" podUID="5d9ce27d-7501-433e-a25a-fa1ac93cd31d" pod="tigera-operator/tigera-operator-5bf8dfcb4-mqz5n" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:21.010210 kubelet[2739]: I0813 01:34:21.010074 2739 kubelet.go:2306] "Pod admission denied" podUID="72075efa-e928-4839-ae6c-65a1cd1382a0" pod="tigera-operator/tigera-operator-5bf8dfcb4-26nqk" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:21.108098 kubelet[2739]: I0813 01:34:21.108040 2739 kubelet.go:2306] "Pod admission denied" podUID="cf196f1d-cb1f-483a-8a5d-4c1ef3239dad" pod="tigera-operator/tigera-operator-5bf8dfcb4-7vrrt" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:21.307056 kubelet[2739]: I0813 01:34:21.306586 2739 kubelet.go:2306] "Pod admission denied" podUID="e1adcc58-e455-400f-8026-4f9cd37fef1f" pod="tigera-operator/tigera-operator-5bf8dfcb4-cfrm4" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:21.421159 kubelet[2739]: I0813 01:34:21.420608 2739 kubelet.go:2306] "Pod admission denied" podUID="c7c3dafc-ca94-4bc8-8a55-b15c15c758af" pod="tigera-operator/tigera-operator-5bf8dfcb4-4mbgv" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:21.510277 kubelet[2739]: I0813 01:34:21.510205 2739 kubelet.go:2306] "Pod admission denied" podUID="32084758-ebd8-4381-863e-53f0c4765ef1" pod="tigera-operator/tigera-operator-5bf8dfcb4-pn56m" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:34:21.608768 kubelet[2739]: I0813 01:34:21.608271 2739 kubelet.go:2306] "Pod admission denied" podUID="7f5f94db-837e-47ce-a2b9-5b16edf4c3f1" pod="tigera-operator/tigera-operator-5bf8dfcb4-lnsgd" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:21.721515 kubelet[2739]: I0813 01:34:21.721461 2739 kubelet.go:2306] "Pod admission denied" podUID="ff1aeaf9-b5f3-4e9a-9fdc-18dc04c8beab" pod="tigera-operator/tigera-operator-5bf8dfcb4-rbjt8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:21.800844 kubelet[2739]: E0813 01:34:21.800778 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\"\"" pod="calico-system/calico-node-5c47r" podUID="7d03562e-9842-4425-9847-632615391bfb" Aug 13 01:34:21.814374 kubelet[2739]: I0813 01:34:21.814329 2739 kubelet.go:2306] "Pod admission denied" podUID="06ed2913-15f9-4b5a-97e6-a8cadff2fbe2" pod="tigera-operator/tigera-operator-5bf8dfcb4-pbzpg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:21.917555 kubelet[2739]: I0813 01:34:21.916278 2739 kubelet.go:2306] "Pod admission denied" podUID="c072052f-5154-4f3e-a6b3-c8c2237f578f" pod="tigera-operator/tigera-operator-5bf8dfcb4-w96bt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:22.111019 kubelet[2739]: I0813 01:34:22.110961 2739 kubelet.go:2306] "Pod admission denied" podUID="24ce0ae4-cdca-44f7-8b26-08ba66aa8c87" pod="tigera-operator/tigera-operator-5bf8dfcb4-mklhf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:22.217109 kubelet[2739]: I0813 01:34:22.216711 2739 kubelet.go:2306] "Pod admission denied" podUID="d45bd6c7-0125-4580-ab75-4981bda54bb1" pod="tigera-operator/tigera-operator-5bf8dfcb4-kd79f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:22.267762 kubelet[2739]: I0813 01:34:22.267724 2739 kubelet.go:2306] "Pod admission denied" podUID="e75cddca-139e-4623-a2f8-b58fbdab9944" pod="tigera-operator/tigera-operator-5bf8dfcb4-qzrqs" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:22.357391 kubelet[2739]: I0813 01:34:22.357144 2739 kubelet.go:2306] "Pod admission denied" podUID="a0d30527-cf58-4989-8549-c598a229e6f2" pod="tigera-operator/tigera-operator-5bf8dfcb4-6f5kb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:22.465361 kubelet[2739]: I0813 01:34:22.464835 2739 kubelet.go:2306] "Pod admission denied" podUID="755b8d50-9e6a-445c-a8dd-db3b1b5a77a8" pod="tigera-operator/tigera-operator-5bf8dfcb4-x4krk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:22.558394 kubelet[2739]: I0813 01:34:22.558346 2739 kubelet.go:2306] "Pod admission denied" podUID="ef20c9c8-40c4-44b5-91c5-550816f48059" pod="tigera-operator/tigera-operator-5bf8dfcb4-q7cb8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:22.659238 kubelet[2739]: I0813 01:34:22.659181 2739 kubelet.go:2306] "Pod admission denied" podUID="b25e6ad2-75c2-4d38-a766-abba4a3e2b02" pod="tigera-operator/tigera-operator-5bf8dfcb4-klnf6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:22.757599 kubelet[2739]: I0813 01:34:22.757336 2739 kubelet.go:2306] "Pod admission denied" podUID="0a3b4cf0-3671-4e58-9f20-0f4600bfe7e5" pod="tigera-operator/tigera-operator-5bf8dfcb4-bv8q8" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:22.798380 containerd[1544]: time="2025-08-13T01:34:22.798329531Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85fbc76f96-d5vf4,Uid:56caad57-6b4a-4069-b011-1059db183012,Namespace:calico-system,Attempt:0,}" Aug 13 01:34:22.843912 containerd[1544]: time="2025-08-13T01:34:22.841398672Z" level=error msg="Failed to destroy network for sandbox \"78451fc863e37888283f71070b73940d10d3fe94eea9f463166357d87820eaf9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:34:22.843461 systemd[1]: run-netns-cni\x2d02c52f82\x2d0777\x2d1fe3\x2d62be\x2df412b1d5db94.mount: Deactivated successfully. Aug 13 01:34:22.845339 containerd[1544]: time="2025-08-13T01:34:22.845289742Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85fbc76f96-d5vf4,Uid:56caad57-6b4a-4069-b011-1059db183012,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"78451fc863e37888283f71070b73940d10d3fe94eea9f463166357d87820eaf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:34:22.845548 kubelet[2739]: E0813 01:34:22.845500 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78451fc863e37888283f71070b73940d10d3fe94eea9f463166357d87820eaf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:34:22.845594 kubelet[2739]: E0813 01:34:22.845555 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"78451fc863e37888283f71070b73940d10d3fe94eea9f463166357d87820eaf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" Aug 13 01:34:22.845594 kubelet[2739]: E0813 01:34:22.845575 2739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78451fc863e37888283f71070b73940d10d3fe94eea9f463166357d87820eaf9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" Aug 13 01:34:22.845661 kubelet[2739]: E0813 01:34:22.845611 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-85fbc76f96-d5vf4_calico-system(56caad57-6b4a-4069-b011-1059db183012)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85fbc76f96-d5vf4_calico-system(56caad57-6b4a-4069-b011-1059db183012)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"78451fc863e37888283f71070b73940d10d3fe94eea9f463166357d87820eaf9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" podUID="56caad57-6b4a-4069-b011-1059db183012" Aug 13 01:34:22.957507 kubelet[2739]: I0813 01:34:22.957468 2739 kubelet.go:2306] "Pod admission denied" podUID="dbb37d68-ad61-4897-9b92-1535aadf4846" pod="tigera-operator/tigera-operator-5bf8dfcb4-dnxgb" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:23.063556 kubelet[2739]: I0813 01:34:23.062981 2739 kubelet.go:2306] "Pod admission denied" podUID="e4d7493a-c5a0-489a-b9fa-37a8c0eea534" pod="tigera-operator/tigera-operator-5bf8dfcb4-bj4x8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:23.157364 kubelet[2739]: I0813 01:34:23.157256 2739 kubelet.go:2306] "Pod admission denied" podUID="67f8a3f3-b387-4a5d-8009-7aee8ffdfb46" pod="tigera-operator/tigera-operator-5bf8dfcb4-z2wdr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:23.258815 kubelet[2739]: I0813 01:34:23.258764 2739 kubelet.go:2306] "Pod admission denied" podUID="17bf0203-39d8-4cf1-b228-19265c9c6d2b" pod="tigera-operator/tigera-operator-5bf8dfcb4-gr77q" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:23.365953 kubelet[2739]: I0813 01:34:23.365912 2739 kubelet.go:2306] "Pod admission denied" podUID="06bb31c9-9ab2-4158-b4d4-226444e7c737" pod="tigera-operator/tigera-operator-5bf8dfcb4-52ldf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:23.561606 kubelet[2739]: I0813 01:34:23.561556 2739 kubelet.go:2306] "Pod admission denied" podUID="c47faa7f-1291-4aed-a0a1-58773c5d0c5f" pod="tigera-operator/tigera-operator-5bf8dfcb4-dtp2k" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:23.655742 kubelet[2739]: I0813 01:34:23.655696 2739 kubelet.go:2306] "Pod admission denied" podUID="a8869f52-65e8-4367-9c8c-65f79895e50a" pod="tigera-operator/tigera-operator-5bf8dfcb4-24n2r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:23.717472 kubelet[2739]: I0813 01:34:23.717422 2739 kubelet.go:2306] "Pod admission denied" podUID="83f92b85-d0ec-4faa-a715-4753181f3c84" pod="tigera-operator/tigera-operator-5bf8dfcb4-4dndj" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:23.811831 kubelet[2739]: I0813 01:34:23.811683 2739 kubelet.go:2306] "Pod admission denied" podUID="dc695811-5a87-4e56-8eff-2ce2078a5aa9" pod="tigera-operator/tigera-operator-5bf8dfcb4-vlmwc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:23.907359 kubelet[2739]: I0813 01:34:23.907308 2739 kubelet.go:2306] "Pod admission denied" podUID="ec01b552-7b2b-4066-8ee6-d8fb6116d145" pod="tigera-operator/tigera-operator-5bf8dfcb4-hmb8z" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:24.008573 kubelet[2739]: I0813 01:34:24.008517 2739 kubelet.go:2306] "Pod admission denied" podUID="21fc3133-946f-4fcb-ad53-f83177b9b7bf" pod="tigera-operator/tigera-operator-5bf8dfcb4-xr2fj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:24.108859 kubelet[2739]: I0813 01:34:24.108734 2739 kubelet.go:2306] "Pod admission denied" podUID="196e2996-059f-45a1-aa16-dd08b91a5d24" pod="tigera-operator/tigera-operator-5bf8dfcb4-5krx2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:24.216229 kubelet[2739]: I0813 01:34:24.216187 2739 kubelet.go:2306] "Pod admission denied" podUID="13f828b9-bc0e-405c-9b79-c6979e1ad57c" pod="tigera-operator/tigera-operator-5bf8dfcb4-qn655" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:24.308529 kubelet[2739]: I0813 01:34:24.308490 2739 kubelet.go:2306] "Pod admission denied" podUID="2fc5ce21-dd44-4c6e-8a2a-781908380c12" pod="tigera-operator/tigera-operator-5bf8dfcb4-lr7dp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:24.408223 kubelet[2739]: I0813 01:34:24.407429 2739 kubelet.go:2306] "Pod admission denied" podUID="759d4d8c-d1f3-44d4-b5ef-ea8cebe5db26" pod="tigera-operator/tigera-operator-5bf8dfcb4-xzt4r" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:24.506767 kubelet[2739]: I0813 01:34:24.506709 2739 kubelet.go:2306] "Pod admission denied" podUID="b6162090-237b-46e7-aacb-9aa420ec01d9" pod="tigera-operator/tigera-operator-5bf8dfcb4-gqp94" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:24.558659 kubelet[2739]: I0813 01:34:24.558586 2739 kubelet.go:2306] "Pod admission denied" podUID="3b4b66cd-c15d-4523-bd7e-60b8b2fc2270" pod="tigera-operator/tigera-operator-5bf8dfcb4-ktzsq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:24.668193 kubelet[2739]: I0813 01:34:24.667580 2739 kubelet.go:2306] "Pod admission denied" podUID="6035089f-d923-428b-b312-ab6b70411895" pod="tigera-operator/tigera-operator-5bf8dfcb4-kpc26" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:24.760110 kubelet[2739]: I0813 01:34:24.760061 2739 kubelet.go:2306] "Pod admission denied" podUID="99e9e6c4-773c-4037-a8cb-6ac2708a05a1" pod="tigera-operator/tigera-operator-5bf8dfcb4-8vsg2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:24.857246 kubelet[2739]: I0813 01:34:24.857194 2739 kubelet.go:2306] "Pod admission denied" podUID="f0b1b916-e3b1-4e26-8554-207556493782" pod="tigera-operator/tigera-operator-5bf8dfcb4-75btk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:24.958562 kubelet[2739]: I0813 01:34:24.958422 2739 kubelet.go:2306] "Pod admission denied" podUID="05870f44-6948-4290-8af1-0bf3e51a2551" pod="tigera-operator/tigera-operator-5bf8dfcb4-hgwwb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:25.058310 kubelet[2739]: I0813 01:34:25.058260 2739 kubelet.go:2306] "Pod admission denied" podUID="3bc4fec9-fcb9-46ed-bf78-d8abd60d9cc6" pod="tigera-operator/tigera-operator-5bf8dfcb4-5jtbg" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:25.259851 kubelet[2739]: I0813 01:34:25.259792 2739 kubelet.go:2306] "Pod admission denied" podUID="81476ca4-5e2f-4a21-9bed-3f1dcb1b3130" pod="tigera-operator/tigera-operator-5bf8dfcb4-xgbjz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:25.357059 kubelet[2739]: I0813 01:34:25.357007 2739 kubelet.go:2306] "Pod admission denied" podUID="10e2c708-6da2-428d-bbb8-650aa3ff710e" pod="tigera-operator/tigera-operator-5bf8dfcb4-fjqwq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:25.456042 kubelet[2739]: I0813 01:34:25.455986 2739 kubelet.go:2306] "Pod admission denied" podUID="5f80a3dd-b796-42e1-baf8-441870df29cc" pod="tigera-operator/tigera-operator-5bf8dfcb4-9nhfv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:25.562227 kubelet[2739]: I0813 01:34:25.561634 2739 kubelet.go:2306] "Pod admission denied" podUID="cf231332-7560-4c3e-8679-b731efbcc706" pod="tigera-operator/tigera-operator-5bf8dfcb4-8l5c8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:25.656344 kubelet[2739]: I0813 01:34:25.656289 2739 kubelet.go:2306] "Pod admission denied" podUID="427afd42-ad58-42bb-b3ae-3e1634f73791" pod="tigera-operator/tigera-operator-5bf8dfcb4-f4lr5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:25.760998 kubelet[2739]: I0813 01:34:25.760938 2739 kubelet.go:2306] "Pod admission denied" podUID="9ecc410c-f01d-4b21-93c9-89cac3cac028" pod="tigera-operator/tigera-operator-5bf8dfcb4-cwt69" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:25.798054 containerd[1544]: time="2025-08-13T01:34:25.797994608Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7bj49,Uid:4e5845c9-626c-4c83-900a-0da0bae2daed,Namespace:calico-system,Attempt:0,}" Aug 13 01:34:25.864015 containerd[1544]: time="2025-08-13T01:34:25.863804967Z" level=error msg="Failed to destroy network for sandbox \"c931689ceddee66e4edde38f8ad5cc2049a34a4c9d483970b313616e1f15e24a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:34:25.866833 systemd[1]: run-netns-cni\x2d4f4de28c\x2ddbae\x2d74c6\x2dd812\x2d504d12db2967.mount: Deactivated successfully. Aug 13 01:34:25.867389 containerd[1544]: time="2025-08-13T01:34:25.867110814Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7bj49,Uid:4e5845c9-626c-4c83-900a-0da0bae2daed,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c931689ceddee66e4edde38f8ad5cc2049a34a4c9d483970b313616e1f15e24a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:34:25.868802 kubelet[2739]: E0813 01:34:25.868694 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c931689ceddee66e4edde38f8ad5cc2049a34a4c9d483970b313616e1f15e24a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:34:25.868802 kubelet[2739]: E0813 01:34:25.868753 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c931689ceddee66e4edde38f8ad5cc2049a34a4c9d483970b313616e1f15e24a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7bj49" Aug 13 01:34:25.868802 kubelet[2739]: E0813 01:34:25.868774 2739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c931689ceddee66e4edde38f8ad5cc2049a34a4c9d483970b313616e1f15e24a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7bj49" Aug 13 01:34:25.868915 kubelet[2739]: E0813 01:34:25.868808 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7bj49_calico-system(4e5845c9-626c-4c83-900a-0da0bae2daed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7bj49_calico-system(4e5845c9-626c-4c83-900a-0da0bae2daed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c931689ceddee66e4edde38f8ad5cc2049a34a4c9d483970b313616e1f15e24a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7bj49" podUID="4e5845c9-626c-4c83-900a-0da0bae2daed" Aug 13 01:34:25.871463 kubelet[2739]: I0813 01:34:25.871229 2739 kubelet.go:2306] "Pod admission denied" podUID="64ea6997-e4a3-4e9f-9863-947f8eb071c9" pod="tigera-operator/tigera-operator-5bf8dfcb4-xkkhm" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:25.954894 kubelet[2739]: I0813 01:34:25.954864 2739 kubelet.go:2306] "Pod admission denied" podUID="0014068a-14cf-41ec-b644-0186daf74a14" pod="tigera-operator/tigera-operator-5bf8dfcb4-7hddt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:26.064111 kubelet[2739]: I0813 01:34:26.064076 2739 kubelet.go:2306] "Pod admission denied" podUID="11856e63-c894-4d46-bc08-73f21a62bb6b" pod="tigera-operator/tigera-operator-5bf8dfcb4-x8mnq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:26.158272 kubelet[2739]: I0813 01:34:26.157834 2739 kubelet.go:2306] "Pod admission denied" podUID="f13f922a-3522-40e2-af8b-515d983c490a" pod="tigera-operator/tigera-operator-5bf8dfcb4-2wdks" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:26.259805 kubelet[2739]: I0813 01:34:26.259728 2739 kubelet.go:2306] "Pod admission denied" podUID="f8f578c5-5883-4b95-bcef-3c7760ae89f9" pod="tigera-operator/tigera-operator-5bf8dfcb4-gx9j9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:26.459596 kubelet[2739]: I0813 01:34:26.459479 2739 kubelet.go:2306] "Pod admission denied" podUID="44c8599c-06d7-4a9b-a6ef-8cfc376a6bc1" pod="tigera-operator/tigera-operator-5bf8dfcb4-r4wc8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:26.559286 kubelet[2739]: I0813 01:34:26.559254 2739 kubelet.go:2306] "Pod admission denied" podUID="1a53dc1f-67a4-4640-a34d-5ddcbd0c2ba8" pod="tigera-operator/tigera-operator-5bf8dfcb4-f8b4l" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:26.569731 kubelet[2739]: I0813 01:34:26.569705 2739 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:34:26.570005 kubelet[2739]: I0813 01:34:26.569832 2739 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:34:26.572836 kubelet[2739]: I0813 01:34:26.572172 2739 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:34:26.581832 kubelet[2739]: I0813 01:34:26.581585 2739 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:34:26.581832 kubelet[2739]: I0813 01:34:26.581643 2739 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7c65d6cfc9-mx5v9","calico-system/calico-kube-controllers-85fbc76f96-d5vf4","kube-system/coredns-7c65d6cfc9-994jv","calico-system/csi-node-driver-7bj49","calico-system/calico-node-5c47r","calico-system/calico-typha-79464475b5-bbrtw","kube-system/kube-controller-manager-172-234-27-175","kube-system/kube-proxy-kfjpt","kube-system/kube-apiserver-172-234-27-175","kube-system/kube-scheduler-172-234-27-175"] Aug 13 01:34:26.581832 kubelet[2739]: E0813 01:34:26.581665 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-mx5v9" Aug 13 01:34:26.581832 kubelet[2739]: E0813 01:34:26.581675 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" Aug 13 01:34:26.581832 kubelet[2739]: E0813 01:34:26.581681 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-994jv" Aug 13 01:34:26.581832 kubelet[2739]: E0813 01:34:26.581688 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-7bj49" Aug 13 01:34:26.581832 kubelet[2739]: E0813 01:34:26.581694 2739 eviction_manager.go:598] "Eviction 
manager: cannot evict a critical pod" pod="calico-system/calico-node-5c47r" Aug 13 01:34:26.581832 kubelet[2739]: E0813 01:34:26.581704 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-79464475b5-bbrtw" Aug 13 01:34:26.581832 kubelet[2739]: E0813 01:34:26.581712 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-27-175" Aug 13 01:34:26.581832 kubelet[2739]: E0813 01:34:26.581721 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-kfjpt" Aug 13 01:34:26.581832 kubelet[2739]: E0813 01:34:26.581728 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-27-175" Aug 13 01:34:26.581832 kubelet[2739]: E0813 01:34:26.581736 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-27-175" Aug 13 01:34:26.581832 kubelet[2739]: I0813 01:34:26.581744 2739 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:34:26.668568 kubelet[2739]: I0813 01:34:26.667044 2739 kubelet.go:2306] "Pod admission denied" podUID="71d8c8f9-ec81-4f4a-81fb-f0fa9ba13c70" pod="tigera-operator/tigera-operator-5bf8dfcb4-7qtpw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:26.776700 kubelet[2739]: I0813 01:34:26.776645 2739 kubelet.go:2306] "Pod admission denied" podUID="cc3b3da1-bb95-4f23-9eb2-68a20a73c84d" pod="tigera-operator/tigera-operator-5bf8dfcb4-bk7k8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:26.830833 kubelet[2739]: I0813 01:34:26.830794 2739 kubelet.go:2306] "Pod admission denied" podUID="ae4e12ae-be64-4048-abeb-6032c247f0f2" pod="tigera-operator/tigera-operator-5bf8dfcb4-rnpng" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:26.908730 kubelet[2739]: I0813 01:34:26.908696 2739 kubelet.go:2306] "Pod admission denied" podUID="1701718e-ea65-4eab-8194-4434d66782e4" pod="tigera-operator/tigera-operator-5bf8dfcb4-5cmrr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:27.008537 kubelet[2739]: I0813 01:34:27.008484 2739 kubelet.go:2306] "Pod admission denied" podUID="92f2449e-0fc4-4c52-960c-f165389e840f" pod="tigera-operator/tigera-operator-5bf8dfcb4-rnspm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:27.122329 kubelet[2739]: I0813 01:34:27.121828 2739 kubelet.go:2306] "Pod admission denied" podUID="6c5ab8ae-94be-471f-a7b6-b139339bacd0" pod="tigera-operator/tigera-operator-5bf8dfcb4-gmc2n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:27.209801 kubelet[2739]: I0813 01:34:27.209745 2739 kubelet.go:2306] "Pod admission denied" podUID="b0c9db3d-9582-4c38-bee5-db8f741d8282" pod="tigera-operator/tigera-operator-5bf8dfcb4-rpp74" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:27.310845 kubelet[2739]: I0813 01:34:27.310784 2739 kubelet.go:2306] "Pod admission denied" podUID="d912817c-aa05-4802-ae6f-01a50916fdd1" pod="tigera-operator/tigera-operator-5bf8dfcb4-mrhn9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:27.409769 kubelet[2739]: I0813 01:34:27.409350 2739 kubelet.go:2306] "Pod admission denied" podUID="3dabebb5-f5ab-431e-a7ad-5fed1bcedae1" pod="tigera-operator/tigera-operator-5bf8dfcb4-f8scg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:27.457855 kubelet[2739]: I0813 01:34:27.457802 2739 kubelet.go:2306] "Pod admission denied" podUID="3b43211e-d892-40c7-90a3-a05976e4afe7" pod="tigera-operator/tigera-operator-5bf8dfcb4-4cvbw" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:27.564728 kubelet[2739]: I0813 01:34:27.564674 2739 kubelet.go:2306] "Pod admission denied" podUID="27e3fad2-dba2-4a88-bd74-2d900783d9c3" pod="tigera-operator/tigera-operator-5bf8dfcb4-wpvw8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:27.662445 kubelet[2739]: I0813 01:34:27.660525 2739 kubelet.go:2306] "Pod admission denied" podUID="b65edc74-d920-404c-bf2b-85c7007fa5a7" pod="tigera-operator/tigera-operator-5bf8dfcb4-hdxzn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:27.760571 kubelet[2739]: I0813 01:34:27.760507 2739 kubelet.go:2306] "Pod admission denied" podUID="d3a59a40-f3e8-4ba0-a083-bde6b733bde5" pod="tigera-operator/tigera-operator-5bf8dfcb4-jx6c6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:27.858502 kubelet[2739]: I0813 01:34:27.858441 2739 kubelet.go:2306] "Pod admission denied" podUID="7e5393c4-6d31-4243-85b3-c73842a15fcf" pod="tigera-operator/tigera-operator-5bf8dfcb4-k8jv6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:27.963224 kubelet[2739]: I0813 01:34:27.963070 2739 kubelet.go:2306] "Pod admission denied" podUID="62f4f3f0-34c1-4277-83c3-294e8a7fd43f" pod="tigera-operator/tigera-operator-5bf8dfcb4-qz42z" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:28.058455 kubelet[2739]: I0813 01:34:28.058405 2739 kubelet.go:2306] "Pod admission denied" podUID="4341a635-e902-4b79-b8df-f157a0480d4e" pod="tigera-operator/tigera-operator-5bf8dfcb4-h9g2g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:28.158171 kubelet[2739]: I0813 01:34:28.158119 2739 kubelet.go:2306] "Pod admission denied" podUID="75bf4059-d85d-4a58-ac2d-86683be227f7" pod="tigera-operator/tigera-operator-5bf8dfcb4-cn8nm" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:28.257426 kubelet[2739]: I0813 01:34:28.257379 2739 kubelet.go:2306] "Pod admission denied" podUID="abd4b03a-6ccc-4de4-b3df-8604779511ef" pod="tigera-operator/tigera-operator-5bf8dfcb4-z7kt2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:28.361111 kubelet[2739]: I0813 01:34:28.361067 2739 kubelet.go:2306] "Pod admission denied" podUID="1b35e7d9-e02b-46e9-8569-ca8a64c661b7" pod="tigera-operator/tigera-operator-5bf8dfcb4-bj9fz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:28.559967 kubelet[2739]: I0813 01:34:28.559849 2739 kubelet.go:2306] "Pod admission denied" podUID="7d976d72-0215-414f-b11f-696171df36b8" pod="tigera-operator/tigera-operator-5bf8dfcb4-5v6xs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:28.658976 kubelet[2739]: I0813 01:34:28.658929 2739 kubelet.go:2306] "Pod admission denied" podUID="67fa4fdd-658b-4b59-95cd-66915d71d45a" pod="tigera-operator/tigera-operator-5bf8dfcb4-48kdt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:28.761165 kubelet[2739]: I0813 01:34:28.760125 2739 kubelet.go:2306] "Pod admission denied" podUID="d1f30850-3ea0-47fc-bffa-90b5134df15e" pod="tigera-operator/tigera-operator-5bf8dfcb4-vdmhg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:28.861072 kubelet[2739]: I0813 01:34:28.859945 2739 kubelet.go:2306] "Pod admission denied" podUID="bcb991d1-bfea-4ca6-886c-1c129d29ed00" pod="tigera-operator/tigera-operator-5bf8dfcb4-9jxdp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:28.907854 kubelet[2739]: I0813 01:34:28.907797 2739 kubelet.go:2306] "Pod admission denied" podUID="b5b2c9f3-0c3d-429d-849d-c464a239903f" pod="tigera-operator/tigera-operator-5bf8dfcb4-5wk5k" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:29.010041 kubelet[2739]: I0813 01:34:29.009982 2739 kubelet.go:2306] "Pod admission denied" podUID="758ae6b5-081f-4dfa-a38f-d12610a736e8" pod="tigera-operator/tigera-operator-5bf8dfcb4-n4vcd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:29.108214 kubelet[2739]: I0813 01:34:29.108144 2739 kubelet.go:2306] "Pod admission denied" podUID="c09c647a-e4f6-4b56-aa08-648e6c2553c2" pod="tigera-operator/tigera-operator-5bf8dfcb4-dpdzx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:29.220145 kubelet[2739]: I0813 01:34:29.219903 2739 kubelet.go:2306] "Pod admission denied" podUID="f37a9b73-615d-46d4-b2eb-f96f31e40a71" pod="tigera-operator/tigera-operator-5bf8dfcb4-twtjh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:29.310227 kubelet[2739]: I0813 01:34:29.310174 2739 kubelet.go:2306] "Pod admission denied" podUID="492aba62-824b-403d-8e75-fd063e389841" pod="tigera-operator/tigera-operator-5bf8dfcb4-66cjn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:29.409112 kubelet[2739]: I0813 01:34:29.409065 2739 kubelet.go:2306] "Pod admission denied" podUID="0642a331-d7e8-45ab-8aa9-7866a0e0cccb" pod="tigera-operator/tigera-operator-5bf8dfcb4-ngfb2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:29.512164 kubelet[2739]: I0813 01:34:29.512021 2739 kubelet.go:2306] "Pod admission denied" podUID="8fd57e36-c2b4-4398-8fa1-b7928b6c5ab7" pod="tigera-operator/tigera-operator-5bf8dfcb4-fv7tk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:29.610558 kubelet[2739]: I0813 01:34:29.610511 2739 kubelet.go:2306] "Pod admission denied" podUID="4f2dd7b6-596d-4c74-bd0b-a1daf2ebaf91" pod="tigera-operator/tigera-operator-5bf8dfcb4-jb26p" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:29.714150 kubelet[2739]: I0813 01:34:29.713574 2739 kubelet.go:2306] "Pod admission denied" podUID="2d4a4868-f16e-468d-be63-3ab50161b374" pod="tigera-operator/tigera-operator-5bf8dfcb4-2z4s9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:29.811193 kubelet[2739]: I0813 01:34:29.810709 2739 kubelet.go:2306] "Pod admission denied" podUID="92ee7e45-2d57-42ae-bbe0-bcd5961e44d6" pod="tigera-operator/tigera-operator-5bf8dfcb4-2s9l9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:29.911448 kubelet[2739]: I0813 01:34:29.911402 2739 kubelet.go:2306] "Pod admission denied" podUID="19d33eb5-e756-47ce-9999-b1b8b254230a" pod="tigera-operator/tigera-operator-5bf8dfcb4-k5lw6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:30.020360 kubelet[2739]: I0813 01:34:30.020309 2739 kubelet.go:2306] "Pod admission denied" podUID="0804a905-7e62-4c11-8d31-2563dd8bd83d" pod="tigera-operator/tigera-operator-5bf8dfcb4-xksh8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:30.210675 kubelet[2739]: I0813 01:34:30.210550 2739 kubelet.go:2306] "Pod admission denied" podUID="7b67640b-407f-4994-9e35-83287eb1c3a5" pod="tigera-operator/tigera-operator-5bf8dfcb4-lj944" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:30.308693 kubelet[2739]: I0813 01:34:30.308645 2739 kubelet.go:2306] "Pod admission denied" podUID="3b96bcd5-eeea-4184-8b1e-f6806ded713e" pod="tigera-operator/tigera-operator-5bf8dfcb4-87zd9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:30.408258 kubelet[2739]: I0813 01:34:30.408214 2739 kubelet.go:2306] "Pod admission denied" podUID="61f8acba-60f1-442a-8f6d-afe27a7ecaa6" pod="tigera-operator/tigera-operator-5bf8dfcb4-dq99c" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:30.611967 kubelet[2739]: I0813 01:34:30.611907 2739 kubelet.go:2306] "Pod admission denied" podUID="2588ac52-4eab-4b35-be9b-0803c1a50e79" pod="tigera-operator/tigera-operator-5bf8dfcb4-vj529" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:30.715542 kubelet[2739]: I0813 01:34:30.715485 2739 kubelet.go:2306] "Pod admission denied" podUID="ef9c04ca-48f0-4422-a696-5e16caec5897" pod="tigera-operator/tigera-operator-5bf8dfcb4-mn57l" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:30.778486 kubelet[2739]: I0813 01:34:30.778432 2739 kubelet.go:2306] "Pod admission denied" podUID="359b3ded-a09e-4ba2-90d2-5971f275de6c" pod="tigera-operator/tigera-operator-5bf8dfcb4-tm5lz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:30.805592 kubelet[2739]: E0813 01:34:30.804614 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 01:34:30.806486 containerd[1544]: time="2025-08-13T01:34:30.806407490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-994jv,Uid:3bd5a2e2-42ee-4e27-a412-724b2f0527b4,Namespace:kube-system,Attempt:0,}" Aug 13 01:34:30.807495 kubelet[2739]: E0813 01:34:30.807206 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 01:34:30.807570 containerd[1544]: time="2025-08-13T01:34:30.807431153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mx5v9,Uid:9cb65184-4613-43ed-9fa1-0cf23f1e0e56,Namespace:kube-system,Attempt:0,}" Aug 13 01:34:30.873085 kubelet[2739]: I0813 01:34:30.872783 2739 kubelet.go:2306] "Pod admission denied" podUID="bfff716e-8117-46e4-bc6b-cd6221202226" 
pod="tigera-operator/tigera-operator-5bf8dfcb4-kxk7g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:30.904694 containerd[1544]: time="2025-08-13T01:34:30.904615026Z" level=error msg="Failed to destroy network for sandbox \"3c6cde12e10c00bbd2397a49f339f0838e096430d0276a184a261e1fc99f0182\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:34:30.907027 systemd[1]: run-netns-cni\x2d4888a1df\x2da9e1\x2da0f5\x2d676f\x2d3a60fa899867.mount: Deactivated successfully. Aug 13 01:34:30.909194 containerd[1544]: time="2025-08-13T01:34:30.908993776Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-994jv,Uid:3bd5a2e2-42ee-4e27-a412-724b2f0527b4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c6cde12e10c00bbd2397a49f339f0838e096430d0276a184a261e1fc99f0182\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:34:30.909826 kubelet[2739]: E0813 01:34:30.909386 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c6cde12e10c00bbd2397a49f339f0838e096430d0276a184a261e1fc99f0182\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:34:30.909826 kubelet[2739]: E0813 01:34:30.909432 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c6cde12e10c00bbd2397a49f339f0838e096430d0276a184a261e1fc99f0182\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-994jv" Aug 13 01:34:30.909826 kubelet[2739]: E0813 01:34:30.909451 2739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3c6cde12e10c00bbd2397a49f339f0838e096430d0276a184a261e1fc99f0182\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-994jv" Aug 13 01:34:30.909826 kubelet[2739]: E0813 01:34:30.909485 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-994jv_kube-system(3bd5a2e2-42ee-4e27-a412-724b2f0527b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-994jv_kube-system(3bd5a2e2-42ee-4e27-a412-724b2f0527b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3c6cde12e10c00bbd2397a49f339f0838e096430d0276a184a261e1fc99f0182\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-994jv" podUID="3bd5a2e2-42ee-4e27-a412-724b2f0527b4" Aug 13 01:34:30.912091 containerd[1544]: time="2025-08-13T01:34:30.912053122Z" level=error msg="Failed to destroy network for sandbox \"b35ab64a89b554f47a4593aaabacc1067bada87279cfe116eff5f372818e09e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:34:30.913769 containerd[1544]: time="2025-08-13T01:34:30.913729816Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-7c65d6cfc9-mx5v9,Uid:9cb65184-4613-43ed-9fa1-0cf23f1e0e56,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b35ab64a89b554f47a4593aaabacc1067bada87279cfe116eff5f372818e09e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:34:30.914608 kubelet[2739]: E0813 01:34:30.914563 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b35ab64a89b554f47a4593aaabacc1067bada87279cfe116eff5f372818e09e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:34:30.914649 kubelet[2739]: E0813 01:34:30.914621 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b35ab64a89b554f47a4593aaabacc1067bada87279cfe116eff5f372818e09e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mx5v9" Aug 13 01:34:30.914649 kubelet[2739]: E0813 01:34:30.914641 2739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b35ab64a89b554f47a4593aaabacc1067bada87279cfe116eff5f372818e09e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mx5v9" Aug 13 01:34:30.914705 kubelet[2739]: E0813 01:34:30.914674 2739 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-mx5v9_kube-system(9cb65184-4613-43ed-9fa1-0cf23f1e0e56)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-mx5v9_kube-system(9cb65184-4613-43ed-9fa1-0cf23f1e0e56)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b35ab64a89b554f47a4593aaabacc1067bada87279cfe116eff5f372818e09e3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-mx5v9" podUID="9cb65184-4613-43ed-9fa1-0cf23f1e0e56" Aug 13 01:34:30.915066 systemd[1]: run-netns-cni\x2da657a2e2\x2d46e1\x2d8c82\x2d0882\x2dbe06ce2302a7.mount: Deactivated successfully. Aug 13 01:34:30.971995 kubelet[2739]: I0813 01:34:30.971239 2739 kubelet.go:2306] "Pod admission denied" podUID="2d06b513-0717-4d4e-8c5d-396d4bb35782" pod="tigera-operator/tigera-operator-5bf8dfcb4-xvjmc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:31.062100 kubelet[2739]: I0813 01:34:31.062016 2739 kubelet.go:2306] "Pod admission denied" podUID="36688b85-9bfc-4f2f-afc8-5b27586ad935" pod="tigera-operator/tigera-operator-5bf8dfcb4-tttjp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:31.164177 kubelet[2739]: I0813 01:34:31.164036 2739 kubelet.go:2306] "Pod admission denied" podUID="e3994704-8014-475e-aa05-e66984dd824d" pod="tigera-operator/tigera-operator-5bf8dfcb4-lgnml" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:31.278075 kubelet[2739]: I0813 01:34:31.278042 2739 kubelet.go:2306] "Pod admission denied" podUID="f759d053-3f52-4b4e-aa8d-6195358a5c07" pod="tigera-operator/tigera-operator-5bf8dfcb4-mhl85" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:31.359789 kubelet[2739]: I0813 01:34:31.359733 2739 kubelet.go:2306] "Pod admission denied" podUID="c1c68df1-1b77-4474-bc11-18e237b108b0" pod="tigera-operator/tigera-operator-5bf8dfcb4-77xqs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:31.405952 kubelet[2739]: I0813 01:34:31.405912 2739 kubelet.go:2306] "Pod admission denied" podUID="224b5945-c193-4fbf-b4b2-b4d3566a27f4" pod="tigera-operator/tigera-operator-5bf8dfcb4-xl5fq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:31.509746 kubelet[2739]: I0813 01:34:31.509683 2739 kubelet.go:2306] "Pod admission denied" podUID="6fdbcab8-98fd-4292-98ee-7d66c654a9e2" pod="tigera-operator/tigera-operator-5bf8dfcb4-bptb7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:31.708268 kubelet[2739]: I0813 01:34:31.708220 2739 kubelet.go:2306] "Pod admission denied" podUID="b34dd32f-e71b-4080-b303-8732b9d3e5e8" pod="tigera-operator/tigera-operator-5bf8dfcb4-cnl9d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:31.820885 kubelet[2739]: I0813 01:34:31.820307 2739 kubelet.go:2306] "Pod admission denied" podUID="cecdebbd-3b8a-48e8-b6d3-873fee6ff6f4" pod="tigera-operator/tigera-operator-5bf8dfcb4-87czr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:31.906553 kubelet[2739]: I0813 01:34:31.906514 2739 kubelet.go:2306] "Pod admission denied" podUID="1efffd0a-e1c5-4e12-9260-c76769e4ff72" pod="tigera-operator/tigera-operator-5bf8dfcb4-2sxfj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:32.108357 kubelet[2739]: I0813 01:34:32.108231 2739 kubelet.go:2306] "Pod admission denied" podUID="72f1722c-7a96-4f05-8cde-a6c38da46c49" pod="tigera-operator/tigera-operator-5bf8dfcb4-zzs5c" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:32.207864 kubelet[2739]: I0813 01:34:32.207800 2739 kubelet.go:2306] "Pod admission denied" podUID="34d1bdd2-b226-4396-ae6a-7ebaa02190d1" pod="tigera-operator/tigera-operator-5bf8dfcb4-xcvw6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:32.306883 kubelet[2739]: I0813 01:34:32.306845 2739 kubelet.go:2306] "Pod admission denied" podUID="f7502368-165c-4228-aac7-4e838640b2a4" pod="tigera-operator/tigera-operator-5bf8dfcb4-r72q2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:32.416626 kubelet[2739]: I0813 01:34:32.415846 2739 kubelet.go:2306] "Pod admission denied" podUID="551182a7-a224-439b-a323-e2a72f1d78ac" pod="tigera-operator/tigera-operator-5bf8dfcb4-f67vf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:32.508735 kubelet[2739]: I0813 01:34:32.508692 2739 kubelet.go:2306] "Pod admission denied" podUID="50b50056-a2af-4162-b70c-2f9424127092" pod="tigera-operator/tigera-operator-5bf8dfcb4-r848x" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:32.612056 kubelet[2739]: I0813 01:34:32.612001 2739 kubelet.go:2306] "Pod admission denied" podUID="aa034c51-1a39-4166-b496-4ce28b4a290a" pod="tigera-operator/tigera-operator-5bf8dfcb4-qznmz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:32.709186 kubelet[2739]: I0813 01:34:32.709026 2739 kubelet.go:2306] "Pod admission denied" podUID="38d48d41-5348-4235-8863-9b15e3626847" pod="tigera-operator/tigera-operator-5bf8dfcb4-75brw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:32.809468 kubelet[2739]: I0813 01:34:32.809421 2739 kubelet.go:2306] "Pod admission denied" podUID="9817f8ef-94f1-4ef3-9751-79872ffe752b" pod="tigera-operator/tigera-operator-5bf8dfcb4-99m9c" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:32.918147 kubelet[2739]: I0813 01:34:32.917814 2739 kubelet.go:2306] "Pod admission denied" podUID="7ff2f7c0-cbaa-496f-a382-c29d24d3c363" pod="tigera-operator/tigera-operator-5bf8dfcb4-9vfq4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:33.011867 kubelet[2739]: I0813 01:34:33.011821 2739 kubelet.go:2306] "Pod admission denied" podUID="ed9f9421-26e5-43ac-ba58-a886e19ce9bd" pod="tigera-operator/tigera-operator-5bf8dfcb4-44nbt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:33.114617 kubelet[2739]: I0813 01:34:33.114564 2739 kubelet.go:2306] "Pod admission denied" podUID="46ad6734-afda-46f8-b1bf-9a8411062bc9" pod="tigera-operator/tigera-operator-5bf8dfcb4-j8h7z" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:33.210150 kubelet[2739]: I0813 01:34:33.210093 2739 kubelet.go:2306] "Pod admission denied" podUID="1836f85b-4247-4c5c-b1df-6fde4143a3a1" pod="tigera-operator/tigera-operator-5bf8dfcb4-klmsp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:33.306993 kubelet[2739]: I0813 01:34:33.306884 2739 kubelet.go:2306] "Pod admission denied" podUID="bc8ee8c4-8127-470f-b576-545c8f2e018e" pod="tigera-operator/tigera-operator-5bf8dfcb4-ggtnt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:33.418977 kubelet[2739]: I0813 01:34:33.418926 2739 kubelet.go:2306] "Pod admission denied" podUID="94152302-3304-4980-bfe5-2c11ec377bae" pod="tigera-operator/tigera-operator-5bf8dfcb4-tjdrr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:33.459934 kubelet[2739]: I0813 01:34:33.459883 2739 kubelet.go:2306] "Pod admission denied" podUID="7e4aa357-5653-4bce-8480-814a7a83636c" pod="tigera-operator/tigera-operator-5bf8dfcb4-hkkwq" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:33.558067 kubelet[2739]: I0813 01:34:33.557792 2739 kubelet.go:2306] "Pod admission denied" podUID="102c8514-e49f-45db-80af-bbe4de5053af" pod="tigera-operator/tigera-operator-5bf8dfcb4-zs75x" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:33.660303 kubelet[2739]: I0813 01:34:33.660246 2739 kubelet.go:2306] "Pod admission denied" podUID="ce604343-a904-4d60-ad52-9f9c0a4607ec" pod="tigera-operator/tigera-operator-5bf8dfcb4-k8d8g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:33.757254 kubelet[2739]: I0813 01:34:33.757208 2739 kubelet.go:2306] "Pod admission denied" podUID="43989bf6-e55f-4980-ae10-a9a29d4c3c29" pod="tigera-operator/tigera-operator-5bf8dfcb4-85bg9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:33.798621 kubelet[2739]: E0813 01:34:33.798520 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\"\"" pod="calico-system/calico-node-5c47r" podUID="7d03562e-9842-4425-9847-632615391bfb" Aug 13 01:34:33.860259 kubelet[2739]: I0813 01:34:33.860122 2739 kubelet.go:2306] "Pod admission denied" podUID="1fc4bfee-c3f2-4304-bedf-a6e725480c00" pod="tigera-operator/tigera-operator-5bf8dfcb4-g5p49" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:33.958008 kubelet[2739]: I0813 01:34:33.957964 2739 kubelet.go:2306] "Pod admission denied" podUID="6a4bdde8-4380-4654-9a54-bb4f3464ddaa" pod="tigera-operator/tigera-operator-5bf8dfcb4-v4prv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:34.163166 kubelet[2739]: I0813 01:34:34.162713 2739 kubelet.go:2306] "Pod admission denied" podUID="599d0b1c-15f2-4427-aa10-f5a69bc402bb" pod="tigera-operator/tigera-operator-5bf8dfcb4-2t72v" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:34.272824 kubelet[2739]: I0813 01:34:34.272769 2739 kubelet.go:2306] "Pod admission denied" podUID="437afa5c-2b82-4e19-94fb-06ce4ee76414" pod="tigera-operator/tigera-operator-5bf8dfcb4-j7rll" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:34.359431 kubelet[2739]: I0813 01:34:34.359373 2739 kubelet.go:2306] "Pod admission denied" podUID="23eb6f25-da31-469e-ba43-a307033a5a82" pod="tigera-operator/tigera-operator-5bf8dfcb4-h8r9q" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:34.462170 kubelet[2739]: I0813 01:34:34.460823 2739 kubelet.go:2306] "Pod admission denied" podUID="c5f4773b-cf54-4a3b-8055-4e2526db3f4a" pod="tigera-operator/tigera-operator-5bf8dfcb4-n87wp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:34.561151 kubelet[2739]: I0813 01:34:34.561080 2739 kubelet.go:2306] "Pod admission denied" podUID="297fe2cf-ac89-4ceb-9191-50039b228ccc" pod="tigera-operator/tigera-operator-5bf8dfcb4-5vrqj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:34.659413 kubelet[2739]: I0813 01:34:34.659351 2739 kubelet.go:2306] "Pod admission denied" podUID="65e45e51-230e-437f-9724-503c782236a2" pod="tigera-operator/tigera-operator-5bf8dfcb4-8mfj5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:34.776224 kubelet[2739]: I0813 01:34:34.776189 2739 kubelet.go:2306] "Pod admission denied" podUID="ad434955-4911-4553-b8ac-431297bcdf29" pod="tigera-operator/tigera-operator-5bf8dfcb4-l5z7d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:34.864215 kubelet[2739]: I0813 01:34:34.864148 2739 kubelet.go:2306] "Pod admission denied" podUID="4fe1cda8-362b-44da-9e10-18c82f7e0c04" pod="tigera-operator/tigera-operator-5bf8dfcb4-kw4zr" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:34.962049 kubelet[2739]: I0813 01:34:34.961988 2739 kubelet.go:2306] "Pod admission denied" podUID="84db6cc2-a4c9-4281-89f8-8e2219f76e30" pod="tigera-operator/tigera-operator-5bf8dfcb4-lpnsj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:35.060150 kubelet[2739]: I0813 01:34:35.059998 2739 kubelet.go:2306] "Pod admission denied" podUID="52ad5902-36c5-4034-ac7e-0db82c42df66" pod="tigera-operator/tigera-operator-5bf8dfcb4-fsxxl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:35.166638 kubelet[2739]: I0813 01:34:35.166374 2739 kubelet.go:2306] "Pod admission denied" podUID="0d2f396d-5d59-444f-bb84-0b3a1d4b252c" pod="tigera-operator/tigera-operator-5bf8dfcb4-2b46v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:35.280155 kubelet[2739]: I0813 01:34:35.279224 2739 kubelet.go:2306] "Pod admission denied" podUID="1a308a64-fcb9-4476-8385-51e1b6cfe43f" pod="tigera-operator/tigera-operator-5bf8dfcb4-glrs4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:35.328419 kubelet[2739]: I0813 01:34:35.328184 2739 kubelet.go:2306] "Pod admission denied" podUID="bc305ec8-3b28-4435-ac21-d917516fe37d" pod="tigera-operator/tigera-operator-5bf8dfcb4-48nnq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:35.410037 kubelet[2739]: I0813 01:34:35.409984 2739 kubelet.go:2306] "Pod admission denied" podUID="3baf1ccc-47a3-46f0-8526-8e7b73e3eefa" pod="tigera-operator/tigera-operator-5bf8dfcb4-2ptmn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:35.526608 kubelet[2739]: I0813 01:34:35.526557 2739 kubelet.go:2306] "Pod admission denied" podUID="b87b24c0-b7fe-48cf-9e26-5e6245bf17f6" pod="tigera-operator/tigera-operator-5bf8dfcb4-x8bmg" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:35.615637 kubelet[2739]: I0813 01:34:35.615310 2739 kubelet.go:2306] "Pod admission denied" podUID="a83f8249-c759-430c-b655-76835e3c6dbf" pod="tigera-operator/tigera-operator-5bf8dfcb4-cttt7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:35.711639 kubelet[2739]: I0813 01:34:35.711576 2739 kubelet.go:2306] "Pod admission denied" podUID="ff004453-67c0-4066-be47-717cf940bd44" pod="tigera-operator/tigera-operator-5bf8dfcb4-68p47" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:35.809890 kubelet[2739]: I0813 01:34:35.809830 2739 kubelet.go:2306] "Pod admission denied" podUID="63e5a60f-2aa5-4f7e-b08e-280ac2c733ab" pod="tigera-operator/tigera-operator-5bf8dfcb4-nz2nx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:35.910095 kubelet[2739]: I0813 01:34:35.909987 2739 kubelet.go:2306] "Pod admission denied" podUID="4a7013a4-b7ec-4d66-b694-706e9cf4dc68" pod="tigera-operator/tigera-operator-5bf8dfcb4-nz2hr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:36.018168 kubelet[2739]: I0813 01:34:36.017466 2739 kubelet.go:2306] "Pod admission denied" podUID="55ee447f-5e44-401b-b1f5-50094ade022b" pod="tigera-operator/tigera-operator-5bf8dfcb4-4hqk4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:36.210828 kubelet[2739]: I0813 01:34:36.210705 2739 kubelet.go:2306] "Pod admission denied" podUID="39ad0fd4-59fc-45ee-86f6-f1f797102f8e" pod="tigera-operator/tigera-operator-5bf8dfcb4-z2mxb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:36.311868 kubelet[2739]: I0813 01:34:36.311808 2739 kubelet.go:2306] "Pod admission denied" podUID="225e01cf-4442-49d2-ab54-6dd30a237843" pod="tigera-operator/tigera-operator-5bf8dfcb4-hnj4g" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:36.411740 kubelet[2739]: I0813 01:34:36.411687 2739 kubelet.go:2306] "Pod admission denied" podUID="bfe1c349-9af9-4767-be9c-5dbf87dd595f" pod="tigera-operator/tigera-operator-5bf8dfcb4-4grsq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:36.516222 kubelet[2739]: I0813 01:34:36.516157 2739 kubelet.go:2306] "Pod admission denied" podUID="4cc12da8-d378-402f-9abb-c8dd08ae1fee" pod="tigera-operator/tigera-operator-5bf8dfcb4-sg6lz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:36.600965 kubelet[2739]: I0813 01:34:36.600903 2739 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:34:36.600965 kubelet[2739]: I0813 01:34:36.600935 2739 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:34:36.606268 kubelet[2739]: I0813 01:34:36.606250 2739 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:34:36.630522 kubelet[2739]: I0813 01:34:36.630501 2739 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:34:36.630670 kubelet[2739]: I0813 01:34:36.630655 2739 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7c65d6cfc9-mx5v9","calico-system/calico-kube-controllers-85fbc76f96-d5vf4","kube-system/coredns-7c65d6cfc9-994jv","calico-system/calico-node-5c47r","calico-system/csi-node-driver-7bj49","calico-system/calico-typha-79464475b5-bbrtw","kube-system/kube-controller-manager-172-234-27-175","kube-system/kube-proxy-kfjpt","kube-system/kube-apiserver-172-234-27-175","kube-system/kube-scheduler-172-234-27-175"] Aug 13 01:34:36.630773 kubelet[2739]: E0813 01:34:36.630761 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-mx5v9" Aug 13 01:34:36.630820 kubelet[2739]: E0813 01:34:36.630812 2739 eviction_manager.go:598] "Eviction manager: 
cannot evict a critical pod" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" Aug 13 01:34:36.630861 kubelet[2739]: E0813 01:34:36.630853 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-994jv" Aug 13 01:34:36.630900 kubelet[2739]: E0813 01:34:36.630893 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-5c47r" Aug 13 01:34:36.631057 kubelet[2739]: E0813 01:34:36.630957 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-7bj49" Aug 13 01:34:36.631057 kubelet[2739]: E0813 01:34:36.630974 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-79464475b5-bbrtw" Aug 13 01:34:36.631057 kubelet[2739]: E0813 01:34:36.630983 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-27-175" Aug 13 01:34:36.631057 kubelet[2739]: E0813 01:34:36.630997 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-kfjpt" Aug 13 01:34:36.631057 kubelet[2739]: E0813 01:34:36.631005 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-27-175" Aug 13 01:34:36.631057 kubelet[2739]: E0813 01:34:36.631013 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-27-175" Aug 13 01:34:36.631057 kubelet[2739]: I0813 01:34:36.631024 2739 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:34:36.631579 kubelet[2739]: I0813 01:34:36.631558 2739 kubelet.go:2306] "Pod admission denied" podUID="2d8659f4-9600-4e6b-9488-806653034167" pod="tigera-operator/tigera-operator-5bf8dfcb4-wzqwz" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:36.709074 kubelet[2739]: I0813 01:34:36.709038 2739 kubelet.go:2306] "Pod admission denied" podUID="85f54db2-0fd8-4b5e-aff6-842de0db1e42" pod="tigera-operator/tigera-operator-5bf8dfcb4-t7bn2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:36.803081 kubelet[2739]: E0813 01:34:36.802963 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 01:34:36.806811 containerd[1544]: time="2025-08-13T01:34:36.806556714Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7bj49,Uid:4e5845c9-626c-4c83-900a-0da0bae2daed,Namespace:calico-system,Attempt:0,}" Aug 13 01:34:36.807828 containerd[1544]: time="2025-08-13T01:34:36.807041245Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85fbc76f96-d5vf4,Uid:56caad57-6b4a-4069-b011-1059db183012,Namespace:calico-system,Attempt:0,}" Aug 13 01:34:36.847570 kubelet[2739]: I0813 01:34:36.847522 2739 kubelet.go:2306] "Pod admission denied" podUID="5cc3678c-c2d1-42b5-8ed7-c276fe793b18" pod="tigera-operator/tigera-operator-5bf8dfcb4-8lwg4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:36.894733 containerd[1544]: time="2025-08-13T01:34:36.894597158Z" level=error msg="Failed to destroy network for sandbox \"769b50b8b4d3d4d4d776362c8a6e96adb2b5a1e1e58a48eb318e66bfcd531b07\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:34:36.899632 systemd[1]: run-netns-cni\x2d2af1d373\x2d3bbb\x2d1931\x2dc5a4\x2d22b0055f0728.mount: Deactivated successfully. 
Aug 13 01:34:36.902044 containerd[1544]: time="2025-08-13T01:34:36.901912942Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85fbc76f96-d5vf4,Uid:56caad57-6b4a-4069-b011-1059db183012,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"769b50b8b4d3d4d4d776362c8a6e96adb2b5a1e1e58a48eb318e66bfcd531b07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:34:36.905889 kubelet[2739]: E0813 01:34:36.905804 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"769b50b8b4d3d4d4d776362c8a6e96adb2b5a1e1e58a48eb318e66bfcd531b07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:34:36.906084 kubelet[2739]: E0813 01:34:36.906004 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"769b50b8b4d3d4d4d776362c8a6e96adb2b5a1e1e58a48eb318e66bfcd531b07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" Aug 13 01:34:36.906315 kubelet[2739]: E0813 01:34:36.906158 2739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"769b50b8b4d3d4d4d776362c8a6e96adb2b5a1e1e58a48eb318e66bfcd531b07\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" Aug 13 01:34:36.906315 kubelet[2739]: E0813 01:34:36.906237 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-85fbc76f96-d5vf4_calico-system(56caad57-6b4a-4069-b011-1059db183012)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85fbc76f96-d5vf4_calico-system(56caad57-6b4a-4069-b011-1059db183012)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"769b50b8b4d3d4d4d776362c8a6e96adb2b5a1e1e58a48eb318e66bfcd531b07\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" podUID="56caad57-6b4a-4069-b011-1059db183012" Aug 13 01:34:36.957311 kubelet[2739]: I0813 01:34:36.957253 2739 kubelet.go:2306] "Pod admission denied" podUID="bcce3454-04fb-4734-991b-9c0a9d021820" pod="tigera-operator/tigera-operator-5bf8dfcb4-88kd4" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:36.975889 containerd[1544]: time="2025-08-13T01:34:36.975851569Z" level=error msg="Failed to destroy network for sandbox \"67ce460a58521e05b83d961ba410045a2dd10f5f422b099f3a6df5f1985d3860\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:34:36.979263 containerd[1544]: time="2025-08-13T01:34:36.979236286Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7bj49,Uid:4e5845c9-626c-4c83-900a-0da0bae2daed,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"67ce460a58521e05b83d961ba410045a2dd10f5f422b099f3a6df5f1985d3860\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:34:36.980252 systemd[1]: run-netns-cni\x2d2b7fbe65\x2dede0\x2dd7f0\x2da3b2\x2d01631a76a209.mount: Deactivated successfully. 
Aug 13 01:34:36.982283 kubelet[2739]: E0813 01:34:36.981403 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67ce460a58521e05b83d961ba410045a2dd10f5f422b099f3a6df5f1985d3860\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:34:36.982283 kubelet[2739]: E0813 01:34:36.981447 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67ce460a58521e05b83d961ba410045a2dd10f5f422b099f3a6df5f1985d3860\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7bj49" Aug 13 01:34:36.982283 kubelet[2739]: E0813 01:34:36.981464 2739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67ce460a58521e05b83d961ba410045a2dd10f5f422b099f3a6df5f1985d3860\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7bj49" Aug 13 01:34:36.982283 kubelet[2739]: E0813 01:34:36.981498 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7bj49_calico-system(4e5845c9-626c-4c83-900a-0da0bae2daed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7bj49_calico-system(4e5845c9-626c-4c83-900a-0da0bae2daed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"67ce460a58521e05b83d961ba410045a2dd10f5f422b099f3a6df5f1985d3860\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7bj49" podUID="4e5845c9-626c-4c83-900a-0da0bae2daed" Aug 13 01:34:37.063113 kubelet[2739]: I0813 01:34:37.062992 2739 kubelet.go:2306] "Pod admission denied" podUID="8dab8241-93db-43bf-b2fb-2cab22e166e3" pod="tigera-operator/tigera-operator-5bf8dfcb4-hhvb7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:37.161936 kubelet[2739]: I0813 01:34:37.161876 2739 kubelet.go:2306] "Pod admission denied" podUID="71a01862-ff1e-4031-80c3-133596f8b315" pod="tigera-operator/tigera-operator-5bf8dfcb4-6d9ch" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:37.262120 kubelet[2739]: I0813 01:34:37.262056 2739 kubelet.go:2306] "Pod admission denied" podUID="a949eb89-c074-41a5-969f-3a22be241427" pod="tigera-operator/tigera-operator-5bf8dfcb4-g86d2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:37.308146 kubelet[2739]: I0813 01:34:37.308088 2739 kubelet.go:2306] "Pod admission denied" podUID="f9426e66-0102-454e-9a6c-0b5611d3152f" pod="tigera-operator/tigera-operator-5bf8dfcb4-qxh7p" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:37.419945 kubelet[2739]: I0813 01:34:37.419805 2739 kubelet.go:2306] "Pod admission denied" podUID="06e993af-f0d0-48ed-95ef-77b856dcc593" pod="tigera-operator/tigera-operator-5bf8dfcb4-znfn7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:37.510186 kubelet[2739]: I0813 01:34:37.510110 2739 kubelet.go:2306] "Pod admission denied" podUID="5b7bd834-4a47-49b9-9137-1172a245e5a9" pod="tigera-operator/tigera-operator-5bf8dfcb4-ndd82" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:37.558220 kubelet[2739]: I0813 01:34:37.558164 2739 kubelet.go:2306] "Pod admission denied" podUID="569a5d8c-c154-4568-bb6d-9ce4976c1db9" pod="tigera-operator/tigera-operator-5bf8dfcb4-gxvx8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:37.662698 kubelet[2739]: I0813 01:34:37.662657 2739 kubelet.go:2306] "Pod admission denied" podUID="31fe5cf2-770c-4473-a97b-0bbcf7de7075" pod="tigera-operator/tigera-operator-5bf8dfcb4-wbjz2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:37.761011 kubelet[2739]: I0813 01:34:37.760958 2739 kubelet.go:2306] "Pod admission denied" podUID="8f216b62-5451-4527-940b-0c3fe1695fa0" pod="tigera-operator/tigera-operator-5bf8dfcb4-fs4qt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:37.820081 kubelet[2739]: I0813 01:34:37.818923 2739 kubelet.go:2306] "Pod admission denied" podUID="38f1d99a-15b0-4075-9390-2860b5d9022b" pod="tigera-operator/tigera-operator-5bf8dfcb4-zkwfz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:37.909787 kubelet[2739]: I0813 01:34:37.909734 2739 kubelet.go:2306] "Pod admission denied" podUID="1cb992d6-1ec1-4c8d-bd4a-cdecff57ff3d" pod="tigera-operator/tigera-operator-5bf8dfcb4-vc2df" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:38.008639 kubelet[2739]: I0813 01:34:38.008588 2739 kubelet.go:2306] "Pod admission denied" podUID="7107cdf2-1947-4900-986b-716714cf3ff1" pod="tigera-operator/tigera-operator-5bf8dfcb4-dsdml" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:38.112545 kubelet[2739]: I0813 01:34:38.111629 2739 kubelet.go:2306] "Pod admission denied" podUID="9ab0ab68-80de-47bf-b652-ccdfcd67da95" pod="tigera-operator/tigera-operator-5bf8dfcb4-rjpgq" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:38.211920 kubelet[2739]: I0813 01:34:38.211862 2739 kubelet.go:2306] "Pod admission denied" podUID="81591731-b241-47e1-88f0-337600e61d35" pod="tigera-operator/tigera-operator-5bf8dfcb4-rlfr4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:38.321153 kubelet[2739]: I0813 01:34:38.321059 2739 kubelet.go:2306] "Pod admission denied" podUID="9ee3bc5a-f8c4-4ba1-a6d9-da2706953189" pod="tigera-operator/tigera-operator-5bf8dfcb4-hstzw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:38.408704 kubelet[2739]: I0813 01:34:38.408559 2739 kubelet.go:2306] "Pod admission denied" podUID="db1d7f44-ef0d-42c7-8056-b7202b0d40ea" pod="tigera-operator/tigera-operator-5bf8dfcb4-qphzv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:38.508902 kubelet[2739]: I0813 01:34:38.508857 2739 kubelet.go:2306] "Pod admission denied" podUID="eec7288d-e9a6-4494-b2f1-2ab27b7f29f6" pod="tigera-operator/tigera-operator-5bf8dfcb4-tt5pq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:38.610543 kubelet[2739]: I0813 01:34:38.610500 2739 kubelet.go:2306] "Pod admission denied" podUID="fa079996-7494-4a47-a3a7-4ffc5a1a33b6" pod="tigera-operator/tigera-operator-5bf8dfcb4-798vt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:38.661467 kubelet[2739]: I0813 01:34:38.660542 2739 kubelet.go:2306] "Pod admission denied" podUID="1b3208e0-0b05-4c5a-a62a-8053f64f63fc" pod="tigera-operator/tigera-operator-5bf8dfcb4-lx2x9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:38.769898 kubelet[2739]: I0813 01:34:38.769849 2739 kubelet.go:2306] "Pod admission denied" podUID="c5b20cf9-5958-41f2-bb0d-758cbf50f51f" pod="tigera-operator/tigera-operator-5bf8dfcb4-g5ldn" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:38.863398 kubelet[2739]: I0813 01:34:38.863328 2739 kubelet.go:2306] "Pod admission denied" podUID="b65c79be-7efc-44a0-b8e3-cfe2afdb25e1" pod="tigera-operator/tigera-operator-5bf8dfcb4-96rfq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:38.962182 kubelet[2739]: I0813 01:34:38.962034 2739 kubelet.go:2306] "Pod admission denied" podUID="2238985a-2b7a-473b-875e-6e8d45e512ed" pod="tigera-operator/tigera-operator-5bf8dfcb4-cf245" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:39.060493 kubelet[2739]: I0813 01:34:39.060449 2739 kubelet.go:2306] "Pod admission denied" podUID="65fc50fa-59dd-4122-bf55-584b4c9ad902" pod="tigera-operator/tigera-operator-5bf8dfcb4-z4b7w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:39.160616 kubelet[2739]: I0813 01:34:39.160571 2739 kubelet.go:2306] "Pod admission denied" podUID="d8352981-6dc4-4b4f-b932-f0d5dbbae2cc" pod="tigera-operator/tigera-operator-5bf8dfcb4-9f6d7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:39.272088 kubelet[2739]: I0813 01:34:39.271551 2739 kubelet.go:2306] "Pod admission denied" podUID="48ffb89e-d580-4638-8650-62ae46d933c6" pod="tigera-operator/tigera-operator-5bf8dfcb4-qpqfn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:39.360893 kubelet[2739]: I0813 01:34:39.360829 2739 kubelet.go:2306] "Pod admission denied" podUID="7097c22c-d485-4d1f-88bb-634ab2ce8850" pod="tigera-operator/tigera-operator-5bf8dfcb4-k2rrz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:39.563487 kubelet[2739]: I0813 01:34:39.562893 2739 kubelet.go:2306] "Pod admission denied" podUID="5eea91b1-c455-4831-ae83-c6c0eba01caa" pod="tigera-operator/tigera-operator-5bf8dfcb4-wtl49" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:39.660205 kubelet[2739]: I0813 01:34:39.660155 2739 kubelet.go:2306] "Pod admission denied" podUID="bc77abf2-81fa-4e82-9fc4-0a1c3ea532a9" pod="tigera-operator/tigera-operator-5bf8dfcb4-6d4sg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:39.763697 kubelet[2739]: I0813 01:34:39.763647 2739 kubelet.go:2306] "Pod admission denied" podUID="e66017db-b9ee-480a-921f-566f734c4351" pod="tigera-operator/tigera-operator-5bf8dfcb4-52ndp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:39.876265 kubelet[2739]: I0813 01:34:39.874584 2739 kubelet.go:2306] "Pod admission denied" podUID="e29bad39-845a-4cf4-9a6b-ed190d9ad692" pod="tigera-operator/tigera-operator-5bf8dfcb4-9nz4c" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:39.962681 kubelet[2739]: I0813 01:34:39.962625 2739 kubelet.go:2306] "Pod admission denied" podUID="d2e406d7-c7ca-4bb7-853f-cd234ead10fa" pod="tigera-operator/tigera-operator-5bf8dfcb4-drhpm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:40.061754 kubelet[2739]: I0813 01:34:40.061698 2739 kubelet.go:2306] "Pod admission denied" podUID="f06b2ef7-7120-471b-a7c4-8a48f37b0493" pod="tigera-operator/tigera-operator-5bf8dfcb4-lpvnt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:40.162697 kubelet[2739]: I0813 01:34:40.162225 2739 kubelet.go:2306] "Pod admission denied" podUID="29738d89-fcec-4b18-8953-9f416afa913f" pod="tigera-operator/tigera-operator-5bf8dfcb4-k9msn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:40.265449 kubelet[2739]: I0813 01:34:40.265391 2739 kubelet.go:2306] "Pod admission denied" podUID="3189b675-56a5-431c-87a9-430478d59299" pod="tigera-operator/tigera-operator-5bf8dfcb4-7g5ws" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:40.381599 kubelet[2739]: I0813 01:34:40.380802 2739 kubelet.go:2306] "Pod admission denied" podUID="58adeb03-254b-4a1e-8037-714c82b94c4c" pod="tigera-operator/tigera-operator-5bf8dfcb4-nhc2q" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:40.459678 kubelet[2739]: I0813 01:34:40.459548 2739 kubelet.go:2306] "Pod admission denied" podUID="1c9563e8-30b1-4e29-a147-5e5478f4fe41" pod="tigera-operator/tigera-operator-5bf8dfcb4-mfqmf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:40.563076 kubelet[2739]: I0813 01:34:40.563015 2739 kubelet.go:2306] "Pod admission denied" podUID="121c6b60-2422-4324-b75b-5daa26065619" pod="tigera-operator/tigera-operator-5bf8dfcb4-85x77" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:40.763201 kubelet[2739]: I0813 01:34:40.763122 2739 kubelet.go:2306] "Pod admission denied" podUID="ae4a003c-9984-4f61-a97c-bda299b1ba2c" pod="tigera-operator/tigera-operator-5bf8dfcb4-pqzlv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:40.861155 kubelet[2739]: I0813 01:34:40.861095 2739 kubelet.go:2306] "Pod admission denied" podUID="ceaeffff-7c75-402f-8e6f-413518351f93" pod="tigera-operator/tigera-operator-5bf8dfcb4-jvdjw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:40.963879 kubelet[2739]: I0813 01:34:40.963830 2739 kubelet.go:2306] "Pod admission denied" podUID="863f0bec-6af3-43ef-b7d2-e94e70c84c19" pod="tigera-operator/tigera-operator-5bf8dfcb4-ccxjb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:41.171455 kubelet[2739]: I0813 01:34:41.171295 2739 kubelet.go:2306] "Pod admission denied" podUID="5fcd37c5-1dc6-4fd6-aa46-b49e57c3dc5f" pod="tigera-operator/tigera-operator-5bf8dfcb4-whzhq" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:41.262388 kubelet[2739]: I0813 01:34:41.262342 2739 kubelet.go:2306] "Pod admission denied" podUID="ca37b7e3-ce29-47bc-834a-0b0ee8d2a5b5" pod="tigera-operator/tigera-operator-5bf8dfcb4-v6v9z" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:41.363765 kubelet[2739]: I0813 01:34:41.363720 2739 kubelet.go:2306] "Pod admission denied" podUID="aed919ea-d4c6-45a4-aae9-d62d8d0b6fe7" pod="tigera-operator/tigera-operator-5bf8dfcb4-nb7hz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:41.462272 kubelet[2739]: I0813 01:34:41.462121 2739 kubelet.go:2306] "Pod admission denied" podUID="06ecd76c-a358-4031-8059-150d66ba610a" pod="tigera-operator/tigera-operator-5bf8dfcb4-bd92z" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:41.561871 kubelet[2739]: I0813 01:34:41.561827 2739 kubelet.go:2306] "Pod admission denied" podUID="ca80daa1-beda-4883-81a1-d747b76ba120" pod="tigera-operator/tigera-operator-5bf8dfcb4-9cv8h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:41.676151 kubelet[2739]: I0813 01:34:41.675549 2739 kubelet.go:2306] "Pod admission denied" podUID="584fa0b3-502f-4af5-af66-a0ca1d6fa472" pod="tigera-operator/tigera-operator-5bf8dfcb4-h2hvg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:41.761288 kubelet[2739]: I0813 01:34:41.761229 2739 kubelet.go:2306] "Pod admission denied" podUID="08325d60-ce64-446a-8634-3ced15dce86b" pod="tigera-operator/tigera-operator-5bf8dfcb4-pxjg5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:41.863056 kubelet[2739]: I0813 01:34:41.863001 2739 kubelet.go:2306] "Pod admission denied" podUID="bd1f96f2-3a16-41d8-a628-e8f3c8c163c6" pod="tigera-operator/tigera-operator-5bf8dfcb4-8jkgs" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:41.919995 kubelet[2739]: I0813 01:34:41.919940 2739 kubelet.go:2306] "Pod admission denied" podUID="d76897a7-7447-4aa2-89dd-71105ea26cd6" pod="tigera-operator/tigera-operator-5bf8dfcb4-t6486" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:42.011866 kubelet[2739]: I0813 01:34:42.011737 2739 kubelet.go:2306] "Pod admission denied" podUID="ce814a2f-049e-45d9-9e47-bc3915a86293" pod="tigera-operator/tigera-operator-5bf8dfcb4-sfht5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:42.113040 kubelet[2739]: I0813 01:34:42.112354 2739 kubelet.go:2306] "Pod admission denied" podUID="18b68984-df71-46c1-8b7e-366bbac9862a" pod="tigera-operator/tigera-operator-5bf8dfcb4-r7zdh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:42.212655 kubelet[2739]: I0813 01:34:42.212599 2739 kubelet.go:2306] "Pod admission denied" podUID="49ce445f-9e21-45ce-9b1f-978a0f934a87" pod="tigera-operator/tigera-operator-5bf8dfcb4-z2jt9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:42.312958 kubelet[2739]: I0813 01:34:42.312835 2739 kubelet.go:2306] "Pod admission denied" podUID="ccd979bb-a9a1-4ede-9f75-cce337573fc0" pod="tigera-operator/tigera-operator-5bf8dfcb4-6q54x" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:42.422163 kubelet[2739]: I0813 01:34:42.421370 2739 kubelet.go:2306] "Pod admission denied" podUID="3797fabc-48b6-4bfd-bc7c-2d809321faec" pod="tigera-operator/tigera-operator-5bf8dfcb4-t2lrj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:42.511571 kubelet[2739]: I0813 01:34:42.511520 2739 kubelet.go:2306] "Pod admission denied" podUID="1648a525-60c5-479a-8ed3-74b68d01b63b" pod="tigera-operator/tigera-operator-5bf8dfcb4-bvjkc" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:42.612168 kubelet[2739]: I0813 01:34:42.611061 2739 kubelet.go:2306] "Pod admission denied" podUID="f6e0943c-8172-4a06-bd1a-cf6faeb78b3a" pod="tigera-operator/tigera-operator-5bf8dfcb4-mjwrd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:42.711669 kubelet[2739]: I0813 01:34:42.711612 2739 kubelet.go:2306] "Pod admission denied" podUID="f4341f6d-dc47-4060-94b8-dba013ef1c85" pod="tigera-operator/tigera-operator-5bf8dfcb4-vjprx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:42.812939 kubelet[2739]: I0813 01:34:42.812876 2739 kubelet.go:2306] "Pod admission denied" podUID="2023ff70-e88e-4901-88aa-5645d0930d53" pod="tigera-operator/tigera-operator-5bf8dfcb4-nhtcc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:42.929009 kubelet[2739]: I0813 01:34:42.928490 2739 kubelet.go:2306] "Pod admission denied" podUID="dd92f5ab-821d-4771-b461-732619818e91" pod="tigera-operator/tigera-operator-5bf8dfcb4-9r9wf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:43.014873 kubelet[2739]: I0813 01:34:43.014828 2739 kubelet.go:2306] "Pod admission denied" podUID="06e8c9de-9ebd-4f09-b62b-46f274d239f7" pod="tigera-operator/tigera-operator-5bf8dfcb4-w6bkg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:43.110634 kubelet[2739]: I0813 01:34:43.110581 2739 kubelet.go:2306] "Pod admission denied" podUID="14435073-fcde-4109-92b8-3d0b2af979ae" pod="tigera-operator/tigera-operator-5bf8dfcb4-kxvlx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:43.214170 kubelet[2739]: I0813 01:34:43.212843 2739 kubelet.go:2306] "Pod admission denied" podUID="c7c03781-513b-4cb7-a8b1-d1bd2a5aa93f" pod="tigera-operator/tigera-operator-5bf8dfcb4-gflqs" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:43.313084 kubelet[2739]: I0813 01:34:43.313021 2739 kubelet.go:2306] "Pod admission denied" podUID="1d1474c2-30c4-480e-9316-dabc8d8319ca" pod="tigera-operator/tigera-operator-5bf8dfcb4-sggjb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:43.369540 kubelet[2739]: I0813 01:34:43.369454 2739 kubelet.go:2306] "Pod admission denied" podUID="95d13138-4c9b-4e71-abc0-172fcf4102a2" pod="tigera-operator/tigera-operator-5bf8dfcb4-7ppx7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:43.460341 kubelet[2739]: I0813 01:34:43.460285 2739 kubelet.go:2306] "Pod admission denied" podUID="6e91ef4b-091d-4fd9-9778-9739ba0b749a" pod="tigera-operator/tigera-operator-5bf8dfcb4-lptj9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:43.565775 kubelet[2739]: I0813 01:34:43.565216 2739 kubelet.go:2306] "Pod admission denied" podUID="b5984b7b-cbbe-4273-9155-b5b0aed21a14" pod="tigera-operator/tigera-operator-5bf8dfcb4-6d7vh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:43.628725 kubelet[2739]: I0813 01:34:43.628669 2739 kubelet.go:2306] "Pod admission denied" podUID="2e93e4c7-a720-4146-864f-7b4bac55e628" pod="tigera-operator/tigera-operator-5bf8dfcb4-5l8zd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:43.711393 kubelet[2739]: I0813 01:34:43.711331 2739 kubelet.go:2306] "Pod admission denied" podUID="13c9a4d0-d071-4e7d-a1e2-82a2378c8260" pod="tigera-operator/tigera-operator-5bf8dfcb4-4677v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:43.823432 kubelet[2739]: I0813 01:34:43.822451 2739 kubelet.go:2306] "Pod admission denied" podUID="d0f0b356-a83c-4cc2-bfe9-b6248087e82c" pod="tigera-operator/tigera-operator-5bf8dfcb4-dvl5v" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:43.869674 kubelet[2739]: I0813 01:34:43.869637 2739 kubelet.go:2306] "Pod admission denied" podUID="cdacdbe1-d54b-45a2-9dd3-70fbec39c01a" pod="tigera-operator/tigera-operator-5bf8dfcb4-kkw4r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:43.968153 kubelet[2739]: I0813 01:34:43.968066 2739 kubelet.go:2306] "Pod admission denied" podUID="42d37fa9-3223-4917-8f82-5a99a0a0a4fc" pod="tigera-operator/tigera-operator-5bf8dfcb4-2szqs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:44.063552 kubelet[2739]: I0813 01:34:44.063509 2739 kubelet.go:2306] "Pod admission denied" podUID="87019e46-3b24-4421-9608-177b6c752095" pod="tigera-operator/tigera-operator-5bf8dfcb4-hp4fc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:44.168156 kubelet[2739]: I0813 01:34:44.168011 2739 kubelet.go:2306] "Pod admission denied" podUID="cc16b265-0aed-4dc3-9833-b2dd601c7a34" pod="tigera-operator/tigera-operator-5bf8dfcb4-swgtt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:44.293156 kubelet[2739]: I0813 01:34:44.292013 2739 kubelet.go:2306] "Pod admission denied" podUID="24ca4e60-5195-43ce-80c3-c5148ac92a25" pod="tigera-operator/tigera-operator-5bf8dfcb4-cfg6h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:44.414970 kubelet[2739]: I0813 01:34:44.414911 2739 kubelet.go:2306] "Pod admission denied" podUID="3d197bdc-006c-4254-88c2-0ee4e3d93eac" pod="tigera-operator/tigera-operator-5bf8dfcb4-npsnq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:44.462905 kubelet[2739]: I0813 01:34:44.462293 2739 kubelet.go:2306] "Pod admission denied" podUID="166d4f10-2ee4-4c4a-b3dd-37f870450964" pod="tigera-operator/tigera-operator-5bf8dfcb4-hnp9j" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:44.579343 kubelet[2739]: I0813 01:34:44.579286 2739 kubelet.go:2306] "Pod admission denied" podUID="7bd99ccb-4f31-4860-84ba-5ea663a328f3" pod="tigera-operator/tigera-operator-5bf8dfcb4-lkbqr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:44.764183 kubelet[2739]: I0813 01:34:44.764125 2739 kubelet.go:2306] "Pod admission denied" podUID="57099920-cc37-4883-a290-2b73fd43a5f5" pod="tigera-operator/tigera-operator-5bf8dfcb4-fvrjc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:44.798929 kubelet[2739]: E0813 01:34:44.797846 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 01:34:44.800153 containerd[1544]: time="2025-08-13T01:34:44.799557524Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mx5v9,Uid:9cb65184-4613-43ed-9fa1-0cf23f1e0e56,Namespace:kube-system,Attempt:0,}" Aug 13 01:34:44.863470 containerd[1544]: time="2025-08-13T01:34:44.863324825Z" level=error msg="Failed to destroy network for sandbox \"64ae1d2821d729c69f8a41c3dd4dd4b5527e7c5e3e7bad6122f40513fdcddeab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:34:44.864535 containerd[1544]: time="2025-08-13T01:34:44.864499756Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mx5v9,Uid:9cb65184-4613-43ed-9fa1-0cf23f1e0e56,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"64ae1d2821d729c69f8a41c3dd4dd4b5527e7c5e3e7bad6122f40513fdcddeab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Aug 13 01:34:44.865717 systemd[1]: run-netns-cni\x2d9dce4ab1\x2d9ce8\x2d49af\x2dfed6\x2d7d488cfbc820.mount: Deactivated successfully. Aug 13 01:34:44.867919 kubelet[2739]: E0813 01:34:44.867543 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64ae1d2821d729c69f8a41c3dd4dd4b5527e7c5e3e7bad6122f40513fdcddeab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:34:44.867919 kubelet[2739]: E0813 01:34:44.867597 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64ae1d2821d729c69f8a41c3dd4dd4b5527e7c5e3e7bad6122f40513fdcddeab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mx5v9" Aug 13 01:34:44.867919 kubelet[2739]: E0813 01:34:44.867618 2739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64ae1d2821d729c69f8a41c3dd4dd4b5527e7c5e3e7bad6122f40513fdcddeab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-mx5v9" Aug 13 01:34:44.867919 kubelet[2739]: E0813 01:34:44.867695 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-mx5v9_kube-system(9cb65184-4613-43ed-9fa1-0cf23f1e0e56)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-mx5v9_kube-system(9cb65184-4613-43ed-9fa1-0cf23f1e0e56)\\\": rpc error: code = Unknown desc = failed 
to setup network for sandbox \\\"64ae1d2821d729c69f8a41c3dd4dd4b5527e7c5e3e7bad6122f40513fdcddeab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-mx5v9" podUID="9cb65184-4613-43ed-9fa1-0cf23f1e0e56" Aug 13 01:34:44.871954 kubelet[2739]: I0813 01:34:44.871913 2739 kubelet.go:2306] "Pod admission denied" podUID="de3c3602-22ae-4a55-98e1-8f8c7b6555d7" pod="tigera-operator/tigera-operator-5bf8dfcb4-z6wds" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:44.962791 kubelet[2739]: I0813 01:34:44.962739 2739 kubelet.go:2306] "Pod admission denied" podUID="8b9d7a15-d757-4b94-95d8-c1c46ea22583" pod="tigera-operator/tigera-operator-5bf8dfcb4-5d9vs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:45.064042 kubelet[2739]: I0813 01:34:45.063879 2739 kubelet.go:2306] "Pod admission denied" podUID="f3236449-dcf8-45c8-8cfe-fc179f3e1742" pod="tigera-operator/tigera-operator-5bf8dfcb4-s85zn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:45.120109 kubelet[2739]: I0813 01:34:45.119234 2739 kubelet.go:2306] "Pod admission denied" podUID="581235aa-667e-49a9-ba71-f090c43742b9" pod="tigera-operator/tigera-operator-5bf8dfcb4-2jdss" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:45.212286 kubelet[2739]: I0813 01:34:45.212230 2739 kubelet.go:2306] "Pod admission denied" podUID="07cd6677-fa6f-4759-9beb-c470795b12d6" pod="tigera-operator/tigera-operator-5bf8dfcb4-5dgln" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:45.316116 kubelet[2739]: I0813 01:34:45.315450 2739 kubelet.go:2306] "Pod admission denied" podUID="ecea4c30-a217-4287-a55d-45bf1406f9a7" pod="tigera-operator/tigera-operator-5bf8dfcb4-qwjmx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:45.413986 kubelet[2739]: I0813 01:34:45.413923 2739 kubelet.go:2306] "Pod admission denied" podUID="030d7f97-b04e-4d26-ac44-4baaa3c8481e" pod="tigera-operator/tigera-operator-5bf8dfcb4-gfbcv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:45.513209 kubelet[2739]: I0813 01:34:45.513160 2739 kubelet.go:2306] "Pod admission denied" podUID="30ec3b17-a2d7-4254-aaa9-14f3ca1f6114" pod="tigera-operator/tigera-operator-5bf8dfcb4-k2p49" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:45.570333 kubelet[2739]: I0813 01:34:45.569316 2739 kubelet.go:2306] "Pod admission denied" podUID="d04bb186-b459-4480-8927-dd273ea23c54" pod="tigera-operator/tigera-operator-5bf8dfcb4-lhdk6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:45.663569 kubelet[2739]: I0813 01:34:45.663520 2739 kubelet.go:2306] "Pod admission denied" podUID="c1692f28-2609-4a61-8776-5b0c325b9937" pod="tigera-operator/tigera-operator-5bf8dfcb4-lj5ww" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:45.763641 kubelet[2739]: I0813 01:34:45.763595 2739 kubelet.go:2306] "Pod admission denied" podUID="7d77f1df-b943-4b85-a367-dc65e0e27d24" pod="tigera-operator/tigera-operator-5bf8dfcb4-rt8np" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:45.799098 kubelet[2739]: E0813 01:34:45.798726 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 01:34:45.800736 containerd[1544]: time="2025-08-13T01:34:45.800415767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-994jv,Uid:3bd5a2e2-42ee-4e27-a412-724b2f0527b4,Namespace:kube-system,Attempt:0,}" Aug 13 01:34:45.824411 kubelet[2739]: I0813 01:34:45.824323 2739 kubelet.go:2306] "Pod admission denied" podUID="ae1aefc9-9d81-4536-9108-f5c694168079" pod="tigera-operator/tigera-operator-5bf8dfcb4-9d6z6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:45.862551 containerd[1544]: time="2025-08-13T01:34:45.862442703Z" level=error msg="Failed to destroy network for sandbox \"acbdbb0b47da64719eb7385a8ac55d81d2f0fcf35e459e892d4a339ebf4a0af2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:34:45.864940 systemd[1]: run-netns-cni\x2dc8239b4b\x2dd068\x2d46fc\x2d9049\x2d37cca51f2802.mount: Deactivated successfully. 
Aug 13 01:34:45.865925 containerd[1544]: time="2025-08-13T01:34:45.864970558Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-994jv,Uid:3bd5a2e2-42ee-4e27-a412-724b2f0527b4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"acbdbb0b47da64719eb7385a8ac55d81d2f0fcf35e459e892d4a339ebf4a0af2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:34:45.866642 kubelet[2739]: E0813 01:34:45.866604 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acbdbb0b47da64719eb7385a8ac55d81d2f0fcf35e459e892d4a339ebf4a0af2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:34:45.866715 kubelet[2739]: E0813 01:34:45.866661 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acbdbb0b47da64719eb7385a8ac55d81d2f0fcf35e459e892d4a339ebf4a0af2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-994jv" Aug 13 01:34:45.866715 kubelet[2739]: E0813 01:34:45.866681 2739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"acbdbb0b47da64719eb7385a8ac55d81d2f0fcf35e459e892d4a339ebf4a0af2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7c65d6cfc9-994jv" Aug 13 01:34:45.866764 kubelet[2739]: E0813 01:34:45.866726 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-994jv_kube-system(3bd5a2e2-42ee-4e27-a412-724b2f0527b4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-994jv_kube-system(3bd5a2e2-42ee-4e27-a412-724b2f0527b4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"acbdbb0b47da64719eb7385a8ac55d81d2f0fcf35e459e892d4a339ebf4a0af2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-994jv" podUID="3bd5a2e2-42ee-4e27-a412-724b2f0527b4" Aug 13 01:34:45.913521 kubelet[2739]: I0813 01:34:45.913486 2739 kubelet.go:2306] "Pod admission denied" podUID="5919dd97-b905-48e3-b3e2-d9e72d82ed94" pod="tigera-operator/tigera-operator-5bf8dfcb4-2fsmj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:46.030795 kubelet[2739]: I0813 01:34:46.030102 2739 kubelet.go:2306] "Pod admission denied" podUID="d6b1b7b2-9948-4685-81bf-6fd1b7607112" pod="tigera-operator/tigera-operator-5bf8dfcb4-bq2r6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:46.112231 kubelet[2739]: I0813 01:34:46.111706 2739 kubelet.go:2306] "Pod admission denied" podUID="a1013dab-23d9-40ad-b712-42217713d2a5" pod="tigera-operator/tigera-operator-5bf8dfcb4-87mc4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:46.211422 kubelet[2739]: I0813 01:34:46.211360 2739 kubelet.go:2306] "Pod admission denied" podUID="3ba7a18e-321c-4009-934c-745671eb1273" pod="tigera-operator/tigera-operator-5bf8dfcb4-vg4mg" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:46.314964 kubelet[2739]: I0813 01:34:46.314904 2739 kubelet.go:2306] "Pod admission denied" podUID="59259abd-a1cb-4908-8a80-0185e7709762" pod="tigera-operator/tigera-operator-5bf8dfcb4-mjg6d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:46.512883 kubelet[2739]: I0813 01:34:46.512840 2739 kubelet.go:2306] "Pod admission denied" podUID="8fc43def-f01c-4fd6-af9c-a6382b977ce7" pod="tigera-operator/tigera-operator-5bf8dfcb4-tjfnc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:46.620166 kubelet[2739]: I0813 01:34:46.620034 2739 kubelet.go:2306] "Pod admission denied" podUID="46ec589d-4ceb-46a0-b586-15e0db95e460" pod="tigera-operator/tigera-operator-5bf8dfcb4-vwpq9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:46.649331 kubelet[2739]: I0813 01:34:46.649278 2739 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:34:46.649686 kubelet[2739]: I0813 01:34:46.649423 2739 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:34:46.651549 kubelet[2739]: I0813 01:34:46.651357 2739 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:34:46.653592 kubelet[2739]: I0813 01:34:46.653446 2739 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" size=18562039 runtimeHandler="" Aug 13 01:34:46.653702 containerd[1544]: time="2025-08-13T01:34:46.653672016Z" level=info msg="RemoveImage \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 01:34:46.654775 containerd[1544]: time="2025-08-13T01:34:46.654745728Z" level=info msg="ImageDelete event name:\"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 01:34:46.655711 containerd[1544]: time="2025-08-13T01:34:46.655689829Z" level=info msg="ImageDelete event 
name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\"" Aug 13 01:34:46.656106 containerd[1544]: time="2025-08-13T01:34:46.656076971Z" level=info msg="RemoveImage \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" returns successfully" Aug 13 01:34:46.656179 containerd[1544]: time="2025-08-13T01:34:46.656161651Z" level=info msg="ImageDelete event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 01:34:46.656382 kubelet[2739]: I0813 01:34:46.656310 2739 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" size=56909194 runtimeHandler="" Aug 13 01:34:46.656524 containerd[1544]: time="2025-08-13T01:34:46.656479381Z" level=info msg="RemoveImage \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Aug 13 01:34:46.657202 containerd[1544]: time="2025-08-13T01:34:46.657183512Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd:3.5.15-0\"" Aug 13 01:34:46.657598 containerd[1544]: time="2025-08-13T01:34:46.657559063Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\"" Aug 13 01:34:46.657911 containerd[1544]: time="2025-08-13T01:34:46.657896294Z" level=info msg="RemoveImage \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" returns successfully" Aug 13 01:34:46.657970 containerd[1544]: time="2025-08-13T01:34:46.657948924Z" level=info msg="ImageDelete event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Aug 13 01:34:46.666304 kubelet[2739]: I0813 01:34:46.666280 2739 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:34:46.666429 kubelet[2739]: I0813 01:34:46.666412 2739 eviction_manager.go:398] "Eviction manager: pods ranked for 
eviction" pods=["kube-system/coredns-7c65d6cfc9-mx5v9","calico-system/calico-kube-controllers-85fbc76f96-d5vf4","kube-system/coredns-7c65d6cfc9-994jv","calico-system/csi-node-driver-7bj49","calico-system/calico-node-5c47r","calico-system/calico-typha-79464475b5-bbrtw","kube-system/kube-controller-manager-172-234-27-175","kube-system/kube-proxy-kfjpt","kube-system/kube-apiserver-172-234-27-175","kube-system/kube-scheduler-172-234-27-175"] Aug 13 01:34:46.666533 kubelet[2739]: E0813 01:34:46.666519 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-mx5v9" Aug 13 01:34:46.666584 kubelet[2739]: E0813 01:34:46.666575 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" Aug 13 01:34:46.666635 kubelet[2739]: E0813 01:34:46.666626 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-994jv" Aug 13 01:34:46.666680 kubelet[2739]: E0813 01:34:46.666672 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-7bj49" Aug 13 01:34:46.666727 kubelet[2739]: E0813 01:34:46.666719 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-5c47r" Aug 13 01:34:46.666774 kubelet[2739]: E0813 01:34:46.666766 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-79464475b5-bbrtw" Aug 13 01:34:46.666821 kubelet[2739]: E0813 01:34:46.666813 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-27-175" Aug 13 01:34:46.666870 kubelet[2739]: E0813 01:34:46.666862 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-kfjpt" Aug 13 01:34:46.666917 kubelet[2739]: E0813 01:34:46.666909 2739 
eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-27-175" Aug 13 01:34:46.666969 kubelet[2739]: E0813 01:34:46.666960 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-27-175" Aug 13 01:34:46.667014 kubelet[2739]: I0813 01:34:46.667007 2739 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:34:46.676233 kubelet[2739]: I0813 01:34:46.676203 2739 kubelet.go:2306] "Pod admission denied" podUID="64f4610f-2ef7-4562-9274-0fecf00c85c0" pod="tigera-operator/tigera-operator-5bf8dfcb4-kh9wk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:46.761513 kubelet[2739]: I0813 01:34:46.761023 2739 kubelet.go:2306] "Pod admission denied" podUID="68f17ca8-a4e4-45c9-b1aa-976d4171fcc0" pod="tigera-operator/tigera-operator-5bf8dfcb4-t9w2p" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:46.964155 kubelet[2739]: I0813 01:34:46.963994 2739 kubelet.go:2306] "Pod admission denied" podUID="e803e4df-035c-4177-9f53-015263821d8b" pod="tigera-operator/tigera-operator-5bf8dfcb4-phlbl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:47.059586 kubelet[2739]: I0813 01:34:47.059549 2739 kubelet.go:2306] "Pod admission denied" podUID="a538f121-9559-43bd-8b3a-8a1467c75ba2" pod="tigera-operator/tigera-operator-5bf8dfcb4-h27dk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:47.176155 kubelet[2739]: I0813 01:34:47.175067 2739 kubelet.go:2306] "Pod admission denied" podUID="e865d31d-f778-4ce9-af37-625f3bc9fb4a" pod="tigera-operator/tigera-operator-5bf8dfcb4-nztv7" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:47.361884 kubelet[2739]: I0813 01:34:47.361827 2739 kubelet.go:2306] "Pod admission denied" podUID="2909c2d5-6b45-43e7-9a69-7570ed8656ed" pod="tigera-operator/tigera-operator-5bf8dfcb4-7gf49" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:47.465469 kubelet[2739]: I0813 01:34:47.465422 2739 kubelet.go:2306] "Pod admission denied" podUID="9a7c0406-bdd4-4cf3-bb60-661a06c322ed" pod="tigera-operator/tigera-operator-5bf8dfcb4-kxhrk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:47.563034 kubelet[2739]: I0813 01:34:47.562976 2739 kubelet.go:2306] "Pod admission denied" podUID="ff2e12aa-cf5d-438b-9ed4-61f3295317ca" pod="tigera-operator/tigera-operator-5bf8dfcb4-rsgt2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:47.662179 kubelet[2739]: I0813 01:34:47.661926 2739 kubelet.go:2306] "Pod admission denied" podUID="095ad756-d964-450c-b0ef-883abc761c00" pod="tigera-operator/tigera-operator-5bf8dfcb4-w6hk7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:47.725153 kubelet[2739]: I0813 01:34:47.724001 2739 kubelet.go:2306] "Pod admission denied" podUID="10c1c2a2-4e7b-4eaf-87bc-d12035119533" pod="tigera-operator/tigera-operator-5bf8dfcb4-rbmxj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:47.813139 kubelet[2739]: I0813 01:34:47.813069 2739 kubelet.go:2306] "Pod admission denied" podUID="ad32c983-6de4-481a-8efb-d5d21f12938f" pod="tigera-operator/tigera-operator-5bf8dfcb4-6p4f6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:48.013519 kubelet[2739]: I0813 01:34:48.013473 2739 kubelet.go:2306] "Pod admission denied" podUID="7d157df6-1110-4bcf-8f9a-ba17f20734ba" pod="tigera-operator/tigera-operator-5bf8dfcb4-tmlzk" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:48.113983 kubelet[2739]: I0813 01:34:48.113940 2739 kubelet.go:2306] "Pod admission denied" podUID="9b30f535-b14b-4f36-b3df-d4b9ee95306a" pod="tigera-operator/tigera-operator-5bf8dfcb4-wkst2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:48.212544 kubelet[2739]: I0813 01:34:48.212485 2739 kubelet.go:2306] "Pod admission denied" podUID="17ebbadd-42a3-4c08-9b37-66390ec39501" pod="tigera-operator/tigera-operator-5bf8dfcb4-66mfx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:48.320060 kubelet[2739]: I0813 01:34:48.319593 2739 kubelet.go:2306] "Pod admission denied" podUID="f689f9f9-eccb-42d9-99de-b40ab3d509a3" pod="tigera-operator/tigera-operator-5bf8dfcb4-bjbn8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:48.413473 kubelet[2739]: I0813 01:34:48.413425 2739 kubelet.go:2306] "Pod admission denied" podUID="ffe75d77-5d6e-4d94-a14a-ca934f4b97de" pod="tigera-operator/tigera-operator-5bf8dfcb4-rbjct" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:48.513809 kubelet[2739]: I0813 01:34:48.513757 2739 kubelet.go:2306] "Pod admission denied" podUID="ac51d8db-dffb-4bf0-9b80-6d184f6f54e3" pod="tigera-operator/tigera-operator-5bf8dfcb4-xw5ww" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:48.611392 kubelet[2739]: I0813 01:34:48.611264 2739 kubelet.go:2306] "Pod admission denied" podUID="de0e9ab5-0a5f-4e21-a075-decfc4437c01" pod="tigera-operator/tigera-operator-5bf8dfcb4-xgb9s" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:48.802885 containerd[1544]: time="2025-08-13T01:34:48.802630033Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 01:34:48.820714 kubelet[2739]: I0813 01:34:48.820648 2739 kubelet.go:2306] "Pod admission denied" podUID="72dc2872-bbc1-4055-8489-041ef4b4e9a8" pod="tigera-operator/tigera-operator-5bf8dfcb4-t69vt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:48.929381 kubelet[2739]: I0813 01:34:48.929152 2739 kubelet.go:2306] "Pod admission denied" podUID="13cbf49f-3b47-43fa-a9dd-6e38dd3f7684" pod="tigera-operator/tigera-operator-5bf8dfcb4-5tspd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:48.975842 kubelet[2739]: I0813 01:34:48.975788 2739 kubelet.go:2306] "Pod admission denied" podUID="a1755816-0a1d-40f7-bf56-9092b1f8fd30" pod="tigera-operator/tigera-operator-5bf8dfcb4-9sbt4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:49.066147 kubelet[2739]: I0813 01:34:49.066081 2739 kubelet.go:2306] "Pod admission denied" podUID="f9de683e-f1f7-4b98-b56a-fccb1aac9137" pod="tigera-operator/tigera-operator-5bf8dfcb4-t4lz9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:49.164602 kubelet[2739]: I0813 01:34:49.164557 2739 kubelet.go:2306] "Pod admission denied" podUID="ae21babd-30a9-4b70-b75f-c44365aab8cd" pod="tigera-operator/tigera-operator-5bf8dfcb4-62qnn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:49.263121 kubelet[2739]: I0813 01:34:49.263082 2739 kubelet.go:2306] "Pod admission denied" podUID="474fa80e-b966-416a-818d-6593af7539b3" pod="tigera-operator/tigera-operator-5bf8dfcb4-4lck5" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:49.386302 kubelet[2739]: I0813 01:34:49.386258 2739 kubelet.go:2306] "Pod admission denied" podUID="95fd2946-c00a-4a71-8da6-f9fc22788a6b" pod="tigera-operator/tigera-operator-5bf8dfcb4-6tbh2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:49.523176 kubelet[2739]: I0813 01:34:49.521537 2739 kubelet.go:2306] "Pod admission denied" podUID="41738c42-adef-48b7-bd8f-36975f7ae92b" pod="tigera-operator/tigera-operator-5bf8dfcb4-kn42f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:49.619357 kubelet[2739]: I0813 01:34:49.619293 2739 kubelet.go:2306] "Pod admission denied" podUID="9699987d-8c99-4397-8d7e-408454eaa35b" pod="tigera-operator/tigera-operator-5bf8dfcb4-vn9xd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:49.723459 kubelet[2739]: I0813 01:34:49.723414 2739 kubelet.go:2306] "Pod admission denied" podUID="3bb73074-dea9-47a9-ab7d-167149751906" pod="tigera-operator/tigera-operator-5bf8dfcb4-hwlb6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:49.798717 containerd[1544]: time="2025-08-13T01:34:49.798281755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7bj49,Uid:4e5845c9-626c-4c83-900a-0da0bae2daed,Namespace:calico-system,Attempt:0,}" Aug 13 01:34:49.834542 kubelet[2739]: I0813 01:34:49.834498 2739 kubelet.go:2306] "Pod admission denied" podUID="fb8d221f-13d9-449a-ab23-9fb59eaa3655" pod="tigera-operator/tigera-operator-5bf8dfcb4-czv65" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:49.922763 containerd[1544]: time="2025-08-13T01:34:49.922726847Z" level=error msg="Failed to destroy network for sandbox \"1cb3307e1dc51672d54344e61ffc3d8ccd244769a9334d552bd4f86f58c3f221\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:34:49.926800 systemd[1]: run-netns-cni\x2d89835d1a\x2da55a\x2dcb3a\x2d0edf\x2df8f53b70b19f.mount: Deactivated successfully. Aug 13 01:34:49.929773 containerd[1544]: time="2025-08-13T01:34:49.929624247Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7bj49,Uid:4e5845c9-626c-4c83-900a-0da0bae2daed,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cb3307e1dc51672d54344e61ffc3d8ccd244769a9334d552bd4f86f58c3f221\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:34:49.936709 kubelet[2739]: E0813 01:34:49.936341 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cb3307e1dc51672d54344e61ffc3d8ccd244769a9334d552bd4f86f58c3f221\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:34:49.936709 kubelet[2739]: E0813 01:34:49.936401 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cb3307e1dc51672d54344e61ffc3d8ccd244769a9334d552bd4f86f58c3f221\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-7bj49" Aug 13 01:34:49.936709 kubelet[2739]: E0813 01:34:49.936419 2739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1cb3307e1dc51672d54344e61ffc3d8ccd244769a9334d552bd4f86f58c3f221\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-7bj49" Aug 13 01:34:49.936709 kubelet[2739]: E0813 01:34:49.936452 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-7bj49_calico-system(4e5845c9-626c-4c83-900a-0da0bae2daed)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-7bj49_calico-system(4e5845c9-626c-4c83-900a-0da0bae2daed)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1cb3307e1dc51672d54344e61ffc3d8ccd244769a9334d552bd4f86f58c3f221\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-7bj49" podUID="4e5845c9-626c-4c83-900a-0da0bae2daed" Aug 13 01:34:49.956051 kubelet[2739]: I0813 01:34:49.955620 2739 kubelet.go:2306] "Pod admission denied" podUID="ebbeb305-3a50-4582-8db0-085af19b7bd6" pod="tigera-operator/tigera-operator-5bf8dfcb4-5vmnq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:50.077520 kubelet[2739]: I0813 01:34:50.077422 2739 kubelet.go:2306] "Pod admission denied" podUID="685fe617-f7c2-4b03-939e-671b55cf4d50" pod="tigera-operator/tigera-operator-5bf8dfcb4-6dd9f" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:50.163192 kubelet[2739]: I0813 01:34:50.163123 2739 kubelet.go:2306] "Pod admission denied" podUID="e9f04356-2560-4120-8b7f-5a12abf03c9e" pod="tigera-operator/tigera-operator-5bf8dfcb4-ngw5k" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:50.296931 kubelet[2739]: I0813 01:34:50.296868 2739 kubelet.go:2306] "Pod admission denied" podUID="47a55f07-a6f5-4220-876f-9d58fc59b041" pod="tigera-operator/tigera-operator-5bf8dfcb4-7rlwh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:50.421017 kubelet[2739]: I0813 01:34:50.420911 2739 kubelet.go:2306] "Pod admission denied" podUID="02fca28e-40c7-4be7-82ec-d0a736e11e83" pod="tigera-operator/tigera-operator-5bf8dfcb4-w6zw6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:50.524302 kubelet[2739]: I0813 01:34:50.524223 2739 kubelet.go:2306] "Pod admission denied" podUID="c1e13f1e-fee7-4ab9-a1e6-4e971dc7db7d" pod="tigera-operator/tigera-operator-5bf8dfcb4-qdcsd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:50.622062 kubelet[2739]: I0813 01:34:50.622010 2739 kubelet.go:2306] "Pod admission denied" podUID="76d33201-6adb-42d8-a048-f8516fa1a913" pod="tigera-operator/tigera-operator-5bf8dfcb4-67268" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:50.730607 kubelet[2739]: I0813 01:34:50.730571 2739 kubelet.go:2306] "Pod admission denied" podUID="32a37028-4d25-4d05-83e0-4b8713926ed8" pod="tigera-operator/tigera-operator-5bf8dfcb4-j9d8w" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:50.823483 containerd[1544]: time="2025-08-13T01:34:50.823408051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85fbc76f96-d5vf4,Uid:56caad57-6b4a-4069-b011-1059db183012,Namespace:calico-system,Attempt:0,}" Aug 13 01:34:50.885151 kubelet[2739]: I0813 01:34:50.884978 2739 kubelet.go:2306] "Pod admission denied" podUID="e6234475-4f5c-4319-a258-4ec8d452779c" pod="tigera-operator/tigera-operator-5bf8dfcb4-zdbdj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:50.979699 containerd[1544]: time="2025-08-13T01:34:50.979652362Z" level=error msg="Failed to destroy network for sandbox \"917d68eaf688611023cfc76a0dce1c1c8d1436f1709dd628877f385e9673925c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:34:50.984278 containerd[1544]: time="2025-08-13T01:34:50.983837208Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85fbc76f96-d5vf4,Uid:56caad57-6b4a-4069-b011-1059db183012,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"917d68eaf688611023cfc76a0dce1c1c8d1436f1709dd628877f385e9673925c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:34:50.984378 kubelet[2739]: E0813 01:34:50.984028 2739 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"917d68eaf688611023cfc76a0dce1c1c8d1436f1709dd628877f385e9673925c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:34:50.984378 kubelet[2739]: 
E0813 01:34:50.984079 2739 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"917d68eaf688611023cfc76a0dce1c1c8d1436f1709dd628877f385e9673925c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" Aug 13 01:34:50.984378 kubelet[2739]: E0813 01:34:50.984109 2739 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"917d68eaf688611023cfc76a0dce1c1c8d1436f1709dd628877f385e9673925c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" Aug 13 01:34:50.984378 kubelet[2739]: E0813 01:34:50.984197 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-85fbc76f96-d5vf4_calico-system(56caad57-6b4a-4069-b011-1059db183012)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-85fbc76f96-d5vf4_calico-system(56caad57-6b4a-4069-b011-1059db183012)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"917d68eaf688611023cfc76a0dce1c1c8d1436f1709dd628877f385e9673925c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" podUID="56caad57-6b4a-4069-b011-1059db183012" Aug 13 01:34:50.983164 systemd[1]: run-netns-cni\x2de485e4cf\x2d391e\x2dc1ca\x2d83a3\x2d876fd07219ce.mount: Deactivated successfully. 
Aug 13 01:34:50.990163 kubelet[2739]: I0813 01:34:50.990113 2739 kubelet.go:2306] "Pod admission denied" podUID="1acdfcbf-2d71-439e-95b4-d279ba59226a" pod="tigera-operator/tigera-operator-5bf8dfcb4-qr699" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:51.130178 kubelet[2739]: I0813 01:34:51.130102 2739 kubelet.go:2306] "Pod admission denied" podUID="d2888782-2a76-4725-90b1-e52effec2440" pod="tigera-operator/tigera-operator-5bf8dfcb4-g6hrv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:51.289999 kubelet[2739]: I0813 01:34:51.289500 2739 kubelet.go:2306] "Pod admission denied" podUID="380382f0-dc04-4930-b99c-39d5940cab65" pod="tigera-operator/tigera-operator-5bf8dfcb4-5qdcv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:51.353648 kubelet[2739]: I0813 01:34:51.353546 2739 kubelet.go:2306] "Pod admission denied" podUID="03f1ab8d-882f-459f-af38-fc081d3e38e1" pod="tigera-operator/tigera-operator-5bf8dfcb4-fk4px" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:51.489663 kubelet[2739]: I0813 01:34:51.488417 2739 kubelet.go:2306] "Pod admission denied" podUID="ff8fa979-b4d9-4852-90c9-85d381dbb7b5" pod="tigera-operator/tigera-operator-5bf8dfcb4-bcxbg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:51.620795 kubelet[2739]: I0813 01:34:51.620433 2739 kubelet.go:2306] "Pod admission denied" podUID="6a95b909-bd5a-4118-9bf4-df7cfea09a08" pod="tigera-operator/tigera-operator-5bf8dfcb4-zq5tj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:51.723309 kubelet[2739]: I0813 01:34:51.723202 2739 kubelet.go:2306] "Pod admission denied" podUID="072f53ff-6af8-457c-8978-41f3850ac888" pod="tigera-operator/tigera-operator-5bf8dfcb4-t77jb" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:51.848939 kubelet[2739]: I0813 01:34:51.848881 2739 kubelet.go:2306] "Pod admission denied" podUID="3ae10b80-cc76-48ad-a662-73af031b7dae" pod="tigera-operator/tigera-operator-5bf8dfcb4-6jwmg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:51.974657 kubelet[2739]: I0813 01:34:51.974628 2739 kubelet.go:2306] "Pod admission denied" podUID="94402b50-d110-4969-b5ed-bdb978ad51c6" pod="tigera-operator/tigera-operator-5bf8dfcb4-kt9pr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:52.084507 kubelet[2739]: I0813 01:34:52.084472 2739 kubelet.go:2306] "Pod admission denied" podUID="b39f5e61-bc71-441e-afbf-026581c7fe44" pod="tigera-operator/tigera-operator-5bf8dfcb4-zhhzr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:52.275043 kubelet[2739]: I0813 01:34:52.274941 2739 kubelet.go:2306] "Pod admission denied" podUID="4b3db4bd-2dd8-442b-8959-5fa56a8b3520" pod="tigera-operator/tigera-operator-5bf8dfcb4-mc9rj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:52.369390 kubelet[2739]: I0813 01:34:52.369351 2739 kubelet.go:2306] "Pod admission denied" podUID="1fe3ce9b-be6e-442e-94cc-27a522d27e90" pod="tigera-operator/tigera-operator-5bf8dfcb4-spqzb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:52.482782 kubelet[2739]: I0813 01:34:52.482652 2739 kubelet.go:2306] "Pod admission denied" podUID="2394d82b-5542-4e90-b21d-55dfc050c004" pod="tigera-operator/tigera-operator-5bf8dfcb4-vq7z8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:52.571801 kubelet[2739]: I0813 01:34:52.571453 2739 kubelet.go:2306] "Pod admission denied" podUID="cd2daa2c-8d56-4c78-913f-6172a5b05a57" pod="tigera-operator/tigera-operator-5bf8dfcb4-ccrpk" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:52.698028 kubelet[2739]: I0813 01:34:52.697028 2739 kubelet.go:2306] "Pod admission denied" podUID="92ae0bfa-3d9c-4a15-a661-40cb4a8c334a" pod="tigera-operator/tigera-operator-5bf8dfcb4-kf9ml" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:52.798775 kubelet[2739]: E0813 01:34:52.798738 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 01:34:52.874483 kubelet[2739]: I0813 01:34:52.874398 2739 kubelet.go:2306] "Pod admission denied" podUID="b5df8003-5f65-49bb-a5a6-501ffddfb005" pod="tigera-operator/tigera-operator-5bf8dfcb4-pr6r8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:52.976304 kubelet[2739]: I0813 01:34:52.976247 2739 kubelet.go:2306] "Pod admission denied" podUID="6190c0af-6cb9-4f48-81fb-55939c5bbfbe" pod="tigera-operator/tigera-operator-5bf8dfcb4-gxh2t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:53.114183 kubelet[2739]: I0813 01:34:53.114117 2739 kubelet.go:2306] "Pod admission denied" podUID="91db78cd-d631-4b4d-b4aa-88a7f44edbed" pod="tigera-operator/tigera-operator-5bf8dfcb4-7rz2f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:53.188924 kubelet[2739]: I0813 01:34:53.187615 2739 kubelet.go:2306] "Pod admission denied" podUID="79ef0275-c662-480e-a3eb-04c0a74d575d" pod="tigera-operator/tigera-operator-5bf8dfcb4-jcskp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:53.359269 kubelet[2739]: I0813 01:34:53.358930 2739 kubelet.go:2306] "Pod admission denied" podUID="e2af2697-9914-481f-942b-55559d15a1f9" pod="tigera-operator/tigera-operator-5bf8dfcb4-9nzg5" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:53.457533 kubelet[2739]: I0813 01:34:53.457397 2739 kubelet.go:2306] "Pod admission denied" podUID="5e1aab74-2706-42d7-8e83-98b6c585789e" pod="tigera-operator/tigera-operator-5bf8dfcb4-2cgpw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:53.527465 kubelet[2739]: I0813 01:34:53.527421 2739 kubelet.go:2306] "Pod admission denied" podUID="3941be00-6dfa-437e-be99-9e6de0ff01cd" pod="tigera-operator/tigera-operator-5bf8dfcb4-lxl6g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:53.726726 kubelet[2739]: I0813 01:34:53.726675 2739 kubelet.go:2306] "Pod admission denied" podUID="22c0c8c9-9257-415f-ade2-88a3855fbea9" pod="tigera-operator/tigera-operator-5bf8dfcb4-r46fm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:53.863627 kubelet[2739]: I0813 01:34:53.863589 2739 kubelet.go:2306] "Pod admission denied" podUID="23d08f9c-f215-4aa2-8fa4-57cfe9ab788f" pod="tigera-operator/tigera-operator-5bf8dfcb4-jszpp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:53.960047 kubelet[2739]: I0813 01:34:53.959981 2739 kubelet.go:2306] "Pod admission denied" podUID="e822c5d7-0126-4445-9958-bb35b59222ab" pod="tigera-operator/tigera-operator-5bf8dfcb4-bh4jb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:54.069308 kubelet[2739]: I0813 01:34:54.068503 2739 kubelet.go:2306] "Pod admission denied" podUID="d86e798f-98b7-467c-8c23-6ecb1f9afd88" pod="tigera-operator/tigera-operator-5bf8dfcb4-9nnns" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:54.198988 kubelet[2739]: I0813 01:34:54.197676 2739 kubelet.go:2306] "Pod admission denied" podUID="df29df33-af47-4f38-8448-aed23c116f7f" pod="tigera-operator/tigera-operator-5bf8dfcb4-x5jtk" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:54.296924 kubelet[2739]: I0813 01:34:54.296853 2739 kubelet.go:2306] "Pod admission denied" podUID="dfbe45a2-7d40-4a43-80b4-5f21ad46df0d" pod="tigera-operator/tigera-operator-5bf8dfcb4-bq862" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:54.339652 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3541792006.mount: Deactivated successfully. Aug 13 01:34:54.373330 containerd[1544]: time="2025-08-13T01:34:54.373271297Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:34:54.374300 containerd[1544]: time="2025-08-13T01:34:54.374242288Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 01:34:54.375159 containerd[1544]: time="2025-08-13T01:34:54.375036209Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:34:54.376661 containerd[1544]: time="2025-08-13T01:34:54.376618501Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:34:54.378369 containerd[1544]: time="2025-08-13T01:34:54.378344654Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 5.575670761s" Aug 13 01:34:54.378417 containerd[1544]: time="2025-08-13T01:34:54.378373434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference 
\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 13 01:34:54.393018 containerd[1544]: time="2025-08-13T01:34:54.392979636Z" level=info msg="CreateContainer within sandbox \"c240561ffd890c1b5476094a7248023a31db54fc62397aa7089467e118977fc3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 01:34:54.409389 containerd[1544]: time="2025-08-13T01:34:54.409348722Z" level=info msg="Container b5a29eb80b0bc5089fcefc178e897c50dab7a245f3568439d3b5679ef6257511: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:34:54.414876 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2737355377.mount: Deactivated successfully. Aug 13 01:34:54.430999 containerd[1544]: time="2025-08-13T01:34:54.430955224Z" level=info msg="CreateContainer within sandbox \"c240561ffd890c1b5476094a7248023a31db54fc62397aa7089467e118977fc3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"b5a29eb80b0bc5089fcefc178e897c50dab7a245f3568439d3b5679ef6257511\"" Aug 13 01:34:54.431646 containerd[1544]: time="2025-08-13T01:34:54.431518326Z" level=info msg="StartContainer for \"b5a29eb80b0bc5089fcefc178e897c50dab7a245f3568439d3b5679ef6257511\"" Aug 13 01:34:54.432978 containerd[1544]: time="2025-08-13T01:34:54.432951027Z" level=info msg="connecting to shim b5a29eb80b0bc5089fcefc178e897c50dab7a245f3568439d3b5679ef6257511" address="unix:///run/containerd/s/bb601809de608d4f6267fd6d2370633f52b680b0e7eaa9483a2b5d67d920826e" protocol=ttrpc version=3 Aug 13 01:34:54.444898 kubelet[2739]: I0813 01:34:54.444628 2739 kubelet.go:2306] "Pod admission denied" podUID="0815dab3-d964-46f8-9ae2-b68d81a8a0b3" pod="tigera-operator/tigera-operator-5bf8dfcb4-wtqdc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:54.470665 systemd[1]: Started cri-containerd-b5a29eb80b0bc5089fcefc178e897c50dab7a245f3568439d3b5679ef6257511.scope - libcontainer container b5a29eb80b0bc5089fcefc178e897c50dab7a245f3568439d3b5679ef6257511. 
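The journal records dozens of near-identical `"Pod admission denied" … [DiskPressure]` events as the tigera-operator ReplicaSet keeps retrying onto the pressured node. A small sketch (assuming the journal text is available as one string; the helper name is mine) that tallies denials per namespace and collects the distinct pod UIDs:

```python
import re
from collections import Counter

# Matches the kubelet.go:2306 admission-denial lines shown throughout this log.
DENIED = re.compile(
    r'"Pod admission denied" podUID="(?P<uid>[^"]+)"'
    r' pod="(?P<ns>[^/"]+)/(?P<name>[^"]+)" reason="(?P<reason>[^"]+)"'
)

def summarize_denials(journal: str):
    """Count admission denials per namespace; collect the distinct pod UIDs."""
    per_ns = Counter()
    uids = set()
    for m in DENIED.finditer(journal):
        per_ns[m.group("ns")] += 1
        uids.add(m.group("uid"))
    return per_ns, uids

# Two sample entries taken verbatim from the journal above.
journal = '''
I0813 01:34:50.990113 2739 kubelet.go:2306] "Pod admission denied" podUID="1acdfcbf-2d71-439e-95b4-d279ba59226a" pod="tigera-operator/tigera-operator-5bf8dfcb4-qr699" reason="Evicted" message="The node had condition: [DiskPressure]. "
I0813 01:34:51.130102 2739 kubelet.go:2306] "Pod admission denied" podUID="d2888782-2a76-4725-90b1-e52effec2440" pod="tigera-operator/tigera-operator-5bf8dfcb4-g6hrv" reason="Evicted" message="The node had condition: [DiskPressure]. "
'''
counts, uids = summarize_denials(journal)
print(counts["tigera-operator"], len(uids))
```

Each denial carries a fresh pod UID because the ReplicaSet controller creates a new pod for every rejected attempt, which is why the UID set grows in step with the count.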
Aug 13 01:34:54.528203 containerd[1544]: time="2025-08-13T01:34:54.528162492Z" level=info msg="StartContainer for \"b5a29eb80b0bc5089fcefc178e897c50dab7a245f3568439d3b5679ef6257511\" returns successfully" Aug 13 01:34:54.611173 kubelet[2739]: I0813 01:34:54.609812 2739 kubelet.go:2306] "Pod admission denied" podUID="d0264f22-3714-4177-8c20-2d75e2e54440" pod="tigera-operator/tigera-operator-5bf8dfcb4-hn6kb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:54.669476 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 01:34:54.669676 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Aug 13 01:34:54.731201 kubelet[2739]: I0813 01:34:54.731163 2739 kubelet.go:2306] "Pod admission denied" podUID="7396ed88-891e-47bc-bb24-55233f5b7531" pod="tigera-operator/tigera-operator-5bf8dfcb4-7l74h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:54.819299 kubelet[2739]: I0813 01:34:54.819268 2739 kubelet.go:2306] "Pod admission denied" podUID="18fc7222-108c-474c-a003-eb5d52b21738" pod="tigera-operator/tigera-operator-5bf8dfcb4-f4vl4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:54.925652 kubelet[2739]: I0813 01:34:54.924481 2739 kubelet.go:2306] "Pod admission denied" podUID="5ce3f7f5-03e4-4263-bcf8-3691e23bcf8d" pod="tigera-operator/tigera-operator-5bf8dfcb4-54hh4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:55.014813 kubelet[2739]: I0813 01:34:55.014749 2739 kubelet.go:2306] "Pod admission denied" podUID="fdb3a537-9890-47de-a70b-a0a6671d1323" pod="tigera-operator/tigera-operator-5bf8dfcb4-mq6q2" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:55.118960 kubelet[2739]: I0813 01:34:55.118701 2739 kubelet.go:2306] "Pod admission denied" podUID="5480c593-6ff0-4179-b608-eeec003ef77b" pod="tigera-operator/tigera-operator-5bf8dfcb4-vr4nx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:55.185257 kubelet[2739]: I0813 01:34:55.183848 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-5c47r" podStartSLOduration=1.492373657 podStartE2EDuration="1m55.183832768s" podCreationTimestamp="2025-08-13 01:33:00 +0000 UTC" firstStartedPulling="2025-08-13 01:33:00.687784015 +0000 UTC m=+19.976101133" lastFinishedPulling="2025-08-13 01:34:54.379243126 +0000 UTC m=+133.667560244" observedRunningTime="2025-08-13 01:34:55.17889434 +0000 UTC m=+134.467211458" watchObservedRunningTime="2025-08-13 01:34:55.183832768 +0000 UTC m=+134.472149886" Aug 13 01:34:55.254333 containerd[1544]: time="2025-08-13T01:34:55.254303584Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b5a29eb80b0bc5089fcefc178e897c50dab7a245f3568439d3b5679ef6257511\" id:\"ef2c1430a0584c4b11273cb6701a6327daa9c600e5b7e1564b14add019e82234\" pid:4783 exit_status:1 exited_at:{seconds:1755048895 nanos:253222771}" Aug 13 01:34:55.264819 kubelet[2739]: I0813 01:34:55.264773 2739 kubelet.go:2306] "Pod admission denied" podUID="9b701fff-d331-4fa1-9d3b-9faec046c31e" pod="tigera-operator/tigera-operator-5bf8dfcb4-z565q" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:55.327240 kubelet[2739]: I0813 01:34:55.327183 2739 kubelet.go:2306] "Pod admission denied" podUID="23000660-f3f1-4ca3-806f-6bab46b0dd3f" pod="tigera-operator/tigera-operator-5bf8dfcb4-jlwvm" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:55.436312 kubelet[2739]: I0813 01:34:55.436193 2739 kubelet.go:2306] "Pod admission denied" podUID="0e7ee371-332a-4fc7-aed2-02109c0d06ec" pod="tigera-operator/tigera-operator-5bf8dfcb4-zvlhp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:55.459094 containerd[1544]: time="2025-08-13T01:34:55.459014161Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b5a29eb80b0bc5089fcefc178e897c50dab7a245f3568439d3b5679ef6257511\" id:\"794e266b68bea56617f76299f41863451ca4ab0114cd69cc16ec7e5310f7b231\" pid:4806 exit_status:1 exited_at:{seconds:1755048895 nanos:458735711}" Aug 13 01:34:55.673009 kubelet[2739]: I0813 01:34:55.672944 2739 kubelet.go:2306] "Pod admission denied" podUID="f9ab7bf5-7f84-4f39-8fef-60db316527c6" pod="tigera-operator/tigera-operator-5bf8dfcb4-2dqkk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:55.765028 kubelet[2739]: I0813 01:34:55.764984 2739 kubelet.go:2306] "Pod admission denied" podUID="1e35b588-2d47-4f22-adc5-fff70f03606b" pod="tigera-operator/tigera-operator-5bf8dfcb4-g7qb6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:55.881966 kubelet[2739]: I0813 01:34:55.881355 2739 kubelet.go:2306] "Pod admission denied" podUID="d1937d21-0f43-4ce0-a712-f29104087d7b" pod="tigera-operator/tigera-operator-5bf8dfcb4-vpzfh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:55.970466 kubelet[2739]: I0813 01:34:55.970201 2739 kubelet.go:2306] "Pod admission denied" podUID="7992ab1f-617f-4746-a7ea-de4f44c103c3" pod="tigera-operator/tigera-operator-5bf8dfcb4-pnv2x" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:56.068300 kubelet[2739]: I0813 01:34:56.068197 2739 kubelet.go:2306] "Pod admission denied" podUID="72595452-c630-485b-9793-237b75d95307" pod="tigera-operator/tigera-operator-5bf8dfcb4-ldqvj" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:56.184600 kubelet[2739]: I0813 01:34:56.184529 2739 kubelet.go:2306] "Pod admission denied" podUID="d16bbec7-c020-4b65-8fff-e19c0f9bcbef" pod="tigera-operator/tigera-operator-5bf8dfcb4-lcxmf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:56.273411 kubelet[2739]: I0813 01:34:56.273360 2739 kubelet.go:2306] "Pod admission denied" podUID="2aaaef7c-fab2-4599-b1cc-afefd201aa4b" pod="tigera-operator/tigera-operator-5bf8dfcb4-9wqjg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:56.415614 kubelet[2739]: I0813 01:34:56.415298 2739 kubelet.go:2306] "Pod admission denied" podUID="673aa427-9f22-4b09-ac6d-2629bf14bc08" pod="tigera-operator/tigera-operator-5bf8dfcb4-nw2p2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:56.510660 containerd[1544]: time="2025-08-13T01:34:56.510611794Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b5a29eb80b0bc5089fcefc178e897c50dab7a245f3568439d3b5679ef6257511\" id:\"f289368e6f7eb1023e24df2a61034ef2982328eb560ed0ccb38d2bdc7e0defbf\" pid:4915 exit_status:1 exited_at:{seconds:1755048896 nanos:510208063}" Aug 13 01:34:56.624496 kubelet[2739]: I0813 01:34:56.624445 2739 kubelet.go:2306] "Pod admission denied" podUID="70f7c959-a13a-4fab-a851-477846d840f0" pod="tigera-operator/tigera-operator-5bf8dfcb4-pfkxs" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:56.699395 kubelet[2739]: I0813 01:34:56.698704 2739 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:34:56.699395 kubelet[2739]: I0813 01:34:56.698935 2739 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:34:56.701760 kubelet[2739]: I0813 01:34:56.701742 2739 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:34:56.729652 kubelet[2739]: I0813 01:34:56.729624 2739 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:34:56.729746 kubelet[2739]: I0813 01:34:56.729719 2739 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7c65d6cfc9-994jv","kube-system/coredns-7c65d6cfc9-mx5v9","calico-system/calico-kube-controllers-85fbc76f96-d5vf4","calico-system/csi-node-driver-7bj49","calico-system/calico-typha-79464475b5-bbrtw","kube-system/kube-controller-manager-172-234-27-175","calico-system/calico-node-5c47r","kube-system/kube-proxy-kfjpt","kube-system/kube-apiserver-172-234-27-175","kube-system/kube-scheduler-172-234-27-175"] Aug 13 01:34:56.729816 kubelet[2739]: E0813 01:34:56.729754 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-994jv" Aug 13 01:34:56.729816 kubelet[2739]: E0813 01:34:56.729764 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-mx5v9" Aug 13 01:34:56.729816 kubelet[2739]: E0813 01:34:56.729771 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" Aug 13 01:34:56.729816 kubelet[2739]: E0813 01:34:56.729778 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-7bj49" Aug 13 01:34:56.729816 kubelet[2739]: E0813 01:34:56.729787 2739 eviction_manager.go:598] "Eviction 
manager: cannot evict a critical pod" pod="calico-system/calico-typha-79464475b5-bbrtw" Aug 13 01:34:56.729816 kubelet[2739]: E0813 01:34:56.729795 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-27-175" Aug 13 01:34:56.729816 kubelet[2739]: E0813 01:34:56.729803 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-5c47r" Aug 13 01:34:56.729816 kubelet[2739]: E0813 01:34:56.729810 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-kfjpt" Aug 13 01:34:56.729816 kubelet[2739]: E0813 01:34:56.729818 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-27-175" Aug 13 01:34:56.730027 kubelet[2739]: E0813 01:34:56.729826 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-27-175" Aug 13 01:34:56.730027 kubelet[2739]: I0813 01:34:56.729835 2739 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:34:56.754736 kubelet[2739]: I0813 01:34:56.754683 2739 kubelet.go:2306] "Pod admission denied" podUID="968e0fad-6818-4b27-bed6-32dbada8038d" pod="tigera-operator/tigera-operator-5bf8dfcb4-dcbzk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:56.820753 kubelet[2739]: I0813 01:34:56.820706 2739 kubelet.go:2306] "Pod admission denied" podUID="5e7af92d-ade3-4c4f-9209-6575f309257b" pod="tigera-operator/tigera-operator-5bf8dfcb4-czkpq" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:56.932225 systemd-networkd[1469]: vxlan.calico: Link UP Aug 13 01:34:56.932233 systemd-networkd[1469]: vxlan.calico: Gained carrier Aug 13 01:34:56.945003 kubelet[2739]: I0813 01:34:56.943321 2739 kubelet.go:2306] "Pod admission denied" podUID="9881336f-f10c-4fe6-b9dd-e3475c7d7ab5" pod="tigera-operator/tigera-operator-5bf8dfcb4-7jzxl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:57.066210 kubelet[2739]: I0813 01:34:57.064357 2739 kubelet.go:2306] "Pod admission denied" podUID="1e5641d7-3ea7-481b-a8bf-1e212829e6a8" pod="tigera-operator/tigera-operator-5bf8dfcb4-gc2m5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:57.171334 kubelet[2739]: I0813 01:34:57.171291 2739 kubelet.go:2306] "Pod admission denied" podUID="bd53d346-274b-4d28-92a9-d303fdac5565" pod="tigera-operator/tigera-operator-5bf8dfcb4-jf9rn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:57.277521 kubelet[2739]: I0813 01:34:57.276799 2739 kubelet.go:2306] "Pod admission denied" podUID="46703028-8a78-4505-aebc-d8fcdcd73696" pod="tigera-operator/tigera-operator-5bf8dfcb4-vdxfd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:57.519700 kubelet[2739]: I0813 01:34:57.519643 2739 kubelet.go:2306] "Pod admission denied" podUID="817f2568-2d96-4b6a-86b5-cdcd5333347e" pod="tigera-operator/tigera-operator-5bf8dfcb4-sv2pb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:57.618558 kubelet[2739]: I0813 01:34:57.618502 2739 kubelet.go:2306] "Pod admission denied" podUID="09379096-0f9a-4d8b-b0cc-4d8385145fb8" pod="tigera-operator/tigera-operator-5bf8dfcb4-jk499" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:57.716516 kubelet[2739]: I0813 01:34:57.716461 2739 kubelet.go:2306] "Pod admission denied" podUID="ff668e27-b923-462f-8777-cfb919caea7a" pod="tigera-operator/tigera-operator-5bf8dfcb4-bdzrd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:57.820640 kubelet[2739]: I0813 01:34:57.820513 2739 kubelet.go:2306] "Pod admission denied" podUID="5fcfb7e1-4c60-465b-96b5-90a03cb6dfca" pod="tigera-operator/tigera-operator-5bf8dfcb4-xwz2b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:57.942153 kubelet[2739]: I0813 01:34:57.941303 2739 kubelet.go:2306] "Pod admission denied" podUID="82f1a464-0b07-4bd4-87b3-ade1bd14fb15" pod="tigera-operator/tigera-operator-5bf8dfcb4-fgnsw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:58.067296 kubelet[2739]: I0813 01:34:58.067235 2739 kubelet.go:2306] "Pod admission denied" podUID="f8a559ef-0355-4db9-bcb8-ccdb4db62d61" pod="tigera-operator/tigera-operator-5bf8dfcb4-tj6mq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:58.169380 kubelet[2739]: I0813 01:34:58.169268 2739 kubelet.go:2306] "Pod admission denied" podUID="b48ee428-b442-42fd-a87d-71c64c8a38b2" pod="tigera-operator/tigera-operator-5bf8dfcb4-ppw6k" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:58.264543 kubelet[2739]: I0813 01:34:58.264482 2739 kubelet.go:2306] "Pod admission denied" podUID="293bfb07-6d4e-4920-8e41-f8387a9db838" pod="tigera-operator/tigera-operator-5bf8dfcb4-ccbnc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:58.368923 kubelet[2739]: I0813 01:34:58.368867 2739 kubelet.go:2306] "Pod admission denied" podUID="1b0fc293-ee25-4641-b7f6-3b68db9ba646" pod="tigera-operator/tigera-operator-5bf8dfcb4-49gcc" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:58.488384 kubelet[2739]: I0813 01:34:58.488346 2739 kubelet.go:2306] "Pod admission denied" podUID="82fbf265-ff8e-4f72-a7fa-d6cd02ee816e" pod="tigera-operator/tigera-operator-5bf8dfcb4-jtm7g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:58.619305 kubelet[2739]: I0813 01:34:58.619247 2739 kubelet.go:2306] "Pod admission denied" podUID="88b9c18b-810f-45d8-bb47-ac9239b51eb3" pod="tigera-operator/tigera-operator-5bf8dfcb4-cx66x" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:58.719696 kubelet[2739]: I0813 01:34:58.719645 2739 kubelet.go:2306] "Pod admission denied" podUID="426ac217-d1f3-44cb-87dd-31aacf10c636" pod="tigera-operator/tigera-operator-5bf8dfcb4-47bdk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:58.773337 systemd-networkd[1469]: vxlan.calico: Gained IPv6LL Aug 13 01:34:58.825614 kubelet[2739]: I0813 01:34:58.825578 2739 kubelet.go:2306] "Pod admission denied" podUID="06e15c98-f88d-4745-b1ad-85a74eff4f58" pod="tigera-operator/tigera-operator-5bf8dfcb4-72jh5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:58.918548 kubelet[2739]: I0813 01:34:58.918481 2739 kubelet.go:2306] "Pod admission denied" podUID="8f4f5625-f271-4862-9394-d1c0e7853c7e" pod="tigera-operator/tigera-operator-5bf8dfcb4-5prsz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:59.141229 kubelet[2739]: I0813 01:34:59.140821 2739 kubelet.go:2306] "Pod admission denied" podUID="861ada9b-4f6f-4d42-acfa-7cecd6c66df9" pod="tigera-operator/tigera-operator-5bf8dfcb4-8csx7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:59.372638 kubelet[2739]: I0813 01:34:59.372600 2739 kubelet.go:2306] "Pod admission denied" podUID="36493f1d-bb96-4804-a794-b1ee746c3ada" pod="tigera-operator/tigera-operator-5bf8dfcb4-2vhlk" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:34:59.471749 kubelet[2739]: I0813 01:34:59.471691 2739 kubelet.go:2306] "Pod admission denied" podUID="2e266aa0-f32a-4c60-8b29-ed38712b3c6f" pod="tigera-operator/tigera-operator-5bf8dfcb4-7mpcm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:59.567159 kubelet[2739]: I0813 01:34:59.567101 2739 kubelet.go:2306] "Pod admission denied" podUID="9fa4e962-25a4-47f1-a935-557174a3cd69" pod="tigera-operator/tigera-operator-5bf8dfcb4-2wl9g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:34:59.797931 kubelet[2739]: E0813 01:34:59.797621 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 01:34:59.798794 containerd[1544]: time="2025-08-13T01:34:59.798763309Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mx5v9,Uid:9cb65184-4613-43ed-9fa1-0cf23f1e0e56,Namespace:kube-system,Attempt:0,}" Aug 13 01:34:59.931746 systemd-networkd[1469]: calia48bfa5a08d: Link UP Aug 13 01:34:59.932578 systemd-networkd[1469]: calia48bfa5a08d: Gained carrier Aug 13 01:34:59.955923 containerd[1544]: 2025-08-13 01:34:59.852 [INFO][5037] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--27--175-k8s-coredns--7c65d6cfc9--mx5v9-eth0 coredns-7c65d6cfc9- kube-system 9cb65184-4613-43ed-9fa1-0cf23f1e0e56 848 0 2025-08-13 01:32:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-234-27-175 coredns-7c65d6cfc9-mx5v9 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia48bfa5a08d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="a2a30ed72e9c1ecce248e42b3138fcf4b781b03eaf3c0351693f9aedd14fd128" 
Namespace="kube-system" Pod="coredns-7c65d6cfc9-mx5v9" WorkloadEndpoint="172--234--27--175-k8s-coredns--7c65d6cfc9--mx5v9-" Aug 13 01:34:59.955923 containerd[1544]: 2025-08-13 01:34:59.853 [INFO][5037] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a2a30ed72e9c1ecce248e42b3138fcf4b781b03eaf3c0351693f9aedd14fd128" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mx5v9" WorkloadEndpoint="172--234--27--175-k8s-coredns--7c65d6cfc9--mx5v9-eth0" Aug 13 01:34:59.955923 containerd[1544]: 2025-08-13 01:34:59.887 [INFO][5048] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a2a30ed72e9c1ecce248e42b3138fcf4b781b03eaf3c0351693f9aedd14fd128" HandleID="k8s-pod-network.a2a30ed72e9c1ecce248e42b3138fcf4b781b03eaf3c0351693f9aedd14fd128" Workload="172--234--27--175-k8s-coredns--7c65d6cfc9--mx5v9-eth0" Aug 13 01:34:59.956274 containerd[1544]: 2025-08-13 01:34:59.887 [INFO][5048] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="a2a30ed72e9c1ecce248e42b3138fcf4b781b03eaf3c0351693f9aedd14fd128" HandleID="k8s-pod-network.a2a30ed72e9c1ecce248e42b3138fcf4b781b03eaf3c0351693f9aedd14fd128" Workload="172--234--27--175-k8s-coredns--7c65d6cfc9--mx5v9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f640), Attrs:map[string]string{"namespace":"kube-system", "node":"172-234-27-175", "pod":"coredns-7c65d6cfc9-mx5v9", "timestamp":"2025-08-13 01:34:59.887181557 +0000 UTC"}, Hostname:"172-234-27-175", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:34:59.956274 containerd[1544]: 2025-08-13 01:34:59.887 [INFO][5048] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:34:59.956274 containerd[1544]: 2025-08-13 01:34:59.887 [INFO][5048] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 01:34:59.956274 containerd[1544]: 2025-08-13 01:34:59.887 [INFO][5048] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-27-175' Aug 13 01:34:59.956274 containerd[1544]: 2025-08-13 01:34:59.893 [INFO][5048] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a2a30ed72e9c1ecce248e42b3138fcf4b781b03eaf3c0351693f9aedd14fd128" host="172-234-27-175" Aug 13 01:34:59.956274 containerd[1544]: 2025-08-13 01:34:59.904 [INFO][5048] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-27-175" Aug 13 01:34:59.956274 containerd[1544]: 2025-08-13 01:34:59.908 [INFO][5048] ipam/ipam.go 511: Trying affinity for 192.168.59.0/26 host="172-234-27-175" Aug 13 01:34:59.956274 containerd[1544]: 2025-08-13 01:34:59.910 [INFO][5048] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.0/26 host="172-234-27-175" Aug 13 01:34:59.956274 containerd[1544]: 2025-08-13 01:34:59.913 [INFO][5048] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.0/26 host="172-234-27-175" Aug 13 01:34:59.956274 containerd[1544]: 2025-08-13 01:34:59.913 [INFO][5048] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.59.0/26 handle="k8s-pod-network.a2a30ed72e9c1ecce248e42b3138fcf4b781b03eaf3c0351693f9aedd14fd128" host="172-234-27-175" Aug 13 01:34:59.958100 containerd[1544]: 2025-08-13 01:34:59.914 [INFO][5048] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.a2a30ed72e9c1ecce248e42b3138fcf4b781b03eaf3c0351693f9aedd14fd128 Aug 13 01:34:59.958100 containerd[1544]: 2025-08-13 01:34:59.918 [INFO][5048] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.59.0/26 handle="k8s-pod-network.a2a30ed72e9c1ecce248e42b3138fcf4b781b03eaf3c0351693f9aedd14fd128" host="172-234-27-175" Aug 13 01:34:59.958100 containerd[1544]: 2025-08-13 01:34:59.922 [INFO][5048] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.59.1/26] block=192.168.59.0/26 
handle="k8s-pod-network.a2a30ed72e9c1ecce248e42b3138fcf4b781b03eaf3c0351693f9aedd14fd128" host="172-234-27-175" Aug 13 01:34:59.958100 containerd[1544]: 2025-08-13 01:34:59.922 [INFO][5048] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.1/26] handle="k8s-pod-network.a2a30ed72e9c1ecce248e42b3138fcf4b781b03eaf3c0351693f9aedd14fd128" host="172-234-27-175" Aug 13 01:34:59.958100 containerd[1544]: 2025-08-13 01:34:59.922 [INFO][5048] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:34:59.958100 containerd[1544]: 2025-08-13 01:34:59.922 [INFO][5048] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.1/26] IPv6=[] ContainerID="a2a30ed72e9c1ecce248e42b3138fcf4b781b03eaf3c0351693f9aedd14fd128" HandleID="k8s-pod-network.a2a30ed72e9c1ecce248e42b3138fcf4b781b03eaf3c0351693f9aedd14fd128" Workload="172--234--27--175-k8s-coredns--7c65d6cfc9--mx5v9-eth0" Aug 13 01:34:59.958273 containerd[1544]: 2025-08-13 01:34:59.927 [INFO][5037] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a2a30ed72e9c1ecce248e42b3138fcf4b781b03eaf3c0351693f9aedd14fd128" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mx5v9" WorkloadEndpoint="172--234--27--175-k8s-coredns--7c65d6cfc9--mx5v9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--27--175-k8s-coredns--7c65d6cfc9--mx5v9-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"9cb65184-4613-43ed-9fa1-0cf23f1e0e56", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 32, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-27-175", ContainerID:"", Pod:"coredns-7c65d6cfc9-mx5v9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia48bfa5a08d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:34:59.958273 containerd[1544]: 2025-08-13 01:34:59.927 [INFO][5037] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.1/32] ContainerID="a2a30ed72e9c1ecce248e42b3138fcf4b781b03eaf3c0351693f9aedd14fd128" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mx5v9" WorkloadEndpoint="172--234--27--175-k8s-coredns--7c65d6cfc9--mx5v9-eth0" Aug 13 01:34:59.958273 containerd[1544]: 2025-08-13 01:34:59.927 [INFO][5037] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia48bfa5a08d ContainerID="a2a30ed72e9c1ecce248e42b3138fcf4b781b03eaf3c0351693f9aedd14fd128" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mx5v9" WorkloadEndpoint="172--234--27--175-k8s-coredns--7c65d6cfc9--mx5v9-eth0" Aug 13 01:34:59.958273 containerd[1544]: 2025-08-13 01:34:59.933 [INFO][5037] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a2a30ed72e9c1ecce248e42b3138fcf4b781b03eaf3c0351693f9aedd14fd128" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-mx5v9" WorkloadEndpoint="172--234--27--175-k8s-coredns--7c65d6cfc9--mx5v9-eth0" Aug 13 01:34:59.958273 containerd[1544]: 2025-08-13 01:34:59.933 [INFO][5037] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a2a30ed72e9c1ecce248e42b3138fcf4b781b03eaf3c0351693f9aedd14fd128" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mx5v9" WorkloadEndpoint="172--234--27--175-k8s-coredns--7c65d6cfc9--mx5v9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--27--175-k8s-coredns--7c65d6cfc9--mx5v9-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"9cb65184-4613-43ed-9fa1-0cf23f1e0e56", ResourceVersion:"848", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 32, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-27-175", ContainerID:"a2a30ed72e9c1ecce248e42b3138fcf4b781b03eaf3c0351693f9aedd14fd128", Pod:"coredns-7c65d6cfc9-mx5v9", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia48bfa5a08d", MAC:"92:63:26:51:6a:fa", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:34:59.958273 containerd[1544]: 2025-08-13 01:34:59.945 [INFO][5037] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a2a30ed72e9c1ecce248e42b3138fcf4b781b03eaf3c0351693f9aedd14fd128" Namespace="kube-system" Pod="coredns-7c65d6cfc9-mx5v9" WorkloadEndpoint="172--234--27--175-k8s-coredns--7c65d6cfc9--mx5v9-eth0" Aug 13 01:35:00.003826 containerd[1544]: time="2025-08-13T01:35:00.003759504Z" level=info msg="connecting to shim a2a30ed72e9c1ecce248e42b3138fcf4b781b03eaf3c0351693f9aedd14fd128" address="unix:///run/containerd/s/f78de32d7c533eaaa801d89dfeee9fdc1b4f81448d1dec3ca8fefe7fd1933b90" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:35:00.010367 systemd[1]: Started sshd@9-172.234.27.175:22-147.75.109.163:32872.service - OpenSSH per-connection server daemon (147.75.109.163:32872). Aug 13 01:35:00.067264 systemd[1]: Started cri-containerd-a2a30ed72e9c1ecce248e42b3138fcf4b781b03eaf3c0351693f9aedd14fd128.scope - libcontainer container a2a30ed72e9c1ecce248e42b3138fcf4b781b03eaf3c0351693f9aedd14fd128. 
Aug 13 01:35:00.151511 containerd[1544]: time="2025-08-13T01:35:00.150440602Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-mx5v9,Uid:9cb65184-4613-43ed-9fa1-0cf23f1e0e56,Namespace:kube-system,Attempt:0,} returns sandbox id \"a2a30ed72e9c1ecce248e42b3138fcf4b781b03eaf3c0351693f9aedd14fd128\"" Aug 13 01:35:00.154389 kubelet[2739]: E0813 01:35:00.154370 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 01:35:00.157474 containerd[1544]: time="2025-08-13T01:35:00.157194222Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 01:35:00.359643 sshd[5086]: Accepted publickey for core from 147.75.109.163 port 32872 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:35:00.360960 sshd-session[5086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:35:00.366265 systemd-logind[1529]: New session 8 of user core. Aug 13 01:35:00.369262 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 13 01:35:00.674062 sshd[5115]: Connection closed by 147.75.109.163 port 32872 Aug 13 01:35:00.674389 sshd-session[5086]: pam_unix(sshd:session): session closed for user core Aug 13 01:35:00.680160 systemd[1]: sshd@9-172.234.27.175:22-147.75.109.163:32872.service: Deactivated successfully. Aug 13 01:35:00.683040 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 01:35:00.684525 systemd-logind[1529]: Session 8 logged out. Waiting for processes to exit. Aug 13 01:35:00.686346 systemd-logind[1529]: Removed session 8. 
Aug 13 01:35:00.805813 kubelet[2739]: E0813 01:35:00.805592 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 01:35:00.816174 containerd[1544]: time="2025-08-13T01:35:00.816114628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-994jv,Uid:3bd5a2e2-42ee-4e27-a412-724b2f0527b4,Namespace:kube-system,Attempt:0,}" Aug 13 01:35:00.968327 systemd-networkd[1469]: calib1ec8eb995f: Link UP Aug 13 01:35:00.969281 systemd-networkd[1469]: calib1ec8eb995f: Gained carrier Aug 13 01:35:00.978392 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1032505890.mount: Deactivated successfully. Aug 13 01:35:01.010612 containerd[1544]: 2025-08-13 01:35:00.868 [INFO][5127] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--27--175-k8s-coredns--7c65d6cfc9--994jv-eth0 coredns-7c65d6cfc9- kube-system 3bd5a2e2-42ee-4e27-a412-724b2f0527b4 835 0 2025-08-13 01:32:48 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-234-27-175 coredns-7c65d6cfc9-994jv eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calib1ec8eb995f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="8a395ea29f4baf52a63f04b7000b7d6f4a1a4dd2e7727b3b2a8140ec4de72b75" Namespace="kube-system" Pod="coredns-7c65d6cfc9-994jv" WorkloadEndpoint="172--234--27--175-k8s-coredns--7c65d6cfc9--994jv-" Aug 13 01:35:01.010612 containerd[1544]: 2025-08-13 01:35:00.868 [INFO][5127] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8a395ea29f4baf52a63f04b7000b7d6f4a1a4dd2e7727b3b2a8140ec4de72b75" Namespace="kube-system" Pod="coredns-7c65d6cfc9-994jv" 
WorkloadEndpoint="172--234--27--175-k8s-coredns--7c65d6cfc9--994jv-eth0" Aug 13 01:35:01.010612 containerd[1544]: 2025-08-13 01:35:00.910 [INFO][5140] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8a395ea29f4baf52a63f04b7000b7d6f4a1a4dd2e7727b3b2a8140ec4de72b75" HandleID="k8s-pod-network.8a395ea29f4baf52a63f04b7000b7d6f4a1a4dd2e7727b3b2a8140ec4de72b75" Workload="172--234--27--175-k8s-coredns--7c65d6cfc9--994jv-eth0" Aug 13 01:35:01.010612 containerd[1544]: 2025-08-13 01:35:00.911 [INFO][5140] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="8a395ea29f4baf52a63f04b7000b7d6f4a1a4dd2e7727b3b2a8140ec4de72b75" HandleID="k8s-pod-network.8a395ea29f4baf52a63f04b7000b7d6f4a1a4dd2e7727b3b2a8140ec4de72b75" Workload="172--234--27--175-k8s-coredns--7c65d6cfc9--994jv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f610), Attrs:map[string]string{"namespace":"kube-system", "node":"172-234-27-175", "pod":"coredns-7c65d6cfc9-994jv", "timestamp":"2025-08-13 01:35:00.910656493 +0000 UTC"}, Hostname:"172-234-27-175", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:35:01.010612 containerd[1544]: 2025-08-13 01:35:00.911 [INFO][5140] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:35:01.010612 containerd[1544]: 2025-08-13 01:35:00.911 [INFO][5140] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 01:35:01.010612 containerd[1544]: 2025-08-13 01:35:00.911 [INFO][5140] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-27-175' Aug 13 01:35:01.010612 containerd[1544]: 2025-08-13 01:35:00.917 [INFO][5140] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8a395ea29f4baf52a63f04b7000b7d6f4a1a4dd2e7727b3b2a8140ec4de72b75" host="172-234-27-175" Aug 13 01:35:01.010612 containerd[1544]: 2025-08-13 01:35:00.924 [INFO][5140] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-27-175" Aug 13 01:35:01.010612 containerd[1544]: 2025-08-13 01:35:00.928 [INFO][5140] ipam/ipam.go 511: Trying affinity for 192.168.59.0/26 host="172-234-27-175" Aug 13 01:35:01.010612 containerd[1544]: 2025-08-13 01:35:00.930 [INFO][5140] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.0/26 host="172-234-27-175" Aug 13 01:35:01.010612 containerd[1544]: 2025-08-13 01:35:00.932 [INFO][5140] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.0/26 host="172-234-27-175" Aug 13 01:35:01.010612 containerd[1544]: 2025-08-13 01:35:00.932 [INFO][5140] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.59.0/26 handle="k8s-pod-network.8a395ea29f4baf52a63f04b7000b7d6f4a1a4dd2e7727b3b2a8140ec4de72b75" host="172-234-27-175" Aug 13 01:35:01.010612 containerd[1544]: 2025-08-13 01:35:00.933 [INFO][5140] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.8a395ea29f4baf52a63f04b7000b7d6f4a1a4dd2e7727b3b2a8140ec4de72b75 Aug 13 01:35:01.010612 containerd[1544]: 2025-08-13 01:35:00.938 [INFO][5140] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.59.0/26 handle="k8s-pod-network.8a395ea29f4baf52a63f04b7000b7d6f4a1a4dd2e7727b3b2a8140ec4de72b75" host="172-234-27-175" Aug 13 01:35:01.010612 containerd[1544]: 2025-08-13 01:35:00.950 [INFO][5140] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.59.2/26] block=192.168.59.0/26 
handle="k8s-pod-network.8a395ea29f4baf52a63f04b7000b7d6f4a1a4dd2e7727b3b2a8140ec4de72b75" host="172-234-27-175" Aug 13 01:35:01.010612 containerd[1544]: 2025-08-13 01:35:00.950 [INFO][5140] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.2/26] handle="k8s-pod-network.8a395ea29f4baf52a63f04b7000b7d6f4a1a4dd2e7727b3b2a8140ec4de72b75" host="172-234-27-175" Aug 13 01:35:01.010612 containerd[1544]: 2025-08-13 01:35:00.950 [INFO][5140] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:35:01.010612 containerd[1544]: 2025-08-13 01:35:00.951 [INFO][5140] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.2/26] IPv6=[] ContainerID="8a395ea29f4baf52a63f04b7000b7d6f4a1a4dd2e7727b3b2a8140ec4de72b75" HandleID="k8s-pod-network.8a395ea29f4baf52a63f04b7000b7d6f4a1a4dd2e7727b3b2a8140ec4de72b75" Workload="172--234--27--175-k8s-coredns--7c65d6cfc9--994jv-eth0" Aug 13 01:35:01.012775 containerd[1544]: 2025-08-13 01:35:00.958 [INFO][5127] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8a395ea29f4baf52a63f04b7000b7d6f4a1a4dd2e7727b3b2a8140ec4de72b75" Namespace="kube-system" Pod="coredns-7c65d6cfc9-994jv" WorkloadEndpoint="172--234--27--175-k8s-coredns--7c65d6cfc9--994jv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--27--175-k8s-coredns--7c65d6cfc9--994jv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3bd5a2e2-42ee-4e27-a412-724b2f0527b4", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 32, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-27-175", ContainerID:"", Pod:"coredns-7c65d6cfc9-994jv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib1ec8eb995f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:35:01.012775 containerd[1544]: 2025-08-13 01:35:00.959 [INFO][5127] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.2/32] ContainerID="8a395ea29f4baf52a63f04b7000b7d6f4a1a4dd2e7727b3b2a8140ec4de72b75" Namespace="kube-system" Pod="coredns-7c65d6cfc9-994jv" WorkloadEndpoint="172--234--27--175-k8s-coredns--7c65d6cfc9--994jv-eth0" Aug 13 01:35:01.012775 containerd[1544]: 2025-08-13 01:35:00.960 [INFO][5127] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib1ec8eb995f ContainerID="8a395ea29f4baf52a63f04b7000b7d6f4a1a4dd2e7727b3b2a8140ec4de72b75" Namespace="kube-system" Pod="coredns-7c65d6cfc9-994jv" WorkloadEndpoint="172--234--27--175-k8s-coredns--7c65d6cfc9--994jv-eth0" Aug 13 01:35:01.012775 containerd[1544]: 2025-08-13 01:35:00.971 [INFO][5127] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8a395ea29f4baf52a63f04b7000b7d6f4a1a4dd2e7727b3b2a8140ec4de72b75" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-994jv" WorkloadEndpoint="172--234--27--175-k8s-coredns--7c65d6cfc9--994jv-eth0" Aug 13 01:35:01.012775 containerd[1544]: 2025-08-13 01:35:00.971 [INFO][5127] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8a395ea29f4baf52a63f04b7000b7d6f4a1a4dd2e7727b3b2a8140ec4de72b75" Namespace="kube-system" Pod="coredns-7c65d6cfc9-994jv" WorkloadEndpoint="172--234--27--175-k8s-coredns--7c65d6cfc9--994jv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--27--175-k8s-coredns--7c65d6cfc9--994jv-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"3bd5a2e2-42ee-4e27-a412-724b2f0527b4", ResourceVersion:"835", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 32, 48, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-27-175", ContainerID:"8a395ea29f4baf52a63f04b7000b7d6f4a1a4dd2e7727b3b2a8140ec4de72b75", Pod:"coredns-7c65d6cfc9-994jv", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.59.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calib1ec8eb995f", MAC:"7a:c4:08:6e:7f:51", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:35:01.012775 containerd[1544]: 2025-08-13 01:35:00.991 [INFO][5127] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8a395ea29f4baf52a63f04b7000b7d6f4a1a4dd2e7727b3b2a8140ec4de72b75" Namespace="kube-system" Pod="coredns-7c65d6cfc9-994jv" WorkloadEndpoint="172--234--27--175-k8s-coredns--7c65d6cfc9--994jv-eth0" Aug 13 01:35:01.059401 containerd[1544]: time="2025-08-13T01:35:01.059344263Z" level=info msg="connecting to shim 8a395ea29f4baf52a63f04b7000b7d6f4a1a4dd2e7727b3b2a8140ec4de72b75" address="unix:///run/containerd/s/19c733b317febda3e9979ccd143b7daf076f724d1fe422bc998026f0cf832504" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:35:01.111898 systemd[1]: Started cri-containerd-8a395ea29f4baf52a63f04b7000b7d6f4a1a4dd2e7727b3b2a8140ec4de72b75.scope - libcontainer container 8a395ea29f4baf52a63f04b7000b7d6f4a1a4dd2e7727b3b2a8140ec4de72b75. 
Aug 13 01:35:01.193000 containerd[1544]: time="2025-08-13T01:35:01.192971991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-994jv,Uid:3bd5a2e2-42ee-4e27-a412-724b2f0527b4,Namespace:kube-system,Attempt:0,} returns sandbox id \"8a395ea29f4baf52a63f04b7000b7d6f4a1a4dd2e7727b3b2a8140ec4de72b75\"" Aug 13 01:35:01.194163 kubelet[2739]: E0813 01:35:01.194049 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 01:35:01.202729 systemd-networkd[1469]: calia48bfa5a08d: Gained IPv6LL Aug 13 01:35:01.861257 containerd[1544]: time="2025-08-13T01:35:01.861214980Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:35:01.862378 containerd[1544]: time="2025-08-13T01:35:01.862278851Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Aug 13 01:35:01.863045 containerd[1544]: time="2025-08-13T01:35:01.863021593Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:35:01.865232 containerd[1544]: time="2025-08-13T01:35:01.865111445Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:35:01.866075 containerd[1544]: time="2025-08-13T01:35:01.866053526Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest 
\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.708825933s" Aug 13 01:35:01.866171 containerd[1544]: time="2025-08-13T01:35:01.866152106Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 01:35:01.868255 containerd[1544]: time="2025-08-13T01:35:01.868230879Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 01:35:01.869537 containerd[1544]: time="2025-08-13T01:35:01.868800811Z" level=info msg="CreateContainer within sandbox \"a2a30ed72e9c1ecce248e42b3138fcf4b781b03eaf3c0351693f9aedd14fd128\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 01:35:01.878428 containerd[1544]: time="2025-08-13T01:35:01.878406534Z" level=info msg="Container a6fa54ac3e80688e6778b67e4385525cd1c5b7c5dea3f08ab6aa3b055bd25832: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:35:01.886901 containerd[1544]: time="2025-08-13T01:35:01.886865436Z" level=info msg="CreateContainer within sandbox \"a2a30ed72e9c1ecce248e42b3138fcf4b781b03eaf3c0351693f9aedd14fd128\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a6fa54ac3e80688e6778b67e4385525cd1c5b7c5dea3f08ab6aa3b055bd25832\"" Aug 13 01:35:01.887481 containerd[1544]: time="2025-08-13T01:35:01.887414947Z" level=info msg="StartContainer for \"a6fa54ac3e80688e6778b67e4385525cd1c5b7c5dea3f08ab6aa3b055bd25832\"" Aug 13 01:35:01.888426 containerd[1544]: time="2025-08-13T01:35:01.888404449Z" level=info msg="connecting to shim a6fa54ac3e80688e6778b67e4385525cd1c5b7c5dea3f08ab6aa3b055bd25832" address="unix:///run/containerd/s/f78de32d7c533eaaa801d89dfeee9fdc1b4f81448d1dec3ca8fefe7fd1933b90" protocol=ttrpc version=3 Aug 13 01:35:01.911261 systemd[1]: Started cri-containerd-a6fa54ac3e80688e6778b67e4385525cd1c5b7c5dea3f08ab6aa3b055bd25832.scope - libcontainer container 
a6fa54ac3e80688e6778b67e4385525cd1c5b7c5dea3f08ab6aa3b055bd25832. Aug 13 01:35:01.940938 containerd[1544]: time="2025-08-13T01:35:01.940861202Z" level=info msg="StartContainer for \"a6fa54ac3e80688e6778b67e4385525cd1c5b7c5dea3f08ab6aa3b055bd25832\" returns successfully" Aug 13 01:35:02.092594 containerd[1544]: time="2025-08-13T01:35:02.092537754Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:35:02.093296 containerd[1544]: time="2025-08-13T01:35:02.093268035Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=0" Aug 13 01:35:02.095530 containerd[1544]: time="2025-08-13T01:35:02.095505128Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 227.245019ms" Aug 13 01:35:02.095578 containerd[1544]: time="2025-08-13T01:35:02.095537088Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 01:35:02.097981 containerd[1544]: time="2025-08-13T01:35:02.097949221Z" level=info msg="CreateContainer within sandbox \"8a395ea29f4baf52a63f04b7000b7d6f4a1a4dd2e7727b3b2a8140ec4de72b75\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 01:35:02.103148 containerd[1544]: time="2025-08-13T01:35:02.102972518Z" level=info msg="Container 954a8b2cdf1181bd89213e34fe9c9a71c256968b33cf65956d6504889285f6e1: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:35:02.118949 containerd[1544]: time="2025-08-13T01:35:02.118842461Z" level=info msg="CreateContainer within sandbox 
\"8a395ea29f4baf52a63f04b7000b7d6f4a1a4dd2e7727b3b2a8140ec4de72b75\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"954a8b2cdf1181bd89213e34fe9c9a71c256968b33cf65956d6504889285f6e1\"" Aug 13 01:35:02.119471 containerd[1544]: time="2025-08-13T01:35:02.119427791Z" level=info msg="StartContainer for \"954a8b2cdf1181bd89213e34fe9c9a71c256968b33cf65956d6504889285f6e1\"" Aug 13 01:35:02.120658 containerd[1544]: time="2025-08-13T01:35:02.120637103Z" level=info msg="connecting to shim 954a8b2cdf1181bd89213e34fe9c9a71c256968b33cf65956d6504889285f6e1" address="unix:///run/containerd/s/19c733b317febda3e9979ccd143b7daf076f724d1fe422bc998026f0cf832504" protocol=ttrpc version=3 Aug 13 01:35:02.142281 systemd[1]: Started cri-containerd-954a8b2cdf1181bd89213e34fe9c9a71c256968b33cf65956d6504889285f6e1.scope - libcontainer container 954a8b2cdf1181bd89213e34fe9c9a71c256968b33cf65956d6504889285f6e1. Aug 13 01:35:02.182031 containerd[1544]: time="2025-08-13T01:35:02.181976728Z" level=info msg="StartContainer for \"954a8b2cdf1181bd89213e34fe9c9a71c256968b33cf65956d6504889285f6e1\" returns successfully" Aug 13 01:35:02.189089 kubelet[2739]: E0813 01:35:02.189059 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 01:35:02.195705 kubelet[2739]: E0813 01:35:02.195670 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 01:35:02.237429 kubelet[2739]: I0813 01:35:02.236731 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-994jv" podStartSLOduration=133.336837383 podStartE2EDuration="2m14.236710965s" podCreationTimestamp="2025-08-13 01:32:48 +0000 UTC" firstStartedPulling="2025-08-13 01:35:01.196348426 +0000 UTC 
m=+140.484665554" lastFinishedPulling="2025-08-13 01:35:02.096222018 +0000 UTC m=+141.384539136" observedRunningTime="2025-08-13 01:35:02.211038169 +0000 UTC m=+141.499355287" watchObservedRunningTime="2025-08-13 01:35:02.236710965 +0000 UTC m=+141.525028093" Aug 13 01:35:02.261791 kubelet[2739]: I0813 01:35:02.261730 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-mx5v9" podStartSLOduration=132.550697951 podStartE2EDuration="2m14.261706789s" podCreationTimestamp="2025-08-13 01:32:48 +0000 UTC" firstStartedPulling="2025-08-13 01:35:00.1561012 +0000 UTC m=+139.444418318" lastFinishedPulling="2025-08-13 01:35:01.867110038 +0000 UTC m=+141.155427156" observedRunningTime="2025-08-13 01:35:02.229292754 +0000 UTC m=+141.517609872" watchObservedRunningTime="2025-08-13 01:35:02.261706789 +0000 UTC m=+141.550023907" Aug 13 01:35:02.737300 systemd-networkd[1469]: calib1ec8eb995f: Gained IPv6LL Aug 13 01:35:02.798432 containerd[1544]: time="2025-08-13T01:35:02.798149234Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7bj49,Uid:4e5845c9-626c-4c83-900a-0da0bae2daed,Namespace:calico-system,Attempt:0,}" Aug 13 01:35:02.908870 systemd-networkd[1469]: cali3b3c9864f41: Link UP Aug 13 01:35:02.910577 systemd-networkd[1469]: cali3b3c9864f41: Gained carrier Aug 13 01:35:02.928393 containerd[1544]: 2025-08-13 01:35:02.832 [INFO][5333] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--27--175-k8s-csi--node--driver--7bj49-eth0 csi-node-driver- calico-system 4e5845c9-626c-4c83-900a-0da0bae2daed 750 0 2025-08-13 01:33:00 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-234-27-175 
csi-node-driver-7bj49 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3b3c9864f41 [] [] }} ContainerID="7b12422a7723eb069af5deac91ffed3e9e152d931f1b6ec9b3e93ae2af0e486b" Namespace="calico-system" Pod="csi-node-driver-7bj49" WorkloadEndpoint="172--234--27--175-k8s-csi--node--driver--7bj49-" Aug 13 01:35:02.928393 containerd[1544]: 2025-08-13 01:35:02.832 [INFO][5333] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7b12422a7723eb069af5deac91ffed3e9e152d931f1b6ec9b3e93ae2af0e486b" Namespace="calico-system" Pod="csi-node-driver-7bj49" WorkloadEndpoint="172--234--27--175-k8s-csi--node--driver--7bj49-eth0" Aug 13 01:35:02.928393 containerd[1544]: 2025-08-13 01:35:02.860 [INFO][5345] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7b12422a7723eb069af5deac91ffed3e9e152d931f1b6ec9b3e93ae2af0e486b" HandleID="k8s-pod-network.7b12422a7723eb069af5deac91ffed3e9e152d931f1b6ec9b3e93ae2af0e486b" Workload="172--234--27--175-k8s-csi--node--driver--7bj49-eth0" Aug 13 01:35:02.928393 containerd[1544]: 2025-08-13 01:35:02.861 [INFO][5345] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7b12422a7723eb069af5deac91ffed3e9e152d931f1b6ec9b3e93ae2af0e486b" HandleID="k8s-pod-network.7b12422a7723eb069af5deac91ffed3e9e152d931f1b6ec9b3e93ae2af0e486b" Workload="172--234--27--175-k8s-csi--node--driver--7bj49-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f870), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-27-175", "pod":"csi-node-driver-7bj49", "timestamp":"2025-08-13 01:35:02.860895951 +0000 UTC"}, Hostname:"172-234-27-175", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:35:02.928393 containerd[1544]: 2025-08-13 01:35:02.861 [INFO][5345] ipam/ipam_plugin.go 353: About to acquire 
host-wide IPAM lock. Aug 13 01:35:02.928393 containerd[1544]: 2025-08-13 01:35:02.861 [INFO][5345] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:35:02.928393 containerd[1544]: 2025-08-13 01:35:02.861 [INFO][5345] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-27-175' Aug 13 01:35:02.928393 containerd[1544]: 2025-08-13 01:35:02.866 [INFO][5345] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7b12422a7723eb069af5deac91ffed3e9e152d931f1b6ec9b3e93ae2af0e486b" host="172-234-27-175" Aug 13 01:35:02.928393 containerd[1544]: 2025-08-13 01:35:02.871 [INFO][5345] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-27-175" Aug 13 01:35:02.928393 containerd[1544]: 2025-08-13 01:35:02.880 [INFO][5345] ipam/ipam.go 511: Trying affinity for 192.168.59.0/26 host="172-234-27-175" Aug 13 01:35:02.928393 containerd[1544]: 2025-08-13 01:35:02.883 [INFO][5345] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.0/26 host="172-234-27-175" Aug 13 01:35:02.928393 containerd[1544]: 2025-08-13 01:35:02.887 [INFO][5345] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.0/26 host="172-234-27-175" Aug 13 01:35:02.928393 containerd[1544]: 2025-08-13 01:35:02.887 [INFO][5345] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.59.0/26 handle="k8s-pod-network.7b12422a7723eb069af5deac91ffed3e9e152d931f1b6ec9b3e93ae2af0e486b" host="172-234-27-175" Aug 13 01:35:02.928393 containerd[1544]: 2025-08-13 01:35:02.890 [INFO][5345] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7b12422a7723eb069af5deac91ffed3e9e152d931f1b6ec9b3e93ae2af0e486b Aug 13 01:35:02.928393 containerd[1544]: 2025-08-13 01:35:02.894 [INFO][5345] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.59.0/26 handle="k8s-pod-network.7b12422a7723eb069af5deac91ffed3e9e152d931f1b6ec9b3e93ae2af0e486b" host="172-234-27-175" Aug 13 01:35:02.928393 
containerd[1544]: 2025-08-13 01:35:02.900 [INFO][5345] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.59.3/26] block=192.168.59.0/26 handle="k8s-pod-network.7b12422a7723eb069af5deac91ffed3e9e152d931f1b6ec9b3e93ae2af0e486b" host="172-234-27-175" Aug 13 01:35:02.928393 containerd[1544]: 2025-08-13 01:35:02.900 [INFO][5345] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.3/26] handle="k8s-pod-network.7b12422a7723eb069af5deac91ffed3e9e152d931f1b6ec9b3e93ae2af0e486b" host="172-234-27-175" Aug 13 01:35:02.928393 containerd[1544]: 2025-08-13 01:35:02.900 [INFO][5345] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:35:02.928393 containerd[1544]: 2025-08-13 01:35:02.900 [INFO][5345] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.3/26] IPv6=[] ContainerID="7b12422a7723eb069af5deac91ffed3e9e152d931f1b6ec9b3e93ae2af0e486b" HandleID="k8s-pod-network.7b12422a7723eb069af5deac91ffed3e9e152d931f1b6ec9b3e93ae2af0e486b" Workload="172--234--27--175-k8s-csi--node--driver--7bj49-eth0" Aug 13 01:35:02.932267 containerd[1544]: 2025-08-13 01:35:02.903 [INFO][5333] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7b12422a7723eb069af5deac91ffed3e9e152d931f1b6ec9b3e93ae2af0e486b" Namespace="calico-system" Pod="csi-node-driver-7bj49" WorkloadEndpoint="172--234--27--175-k8s-csi--node--driver--7bj49-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--27--175-k8s-csi--node--driver--7bj49-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4e5845c9-626c-4c83-900a-0da0bae2daed", ResourceVersion:"750", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 33, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-27-175", ContainerID:"", Pod:"csi-node-driver-7bj49", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.59.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3b3c9864f41", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:35:02.932267 containerd[1544]: 2025-08-13 01:35:02.903 [INFO][5333] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.3/32] ContainerID="7b12422a7723eb069af5deac91ffed3e9e152d931f1b6ec9b3e93ae2af0e486b" Namespace="calico-system" Pod="csi-node-driver-7bj49" WorkloadEndpoint="172--234--27--175-k8s-csi--node--driver--7bj49-eth0" Aug 13 01:35:02.932267 containerd[1544]: 2025-08-13 01:35:02.903 [INFO][5333] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3b3c9864f41 ContainerID="7b12422a7723eb069af5deac91ffed3e9e152d931f1b6ec9b3e93ae2af0e486b" Namespace="calico-system" Pod="csi-node-driver-7bj49" WorkloadEndpoint="172--234--27--175-k8s-csi--node--driver--7bj49-eth0" Aug 13 01:35:02.932267 containerd[1544]: 2025-08-13 01:35:02.910 [INFO][5333] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7b12422a7723eb069af5deac91ffed3e9e152d931f1b6ec9b3e93ae2af0e486b" Namespace="calico-system" Pod="csi-node-driver-7bj49" WorkloadEndpoint="172--234--27--175-k8s-csi--node--driver--7bj49-eth0" Aug 13 01:35:02.932267 containerd[1544]: 2025-08-13 01:35:02.911 [INFO][5333] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7b12422a7723eb069af5deac91ffed3e9e152d931f1b6ec9b3e93ae2af0e486b" Namespace="calico-system" Pod="csi-node-driver-7bj49" WorkloadEndpoint="172--234--27--175-k8s-csi--node--driver--7bj49-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--27--175-k8s-csi--node--driver--7bj49-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4e5845c9-626c-4c83-900a-0da0bae2daed", ResourceVersion:"750", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 33, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-27-175", ContainerID:"7b12422a7723eb069af5deac91ffed3e9e152d931f1b6ec9b3e93ae2af0e486b", Pod:"csi-node-driver-7bj49", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.59.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3b3c9864f41", MAC:"72:3e:81:3b:81:e1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:35:02.932267 containerd[1544]: 2025-08-13 01:35:02.923 [INFO][5333] cni-plugin/k8s.go 532: Wrote updated endpoint to 
datastore ContainerID="7b12422a7723eb069af5deac91ffed3e9e152d931f1b6ec9b3e93ae2af0e486b" Namespace="calico-system" Pod="csi-node-driver-7bj49" WorkloadEndpoint="172--234--27--175-k8s-csi--node--driver--7bj49-eth0" Aug 13 01:35:02.973416 containerd[1544]: time="2025-08-13T01:35:02.973373778Z" level=info msg="connecting to shim 7b12422a7723eb069af5deac91ffed3e9e152d931f1b6ec9b3e93ae2af0e486b" address="unix:///run/containerd/s/d9c9a9ca8186b1d51a4b806a87269fc81532c45a55fc2baf33dab765092a56ac" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:35:03.004283 systemd[1]: Started cri-containerd-7b12422a7723eb069af5deac91ffed3e9e152d931f1b6ec9b3e93ae2af0e486b.scope - libcontainer container 7b12422a7723eb069af5deac91ffed3e9e152d931f1b6ec9b3e93ae2af0e486b. Aug 13 01:35:03.044593 containerd[1544]: time="2025-08-13T01:35:03.044542876Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-7bj49,Uid:4e5845c9-626c-4c83-900a-0da0bae2daed,Namespace:calico-system,Attempt:0,} returns sandbox id \"7b12422a7723eb069af5deac91ffed3e9e152d931f1b6ec9b3e93ae2af0e486b\"" Aug 13 01:35:03.050035 containerd[1544]: time="2025-08-13T01:35:03.049979434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 01:35:03.198692 kubelet[2739]: E0813 01:35:03.198661 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 01:35:03.200230 kubelet[2739]: E0813 01:35:03.199359 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 01:35:03.841148 containerd[1544]: time="2025-08-13T01:35:03.841089202Z" level=error msg="failed to cleanup \"extract-638526923-JPnD sha256:b9ef7dd6a35219f4a228033a2203729df0524b8a0f5d5fa60bc737b8afda552d\"" error="write 
/var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db: no space left on device" Aug 13 01:35:03.841714 containerd[1544]: time="2025-08-13T01:35:03.841679354Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/9d264e5a926b6ebde73e8aae5e882ad1826cb0e6776be9851ad98dd74dac7412/data: no space left on device" Aug 13 01:35:03.841798 containerd[1544]: time="2025-08-13T01:35:03.841759004Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=2101173" Aug 13 01:35:03.841955 kubelet[2739]: E0813 01:35:03.841918 2739 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/9d264e5a926b6ebde73e8aae5e882ad1826cb0e6776be9851ad98dd74dac7412/data: no space left on device" image="ghcr.io/flatcar/calico/csi:v3.30.2" Aug 13 01:35:03.842021 kubelet[2739]: E0813 01:35:03.841968 2739 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/9d264e5a926b6ebde73e8aae5e882ad1826cb0e6776be9851ad98dd74dac7412/data: no space left on device" image="ghcr.io/flatcar/calico/csi:v3.30.2" Aug 13 01:35:03.842126 kubelet[2739]: E0813 01:35:03.842086 2739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.2,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9vtn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7bj49_calico-system(4e5845c9-626c-4c83-900a-0da0bae2daed): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.2\": failed to copy: write 
/var/lib/containerd/io.containerd.content.v1.content/ingest/9d264e5a926b6ebde73e8aae5e882ad1826cb0e6776be9851ad98dd74dac7412/data: no space left on device" logger="UnhandledError" Aug 13 01:35:03.845011 containerd[1544]: time="2025-08-13T01:35:03.844949318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 01:35:04.201501 kubelet[2739]: E0813 01:35:04.200524 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 01:35:04.201501 kubelet[2739]: E0813 01:35:04.200524 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 01:35:04.273976 systemd-networkd[1469]: cali3b3c9864f41: Gained IPv6LL Aug 13 01:35:04.579435 containerd[1544]: time="2025-08-13T01:35:04.579370321Z" level=error msg="failed to cleanup \"extract-454150121-NbXn sha256:a6200c63e2a03c9e19bca689383dae051e67c8fbd246c7e3961b6330b68b8256\"" error="write /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db: no space left on device" Aug 13 01:35:04.580116 containerd[1544]: time="2025-08-13T01:35:04.580062242Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/ae6bd1a6716632c7ab980fa864a57306719e1a677f8205329efeaf092032e0a4/data: no space left on device" Aug 13 01:35:04.580116 containerd[1544]: time="2025-08-13T01:35:04.580089452Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=2101556" Aug 13 01:35:04.580469 kubelet[2739]: E0813 01:35:04.580423 2739 log.go:32] 
"PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/ae6bd1a6716632c7ab980fa864a57306719e1a677f8205329efeaf092032e0a4/data: no space left on device" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2" Aug 13 01:35:04.580543 kubelet[2739]: E0813 01:35:04.580491 2739 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/ae6bd1a6716632c7ab980fa864a57306719e1a677f8205329efeaf092032e0a4/data: no space left on device" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2" Aug 13 01:35:04.580673 kubelet[2739]: E0813 01:35:04.580638 2739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9vtn6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-7bj49_calico-system(4e5845c9-626c-4c83-900a-0da0bae2daed): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\": failed to copy: write 
/var/lib/containerd/io.containerd.content.v1.content/ingest/ae6bd1a6716632c7ab980fa864a57306719e1a677f8205329efeaf092032e0a4/data: no space left on device" logger="UnhandledError" Aug 13 01:35:04.582017 kubelet[2739]: E0813 01:35:04.581975 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/9d264e5a926b6ebde73e8aae5e882ad1826cb0e6776be9851ad98dd74dac7412/data: no space left on device\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/ae6bd1a6716632c7ab980fa864a57306719e1a677f8205329efeaf092032e0a4/data: no space left on device\"]" pod="calico-system/csi-node-driver-7bj49" podUID="4e5845c9-626c-4c83-900a-0da0bae2daed" Aug 13 01:35:04.798870 containerd[1544]: time="2025-08-13T01:35:04.798716780Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85fbc76f96-d5vf4,Uid:56caad57-6b4a-4069-b011-1059db183012,Namespace:calico-system,Attempt:0,}" Aug 13 01:35:04.897225 systemd-networkd[1469]: cali8f33573a0c9: Link UP Aug 13 01:35:04.899222 systemd-networkd[1469]: cali8f33573a0c9: Gained carrier Aug 13 01:35:04.922087 containerd[1544]: 2025-08-13 01:35:04.836 [INFO][5413] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--234--27--175-k8s-calico--kube--controllers--85fbc76f96--d5vf4-eth0 calico-kube-controllers-85fbc76f96- calico-system 56caad57-6b4a-4069-b011-1059db183012 845 0 2025-08-13 01:33:00 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:85fbc76f96 
projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-234-27-175 calico-kube-controllers-85fbc76f96-d5vf4 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali8f33573a0c9 [] [] }} ContainerID="1588c8d679d0ef4d78d8d23e0525a7b8d42032a07339b513f5ca23db5ec0eaa3" Namespace="calico-system" Pod="calico-kube-controllers-85fbc76f96-d5vf4" WorkloadEndpoint="172--234--27--175-k8s-calico--kube--controllers--85fbc76f96--d5vf4-" Aug 13 01:35:04.922087 containerd[1544]: 2025-08-13 01:35:04.837 [INFO][5413] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1588c8d679d0ef4d78d8d23e0525a7b8d42032a07339b513f5ca23db5ec0eaa3" Namespace="calico-system" Pod="calico-kube-controllers-85fbc76f96-d5vf4" WorkloadEndpoint="172--234--27--175-k8s-calico--kube--controllers--85fbc76f96--d5vf4-eth0" Aug 13 01:35:04.922087 containerd[1544]: 2025-08-13 01:35:04.860 [INFO][5421] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1588c8d679d0ef4d78d8d23e0525a7b8d42032a07339b513f5ca23db5ec0eaa3" HandleID="k8s-pod-network.1588c8d679d0ef4d78d8d23e0525a7b8d42032a07339b513f5ca23db5ec0eaa3" Workload="172--234--27--175-k8s-calico--kube--controllers--85fbc76f96--d5vf4-eth0" Aug 13 01:35:04.922087 containerd[1544]: 2025-08-13 01:35:04.860 [INFO][5421] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1588c8d679d0ef4d78d8d23e0525a7b8d42032a07339b513f5ca23db5ec0eaa3" HandleID="k8s-pod-network.1588c8d679d0ef4d78d8d23e0525a7b8d42032a07339b513f5ca23db5ec0eaa3" Workload="172--234--27--175-k8s-calico--kube--controllers--85fbc76f96--d5vf4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f180), Attrs:map[string]string{"namespace":"calico-system", "node":"172-234-27-175", "pod":"calico-kube-controllers-85fbc76f96-d5vf4", "timestamp":"2025-08-13 01:35:04.860433284 +0000 UTC"}, 
Hostname:"172-234-27-175", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:35:04.922087 containerd[1544]: 2025-08-13 01:35:04.860 [INFO][5421] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:35:04.922087 containerd[1544]: 2025-08-13 01:35:04.861 [INFO][5421] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:35:04.922087 containerd[1544]: 2025-08-13 01:35:04.861 [INFO][5421] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-234-27-175' Aug 13 01:35:04.922087 containerd[1544]: 2025-08-13 01:35:04.867 [INFO][5421] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1588c8d679d0ef4d78d8d23e0525a7b8d42032a07339b513f5ca23db5ec0eaa3" host="172-234-27-175" Aug 13 01:35:04.922087 containerd[1544]: 2025-08-13 01:35:04.871 [INFO][5421] ipam/ipam.go 394: Looking up existing affinities for host host="172-234-27-175" Aug 13 01:35:04.922087 containerd[1544]: 2025-08-13 01:35:04.875 [INFO][5421] ipam/ipam.go 511: Trying affinity for 192.168.59.0/26 host="172-234-27-175" Aug 13 01:35:04.922087 containerd[1544]: 2025-08-13 01:35:04.876 [INFO][5421] ipam/ipam.go 158: Attempting to load block cidr=192.168.59.0/26 host="172-234-27-175" Aug 13 01:35:04.922087 containerd[1544]: 2025-08-13 01:35:04.878 [INFO][5421] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.59.0/26 host="172-234-27-175" Aug 13 01:35:04.922087 containerd[1544]: 2025-08-13 01:35:04.878 [INFO][5421] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.59.0/26 handle="k8s-pod-network.1588c8d679d0ef4d78d8d23e0525a7b8d42032a07339b513f5ca23db5ec0eaa3" host="172-234-27-175" Aug 13 01:35:04.922087 containerd[1544]: 2025-08-13 01:35:04.879 [INFO][5421] ipam/ipam.go 1764: Creating new handle: 
k8s-pod-network.1588c8d679d0ef4d78d8d23e0525a7b8d42032a07339b513f5ca23db5ec0eaa3 Aug 13 01:35:04.922087 containerd[1544]: 2025-08-13 01:35:04.883 [INFO][5421] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.59.0/26 handle="k8s-pod-network.1588c8d679d0ef4d78d8d23e0525a7b8d42032a07339b513f5ca23db5ec0eaa3" host="172-234-27-175" Aug 13 01:35:04.922087 containerd[1544]: 2025-08-13 01:35:04.888 [INFO][5421] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.59.4/26] block=192.168.59.0/26 handle="k8s-pod-network.1588c8d679d0ef4d78d8d23e0525a7b8d42032a07339b513f5ca23db5ec0eaa3" host="172-234-27-175" Aug 13 01:35:04.922087 containerd[1544]: 2025-08-13 01:35:04.889 [INFO][5421] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.59.4/26] handle="k8s-pod-network.1588c8d679d0ef4d78d8d23e0525a7b8d42032a07339b513f5ca23db5ec0eaa3" host="172-234-27-175" Aug 13 01:35:04.922087 containerd[1544]: 2025-08-13 01:35:04.889 [INFO][5421] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 01:35:04.922087 containerd[1544]: 2025-08-13 01:35:04.889 [INFO][5421] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.59.4/26] IPv6=[] ContainerID="1588c8d679d0ef4d78d8d23e0525a7b8d42032a07339b513f5ca23db5ec0eaa3" HandleID="k8s-pod-network.1588c8d679d0ef4d78d8d23e0525a7b8d42032a07339b513f5ca23db5ec0eaa3" Workload="172--234--27--175-k8s-calico--kube--controllers--85fbc76f96--d5vf4-eth0" Aug 13 01:35:04.922976 containerd[1544]: 2025-08-13 01:35:04.892 [INFO][5413] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1588c8d679d0ef4d78d8d23e0525a7b8d42032a07339b513f5ca23db5ec0eaa3" Namespace="calico-system" Pod="calico-kube-controllers-85fbc76f96-d5vf4" WorkloadEndpoint="172--234--27--175-k8s-calico--kube--controllers--85fbc76f96--d5vf4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--27--175-k8s-calico--kube--controllers--85fbc76f96--d5vf4-eth0", GenerateName:"calico-kube-controllers-85fbc76f96-", Namespace:"calico-system", SelfLink:"", UID:"56caad57-6b4a-4069-b011-1059db183012", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 33, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85fbc76f96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-27-175", ContainerID:"", Pod:"calico-kube-controllers-85fbc76f96-d5vf4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", 
IPNetworks:[]string{"192.168.59.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8f33573a0c9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:35:04.922976 containerd[1544]: 2025-08-13 01:35:04.892 [INFO][5413] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.59.4/32] ContainerID="1588c8d679d0ef4d78d8d23e0525a7b8d42032a07339b513f5ca23db5ec0eaa3" Namespace="calico-system" Pod="calico-kube-controllers-85fbc76f96-d5vf4" WorkloadEndpoint="172--234--27--175-k8s-calico--kube--controllers--85fbc76f96--d5vf4-eth0" Aug 13 01:35:04.922976 containerd[1544]: 2025-08-13 01:35:04.892 [INFO][5413] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8f33573a0c9 ContainerID="1588c8d679d0ef4d78d8d23e0525a7b8d42032a07339b513f5ca23db5ec0eaa3" Namespace="calico-system" Pod="calico-kube-controllers-85fbc76f96-d5vf4" WorkloadEndpoint="172--234--27--175-k8s-calico--kube--controllers--85fbc76f96--d5vf4-eth0" Aug 13 01:35:04.922976 containerd[1544]: 2025-08-13 01:35:04.900 [INFO][5413] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1588c8d679d0ef4d78d8d23e0525a7b8d42032a07339b513f5ca23db5ec0eaa3" Namespace="calico-system" Pod="calico-kube-controllers-85fbc76f96-d5vf4" WorkloadEndpoint="172--234--27--175-k8s-calico--kube--controllers--85fbc76f96--d5vf4-eth0" Aug 13 01:35:04.922976 containerd[1544]: 2025-08-13 01:35:04.900 [INFO][5413] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1588c8d679d0ef4d78d8d23e0525a7b8d42032a07339b513f5ca23db5ec0eaa3" Namespace="calico-system" Pod="calico-kube-controllers-85fbc76f96-d5vf4" WorkloadEndpoint="172--234--27--175-k8s-calico--kube--controllers--85fbc76f96--d5vf4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--234--27--175-k8s-calico--kube--controllers--85fbc76f96--d5vf4-eth0", GenerateName:"calico-kube-controllers-85fbc76f96-", Namespace:"calico-system", SelfLink:"", UID:"56caad57-6b4a-4069-b011-1059db183012", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 33, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"85fbc76f96", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-234-27-175", ContainerID:"1588c8d679d0ef4d78d8d23e0525a7b8d42032a07339b513f5ca23db5ec0eaa3", Pod:"calico-kube-controllers-85fbc76f96-d5vf4", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.59.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali8f33573a0c9", MAC:"c2:c8:7c:7c:22:54", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:35:04.922976 containerd[1544]: 2025-08-13 01:35:04.912 [INFO][5413] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1588c8d679d0ef4d78d8d23e0525a7b8d42032a07339b513f5ca23db5ec0eaa3" Namespace="calico-system" Pod="calico-kube-controllers-85fbc76f96-d5vf4" WorkloadEndpoint="172--234--27--175-k8s-calico--kube--controllers--85fbc76f96--d5vf4-eth0" Aug 13 01:35:04.960450 containerd[1544]: time="2025-08-13T01:35:04.960383560Z" 
level=info msg="connecting to shim 1588c8d679d0ef4d78d8d23e0525a7b8d42032a07339b513f5ca23db5ec0eaa3" address="unix:///run/containerd/s/52fbc73ab4004d99e96729b3e840b49340ea1d6f449bb5e3cca8dc6584e254f4" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:35:04.983259 systemd[1]: Started cri-containerd-1588c8d679d0ef4d78d8d23e0525a7b8d42032a07339b513f5ca23db5ec0eaa3.scope - libcontainer container 1588c8d679d0ef4d78d8d23e0525a7b8d42032a07339b513f5ca23db5ec0eaa3. Aug 13 01:35:05.029742 containerd[1544]: time="2025-08-13T01:35:05.029712084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-85fbc76f96-d5vf4,Uid:56caad57-6b4a-4069-b011-1059db183012,Namespace:calico-system,Attempt:0,} returns sandbox id \"1588c8d679d0ef4d78d8d23e0525a7b8d42032a07339b513f5ca23db5ec0eaa3\"" Aug 13 01:35:05.031575 containerd[1544]: time="2025-08-13T01:35:05.031557677Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 01:35:05.207463 kubelet[2739]: E0813 01:35:05.206766 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.2\\\"\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\\\"\"]" pod="calico-system/csi-node-driver-7bj49" podUID="4e5845c9-626c-4c83-900a-0da0bae2daed" Aug 13 01:35:05.740392 systemd[1]: Started sshd@10-172.234.27.175:22-147.75.109.163:32874.service - OpenSSH per-connection server daemon (147.75.109.163:32874). 
Aug 13 01:35:05.920449 containerd[1544]: time="2025-08-13T01:35:05.920392345Z" level=error msg="failed to cleanup \"extract-718732353-ezyX sha256:8c887db5e1c1509bbe47d7287572f140b60a8c0adc0202d6183f3ae0c5f0b532\"" error="write /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db: no space left on device" Aug 13 01:35:05.921385 containerd[1544]: time="2025-08-13T01:35:05.921247517Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" Aug 13 01:35:05.921385 containerd[1544]: time="2025-08-13T01:35:05.921291017Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=2101482" Aug 13 01:35:05.921562 kubelet[2739]: E0813 01:35:05.921491 2739 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 01:35:05.921645 kubelet[2739]: E0813 01:35:05.921584 2739 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 01:35:05.921834 kubelet[2739]: E0813 01:35:05.921777 2739 kuberuntime_manager.go:1274] "Unhandled 
Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zwl76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-85fbc76f96-d5vf4_calico-system(56caad57-6b4a-4069-b011-1059db183012): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" logger="UnhandledError" Aug 13 01:35:05.923478 kubelet[2739]: E0813 01:35:05.923393 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device\"" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" podUID="56caad57-6b4a-4069-b011-1059db183012" Aug 13 01:35:06.088827 sshd[5481]: Accepted publickey for core from 147.75.109.163 port 32874 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:35:06.090518 
sshd-session[5481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:35:06.096039 systemd-logind[1529]: New session 9 of user core. Aug 13 01:35:06.104290 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 01:35:06.206765 kubelet[2739]: E0813 01:35:06.206730 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\"\"" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" podUID="56caad57-6b4a-4069-b011-1059db183012" Aug 13 01:35:06.439547 sshd[5483]: Connection closed by 147.75.109.163 port 32874 Aug 13 01:35:06.440510 sshd-session[5481]: pam_unix(sshd:session): session closed for user core Aug 13 01:35:06.446274 systemd[1]: sshd@10-172.234.27.175:22-147.75.109.163:32874.service: Deactivated successfully. Aug 13 01:35:06.449502 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 01:35:06.450613 systemd-logind[1529]: Session 9 logged out. Waiting for processes to exit. Aug 13 01:35:06.452873 systemd-logind[1529]: Removed session 9. 
Aug 13 01:35:06.766634 kubelet[2739]: I0813 01:35:06.766591 2739 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:35:06.766634 kubelet[2739]: I0813 01:35:06.766638 2739 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:35:06.769067 kubelet[2739]: I0813 01:35:06.769048 2739 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:35:06.790228 kubelet[2739]: I0813 01:35:06.790193 2739 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:35:06.790427 kubelet[2739]: I0813 01:35:06.790390 2739 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-85fbc76f96-d5vf4","calico-system/csi-node-driver-7bj49","calico-system/calico-typha-79464475b5-bbrtw","kube-system/coredns-7c65d6cfc9-994jv","kube-system/coredns-7c65d6cfc9-mx5v9","calico-system/calico-node-5c47r","kube-system/kube-controller-manager-172-234-27-175","kube-system/kube-proxy-kfjpt","kube-system/kube-apiserver-172-234-27-175","kube-system/kube-scheduler-172-234-27-175"] Aug 13 01:35:06.790516 kubelet[2739]: E0813 01:35:06.790480 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" Aug 13 01:35:06.790516 kubelet[2739]: E0813 01:35:06.790493 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-7bj49" Aug 13 01:35:06.790516 kubelet[2739]: E0813 01:35:06.790506 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-79464475b5-bbrtw" Aug 13 01:35:06.790516 kubelet[2739]: E0813 01:35:06.790517 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-994jv" Aug 13 01:35:06.790601 kubelet[2739]: E0813 01:35:06.790526 2739 eviction_manager.go:598] 
"Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-mx5v9" Aug 13 01:35:06.790601 kubelet[2739]: E0813 01:35:06.790553 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-5c47r" Aug 13 01:35:06.790601 kubelet[2739]: E0813 01:35:06.790565 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-27-175" Aug 13 01:35:06.790601 kubelet[2739]: E0813 01:35:06.790573 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-kfjpt" Aug 13 01:35:06.790601 kubelet[2739]: E0813 01:35:06.790583 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-27-175" Aug 13 01:35:06.790601 kubelet[2739]: E0813 01:35:06.790592 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-27-175" Aug 13 01:35:06.790601 kubelet[2739]: I0813 01:35:06.790601 2739 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:35:06.799171 kubelet[2739]: E0813 01:35:06.798210 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 01:35:06.897358 systemd-networkd[1469]: cali8f33573a0c9: Gained IPv6LL Aug 13 01:35:11.500285 systemd[1]: Started sshd@11-172.234.27.175:22-147.75.109.163:35292.service - OpenSSH per-connection server daemon (147.75.109.163:35292). Aug 13 01:35:11.837781 sshd[5506]: Accepted publickey for core from 147.75.109.163 port 35292 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:35:11.842044 sshd-session[5506]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:35:11.847924 systemd-logind[1529]: New session 10 of user core. 
Aug 13 01:35:11.858295 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 01:35:12.141418 sshd[5508]: Connection closed by 147.75.109.163 port 35292 Aug 13 01:35:12.142182 sshd-session[5506]: pam_unix(sshd:session): session closed for user core Aug 13 01:35:12.146667 systemd[1]: sshd@11-172.234.27.175:22-147.75.109.163:35292.service: Deactivated successfully. Aug 13 01:35:12.148802 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 01:35:12.150330 systemd-logind[1529]: Session 10 logged out. Waiting for processes to exit. Aug 13 01:35:12.152258 systemd-logind[1529]: Removed session 10. Aug 13 01:35:16.820014 kubelet[2739]: I0813 01:35:16.819775 2739 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:35:16.820014 kubelet[2739]: I0813 01:35:16.820009 2739 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:35:16.825008 kubelet[2739]: I0813 01:35:16.824993 2739 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:35:16.828245 kubelet[2739]: I0813 01:35:16.828227 2739 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93" size=25052538 runtimeHandler="" Aug 13 01:35:16.829242 containerd[1544]: time="2025-08-13T01:35:16.829195111Z" level=info msg="RemoveImage \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 01:35:16.830244 containerd[1544]: time="2025-08-13T01:35:16.830224352Z" level=info msg="ImageDelete event name:\"quay.io/tigera/operator:v1.38.3\"" Aug 13 01:35:16.830781 containerd[1544]: time="2025-08-13T01:35:16.830762383Z" level=info msg="ImageDelete event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\"" Aug 13 01:35:16.831389 containerd[1544]: time="2025-08-13T01:35:16.831366764Z" level=info msg="RemoveImage 
\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" returns successfully" Aug 13 01:35:16.831473 containerd[1544]: time="2025-08-13T01:35:16.831429394Z" level=info msg="ImageDelete event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 01:35:16.842439 kubelet[2739]: I0813 01:35:16.842208 2739 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:35:16.842439 kubelet[2739]: I0813 01:35:16.842317 2739 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-85fbc76f96-d5vf4","calico-system/csi-node-driver-7bj49","calico-system/calico-typha-79464475b5-bbrtw","kube-system/coredns-7c65d6cfc9-994jv","kube-system/coredns-7c65d6cfc9-mx5v9","calico-system/calico-node-5c47r","kube-system/kube-controller-manager-172-234-27-175","kube-system/kube-proxy-kfjpt","kube-system/kube-apiserver-172-234-27-175","kube-system/kube-scheduler-172-234-27-175"] Aug 13 01:35:16.842439 kubelet[2739]: E0813 01:35:16.842342 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" Aug 13 01:35:16.842439 kubelet[2739]: E0813 01:35:16.842351 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-7bj49" Aug 13 01:35:16.842439 kubelet[2739]: E0813 01:35:16.842361 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-79464475b5-bbrtw" Aug 13 01:35:16.842439 kubelet[2739]: E0813 01:35:16.842370 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-994jv" Aug 13 01:35:16.842439 kubelet[2739]: E0813 01:35:16.842378 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-mx5v9" Aug 13 01:35:16.842439 kubelet[2739]: 
E0813 01:35:16.842388 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-5c47r" Aug 13 01:35:16.842439 kubelet[2739]: E0813 01:35:16.842396 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-27-175" Aug 13 01:35:16.842439 kubelet[2739]: E0813 01:35:16.842403 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-kfjpt" Aug 13 01:35:16.842439 kubelet[2739]: E0813 01:35:16.842410 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-27-175" Aug 13 01:35:16.842439 kubelet[2739]: E0813 01:35:16.842418 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-27-175" Aug 13 01:35:16.842439 kubelet[2739]: I0813 01:35:16.842427 2739 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:35:17.201769 systemd[1]: Started sshd@12-172.234.27.175:22-147.75.109.163:35294.service - OpenSSH per-connection server daemon (147.75.109.163:35294). Aug 13 01:35:17.536318 sshd[5524]: Accepted publickey for core from 147.75.109.163 port 35294 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:35:17.537870 sshd-session[5524]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:35:17.542657 systemd-logind[1529]: New session 11 of user core. Aug 13 01:35:17.548318 systemd[1]: Started session-11.scope - Session 11 of User core. 
Aug 13 01:35:17.800239 containerd[1544]: time="2025-08-13T01:35:17.799867235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 01:35:17.843609 sshd[5532]: Connection closed by 147.75.109.163 port 35294 Aug 13 01:35:17.845220 sshd-session[5524]: pam_unix(sshd:session): session closed for user core Aug 13 01:35:17.852325 systemd[1]: sshd@12-172.234.27.175:22-147.75.109.163:35294.service: Deactivated successfully. Aug 13 01:35:17.854362 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 01:35:17.855088 systemd-logind[1529]: Session 11 logged out. Waiting for processes to exit. Aug 13 01:35:17.857109 systemd-logind[1529]: Removed session 11. Aug 13 01:35:17.902729 systemd[1]: Started sshd@13-172.234.27.175:22-147.75.109.163:35296.service - OpenSSH per-connection server daemon (147.75.109.163:35296). Aug 13 01:35:18.234175 sshd[5546]: Accepted publickey for core from 147.75.109.163 port 35296 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:35:18.235608 sshd-session[5546]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:35:18.241052 systemd-logind[1529]: New session 12 of user core. Aug 13 01:35:18.248270 systemd[1]: Started session-12.scope - Session 12 of User core. 
Aug 13 01:35:18.494524 containerd[1544]: time="2025-08-13T01:35:18.493555875Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:35:18.497151 containerd[1544]: time="2025-08-13T01:35:18.496870269Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Aug 13 01:35:18.499070 containerd[1544]: time="2025-08-13T01:35:18.498478681Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:35:18.504166 containerd[1544]: time="2025-08-13T01:35:18.502637856Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:35:18.504262 containerd[1544]: time="2025-08-13T01:35:18.504243008Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 703.136792ms" Aug 13 01:35:18.504384 containerd[1544]: time="2025-08-13T01:35:18.504322388Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Aug 13 01:35:18.511374 containerd[1544]: time="2025-08-13T01:35:18.511324366Z" level=info msg="CreateContainer within sandbox \"7b12422a7723eb069af5deac91ffed3e9e152d931f1b6ec9b3e93ae2af0e486b\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 01:35:18.523788 containerd[1544]: time="2025-08-13T01:35:18.523757601Z" level=info msg="Container 
8b5fadab9d8f7c965c3b5ebce3f2e46be975971c1a26267915bb0cbc90ac216c: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:35:18.535716 containerd[1544]: time="2025-08-13T01:35:18.535681146Z" level=info msg="CreateContainer within sandbox \"7b12422a7723eb069af5deac91ffed3e9e152d931f1b6ec9b3e93ae2af0e486b\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8b5fadab9d8f7c965c3b5ebce3f2e46be975971c1a26267915bb0cbc90ac216c\"" Aug 13 01:35:18.536747 containerd[1544]: time="2025-08-13T01:35:18.536713987Z" level=info msg="StartContainer for \"8b5fadab9d8f7c965c3b5ebce3f2e46be975971c1a26267915bb0cbc90ac216c\"" Aug 13 01:35:18.542168 containerd[1544]: time="2025-08-13T01:35:18.539619750Z" level=info msg="connecting to shim 8b5fadab9d8f7c965c3b5ebce3f2e46be975971c1a26267915bb0cbc90ac216c" address="unix:///run/containerd/s/d9c9a9ca8186b1d51a4b806a87269fc81532c45a55fc2baf33dab765092a56ac" protocol=ttrpc version=3 Aug 13 01:35:18.572298 systemd[1]: Started cri-containerd-8b5fadab9d8f7c965c3b5ebce3f2e46be975971c1a26267915bb0cbc90ac216c.scope - libcontainer container 8b5fadab9d8f7c965c3b5ebce3f2e46be975971c1a26267915bb0cbc90ac216c. Aug 13 01:35:18.617154 sshd[5548]: Connection closed by 147.75.109.163 port 35296 Aug 13 01:35:18.617751 sshd-session[5546]: pam_unix(sshd:session): session closed for user core Aug 13 01:35:18.622381 systemd[1]: sshd@13-172.234.27.175:22-147.75.109.163:35296.service: Deactivated successfully. Aug 13 01:35:18.622985 systemd-logind[1529]: Session 12 logged out. Waiting for processes to exit. Aug 13 01:35:18.626893 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 01:35:18.633230 containerd[1544]: time="2025-08-13T01:35:18.632729313Z" level=info msg="StartContainer for \"8b5fadab9d8f7c965c3b5ebce3f2e46be975971c1a26267915bb0cbc90ac216c\" returns successfully" Aug 13 01:35:18.634094 systemd-logind[1529]: Removed session 12. 
Aug 13 01:35:18.635335 containerd[1544]: time="2025-08-13T01:35:18.635305666Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 01:35:18.684357 systemd[1]: Started sshd@14-172.234.27.175:22-147.75.109.163:50956.service - OpenSSH per-connection server daemon (147.75.109.163:50956). Aug 13 01:35:18.798075 kubelet[2739]: E0813 01:35:18.797946 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 01:35:19.033538 sshd[5593]: Accepted publickey for core from 147.75.109.163 port 50956 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:35:19.035921 sshd-session[5593]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:35:19.043094 systemd-logind[1529]: New session 13 of user core. Aug 13 01:35:19.048319 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 01:35:19.359343 sshd[5597]: Connection closed by 147.75.109.163 port 50956 Aug 13 01:35:19.360228 sshd-session[5593]: pam_unix(sshd:session): session closed for user core Aug 13 01:35:19.366478 systemd[1]: sshd@14-172.234.27.175:22-147.75.109.163:50956.service: Deactivated successfully. Aug 13 01:35:19.372764 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 01:35:19.374091 systemd-logind[1529]: Session 13 logged out. Waiting for processes to exit. Aug 13 01:35:19.376381 systemd-logind[1529]: Removed session 13. 
Aug 13 01:35:19.658355 containerd[1544]: time="2025-08-13T01:35:19.658189997Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:35:19.659287 containerd[1544]: time="2025-08-13T01:35:19.659072259Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Aug 13 01:35:19.659747 containerd[1544]: time="2025-08-13T01:35:19.659715979Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:35:19.661058 containerd[1544]: time="2025-08-13T01:35:19.661036581Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:35:19.662153 containerd[1544]: time="2025-08-13T01:35:19.661842262Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 1.026503636s" Aug 13 01:35:19.662153 containerd[1544]: time="2025-08-13T01:35:19.661873582Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Aug 13 01:35:19.663999 containerd[1544]: time="2025-08-13T01:35:19.663964834Z" level=info msg="CreateContainer within sandbox \"7b12422a7723eb069af5deac91ffed3e9e152d931f1b6ec9b3e93ae2af0e486b\" for container 
&ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 13 01:35:19.674367 containerd[1544]: time="2025-08-13T01:35:19.674302006Z" level=info msg="Container a9bc6ffc039db5e6d08fb9e91895ab7ccab033b29dc970e055860f61947824f4: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:35:19.696927 containerd[1544]: time="2025-08-13T01:35:19.696881594Z" level=info msg="CreateContainer within sandbox \"7b12422a7723eb069af5deac91ffed3e9e152d931f1b6ec9b3e93ae2af0e486b\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"a9bc6ffc039db5e6d08fb9e91895ab7ccab033b29dc970e055860f61947824f4\"" Aug 13 01:35:19.697632 containerd[1544]: time="2025-08-13T01:35:19.697604365Z" level=info msg="StartContainer for \"a9bc6ffc039db5e6d08fb9e91895ab7ccab033b29dc970e055860f61947824f4\"" Aug 13 01:35:19.698822 containerd[1544]: time="2025-08-13T01:35:19.698798077Z" level=info msg="connecting to shim a9bc6ffc039db5e6d08fb9e91895ab7ccab033b29dc970e055860f61947824f4" address="unix:///run/containerd/s/d9c9a9ca8186b1d51a4b806a87269fc81532c45a55fc2baf33dab765092a56ac" protocol=ttrpc version=3 Aug 13 01:35:19.725304 systemd[1]: Started cri-containerd-a9bc6ffc039db5e6d08fb9e91895ab7ccab033b29dc970e055860f61947824f4.scope - libcontainer container a9bc6ffc039db5e6d08fb9e91895ab7ccab033b29dc970e055860f61947824f4. 
Aug 13 01:35:19.773780 containerd[1544]: time="2025-08-13T01:35:19.773741706Z" level=info msg="StartContainer for \"a9bc6ffc039db5e6d08fb9e91895ab7ccab033b29dc970e055860f61947824f4\" returns successfully" Aug 13 01:35:19.798706 containerd[1544]: time="2025-08-13T01:35:19.798558486Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 01:35:19.977606 kubelet[2739]: I0813 01:35:19.977566 2739 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 13 01:35:19.978591 kubelet[2739]: I0813 01:35:19.977702 2739 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 13 01:35:20.270075 kubelet[2739]: I0813 01:35:20.269586 2739 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-7bj49" podStartSLOduration=123.653760286 podStartE2EDuration="2m20.26957149s" podCreationTimestamp="2025-08-13 01:33:00 +0000 UTC" firstStartedPulling="2025-08-13 01:35:03.046985689 +0000 UTC m=+142.335302807" lastFinishedPulling="2025-08-13 01:35:19.662796893 +0000 UTC m=+158.951114011" observedRunningTime="2025-08-13 01:35:20.257606155 +0000 UTC m=+159.545923273" watchObservedRunningTime="2025-08-13 01:35:20.26957149 +0000 UTC m=+159.557888608" Aug 13 01:35:20.547332 containerd[1544]: time="2025-08-13T01:35:20.546071869Z" level=error msg="failed to cleanup \"extract-267556237-hy2h sha256:8c887db5e1c1509bbe47d7287572f140b60a8c0adc0202d6183f3ae0c5f0b532\"" error="write /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db: no space left on device" Aug 13 01:35:20.548061 containerd[1544]: time="2025-08-13T01:35:20.548008401Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" Aug 13 01:35:20.548122 containerd[1544]: time="2025-08-13T01:35:20.548070742Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=44044522" Aug 13 01:35:20.548414 kubelet[2739]: E0813 01:35:20.548337 2739 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 01:35:20.548500 kubelet[2739]: E0813 01:35:20.548409 2739 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 01:35:20.549013 kubelet[2739]: E0813 01:35:20.548555 2739 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zwl76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-85fbc76f96-d5vf4_calico-system(56caad57-6b4a-4069-b011-1059db183012): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" logger="UnhandledError" Aug 13 01:35:20.550312 kubelet[2739]: E0813 01:35:20.550188 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device\"" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" podUID="56caad57-6b4a-4069-b011-1059db183012" Aug 13 01:35:20.754455 systemd[1]: Started sshd@15-172.234.27.175:22-103.90.226.175:52490.service - OpenSSH per-connection server daemon (103.90.226.175:52490). 
Aug 13 01:35:21.310342 systemd[1]: Started sshd@16-172.234.27.175:22-122.166.49.42:40276.service - OpenSSH per-connection server daemon (122.166.49.42:40276). Aug 13 01:35:22.560310 sshd[5645]: Received disconnect from 103.90.226.175 port 52490:11: Bye Bye [preauth] Aug 13 01:35:22.560310 sshd[5645]: Disconnected from authenticating user root 103.90.226.175 port 52490 [preauth] Aug 13 01:35:22.562749 systemd[1]: sshd@15-172.234.27.175:22-103.90.226.175:52490.service: Deactivated successfully. Aug 13 01:35:22.971199 sshd[5648]: Received disconnect from 122.166.49.42 port 40276:11: Bye Bye [preauth] Aug 13 01:35:22.971199 sshd[5648]: Disconnected from authenticating user root 122.166.49.42 port 40276 [preauth] Aug 13 01:35:22.972969 systemd[1]: sshd@16-172.234.27.175:22-122.166.49.42:40276.service: Deactivated successfully. Aug 13 01:35:24.417459 systemd[1]: Started sshd@17-172.234.27.175:22-147.75.109.163:50972.service - OpenSSH per-connection server daemon (147.75.109.163:50972). Aug 13 01:35:24.746969 sshd[5655]: Accepted publickey for core from 147.75.109.163 port 50972 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:35:24.748552 sshd-session[5655]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:35:24.753440 systemd-logind[1529]: New session 14 of user core. Aug 13 01:35:24.760278 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 13 01:35:25.067062 sshd[5657]: Connection closed by 147.75.109.163 port 50972 Aug 13 01:35:25.068028 sshd-session[5655]: pam_unix(sshd:session): session closed for user core Aug 13 01:35:25.072755 systemd[1]: sshd@17-172.234.27.175:22-147.75.109.163:50972.service: Deactivated successfully. Aug 13 01:35:25.075387 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 01:35:25.076474 systemd-logind[1529]: Session 14 logged out. Waiting for processes to exit. Aug 13 01:35:25.078406 systemd-logind[1529]: Removed session 14. 
Aug 13 01:35:25.424105 containerd[1544]: time="2025-08-13T01:35:25.424006790Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b5a29eb80b0bc5089fcefc178e897c50dab7a245f3568439d3b5679ef6257511\" id:\"5e8dd304ff256592be84906ada605d98196486cdef1c32167829fe3d0dd70a9f\" pid:5682 exited_at:{seconds:1755048925 nanos:423667270}" Aug 13 01:35:26.863236 kubelet[2739]: I0813 01:35:26.863206 2739 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:35:26.863236 kubelet[2739]: I0813 01:35:26.863243 2739 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:35:26.865539 kubelet[2739]: I0813 01:35:26.865503 2739 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:35:26.881969 kubelet[2739]: I0813 01:35:26.881943 2739 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:35:26.882104 kubelet[2739]: I0813 01:35:26.882077 2739 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-85fbc76f96-d5vf4","calico-system/calico-typha-79464475b5-bbrtw","kube-system/coredns-7c65d6cfc9-mx5v9","kube-system/coredns-7c65d6cfc9-994jv","calico-system/calico-node-5c47r","kube-system/kube-controller-manager-172-234-27-175","kube-system/kube-proxy-kfjpt","calico-system/csi-node-driver-7bj49","kube-system/kube-apiserver-172-234-27-175","kube-system/kube-scheduler-172-234-27-175"] Aug 13 01:35:26.882196 kubelet[2739]: E0813 01:35:26.882108 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" Aug 13 01:35:26.882196 kubelet[2739]: E0813 01:35:26.882121 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-79464475b5-bbrtw" Aug 13 01:35:26.882196 kubelet[2739]: E0813 01:35:26.882147 2739 eviction_manager.go:598] "Eviction manager: 
cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-mx5v9" Aug 13 01:35:26.882196 kubelet[2739]: E0813 01:35:26.882157 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-994jv" Aug 13 01:35:26.882196 kubelet[2739]: E0813 01:35:26.882165 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-5c47r" Aug 13 01:35:26.882196 kubelet[2739]: E0813 01:35:26.882173 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-27-175" Aug 13 01:35:26.882196 kubelet[2739]: E0813 01:35:26.882181 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-kfjpt" Aug 13 01:35:26.882196 kubelet[2739]: E0813 01:35:26.882191 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-7bj49" Aug 13 01:35:26.882196 kubelet[2739]: E0813 01:35:26.882199 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-27-175" Aug 13 01:35:26.882418 kubelet[2739]: E0813 01:35:26.882206 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-27-175" Aug 13 01:35:26.882418 kubelet[2739]: I0813 01:35:26.882215 2739 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:35:30.126337 systemd[1]: Started sshd@18-172.234.27.175:22-147.75.109.163:37036.service - OpenSSH per-connection server daemon (147.75.109.163:37036). 
Aug 13 01:35:30.460863 sshd[5700]: Accepted publickey for core from 147.75.109.163 port 37036 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:35:30.462728 sshd-session[5700]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:35:30.469065 systemd-logind[1529]: New session 15 of user core. Aug 13 01:35:30.474272 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 13 01:35:30.772042 sshd[5702]: Connection closed by 147.75.109.163 port 37036 Aug 13 01:35:30.773379 sshd-session[5700]: pam_unix(sshd:session): session closed for user core Aug 13 01:35:30.778013 systemd-logind[1529]: Session 15 logged out. Waiting for processes to exit. Aug 13 01:35:30.778767 systemd[1]: sshd@18-172.234.27.175:22-147.75.109.163:37036.service: Deactivated successfully. Aug 13 01:35:30.780990 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 01:35:30.783315 systemd-logind[1529]: Removed session 15. Aug 13 01:35:32.797870 kubelet[2739]: E0813 01:35:32.797522 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 01:35:32.802462 kubelet[2739]: E0813 01:35:32.801647 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\"\"" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" podUID="56caad57-6b4a-4069-b011-1059db183012" Aug 13 01:35:35.845972 systemd[1]: Started sshd@19-172.234.27.175:22-147.75.109.163:37042.service - OpenSSH per-connection server daemon (147.75.109.163:37042). 
Aug 13 01:35:36.198219 sshd[5716]: Accepted publickey for core from 147.75.109.163 port 37042 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:35:36.199679 sshd-session[5716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:35:36.204696 systemd-logind[1529]: New session 16 of user core. Aug 13 01:35:36.208273 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 13 01:35:36.520145 sshd[5718]: Connection closed by 147.75.109.163 port 37042 Aug 13 01:35:36.521813 sshd-session[5716]: pam_unix(sshd:session): session closed for user core Aug 13 01:35:36.526363 systemd-logind[1529]: Session 16 logged out. Waiting for processes to exit. Aug 13 01:35:36.527124 systemd[1]: sshd@19-172.234.27.175:22-147.75.109.163:37042.service: Deactivated successfully. Aug 13 01:35:36.529444 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 01:35:36.531233 systemd-logind[1529]: Removed session 16. Aug 13 01:35:36.901681 kubelet[2739]: I0813 01:35:36.901558 2739 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:35:36.901681 kubelet[2739]: I0813 01:35:36.901598 2739 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:35:36.903590 kubelet[2739]: I0813 01:35:36.903571 2739 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:35:36.916579 kubelet[2739]: I0813 01:35:36.916557 2739 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:35:36.916700 kubelet[2739]: I0813 01:35:36.916678 2739 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" 
pods=["calico-system/calico-kube-controllers-85fbc76f96-d5vf4","calico-system/calico-typha-79464475b5-bbrtw","kube-system/coredns-7c65d6cfc9-994jv","kube-system/coredns-7c65d6cfc9-mx5v9","calico-system/calico-node-5c47r","kube-system/kube-controller-manager-172-234-27-175","kube-system/kube-proxy-kfjpt","calico-system/csi-node-driver-7bj49","kube-system/kube-apiserver-172-234-27-175","kube-system/kube-scheduler-172-234-27-175"] Aug 13 01:35:36.916786 kubelet[2739]: E0813 01:35:36.916710 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" Aug 13 01:35:36.916786 kubelet[2739]: E0813 01:35:36.916723 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-79464475b5-bbrtw" Aug 13 01:35:36.916786 kubelet[2739]: E0813 01:35:36.916731 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-994jv" Aug 13 01:35:36.916786 kubelet[2739]: E0813 01:35:36.916739 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-mx5v9" Aug 13 01:35:36.916786 kubelet[2739]: E0813 01:35:36.916748 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-5c47r" Aug 13 01:35:36.916786 kubelet[2739]: E0813 01:35:36.916755 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-27-175" Aug 13 01:35:36.916786 kubelet[2739]: E0813 01:35:36.916763 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-kfjpt" Aug 13 01:35:36.916786 kubelet[2739]: E0813 01:35:36.916773 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-7bj49" Aug 13 01:35:36.916786 kubelet[2739]: E0813 01:35:36.916781 2739 eviction_manager.go:598] 
"Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-27-175" Aug 13 01:35:36.916786 kubelet[2739]: E0813 01:35:36.916788 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-27-175" Aug 13 01:35:36.916786 kubelet[2739]: I0813 01:35:36.916797 2739 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:35:41.594538 systemd[1]: Started sshd@20-172.234.27.175:22-147.75.109.163:42306.service - OpenSSH per-connection server daemon (147.75.109.163:42306). Aug 13 01:35:41.941600 sshd[5743]: Accepted publickey for core from 147.75.109.163 port 42306 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:35:41.942093 sshd-session[5743]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:35:41.948769 systemd-logind[1529]: New session 17 of user core. Aug 13 01:35:41.955679 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 13 01:35:42.260542 sshd[5745]: Connection closed by 147.75.109.163 port 42306 Aug 13 01:35:42.261341 sshd-session[5743]: pam_unix(sshd:session): session closed for user core Aug 13 01:35:42.265991 systemd[1]: sshd@20-172.234.27.175:22-147.75.109.163:42306.service: Deactivated successfully. Aug 13 01:35:42.268360 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 01:35:42.269388 systemd-logind[1529]: Session 17 logged out. Waiting for processes to exit. Aug 13 01:35:42.272782 systemd-logind[1529]: Removed session 17. 
Aug 13 01:35:43.798682 containerd[1544]: time="2025-08-13T01:35:43.798494427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 01:35:44.503553 containerd[1544]: time="2025-08-13T01:35:44.503497802Z" level=error msg="failed to cleanup \"extract-307776386-v3dD sha256:8c887db5e1c1509bbe47d7287572f140b60a8c0adc0202d6183f3ae0c5f0b532\"" error="write /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db: no space left on device" Aug 13 01:35:44.504323 containerd[1544]: time="2025-08-13T01:35:44.504257623Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" Aug 13 01:35:44.504383 containerd[1544]: time="2025-08-13T01:35:44.504355003Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=42995946" Aug 13 01:35:44.504608 kubelet[2739]: E0813 01:35:44.504565 2739 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 01:35:44.505427 kubelet[2739]: E0813 01:35:44.504616 2739 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 01:35:44.505427 kubelet[2739]: E0813 01:35:44.504738 2739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zwl76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-85fbc76f96-d5vf4_calico-system(56caad57-6b4a-4069-b011-1059db183012): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" logger="UnhandledError" Aug 13 01:35:44.505922 kubelet[2739]: E0813 01:35:44.505880 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device\"" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" podUID="56caad57-6b4a-4069-b011-1059db183012" Aug 13 01:35:46.935762 kubelet[2739]: I0813 01:35:46.935728 2739 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:35:46.935762 
kubelet[2739]: I0813 01:35:46.935765 2739 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:35:46.937147 kubelet[2739]: I0813 01:35:46.937115 2739 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:35:46.950985 kubelet[2739]: I0813 01:35:46.950967 2739 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:35:46.951254 kubelet[2739]: I0813 01:35:46.951233 2739 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-85fbc76f96-d5vf4","calico-system/calico-typha-79464475b5-bbrtw","kube-system/coredns-7c65d6cfc9-mx5v9","kube-system/coredns-7c65d6cfc9-994jv","calico-system/calico-node-5c47r","kube-system/kube-controller-manager-172-234-27-175","kube-system/kube-proxy-kfjpt","calico-system/csi-node-driver-7bj49","kube-system/kube-apiserver-172-234-27-175","kube-system/kube-scheduler-172-234-27-175"] Aug 13 01:35:46.951350 kubelet[2739]: E0813 01:35:46.951266 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" Aug 13 01:35:46.951350 kubelet[2739]: E0813 01:35:46.951279 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-79464475b5-bbrtw" Aug 13 01:35:46.951350 kubelet[2739]: E0813 01:35:46.951289 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-mx5v9" Aug 13 01:35:46.951350 kubelet[2739]: E0813 01:35:46.951298 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-994jv" Aug 13 01:35:46.951350 kubelet[2739]: E0813 01:35:46.951306 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-5c47r" Aug 13 01:35:46.951350 kubelet[2739]: E0813 01:35:46.951313 2739 eviction_manager.go:598] "Eviction 
manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-27-175" Aug 13 01:35:46.951350 kubelet[2739]: E0813 01:35:46.951321 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-kfjpt" Aug 13 01:35:46.951350 kubelet[2739]: E0813 01:35:46.951331 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-7bj49" Aug 13 01:35:46.951350 kubelet[2739]: E0813 01:35:46.951339 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-27-175" Aug 13 01:35:46.951350 kubelet[2739]: E0813 01:35:46.951347 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-27-175" Aug 13 01:35:46.951350 kubelet[2739]: I0813 01:35:46.951356 2739 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:35:47.327437 systemd[1]: Started sshd@21-172.234.27.175:22-147.75.109.163:42316.service - OpenSSH per-connection server daemon (147.75.109.163:42316). Aug 13 01:35:47.661579 sshd[5765]: Accepted publickey for core from 147.75.109.163 port 42316 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:35:47.663280 sshd-session[5765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:35:47.668777 systemd-logind[1529]: New session 18 of user core. Aug 13 01:35:47.672251 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 13 01:35:47.979506 sshd[5767]: Connection closed by 147.75.109.163 port 42316 Aug 13 01:35:47.980339 sshd-session[5765]: pam_unix(sshd:session): session closed for user core Aug 13 01:35:47.984456 systemd[1]: sshd@21-172.234.27.175:22-147.75.109.163:42316.service: Deactivated successfully. Aug 13 01:35:47.986964 systemd[1]: session-18.scope: Deactivated successfully. 
Aug 13 01:35:47.988294 systemd-logind[1529]: Session 18 logged out. Waiting for processes to exit. Aug 13 01:35:47.990199 systemd-logind[1529]: Removed session 18. Aug 13 01:35:53.044344 systemd[1]: Started sshd@22-172.234.27.175:22-147.75.109.163:44378.service - OpenSSH per-connection server daemon (147.75.109.163:44378). Aug 13 01:35:53.387256 sshd[5782]: Accepted publickey for core from 147.75.109.163 port 44378 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:35:53.388467 sshd-session[5782]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:35:53.394104 systemd-logind[1529]: New session 19 of user core. Aug 13 01:35:53.400256 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 13 01:35:53.697305 sshd[5784]: Connection closed by 147.75.109.163 port 44378 Aug 13 01:35:53.697877 sshd-session[5782]: pam_unix(sshd:session): session closed for user core Aug 13 01:35:53.703567 systemd-logind[1529]: Session 19 logged out. Waiting for processes to exit. Aug 13 01:35:53.704418 systemd[1]: sshd@22-172.234.27.175:22-147.75.109.163:44378.service: Deactivated successfully. Aug 13 01:35:53.707093 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 01:35:53.709748 systemd-logind[1529]: Removed session 19. 
Aug 13 01:35:55.431504 containerd[1544]: time="2025-08-13T01:35:55.431439834Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b5a29eb80b0bc5089fcefc178e897c50dab7a245f3568439d3b5679ef6257511\" id:\"370fcb094ff0cc4cf0def14a4f50bee8d7904f0f2d109fee42c93f5aa5fc5a3c\" pid:5807 exited_at:{seconds:1755048955 nanos:430946893}" Aug 13 01:35:56.973596 kubelet[2739]: I0813 01:35:56.973545 2739 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:35:56.973596 kubelet[2739]: I0813 01:35:56.973578 2739 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:35:56.975315 kubelet[2739]: I0813 01:35:56.975286 2739 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:35:56.997089 kubelet[2739]: I0813 01:35:56.997064 2739 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:35:56.997624 kubelet[2739]: I0813 01:35:56.997598 2739 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-85fbc76f96-d5vf4","calico-system/calico-typha-79464475b5-bbrtw","kube-system/coredns-7c65d6cfc9-994jv","kube-system/coredns-7c65d6cfc9-mx5v9","calico-system/calico-node-5c47r","kube-system/kube-controller-manager-172-234-27-175","kube-system/kube-proxy-kfjpt","calico-system/csi-node-driver-7bj49","kube-system/kube-apiserver-172-234-27-175","kube-system/kube-scheduler-172-234-27-175"] Aug 13 01:35:56.997707 kubelet[2739]: E0813 01:35:56.997637 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" Aug 13 01:35:56.997707 kubelet[2739]: E0813 01:35:56.997654 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-79464475b5-bbrtw" Aug 13 01:35:56.997707 kubelet[2739]: E0813 01:35:56.997664 2739 eviction_manager.go:598] "Eviction manager: 
cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-994jv" Aug 13 01:35:56.997707 kubelet[2739]: E0813 01:35:56.997672 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-mx5v9" Aug 13 01:35:56.997707 kubelet[2739]: E0813 01:35:56.997680 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-5c47r" Aug 13 01:35:56.997707 kubelet[2739]: E0813 01:35:56.997688 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-27-175" Aug 13 01:35:56.997707 kubelet[2739]: E0813 01:35:56.997696 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-kfjpt" Aug 13 01:35:56.997707 kubelet[2739]: E0813 01:35:56.997707 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-7bj49" Aug 13 01:35:56.997707 kubelet[2739]: E0813 01:35:56.997715 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-27-175" Aug 13 01:35:56.997913 kubelet[2739]: E0813 01:35:56.997723 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-27-175" Aug 13 01:35:56.997913 kubelet[2739]: I0813 01:35:56.997732 2739 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:35:57.799251 kubelet[2739]: E0813 01:35:57.799212 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\"\"" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" podUID="56caad57-6b4a-4069-b011-1059db183012" Aug 13 01:35:58.757084 systemd[1]: Started sshd@23-172.234.27.175:22-147.75.109.163:53000.service - 
OpenSSH per-connection server daemon (147.75.109.163:53000). Aug 13 01:35:59.094252 sshd[5819]: Accepted publickey for core from 147.75.109.163 port 53000 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:35:59.095322 sshd-session[5819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:35:59.100921 systemd-logind[1529]: New session 20 of user core. Aug 13 01:35:59.105276 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 13 01:35:59.408065 sshd[5821]: Connection closed by 147.75.109.163 port 53000 Aug 13 01:35:59.408810 sshd-session[5819]: pam_unix(sshd:session): session closed for user core Aug 13 01:35:59.413015 systemd-logind[1529]: Session 20 logged out. Waiting for processes to exit. Aug 13 01:35:59.413637 systemd[1]: sshd@23-172.234.27.175:22-147.75.109.163:53000.service: Deactivated successfully. Aug 13 01:35:59.416022 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 01:35:59.418613 systemd-logind[1529]: Removed session 20. Aug 13 01:35:59.798149 kubelet[2739]: E0813 01:35:59.798088 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 01:36:01.797189 kubelet[2739]: E0813 01:36:01.797156 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 01:36:04.477935 systemd[1]: Started sshd@24-172.234.27.175:22-147.75.109.163:53008.service - OpenSSH per-connection server daemon (147.75.109.163:53008). 
Aug 13 01:36:04.820555 sshd[5834]: Accepted publickey for core from 147.75.109.163 port 53008 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:36:04.822739 sshd-session[5834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:36:04.828369 systemd-logind[1529]: New session 21 of user core. Aug 13 01:36:04.834294 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 13 01:36:05.143301 sshd[5836]: Connection closed by 147.75.109.163 port 53008 Aug 13 01:36:05.144958 sshd-session[5834]: pam_unix(sshd:session): session closed for user core Aug 13 01:36:05.149175 systemd[1]: sshd@24-172.234.27.175:22-147.75.109.163:53008.service: Deactivated successfully. Aug 13 01:36:05.152223 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 01:36:05.154117 systemd-logind[1529]: Session 21 logged out. Waiting for processes to exit. Aug 13 01:36:05.155707 systemd-logind[1529]: Removed session 21. Aug 13 01:36:07.020280 kubelet[2739]: I0813 01:36:07.020248 2739 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:36:07.020280 kubelet[2739]: I0813 01:36:07.020286 2739 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:36:07.022730 kubelet[2739]: I0813 01:36:07.022647 2739 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:36:07.038613 kubelet[2739]: I0813 01:36:07.038582 2739 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:36:07.038794 kubelet[2739]: I0813 01:36:07.038768 2739 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" 
pods=["calico-system/calico-kube-controllers-85fbc76f96-d5vf4","calico-system/calico-typha-79464475b5-bbrtw","kube-system/coredns-7c65d6cfc9-994jv","kube-system/coredns-7c65d6cfc9-mx5v9","calico-system/calico-node-5c47r","kube-system/kube-controller-manager-172-234-27-175","kube-system/kube-proxy-kfjpt","calico-system/csi-node-driver-7bj49","kube-system/kube-apiserver-172-234-27-175","kube-system/kube-scheduler-172-234-27-175"] Aug 13 01:36:07.038891 kubelet[2739]: E0813 01:36:07.038809 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" Aug 13 01:36:07.038891 kubelet[2739]: E0813 01:36:07.038826 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-79464475b5-bbrtw" Aug 13 01:36:07.038891 kubelet[2739]: E0813 01:36:07.038837 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-994jv" Aug 13 01:36:07.038891 kubelet[2739]: E0813 01:36:07.038849 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-mx5v9" Aug 13 01:36:07.038891 kubelet[2739]: E0813 01:36:07.038864 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-5c47r" Aug 13 01:36:07.038891 kubelet[2739]: E0813 01:36:07.038873 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-27-175" Aug 13 01:36:07.038891 kubelet[2739]: E0813 01:36:07.038880 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-kfjpt" Aug 13 01:36:07.038891 kubelet[2739]: E0813 01:36:07.038894 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-7bj49" Aug 13 01:36:07.038891 kubelet[2739]: E0813 01:36:07.038907 2739 eviction_manager.go:598] 
"Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-27-175" Aug 13 01:36:07.039159 kubelet[2739]: E0813 01:36:07.038915 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-27-175" Aug 13 01:36:07.039159 kubelet[2739]: I0813 01:36:07.038925 2739 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:36:10.203073 systemd[1]: Started sshd@25-172.234.27.175:22-147.75.109.163:33668.service - OpenSSH per-connection server daemon (147.75.109.163:33668). Aug 13 01:36:10.533974 sshd[5850]: Accepted publickey for core from 147.75.109.163 port 33668 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:36:10.535965 sshd-session[5850]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:36:10.542062 systemd-logind[1529]: New session 22 of user core. Aug 13 01:36:10.548255 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 13 01:36:10.838999 sshd[5852]: Connection closed by 147.75.109.163 port 33668 Aug 13 01:36:10.839699 sshd-session[5850]: pam_unix(sshd:session): session closed for user core Aug 13 01:36:10.844211 systemd[1]: sshd@25-172.234.27.175:22-147.75.109.163:33668.service: Deactivated successfully. Aug 13 01:36:10.846559 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 01:36:10.847467 systemd-logind[1529]: Session 22 logged out. Waiting for processes to exit. Aug 13 01:36:10.849552 systemd-logind[1529]: Removed session 22. 
Aug 13 01:36:11.798365 kubelet[2739]: E0813 01:36:11.798303 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\"\"" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" podUID="56caad57-6b4a-4069-b011-1059db183012"
Aug 13 01:36:15.903308 systemd[1]: Started sshd@26-172.234.27.175:22-147.75.109.163:33670.service - OpenSSH per-connection server daemon (147.75.109.163:33670).
Aug 13 01:36:16.249225 sshd[5864]: Accepted publickey for core from 147.75.109.163 port 33670 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:36:16.250667 sshd-session[5864]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:36:16.256019 systemd-logind[1529]: New session 23 of user core.
Aug 13 01:36:16.260273 systemd[1]: Started session-23.scope - Session 23 of User core.
Aug 13 01:36:16.580291 sshd[5866]: Connection closed by 147.75.109.163 port 33670
Aug 13 01:36:16.581031 sshd-session[5864]: pam_unix(sshd:session): session closed for user core
Aug 13 01:36:16.586667 systemd-logind[1529]: Session 23 logged out. Waiting for processes to exit.
Aug 13 01:36:16.587653 systemd[1]: sshd@26-172.234.27.175:22-147.75.109.163:33670.service: Deactivated successfully.
Aug 13 01:36:16.591029 systemd[1]: session-23.scope: Deactivated successfully.
Aug 13 01:36:16.594553 systemd-logind[1529]: Removed session 23.
Aug 13 01:36:17.061004 kubelet[2739]: I0813 01:36:17.060976 2739 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:36:17.061004 kubelet[2739]: I0813 01:36:17.061009 2739 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:36:17.062264 kubelet[2739]: I0813 01:36:17.062249 2739 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:36:17.082050 kubelet[2739]: I0813 01:36:17.082024 2739 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:36:17.082227 kubelet[2739]: I0813 01:36:17.082198 2739 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-85fbc76f96-d5vf4","calico-system/calico-typha-79464475b5-bbrtw","kube-system/coredns-7c65d6cfc9-994jv","kube-system/coredns-7c65d6cfc9-mx5v9","calico-system/calico-node-5c47r","kube-system/kube-controller-manager-172-234-27-175","kube-system/kube-proxy-kfjpt","calico-system/csi-node-driver-7bj49","kube-system/kube-apiserver-172-234-27-175","kube-system/kube-scheduler-172-234-27-175"]
Aug 13 01:36:17.082661 kubelet[2739]: E0813 01:36:17.082229 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4"
Aug 13 01:36:17.082661 kubelet[2739]: E0813 01:36:17.082244 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-79464475b5-bbrtw"
Aug 13 01:36:17.082661 kubelet[2739]: E0813 01:36:17.082254 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-994jv"
Aug 13 01:36:17.082661 kubelet[2739]: E0813 01:36:17.082261 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-mx5v9"
Aug 13 01:36:17.082661 kubelet[2739]: E0813 01:36:17.082269 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-5c47r"
Aug 13 01:36:17.082661 kubelet[2739]: E0813 01:36:17.082277 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-27-175"
Aug 13 01:36:17.082661 kubelet[2739]: E0813 01:36:17.082287 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-kfjpt"
Aug 13 01:36:17.082661 kubelet[2739]: E0813 01:36:17.082297 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-7bj49"
Aug 13 01:36:17.082661 kubelet[2739]: E0813 01:36:17.082305 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-27-175"
Aug 13 01:36:17.082661 kubelet[2739]: E0813 01:36:17.082313 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-27-175"
Aug 13 01:36:17.082661 kubelet[2739]: I0813 01:36:17.082324 2739 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:36:20.803749 kubelet[2739]: E0813 01:36:20.803652 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:36:21.640440 systemd[1]: Started sshd@27-172.234.27.175:22-147.75.109.163:48714.service - OpenSSH per-connection server daemon (147.75.109.163:48714).
Aug 13 01:36:21.797793 kubelet[2739]: E0813 01:36:21.797766 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:36:21.974305 sshd[5886]: Accepted publickey for core from 147.75.109.163 port 48714 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:36:21.976788 sshd-session[5886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:36:21.984375 systemd-logind[1529]: New session 24 of user core.
Aug 13 01:36:21.993280 systemd[1]: Started session-24.scope - Session 24 of User core.
Aug 13 01:36:22.302303 sshd[5888]: Connection closed by 147.75.109.163 port 48714
Aug 13 01:36:22.304380 sshd-session[5886]: pam_unix(sshd:session): session closed for user core
Aug 13 01:36:22.309227 systemd-logind[1529]: Session 24 logged out. Waiting for processes to exit.
Aug 13 01:36:22.310316 systemd[1]: sshd@27-172.234.27.175:22-147.75.109.163:48714.service: Deactivated successfully.
Aug 13 01:36:22.313033 systemd[1]: session-24.scope: Deactivated successfully.
Aug 13 01:36:22.315733 systemd-logind[1529]: Removed session 24.
Aug 13 01:36:22.798150 kubelet[2739]: E0813 01:36:22.797948 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:36:22.799608 kubelet[2739]: E0813 01:36:22.799503 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\"\"" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" podUID="56caad57-6b4a-4069-b011-1059db183012"
Aug 13 01:36:25.439631 containerd[1544]: time="2025-08-13T01:36:25.439588817Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b5a29eb80b0bc5089fcefc178e897c50dab7a245f3568439d3b5679ef6257511\" id:\"1e5c11a6b180b1bd98f582b079eb0b2335cfbd1cb404e9531a18a2ecc4925073\" pid:5912 exited_at:{seconds:1755048985 nanos:439387415}"
Aug 13 01:36:27.106964 kubelet[2739]: I0813 01:36:27.106921 2739 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:36:27.106964 kubelet[2739]: I0813 01:36:27.106958 2739 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:36:27.108759 kubelet[2739]: I0813 01:36:27.108731 2739 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:36:27.125868 kubelet[2739]: I0813 01:36:27.125831 2739 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:36:27.126065 kubelet[2739]: I0813 01:36:27.126049 2739 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-85fbc76f96-d5vf4","calico-system/calico-typha-79464475b5-bbrtw","kube-system/coredns-7c65d6cfc9-994jv","kube-system/coredns-7c65d6cfc9-mx5v9","calico-system/calico-node-5c47r","kube-system/kube-controller-manager-172-234-27-175","kube-system/kube-proxy-kfjpt","calico-system/csi-node-driver-7bj49","kube-system/kube-apiserver-172-234-27-175","kube-system/kube-scheduler-172-234-27-175"]
Aug 13 01:36:27.126200 kubelet[2739]: E0813 01:36:27.126188 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4"
Aug 13 01:36:27.126274 kubelet[2739]: E0813 01:36:27.126264 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-79464475b5-bbrtw"
Aug 13 01:36:27.126322 kubelet[2739]: E0813 01:36:27.126314 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-994jv"
Aug 13 01:36:27.126369 kubelet[2739]: E0813 01:36:27.126362 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-mx5v9"
Aug 13 01:36:27.126420 kubelet[2739]: E0813 01:36:27.126411 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-5c47r"
Aug 13 01:36:27.126471 kubelet[2739]: E0813 01:36:27.126462 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-27-175"
Aug 13 01:36:27.126513 kubelet[2739]: E0813 01:36:27.126505 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-kfjpt"
Aug 13 01:36:27.126590 kubelet[2739]: E0813 01:36:27.126581 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-7bj49"
Aug 13 01:36:27.126700 kubelet[2739]: E0813 01:36:27.126664 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-27-175"
Aug 13 01:36:27.126700 kubelet[2739]: E0813 01:36:27.126680 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-27-175"
Aug 13 01:36:27.126700 kubelet[2739]: I0813 01:36:27.126690 2739 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:36:27.372331 systemd[1]: Started sshd@28-172.234.27.175:22-147.75.109.163:48724.service - OpenSSH per-connection server daemon (147.75.109.163:48724).
Aug 13 01:36:27.720016 sshd[5924]: Accepted publickey for core from 147.75.109.163 port 48724 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:36:27.723055 sshd-session[5924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:36:27.730647 systemd-logind[1529]: New session 25 of user core.
Aug 13 01:36:27.734263 systemd[1]: Started session-25.scope - Session 25 of User core.
Aug 13 01:36:28.051929 sshd[5926]: Connection closed by 147.75.109.163 port 48724
Aug 13 01:36:28.054143 sshd-session[5924]: pam_unix(sshd:session): session closed for user core
Aug 13 01:36:28.058748 systemd-logind[1529]: Session 25 logged out. Waiting for processes to exit.
Aug 13 01:36:28.059545 systemd[1]: sshd@28-172.234.27.175:22-147.75.109.163:48724.service: Deactivated successfully.
Aug 13 01:36:28.062121 systemd[1]: session-25.scope: Deactivated successfully.
Aug 13 01:36:28.064051 systemd-logind[1529]: Removed session 25.
Aug 13 01:36:33.120343 systemd[1]: Started sshd@29-172.234.27.175:22-147.75.109.163:36970.service - OpenSSH per-connection server daemon (147.75.109.163:36970).
Aug 13 01:36:33.465405 sshd[5953]: Accepted publickey for core from 147.75.109.163 port 36970 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:36:33.466236 sshd-session[5953]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:36:33.474195 systemd-logind[1529]: New session 26 of user core.
Aug 13 01:36:33.480270 systemd[1]: Started session-26.scope - Session 26 of User core.
Aug 13 01:36:33.786966 sshd[5955]: Connection closed by 147.75.109.163 port 36970
Aug 13 01:36:33.788311 sshd-session[5953]: pam_unix(sshd:session): session closed for user core
Aug 13 01:36:33.793203 systemd[1]: sshd@29-172.234.27.175:22-147.75.109.163:36970.service: Deactivated successfully.
Aug 13 01:36:33.795099 systemd[1]: session-26.scope: Deactivated successfully.
Aug 13 01:36:33.796815 systemd-logind[1529]: Session 26 logged out. Waiting for processes to exit.
Aug 13 01:36:33.798875 systemd-logind[1529]: Removed session 26.
Aug 13 01:36:36.798525 containerd[1544]: time="2025-08-13T01:36:36.798437692Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\""
Aug 13 01:36:37.149561 kubelet[2739]: I0813 01:36:37.149452 2739 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:36:37.149561 kubelet[2739]: I0813 01:36:37.149489 2739 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:36:37.151615 kubelet[2739]: I0813 01:36:37.151207 2739 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:36:37.169721 kubelet[2739]: I0813 01:36:37.169704 2739 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:36:37.169954 kubelet[2739]: I0813 01:36:37.169911 2739 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-85fbc76f96-d5vf4","calico-system/calico-typha-79464475b5-bbrtw","kube-system/coredns-7c65d6cfc9-994jv","kube-system/coredns-7c65d6cfc9-mx5v9","calico-system/calico-node-5c47r","kube-system/kube-controller-manager-172-234-27-175","kube-system/kube-proxy-kfjpt","calico-system/csi-node-driver-7bj49","kube-system/kube-apiserver-172-234-27-175","kube-system/kube-scheduler-172-234-27-175"]
Aug 13 01:36:37.169954 kubelet[2739]: E0813 01:36:37.169950 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4"
Aug 13 01:36:37.169954 kubelet[2739]: E0813 01:36:37.169964 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-79464475b5-bbrtw"
Aug 13 01:36:37.170215 kubelet[2739]: E0813 01:36:37.169979 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-994jv"
Aug 13 01:36:37.170215 kubelet[2739]: E0813 01:36:37.169993 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-mx5v9"
Aug 13 01:36:37.170215 kubelet[2739]: E0813 01:36:37.170004 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-5c47r"
Aug 13 01:36:37.170215 kubelet[2739]: E0813 01:36:37.170013 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-27-175"
Aug 13 01:36:37.170215 kubelet[2739]: E0813 01:36:37.170021 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-kfjpt"
Aug 13 01:36:37.170215 kubelet[2739]: E0813 01:36:37.170032 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-7bj49"
Aug 13 01:36:37.170215 kubelet[2739]: E0813 01:36:37.170039 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-27-175"
Aug 13 01:36:37.170215 kubelet[2739]: E0813 01:36:37.170047 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-27-175"
Aug 13 01:36:37.170215 kubelet[2739]: I0813 01:36:37.170055 2739 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:36:37.439360 containerd[1544]: time="2025-08-13T01:36:37.439182960Z" level=error msg="failed to cleanup \"extract-228013949-vwEu sha256:8c887db5e1c1509bbe47d7287572f140b60a8c0adc0202d6183f3ae0c5f0b532\"" error="write /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db: no space left on device"
Aug 13 01:36:37.440453 containerd[1544]: time="2025-08-13T01:36:37.439832984Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device"
Aug 13 01:36:37.440510 containerd[1544]: time="2025-08-13T01:36:37.440389297Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=42995946"
Aug 13 01:36:37.440664 kubelet[2739]: E0813 01:36:37.440619 2739 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2"
Aug 13 01:36:37.440755 kubelet[2739]: E0813 01:36:37.440668 2739 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2"
Aug 13 01:36:37.440833 kubelet[2739]: E0813 01:36:37.440781 2739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zwl76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-85fbc76f96-d5vf4_calico-system(56caad57-6b4a-4069-b011-1059db183012): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" logger="UnhandledError"
Aug 13 01:36:37.442179 kubelet[2739]: E0813 01:36:37.442121 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device\"" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" podUID="56caad57-6b4a-4069-b011-1059db183012"
Aug 13 01:36:38.848398 systemd[1]: Started sshd@30-172.234.27.175:22-147.75.109.163:59190.service - OpenSSH per-connection server daemon (147.75.109.163:59190).
Aug 13 01:36:39.183436 sshd[5968]: Accepted publickey for core from 147.75.109.163 port 59190 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:36:39.184693 sshd-session[5968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:36:39.190012 systemd-logind[1529]: New session 27 of user core.
Aug 13 01:36:39.196312 systemd[1]: Started session-27.scope - Session 27 of User core.
Aug 13 01:36:39.505818 sshd[5970]: Connection closed by 147.75.109.163 port 59190
Aug 13 01:36:39.506389 sshd-session[5968]: pam_unix(sshd:session): session closed for user core
Aug 13 01:36:39.511336 systemd-logind[1529]: Session 27 logged out. Waiting for processes to exit.
Aug 13 01:36:39.512099 systemd[1]: sshd@30-172.234.27.175:22-147.75.109.163:59190.service: Deactivated successfully.
Aug 13 01:36:39.514763 systemd[1]: session-27.scope: Deactivated successfully.
Aug 13 01:36:39.516712 systemd-logind[1529]: Removed session 27.
Aug 13 01:36:41.398255 systemd[1]: Started sshd@31-172.234.27.175:22-122.166.49.42:38318.service - OpenSSH per-connection server daemon (122.166.49.42:38318).
Aug 13 01:36:42.492542 systemd[1]: Started sshd@32-172.234.27.175:22-103.186.1.197:44042.service - OpenSSH per-connection server daemon (103.186.1.197:44042).
Aug 13 01:36:42.985186 sshd[5984]: Received disconnect from 122.166.49.42 port 38318:11: Bye Bye [preauth]
Aug 13 01:36:42.985186 sshd[5984]: Disconnected from authenticating user root 122.166.49.42 port 38318 [preauth]
Aug 13 01:36:42.988077 systemd[1]: sshd@31-172.234.27.175:22-122.166.49.42:38318.service: Deactivated successfully.
Aug 13 01:36:44.265304 sshd[5987]: Received disconnect from 103.186.1.197 port 44042:11: Bye Bye [preauth]
Aug 13 01:36:44.265304 sshd[5987]: Disconnected from authenticating user root 103.186.1.197 port 44042 [preauth]
Aug 13 01:36:44.267922 systemd[1]: sshd@32-172.234.27.175:22-103.186.1.197:44042.service: Deactivated successfully.
Aug 13 01:36:44.571683 systemd[1]: Started sshd@33-172.234.27.175:22-147.75.109.163:59194.service - OpenSSH per-connection server daemon (147.75.109.163:59194).
Aug 13 01:36:44.797446 kubelet[2739]: E0813 01:36:44.797408 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:36:44.901586 sshd[5994]: Accepted publickey for core from 147.75.109.163 port 59194 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:36:44.903294 sshd-session[5994]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:36:44.908999 systemd-logind[1529]: New session 28 of user core.
Aug 13 01:36:44.917298 systemd[1]: Started session-28.scope - Session 28 of User core.
Aug 13 01:36:45.217590 sshd[5996]: Connection closed by 147.75.109.163 port 59194
Aug 13 01:36:45.218477 sshd-session[5994]: pam_unix(sshd:session): session closed for user core
Aug 13 01:36:45.222821 systemd[1]: sshd@33-172.234.27.175:22-147.75.109.163:59194.service: Deactivated successfully.
Aug 13 01:36:45.226890 systemd[1]: session-28.scope: Deactivated successfully.
Aug 13 01:36:45.229610 systemd-logind[1529]: Session 28 logged out. Waiting for processes to exit.
Aug 13 01:36:45.231710 systemd-logind[1529]: Removed session 28.
Aug 13 01:36:45.477498 systemd[1]: Started sshd@34-172.234.27.175:22-103.90.226.175:39558.service - OpenSSH per-connection server daemon (103.90.226.175:39558).
Aug 13 01:36:47.192970 kubelet[2739]: I0813 01:36:47.192925 2739 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:36:47.194013 kubelet[2739]: I0813 01:36:47.193391 2739 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:36:47.194856 kubelet[2739]: I0813 01:36:47.194831 2739 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:36:47.209657 kubelet[2739]: I0813 01:36:47.209632 2739 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:36:47.209819 kubelet[2739]: I0813 01:36:47.209797 2739 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-85fbc76f96-d5vf4","calico-system/calico-typha-79464475b5-bbrtw","kube-system/coredns-7c65d6cfc9-994jv","kube-system/coredns-7c65d6cfc9-mx5v9","calico-system/calico-node-5c47r","kube-system/kube-controller-manager-172-234-27-175","kube-system/kube-proxy-kfjpt","calico-system/csi-node-driver-7bj49","kube-system/kube-apiserver-172-234-27-175","kube-system/kube-scheduler-172-234-27-175"]
Aug 13 01:36:47.209908 kubelet[2739]: E0813 01:36:47.209834 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4"
Aug 13 01:36:47.209908 kubelet[2739]: E0813 01:36:47.209850 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-79464475b5-bbrtw"
Aug 13 01:36:47.209908 kubelet[2739]: E0813 01:36:47.209860 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-994jv"
Aug 13 01:36:47.209908 kubelet[2739]: E0813 01:36:47.209869 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-mx5v9"
Aug 13 01:36:47.209908 kubelet[2739]: E0813 01:36:47.209877 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-5c47r"
Aug 13 01:36:47.209908 kubelet[2739]: E0813 01:36:47.209884 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-27-175"
Aug 13 01:36:47.209908 kubelet[2739]: E0813 01:36:47.209892 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-kfjpt"
Aug 13 01:36:47.209908 kubelet[2739]: E0813 01:36:47.209903 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-7bj49"
Aug 13 01:36:47.209908 kubelet[2739]: E0813 01:36:47.209911 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-27-175"
Aug 13 01:36:47.210106 kubelet[2739]: E0813 01:36:47.209920 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-27-175"
Aug 13 01:36:47.210106 kubelet[2739]: I0813 01:36:47.209929 2739 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:36:47.305770 sshd[6009]: Received disconnect from 103.90.226.175 port 39558:11: Bye Bye [preauth]
Aug 13 01:36:47.305770 sshd[6009]: Disconnected from authenticating user root 103.90.226.175 port 39558 [preauth]
Aug 13 01:36:47.308501 systemd[1]: sshd@34-172.234.27.175:22-103.90.226.175:39558.service: Deactivated successfully.
Aug 13 01:36:47.798073 kubelet[2739]: E0813 01:36:47.798013 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\"\"" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" podUID="56caad57-6b4a-4069-b011-1059db183012"
Aug 13 01:36:49.798225 kubelet[2739]: E0813 01:36:49.797957 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:36:50.278482 systemd[1]: Started sshd@35-172.234.27.175:22-147.75.109.163:35462.service - OpenSSH per-connection server daemon (147.75.109.163:35462).
Aug 13 01:36:50.605783 sshd[6016]: Accepted publickey for core from 147.75.109.163 port 35462 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:36:50.608018 sshd-session[6016]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:36:50.615086 systemd-logind[1529]: New session 29 of user core.
Aug 13 01:36:50.619548 systemd[1]: Started session-29.scope - Session 29 of User core.
Aug 13 01:36:50.917214 sshd[6021]: Connection closed by 147.75.109.163 port 35462
Aug 13 01:36:50.918207 sshd-session[6016]: pam_unix(sshd:session): session closed for user core
Aug 13 01:36:50.924384 systemd-logind[1529]: Session 29 logged out. Waiting for processes to exit.
Aug 13 01:36:50.924406 systemd[1]: sshd@35-172.234.27.175:22-147.75.109.163:35462.service: Deactivated successfully.
Aug 13 01:36:50.927887 systemd[1]: session-29.scope: Deactivated successfully.
Aug 13 01:36:50.932006 systemd-logind[1529]: Removed session 29.
Aug 13 01:36:55.433677 containerd[1544]: time="2025-08-13T01:36:55.433616999Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b5a29eb80b0bc5089fcefc178e897c50dab7a245f3568439d3b5679ef6257511\" id:\"045d31cfa1cf3366c338cba25bb403fb177b0bc7a3f5cc2e2709b5cd0348b51c\" pid:6045 exited_at:{seconds:1755049015 nanos:433121066}"
Aug 13 01:36:55.979269 systemd[1]: Started sshd@36-172.234.27.175:22-147.75.109.163:35474.service - OpenSSH per-connection server daemon (147.75.109.163:35474).
Aug 13 01:36:56.314092 sshd[6056]: Accepted publickey for core from 147.75.109.163 port 35474 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:36:56.315469 sshd-session[6056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:36:56.323163 systemd-logind[1529]: New session 30 of user core.
Aug 13 01:36:56.327310 systemd[1]: Started session-30.scope - Session 30 of User core.
Aug 13 01:36:56.622294 sshd[6058]: Connection closed by 147.75.109.163 port 35474
Aug 13 01:36:56.624235 sshd-session[6056]: pam_unix(sshd:session): session closed for user core
Aug 13 01:36:56.630360 systemd-logind[1529]: Session 30 logged out. Waiting for processes to exit.
Aug 13 01:36:56.631282 systemd[1]: sshd@36-172.234.27.175:22-147.75.109.163:35474.service: Deactivated successfully.
Aug 13 01:36:56.633416 systemd[1]: session-30.scope: Deactivated successfully.
Aug 13 01:36:56.635173 systemd-logind[1529]: Removed session 30.
Aug 13 01:36:57.234782 kubelet[2739]: I0813 01:36:57.234753 2739 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:36:57.234782 kubelet[2739]: I0813 01:36:57.234790 2739 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:36:57.235277 kubelet[2739]: I0813 01:36:57.234891 2739 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-85fbc76f96-d5vf4","calico-system/calico-typha-79464475b5-bbrtw","kube-system/coredns-7c65d6cfc9-994jv","kube-system/coredns-7c65d6cfc9-mx5v9","calico-system/calico-node-5c47r","kube-system/kube-controller-manager-172-234-27-175","kube-system/kube-proxy-kfjpt","calico-system/csi-node-driver-7bj49","kube-system/kube-apiserver-172-234-27-175","kube-system/kube-scheduler-172-234-27-175"]
Aug 13 01:36:57.235277 kubelet[2739]: E0813 01:36:57.234918 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4"
Aug 13 01:36:57.235277 kubelet[2739]: E0813 01:36:57.234930 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-79464475b5-bbrtw"
Aug 13 01:36:57.235277 kubelet[2739]: E0813 01:36:57.234938 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-994jv"
Aug 13 01:36:57.235277 kubelet[2739]: E0813 01:36:57.234947 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-mx5v9"
Aug 13 01:36:57.235277 kubelet[2739]: E0813 01:36:57.234955 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-5c47r"
Aug 13 01:36:57.235277 kubelet[2739]: E0813 01:36:57.234962 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-27-175"
Aug 13 01:36:57.235277 kubelet[2739]: E0813 01:36:57.234970 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-kfjpt"
Aug 13 01:36:57.235277 kubelet[2739]: E0813 01:36:57.234982 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-7bj49"
Aug 13 01:36:57.235277 kubelet[2739]: E0813 01:36:57.234990 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-27-175"
Aug 13 01:36:57.235277 kubelet[2739]: E0813 01:36:57.235031 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-27-175"
Aug 13 01:36:57.235277 kubelet[2739]: I0813 01:36:57.235039 2739 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:37:01.682947 systemd[1]: Started sshd@37-172.234.27.175:22-147.75.109.163:55958.service - OpenSSH per-connection server daemon (147.75.109.163:55958).
Aug 13 01:37:02.019902 sshd[6070]: Accepted publickey for core from 147.75.109.163 port 55958 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:37:02.022372 sshd-session[6070]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:37:02.029320 systemd-logind[1529]: New session 31 of user core.
Aug 13 01:37:02.035268 systemd[1]: Started session-31.scope - Session 31 of User core.
Aug 13 01:37:02.334852 sshd[6072]: Connection closed by 147.75.109.163 port 55958
Aug 13 01:37:02.335332 sshd-session[6070]: pam_unix(sshd:session): session closed for user core
Aug 13 01:37:02.340963 systemd-logind[1529]: Session 31 logged out. Waiting for processes to exit.
Aug 13 01:37:02.341760 systemd[1]: sshd@37-172.234.27.175:22-147.75.109.163:55958.service: Deactivated successfully.
Aug 13 01:37:02.344749 systemd[1]: session-31.scope: Deactivated successfully.
Aug 13 01:37:02.346594 systemd-logind[1529]: Removed session 31.
Aug 13 01:37:02.800154 kubelet[2739]: E0813 01:37:02.800061 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\"\"" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" podUID="56caad57-6b4a-4069-b011-1059db183012"
Aug 13 01:37:06.798089 kubelet[2739]: E0813 01:37:06.798049 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:37:07.256414 kubelet[2739]: I0813 01:37:07.256387 2739 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:37:07.256571 kubelet[2739]: I0813 01:37:07.256439 2739 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:37:07.260096 kubelet[2739]: I0813 01:37:07.259384 2739 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:37:07.285801 kubelet[2739]: I0813 01:37:07.285742 2739 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:37:07.285989 kubelet[2739]: I0813 01:37:07.285960 2739 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-85fbc76f96-d5vf4","calico-system/calico-typha-79464475b5-bbrtw","kube-system/coredns-7c65d6cfc9-994jv","kube-system/coredns-7c65d6cfc9-mx5v9","calico-system/calico-node-5c47r","kube-system/kube-controller-manager-172-234-27-175","kube-system/kube-proxy-kfjpt","calico-system/csi-node-driver-7bj49","kube-system/kube-apiserver-172-234-27-175","kube-system/kube-scheduler-172-234-27-175"]
Aug 13 01:37:07.286055 kubelet[2739]: E0813 01:37:07.286017 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4"
Aug 13 01:37:07.286055 kubelet[2739]: E0813 01:37:07.286039 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-79464475b5-bbrtw"
Aug 13 01:37:07.286055 kubelet[2739]: E0813 01:37:07.286049 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-994jv"
Aug 13 01:37:07.286055 kubelet[2739]: E0813 01:37:07.286057 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-mx5v9"
Aug 13 01:37:07.286173 kubelet[2739]: E0813 01:37:07.286084 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-5c47r"
Aug 13 01:37:07.286173 kubelet[2739]: E0813 01:37:07.286101 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-27-175"
Aug 13 01:37:07.286173 kubelet[2739]: E0813 01:37:07.286117 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-kfjpt"
Aug 13 01:37:07.286173 kubelet[2739]: E0813 01:37:07.286148 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-7bj49"
Aug 13 01:37:07.286173 kubelet[2739]: E0813 01:37:07.286161 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-27-175"
Aug 13 01:37:07.286173 kubelet[2739]: E0813 01:37:07.286170 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-27-175"
Aug 13 01:37:07.286173 kubelet[2739]: I0813 01:37:07.286180 2739 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:37:07.402344 systemd[1]: Started sshd@38-172.234.27.175:22-147.75.109.163:55970.service - OpenSSH per-connection server daemon (147.75.109.163:55970).
Aug 13 01:37:07.740713 sshd[6086]: Accepted publickey for core from 147.75.109.163 port 55970 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:37:07.742007 sshd-session[6086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:37:07.747212 systemd-logind[1529]: New session 32 of user core.
Aug 13 01:37:07.756264 systemd[1]: Started session-32.scope - Session 32 of User core.
Aug 13 01:37:08.043787 sshd[6088]: Connection closed by 147.75.109.163 port 55970
Aug 13 01:37:08.044479 sshd-session[6086]: pam_unix(sshd:session): session closed for user core
Aug 13 01:37:08.048127 systemd-logind[1529]: Session 32 logged out. Waiting for processes to exit.
Aug 13 01:37:08.048897 systemd[1]: sshd@38-172.234.27.175:22-147.75.109.163:55970.service: Deactivated successfully.
Aug 13 01:37:08.050820 systemd[1]: session-32.scope: Deactivated successfully.
Aug 13 01:37:08.053295 systemd-logind[1529]: Removed session 32.
Aug 13 01:37:13.111194 systemd[1]: Started sshd@39-172.234.27.175:22-147.75.109.163:44540.service - OpenSSH per-connection server daemon (147.75.109.163:44540).
Aug 13 01:37:13.450295 sshd[6100]: Accepted publickey for core from 147.75.109.163 port 44540 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:37:13.451670 sshd-session[6100]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:37:13.456998 systemd-logind[1529]: New session 33 of user core.
Aug 13 01:37:13.464259 systemd[1]: Started session-33.scope - Session 33 of User core.
Aug 13 01:37:13.759631 sshd[6102]: Connection closed by 147.75.109.163 port 44540
Aug 13 01:37:13.760241 sshd-session[6100]: pam_unix(sshd:session): session closed for user core
Aug 13 01:37:13.765482 systemd-logind[1529]: Session 33 logged out. Waiting for processes to exit.
Aug 13 01:37:13.766253 systemd[1]: sshd@39-172.234.27.175:22-147.75.109.163:44540.service: Deactivated successfully.
Aug 13 01:37:13.770722 systemd[1]: session-33.scope: Deactivated successfully.
Aug 13 01:37:13.772155 systemd-logind[1529]: Removed session 33.
Aug 13 01:37:15.798813 kubelet[2739]: E0813 01:37:15.798759 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\"\"" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" podUID="56caad57-6b4a-4069-b011-1059db183012"
Aug 13 01:37:17.305120 kubelet[2739]: I0813 01:37:17.305090 2739 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:37:17.305120 kubelet[2739]: I0813 01:37:17.305123 2739 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:37:17.305662 kubelet[2739]: I0813 01:37:17.305430 2739 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-85fbc76f96-d5vf4","calico-system/calico-typha-79464475b5-bbrtw","kube-system/coredns-7c65d6cfc9-994jv","kube-system/coredns-7c65d6cfc9-mx5v9","calico-system/calico-node-5c47r","kube-system/kube-controller-manager-172-234-27-175","kube-system/kube-proxy-kfjpt","calico-system/csi-node-driver-7bj49","kube-system/kube-apiserver-172-234-27-175","kube-system/kube-scheduler-172-234-27-175"]
Aug 13 01:37:17.305662 kubelet[2739]: E0813 01:37:17.305459 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4"
Aug 13 01:37:17.305662 kubelet[2739]: E0813 01:37:17.305474 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-79464475b5-bbrtw"
Aug 13 01:37:17.305662 kubelet[2739]: E0813 01:37:17.305483 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-994jv"
Aug 13 01:37:17.305662 kubelet[2739]: E0813 01:37:17.305491 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-mx5v9"
Aug 13 01:37:17.305662 kubelet[2739]: E0813 01:37:17.305498 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-5c47r"
Aug 13 01:37:17.305662 kubelet[2739]: E0813 01:37:17.305506 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-27-175"
Aug 13 01:37:17.305662 kubelet[2739]: E0813 01:37:17.305514 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-kfjpt"
Aug 13 01:37:17.305662 kubelet[2739]: E0813 01:37:17.305523 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-7bj49"
Aug 13 01:37:17.305662 kubelet[2739]: E0813 01:37:17.305531 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-27-175"
Aug 13 01:37:17.305662 kubelet[2739]: E0813 01:37:17.305539 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-27-175"
Aug 13 01:37:17.305662 kubelet[2739]: I0813 01:37:17.305547 2739 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:37:18.827335 systemd[1]: Started sshd@40-172.234.27.175:22-147.75.109.163:34076.service - OpenSSH per-connection server daemon (147.75.109.163:34076).
Aug 13 01:37:19.155260 sshd[6116]: Accepted publickey for core from 147.75.109.163 port 34076 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:37:19.157513 sshd-session[6116]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:37:19.165004 systemd-logind[1529]: New session 34 of user core.
Aug 13 01:37:19.168273 systemd[1]: Started session-34.scope - Session 34 of User core.
Aug 13 01:37:19.463952 sshd[6118]: Connection closed by 147.75.109.163 port 34076
Aug 13 01:37:19.464567 sshd-session[6116]: pam_unix(sshd:session): session closed for user core
Aug 13 01:37:19.470299 systemd-logind[1529]: Session 34 logged out. Waiting for processes to exit.
Aug 13 01:37:19.471680 systemd[1]: sshd@40-172.234.27.175:22-147.75.109.163:34076.service: Deactivated successfully.
Aug 13 01:37:19.476113 systemd[1]: session-34.scope: Deactivated successfully.
Aug 13 01:37:19.480610 systemd-logind[1529]: Removed session 34.
Aug 13 01:37:22.798161 kubelet[2739]: E0813 01:37:22.797966 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:37:24.529832 systemd[1]: Started sshd@41-172.234.27.175:22-147.75.109.163:34086.service - OpenSSH per-connection server daemon (147.75.109.163:34086).
Aug 13 01:37:24.880358 sshd[6129]: Accepted publickey for core from 147.75.109.163 port 34086 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:37:24.882292 sshd-session[6129]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:37:24.889597 systemd-logind[1529]: New session 35 of user core.
Aug 13 01:37:24.894268 systemd[1]: Started session-35.scope - Session 35 of User core.
Aug 13 01:37:25.212579 sshd[6131]: Connection closed by 147.75.109.163 port 34086
Aug 13 01:37:25.214169 sshd-session[6129]: pam_unix(sshd:session): session closed for user core
Aug 13 01:37:25.218012 systemd[1]: sshd@41-172.234.27.175:22-147.75.109.163:34086.service: Deactivated successfully.
Aug 13 01:37:25.223402 systemd[1]: session-35.scope: Deactivated successfully.
Aug 13 01:37:25.225387 systemd-logind[1529]: Session 35 logged out. Waiting for processes to exit.
Aug 13 01:37:25.229593 systemd-logind[1529]: Removed session 35.
Aug 13 01:37:25.438801 containerd[1544]: time="2025-08-13T01:37:25.438747977Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b5a29eb80b0bc5089fcefc178e897c50dab7a245f3568439d3b5679ef6257511\" id:\"b7d8cb6759ce8190336af3d1d498fcf4dbcc891d344ede891ec8c3fb410c79fb\" pid:6155 exited_at:{seconds:1755049045 nanos:437802963}"
Aug 13 01:37:27.336425 kubelet[2739]: I0813 01:37:27.336383 2739 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:37:27.336425 kubelet[2739]: I0813 01:37:27.336421 2739 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:37:27.338319 kubelet[2739]: I0813 01:37:27.338292 2739 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:37:27.357236 kubelet[2739]: I0813 01:37:27.356909 2739 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:37:27.357236 kubelet[2739]: I0813 01:37:27.357048 2739 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-85fbc76f96-d5vf4","calico-system/calico-typha-79464475b5-bbrtw","kube-system/coredns-7c65d6cfc9-994jv","kube-system/coredns-7c65d6cfc9-mx5v9","calico-system/calico-node-5c47r","kube-system/kube-controller-manager-172-234-27-175","kube-system/kube-proxy-kfjpt","calico-system/csi-node-driver-7bj49","kube-system/kube-apiserver-172-234-27-175","kube-system/kube-scheduler-172-234-27-175"]
Aug 13 01:37:27.357236 kubelet[2739]: E0813 01:37:27.357075 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4"
Aug 13 01:37:27.357236 kubelet[2739]: E0813 01:37:27.357090 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-79464475b5-bbrtw"
Aug 13 01:37:27.357236 kubelet[2739]: E0813 01:37:27.357098 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-994jv"
Aug 13 01:37:27.357236 kubelet[2739]: E0813 01:37:27.357108 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-mx5v9"
Aug 13 01:37:27.357236 kubelet[2739]: E0813 01:37:27.357115 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-5c47r"
Aug 13 01:37:27.357236 kubelet[2739]: E0813 01:37:27.357123 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-27-175"
Aug 13 01:37:27.357236 kubelet[2739]: E0813 01:37:27.357147 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-kfjpt"
Aug 13 01:37:27.357236 kubelet[2739]: E0813 01:37:27.357157 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-7bj49"
Aug 13 01:37:27.357236 kubelet[2739]: E0813 01:37:27.357164 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-27-175"
Aug 13 01:37:27.357236 kubelet[2739]: E0813 01:37:27.357172 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-27-175"
Aug 13 01:37:27.357236 kubelet[2739]: I0813 01:37:27.357181 2739 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:37:30.272476 systemd[1]: Started sshd@42-172.234.27.175:22-147.75.109.163:40086.service - OpenSSH per-connection server daemon (147.75.109.163:40086).
Aug 13 01:37:30.609697 sshd[6167]: Accepted publickey for core from 147.75.109.163 port 40086 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:37:30.611935 sshd-session[6167]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:37:30.617570 systemd-logind[1529]: New session 36 of user core.
Aug 13 01:37:30.622268 systemd[1]: Started session-36.scope - Session 36 of User core.
Aug 13 01:37:30.805414 kubelet[2739]: E0813 01:37:30.805375 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\"\"" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" podUID="56caad57-6b4a-4069-b011-1059db183012"
Aug 13 01:37:30.931653 sshd[6169]: Connection closed by 147.75.109.163 port 40086
Aug 13 01:37:30.932598 sshd-session[6167]: pam_unix(sshd:session): session closed for user core
Aug 13 01:37:30.937791 systemd-logind[1529]: Session 36 logged out. Waiting for processes to exit.
Aug 13 01:37:30.938739 systemd[1]: sshd@42-172.234.27.175:22-147.75.109.163:40086.service: Deactivated successfully.
Aug 13 01:37:30.940778 systemd[1]: session-36.scope: Deactivated successfully.
Aug 13 01:37:30.942771 systemd-logind[1529]: Removed session 36.
Aug 13 01:37:32.799163 kubelet[2739]: E0813 01:37:32.798323 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:37:35.995482 systemd[1]: Started sshd@43-172.234.27.175:22-147.75.109.163:40102.service - OpenSSH per-connection server daemon (147.75.109.163:40102).
Aug 13 01:37:36.334599 sshd[6181]: Accepted publickey for core from 147.75.109.163 port 40102 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:37:36.337013 sshd-session[6181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:37:36.342808 systemd-logind[1529]: New session 37 of user core.
Aug 13 01:37:36.347269 systemd[1]: Started session-37.scope - Session 37 of User core.
Aug 13 01:37:36.645930 containerd[1544]: time="2025-08-13T01:37:36.645756118Z" level=warning msg="container event discarded" container=ff5eed12ff9911f7290032159a5579d26de0de2dfab28d7fdefd59244ba1684e type=CONTAINER_CREATED_EVENT
Aug 13 01:37:36.650711 sshd[6183]: Connection closed by 147.75.109.163 port 40102
Aug 13 01:37:36.652287 sshd-session[6181]: pam_unix(sshd:session): session closed for user core
Aug 13 01:37:36.656912 systemd[1]: sshd@43-172.234.27.175:22-147.75.109.163:40102.service: Deactivated successfully.
Aug 13 01:37:36.657110 containerd[1544]: time="2025-08-13T01:37:36.657046012Z" level=warning msg="container event discarded" container=ff5eed12ff9911f7290032159a5579d26de0de2dfab28d7fdefd59244ba1684e type=CONTAINER_STARTED_EVENT
Aug 13 01:37:36.659608 systemd[1]: session-37.scope: Deactivated successfully.
Aug 13 01:37:36.660519 systemd-logind[1529]: Session 37 logged out. Waiting for processes to exit.
Aug 13 01:37:36.662627 systemd-logind[1529]: Removed session 37.
Aug 13 01:37:36.680419 containerd[1544]: time="2025-08-13T01:37:36.680265331Z" level=warning msg="container event discarded" container=73121c63989692fa6b1e26cb9716c059d8a7a659d97ee473a7e4c13f6a375193 type=CONTAINER_CREATED_EVENT
Aug 13 01:37:36.680419 containerd[1544]: time="2025-08-13T01:37:36.680316111Z" level=warning msg="container event discarded" container=73121c63989692fa6b1e26cb9716c059d8a7a659d97ee473a7e4c13f6a375193 type=CONTAINER_STARTED_EVENT
Aug 13 01:37:36.680419 containerd[1544]: time="2025-08-13T01:37:36.680326031Z" level=warning msg="container event discarded" container=9911d22eb4bc92da55dbd4e7eba01692e6e22c9e0b49f66cbdd010ddd4200681 type=CONTAINER_CREATED_EVENT
Aug 13 01:37:36.708529 containerd[1544]: time="2025-08-13T01:37:36.708490329Z" level=warning msg="container event discarded" container=be17d71119d0300d5fbb86a8d436fad65f3a0945a651f7f10fe85cfe4dada3b0 type=CONTAINER_CREATED_EVENT
Aug 13 01:37:36.719711 containerd[1544]: time="2025-08-13T01:37:36.719657292Z" level=warning msg="container event discarded" container=c5c4b572d07b5586469a38582952fbfb9aae352ffacee9799ef87b03c4709b21 type=CONTAINER_CREATED_EVENT
Aug 13 01:37:36.719711 containerd[1544]: time="2025-08-13T01:37:36.719690302Z" level=warning msg="container event discarded" container=c5c4b572d07b5586469a38582952fbfb9aae352ffacee9799ef87b03c4709b21 type=CONTAINER_STARTED_EVENT
Aug 13 01:37:36.741043 containerd[1544]: time="2025-08-13T01:37:36.741011954Z" level=warning msg="container event discarded" container=6a18018c7f1d7000fe773ee700f28b45b005f3998d12b9d5f603197603e568ff type=CONTAINER_CREATED_EVENT
Aug 13 01:37:36.830145 containerd[1544]: time="2025-08-13T01:37:36.830070857Z" level=warning msg="container event discarded" container=9911d22eb4bc92da55dbd4e7eba01692e6e22c9e0b49f66cbdd010ddd4200681 type=CONTAINER_STARTED_EVENT
Aug 13 01:37:36.859446 containerd[1544]: time="2025-08-13T01:37:36.859410819Z" level=warning msg="container event discarded" container=be17d71119d0300d5fbb86a8d436fad65f3a0945a651f7f10fe85cfe4dada3b0 type=CONTAINER_STARTED_EVENT
Aug 13 01:37:36.900725 containerd[1544]: time="2025-08-13T01:37:36.900592747Z" level=warning msg="container event discarded" container=6a18018c7f1d7000fe773ee700f28b45b005f3998d12b9d5f603197603e568ff type=CONTAINER_STARTED_EVENT
Aug 13 01:37:37.384920 kubelet[2739]: I0813 01:37:37.384891 2739 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:37:37.385846 kubelet[2739]: I0813 01:37:37.385529 2739 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:37:37.388007 kubelet[2739]: I0813 01:37:37.387981 2739 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:37:37.407362 kubelet[2739]: I0813 01:37:37.407326 2739 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:37:37.407467 kubelet[2739]: I0813 01:37:37.407449 2739 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-85fbc76f96-d5vf4","calico-system/calico-typha-79464475b5-bbrtw","kube-system/coredns-7c65d6cfc9-994jv","kube-system/coredns-7c65d6cfc9-mx5v9","calico-system/calico-node-5c47r","kube-system/kube-controller-manager-172-234-27-175","kube-system/kube-proxy-kfjpt","calico-system/csi-node-driver-7bj49","kube-system/kube-apiserver-172-234-27-175","kube-system/kube-scheduler-172-234-27-175"]
Aug 13 01:37:37.407535 kubelet[2739]: E0813 01:37:37.407479 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4"
Aug 13 01:37:37.407535 kubelet[2739]: E0813 01:37:37.407492 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-79464475b5-bbrtw"
Aug 13 01:37:37.407535 kubelet[2739]: E0813 01:37:37.407501 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-994jv"
Aug 13 01:37:37.407535 kubelet[2739]: E0813 01:37:37.407509 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-mx5v9"
Aug 13 01:37:37.407535 kubelet[2739]: E0813 01:37:37.407517 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-5c47r"
Aug 13 01:37:37.407535 kubelet[2739]: E0813 01:37:37.407526 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-27-175"
Aug 13 01:37:37.407535 kubelet[2739]: E0813 01:37:37.407535 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-kfjpt"
Aug 13 01:37:37.407865 kubelet[2739]: E0813 01:37:37.407546 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-7bj49"
Aug 13 01:37:37.407865 kubelet[2739]: E0813 01:37:37.407555 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-27-175"
Aug 13 01:37:37.407865 kubelet[2739]: E0813 01:37:37.407563 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-27-175"
Aug 13 01:37:37.407865 kubelet[2739]: I0813 01:37:37.407572 2739 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:37:37.797360 kubelet[2739]: E0813 01:37:37.797330 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:37:40.794565 kubelet[2739]: I0813 01:37:40.794525 2739 image_gc_manager.go:383] "Disk usage on image filesystem is over the high threshold, trying to free bytes down to the low threshold" usage=100 highThreshold=85 amountToFree=411531673 lowThreshold=80
Aug 13 01:37:40.794565 kubelet[2739]: E0813 01:37:40.794558 2739 kubelet.go:1474] "Image garbage collection failed multiple times in a row" err="Failed to garbage collect required amount of images. Attempted to free 411531673 bytes, but only found 0 bytes eligible to free."
Aug 13 01:37:41.717675 systemd[1]: Started sshd@44-172.234.27.175:22-147.75.109.163:36452.service - OpenSSH per-connection server daemon (147.75.109.163:36452).
Aug 13 01:37:42.055420 sshd[6202]: Accepted publickey for core from 147.75.109.163 port 36452 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:37:42.056570 sshd-session[6202]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:37:42.061946 systemd-logind[1529]: New session 38 of user core.
Aug 13 01:37:42.064464 systemd[1]: Started session-38.scope - Session 38 of User core.
Aug 13 01:37:42.374630 sshd[6204]: Connection closed by 147.75.109.163 port 36452
Aug 13 01:37:42.375668 sshd-session[6202]: pam_unix(sshd:session): session closed for user core
Aug 13 01:37:42.382400 systemd-logind[1529]: Session 38 logged out. Waiting for processes to exit.
Aug 13 01:37:42.383762 systemd[1]: sshd@44-172.234.27.175:22-147.75.109.163:36452.service: Deactivated successfully.
Aug 13 01:37:42.386035 systemd[1]: session-38.scope: Deactivated successfully.
Aug 13 01:37:42.388024 systemd-logind[1529]: Removed session 38.
Aug 13 01:37:42.800751 kubelet[2739]: E0813 01:37:42.800571 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:37:45.798060 kubelet[2739]: E0813 01:37:45.798014 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\"\"" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" podUID="56caad57-6b4a-4069-b011-1059db183012"
Aug 13 01:37:47.434421 kubelet[2739]: I0813 01:37:47.434371 2739 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:37:47.434421 kubelet[2739]: I0813 01:37:47.434414 2739 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:37:47.436184 kubelet[2739]: I0813 01:37:47.436093 2739 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:37:47.439673 systemd[1]: Started sshd@45-172.234.27.175:22-147.75.109.163:36462.service - OpenSSH per-connection server daemon (147.75.109.163:36462).
Aug 13 01:37:47.473610 kubelet[2739]: I0813 01:37:47.473591 2739 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:37:47.474367 kubelet[2739]: I0813 01:37:47.474341 2739 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-85fbc76f96-d5vf4","calico-system/calico-typha-79464475b5-bbrtw","kube-system/coredns-7c65d6cfc9-994jv","kube-system/coredns-7c65d6cfc9-mx5v9","calico-system/calico-node-5c47r","kube-system/kube-controller-manager-172-234-27-175","kube-system/kube-proxy-kfjpt","calico-system/csi-node-driver-7bj49","kube-system/kube-apiserver-172-234-27-175","kube-system/kube-scheduler-172-234-27-175"]
Aug 13 01:37:47.474722 kubelet[2739]: E0813 01:37:47.474633 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4"
Aug 13 01:37:47.474874 kubelet[2739]: E0813 01:37:47.474777 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-79464475b5-bbrtw"
Aug 13 01:37:47.474874 kubelet[2739]: E0813 01:37:47.474792 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-994jv"
Aug 13 01:37:47.474874 kubelet[2739]: E0813 01:37:47.474802 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-mx5v9"
Aug 13 01:37:47.474874 kubelet[2739]: E0813 01:37:47.474811 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-5c47r"
Aug 13 01:37:47.474874 kubelet[2739]: E0813 01:37:47.474819 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-27-175"
Aug 13 01:37:47.475278 kubelet[2739]: E0813 01:37:47.474964 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-kfjpt"
Aug 13 01:37:47.475278 kubelet[2739]: E0813 01:37:47.474980 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-7bj49"
Aug 13 01:37:47.475278 kubelet[2739]: E0813 01:37:47.475106 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-27-175"
Aug 13 01:37:47.475278 kubelet[2739]: E0813 01:37:47.475121 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-27-175"
Aug 13 01:37:47.475278 kubelet[2739]: I0813 01:37:47.475252 2739 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:37:47.790359 sshd[6218]: Accepted publickey for core from 147.75.109.163 port 36462 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:37:47.791723 sshd-session[6218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:37:47.797719 systemd-logind[1529]: New session 39 of user core.
Aug 13 01:37:47.801275 systemd[1]: Started session-39.scope - Session 39 of User core.
Aug 13 01:37:48.100851 sshd[6220]: Connection closed by 147.75.109.163 port 36462
Aug 13 01:37:48.101794 sshd-session[6218]: pam_unix(sshd:session): session closed for user core
Aug 13 01:37:48.105182 systemd[1]: sshd@45-172.234.27.175:22-147.75.109.163:36462.service: Deactivated successfully.
Aug 13 01:37:48.107221 systemd[1]: session-39.scope: Deactivated successfully.
Aug 13 01:37:48.109593 systemd-logind[1529]: Session 39 logged out. Waiting for processes to exit.
Aug 13 01:37:48.110850 systemd-logind[1529]: Removed session 39.
Aug 13 01:37:48.489718 containerd[1544]: time="2025-08-13T01:37:48.489610084Z" level=warning msg="container event discarded" container=0a8761380b0b839098631d41a2aec1349397d8cf41bbf35d6a28aa6e4e40e81d type=CONTAINER_CREATED_EVENT
Aug 13 01:37:48.489718 containerd[1544]: time="2025-08-13T01:37:48.489704604Z" level=warning msg="container event discarded" container=0a8761380b0b839098631d41a2aec1349397d8cf41bbf35d6a28aa6e4e40e81d type=CONTAINER_STARTED_EVENT
Aug 13 01:37:48.507934 containerd[1544]: time="2025-08-13T01:37:48.507904358Z" level=warning msg="container event discarded" container=e9303afc77f4660bad5cd327e886b5f728daebd435ff30850f78ca8d2f80efbd type=CONTAINER_CREATED_EVENT
Aug 13 01:37:48.588251 containerd[1544]: time="2025-08-13T01:37:48.588190811Z" level=warning msg="container event discarded" container=e9303afc77f4660bad5cd327e886b5f728daebd435ff30850f78ca8d2f80efbd type=CONTAINER_STARTED_EVENT
Aug 13 01:37:48.630529 containerd[1544]: time="2025-08-13T01:37:48.630467860Z" level=warning msg="container event discarded" container=668529f6fd73ada3cc9287699d05d5bba60cbe442981cb14c0b6de5f7d3dc4a0 type=CONTAINER_CREATED_EVENT
Aug 13 01:37:48.630529 containerd[1544]: time="2025-08-13T01:37:48.630518380Z" level=warning msg="container event discarded" container=668529f6fd73ada3cc9287699d05d5bba60cbe442981cb14c0b6de5f7d3dc4a0 type=CONTAINER_STARTED_EVENT
Aug 13 01:37:50.901772 containerd[1544]: time="2025-08-13T01:37:50.901707244Z" level=warning msg="container event discarded" container=df8d1a77736586c1f4aa872648e945c5f97799bd041785ac26d4703233746dff type=CONTAINER_CREATED_EVENT
Aug 13 01:37:50.978189 containerd[1544]: time="2025-08-13T01:37:50.978078369Z" level=warning msg="container event discarded" container=df8d1a77736586c1f4aa872648e945c5f97799bd041785ac26d4703233746dff type=CONTAINER_STARTED_EVENT
Aug 13 01:37:51.797466 kubelet[2739]: E0813 01:37:51.797432 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:37:53.164620 systemd[1]: Started sshd@46-172.234.27.175:22-147.75.109.163:57632.service - OpenSSH per-connection server daemon (147.75.109.163:57632).
Aug 13 01:37:53.507088 sshd[6234]: Accepted publickey for core from 147.75.109.163 port 57632 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:37:53.508794 sshd-session[6234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:37:53.515059 systemd-logind[1529]: New session 40 of user core.
Aug 13 01:37:53.520307 systemd[1]: Started session-40.scope - Session 40 of User core.
Aug 13 01:37:53.842695 sshd[6236]: Connection closed by 147.75.109.163 port 57632
Aug 13 01:37:53.844339 sshd-session[6234]: pam_unix(sshd:session): session closed for user core
Aug 13 01:37:53.849830 systemd[1]: sshd@46-172.234.27.175:22-147.75.109.163:57632.service: Deactivated successfully.
Aug 13 01:37:53.850287 systemd-logind[1529]: Session 40 logged out. Waiting for processes to exit.
Aug 13 01:37:53.852630 systemd[1]: session-40.scope: Deactivated successfully.
Aug 13 01:37:53.854717 systemd-logind[1529]: Removed session 40.
Aug 13 01:37:55.436463 containerd[1544]: time="2025-08-13T01:37:55.436244734Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b5a29eb80b0bc5089fcefc178e897c50dab7a245f3568439d3b5679ef6257511\" id:\"fcdd1a7e2997673fc7cbb6129a6a3d5b20e9bd960b75c951418dbbdb872239ee\" pid:6263 exited_at:{seconds:1755049075 nanos:434877279}"
Aug 13 01:37:57.337813 systemd[1]: Started sshd@47-172.234.27.175:22-122.166.49.42:35770.service - OpenSSH per-connection server daemon (122.166.49.42:35770).
Aug 13 01:37:57.497811 kubelet[2739]: I0813 01:37:57.497765 2739 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:37:57.497811 kubelet[2739]: I0813 01:37:57.497796 2739 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:37:57.499624 kubelet[2739]: I0813 01:37:57.499607 2739 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:37:57.514841 kubelet[2739]: I0813 01:37:57.514814 2739 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:37:57.515004 kubelet[2739]: I0813 01:37:57.514973 2739 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-85fbc76f96-d5vf4","calico-system/calico-typha-79464475b5-bbrtw","kube-system/coredns-7c65d6cfc9-994jv","kube-system/coredns-7c65d6cfc9-mx5v9","calico-system/calico-node-5c47r","kube-system/kube-controller-manager-172-234-27-175","kube-system/kube-proxy-kfjpt","calico-system/csi-node-driver-7bj49","kube-system/kube-apiserver-172-234-27-175","kube-system/kube-scheduler-172-234-27-175"]
Aug 13 01:37:57.515080 kubelet[2739]: E0813 01:37:57.515008 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4"
Aug 13 01:37:57.515080 kubelet[2739]: E0813 01:37:57.515026 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-79464475b5-bbrtw"
Aug 13 01:37:57.515080 kubelet[2739]: E0813 01:37:57.515036 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-994jv"
Aug 13 01:37:57.515080 kubelet[2739]: E0813 01:37:57.515044 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-mx5v9"
Aug 13 01:37:57.515080 kubelet[2739]: E0813 01:37:57.515051 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-5c47r"
Aug 13 01:37:57.515080 kubelet[2739]: E0813 01:37:57.515059 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-27-175"
Aug 13 01:37:57.515080 kubelet[2739]: E0813 01:37:57.515067 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-kfjpt"
Aug 13 01:37:57.515080 kubelet[2739]: E0813 01:37:57.515078 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-7bj49"
Aug 13 01:37:57.515080 kubelet[2739]: E0813 01:37:57.515085 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-27-175"
Aug 13 01:37:57.515080 kubelet[2739]: E0813 01:37:57.515094 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-27-175"
Aug 13 01:37:57.515080 kubelet[2739]: I0813 01:37:57.515102 2739 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:37:58.799581 containerd[1544]: time="2025-08-13T01:37:58.799527356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\""
Aug 13 01:37:58.881159 sshd[6276]: Received disconnect from 122.166.49.42 port 35770:11: Bye Bye [preauth]
Aug 13 01:37:58.881159 sshd[6276]: Disconnected from authenticating user root 122.166.49.42 port 35770 [preauth]
Aug 13 01:37:58.884005 systemd[1]: sshd@47-172.234.27.175:22-122.166.49.42:35770.service: Deactivated successfully.
Aug 13 01:37:58.907505 systemd[1]: Started sshd@48-172.234.27.175:22-147.75.109.163:59348.service - OpenSSH per-connection server daemon (147.75.109.163:59348).
Aug 13 01:37:59.255841 sshd[6281]: Accepted publickey for core from 147.75.109.163 port 59348 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:37:59.257796 sshd-session[6281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:37:59.263202 systemd-logind[1529]: New session 41 of user core.
Aug 13 01:37:59.269505 systemd[1]: Started session-41.scope - Session 41 of User core.
Aug 13 01:37:59.492733 containerd[1544]: time="2025-08-13T01:37:59.492628029Z" level=error msg="failed to cleanup \"extract-293726331-FGVC sha256:8c887db5e1c1509bbe47d7287572f140b60a8c0adc0202d6183f3ae0c5f0b532\"" error="write /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db: no space left on device"
Aug 13 01:37:59.493895 containerd[1544]: time="2025-08-13T01:37:59.493769763Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device"
Aug 13 01:37:59.494492 containerd[1544]: time="2025-08-13T01:37:59.493977694Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=42995946"
Aug 13 01:37:59.494617 kubelet[2739]: E0813 01:37:59.494577 2739 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2"
Aug 13 01:37:59.495356 kubelet[2739]: E0813 01:37:59.495012 2739 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2"
Aug 13 01:37:59.496157 kubelet[2739]: E0813 01:37:59.495909 2739 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zwl76,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-85fbc76f96-d5vf4_calico-system(56caad57-6b4a-4069-b011-1059db183012): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" logger="UnhandledError"
Aug 13 01:37:59.497411 kubelet[2739]: E0813 01:37:59.497378 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device\"" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" podUID="56caad57-6b4a-4069-b011-1059db183012"
Aug 13 01:37:59.594204 sshd[6283]: Connection closed by 147.75.109.163 port 59348
Aug 13 01:37:59.594770 sshd-session[6281]: pam_unix(sshd:session): session closed for user core
Aug 13 01:37:59.602050 systemd-logind[1529]: Session 41 logged out. Waiting for processes to exit.
Aug 13 01:37:59.604960 systemd[1]: sshd@48-172.234.27.175:22-147.75.109.163:59348.service: Deactivated successfully.
Aug 13 01:37:59.609209 systemd[1]: session-41.scope: Deactivated successfully.
Aug 13 01:37:59.612915 systemd-logind[1529]: Removed session 41.
Aug 13 01:38:00.240916 containerd[1544]: time="2025-08-13T01:38:00.240842602Z" level=warning msg="container event discarded" container=1303950867a319732051ad14e6c2e348c995301e491368e9b9c48e10541ac549 type=CONTAINER_CREATED_EVENT
Aug 13 01:38:00.241371 containerd[1544]: time="2025-08-13T01:38:00.240927502Z" level=warning msg="container event discarded" container=1303950867a319732051ad14e6c2e348c995301e491368e9b9c48e10541ac549 type=CONTAINER_STARTED_EVENT
Aug 13 01:38:00.692193 containerd[1544]: time="2025-08-13T01:38:00.692047283Z" level=warning msg="container event discarded" container=c240561ffd890c1b5476094a7248023a31db54fc62397aa7089467e118977fc3 type=CONTAINER_CREATED_EVENT
Aug 13 01:38:00.692193 containerd[1544]: time="2025-08-13T01:38:00.692086903Z" level=warning msg="container event discarded" container=c240561ffd890c1b5476094a7248023a31db54fc62397aa7089467e118977fc3 type=CONTAINER_STARTED_EVENT
Aug 13 01:38:01.643453 containerd[1544]: time="2025-08-13T01:38:01.643373870Z" level=warning msg="container event discarded" container=2ecd7b6f28e681a31582a13f29c2f00ddf3e436ab8f1098d193e3c5053bf5a55 type=CONTAINER_CREATED_EVENT
Aug 13 01:38:01.743800 containerd[1544]: time="2025-08-13T01:38:01.743731543Z" level=warning msg="container event discarded" container=2ecd7b6f28e681a31582a13f29c2f00ddf3e436ab8f1098d193e3c5053bf5a55 type=CONTAINER_STARTED_EVENT
Aug 13 01:38:02.450741 containerd[1544]: time="2025-08-13T01:38:02.450681767Z" level=warning msg="container event discarded" container=4b3a30b7a287826ac1295bcea5cc787ba5fe5d980f667908848f0efe25748812 type=CONTAINER_CREATED_EVENT
Aug 13 01:38:02.532874 containerd[1544]: time="2025-08-13T01:38:02.532802310Z" level=warning msg="container event discarded" container=4b3a30b7a287826ac1295bcea5cc787ba5fe5d980f667908848f0efe25748812 type=CONTAINER_STARTED_EVENT
Aug 13 01:38:02.637091 containerd[1544]: time="2025-08-13T01:38:02.637054263Z" level=warning msg="container event discarded" container=4b3a30b7a287826ac1295bcea5cc787ba5fe5d980f667908848f0efe25748812 type=CONTAINER_STOPPED_EVENT
Aug 13 01:38:03.010819 systemd[1]: Started sshd@49-172.234.27.175:22-103.186.1.197:44890.service - OpenSSH per-connection server daemon (103.186.1.197:44890).
Aug 13 01:38:04.659369 systemd[1]: Started sshd@50-172.234.27.175:22-147.75.109.163:59350.service - OpenSSH per-connection server daemon (147.75.109.163:59350).
Aug 13 01:38:04.787262 sshd[6305]: Received disconnect from 103.186.1.197 port 44890:11: Bye Bye [preauth]
Aug 13 01:38:04.787262 sshd[6305]: Disconnected from authenticating user root 103.186.1.197 port 44890 [preauth]
Aug 13 01:38:04.791380 systemd[1]: sshd@49-172.234.27.175:22-103.186.1.197:44890.service: Deactivated successfully.
Aug 13 01:38:04.798719 kubelet[2739]: E0813 01:38:04.798676 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:38:05.010608 sshd[6308]: Accepted publickey for core from 147.75.109.163 port 59350 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:38:05.012833 sshd-session[6308]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:38:05.019394 systemd-logind[1529]: New session 42 of user core.
Aug 13 01:38:05.024263 systemd[1]: Started session-42.scope - Session 42 of User core.
Aug 13 01:38:05.106839 containerd[1544]: time="2025-08-13T01:38:05.106776274Z" level=warning msg="container event discarded" container=a65e6689d6a03dc3711ceb587b55b5c1c1eebf060c9380a11700607c63d24ebe type=CONTAINER_CREATED_EVENT
Aug 13 01:38:05.196422 containerd[1544]: time="2025-08-13T01:38:05.196373845Z" level=warning msg="container event discarded" container=a65e6689d6a03dc3711ceb587b55b5c1c1eebf060c9380a11700607c63d24ebe type=CONTAINER_STARTED_EVENT
Aug 13 01:38:05.335934 sshd[6317]: Connection closed by 147.75.109.163 port 59350
Aug 13 01:38:05.336528 sshd-session[6308]: pam_unix(sshd:session): session closed for user core
Aug 13 01:38:05.341192 systemd[1]: sshd@50-172.234.27.175:22-147.75.109.163:59350.service: Deactivated successfully.
Aug 13 01:38:05.343557 systemd[1]: session-42.scope: Deactivated successfully.
Aug 13 01:38:05.344995 systemd-logind[1529]: Session 42 logged out. Waiting for processes to exit.
Aug 13 01:38:05.346457 systemd-logind[1529]: Removed session 42.
Aug 13 01:38:05.822522 containerd[1544]: time="2025-08-13T01:38:05.822453201Z" level=warning msg="container event discarded" container=a65e6689d6a03dc3711ceb587b55b5c1c1eebf060c9380a11700607c63d24ebe type=CONTAINER_STOPPED_EVENT
Aug 13 01:38:07.535788 kubelet[2739]: I0813 01:38:07.535751 2739 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:38:07.535788 kubelet[2739]: I0813 01:38:07.535791 2739 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:38:07.537735 kubelet[2739]: I0813 01:38:07.537692 2739 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:38:07.551430 kubelet[2739]: I0813 01:38:07.551399 2739 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:38:07.551554 kubelet[2739]: I0813 01:38:07.551532 2739 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-85fbc76f96-d5vf4","calico-system/calico-typha-79464475b5-bbrtw","kube-system/coredns-7c65d6cfc9-mx5v9","kube-system/coredns-7c65d6cfc9-994jv","calico-system/calico-node-5c47r","kube-system/kube-controller-manager-172-234-27-175","kube-system/kube-proxy-kfjpt","calico-system/csi-node-driver-7bj49","kube-system/kube-apiserver-172-234-27-175","kube-system/kube-scheduler-172-234-27-175"]
Aug 13 01:38:07.551670 kubelet[2739]: E0813 01:38:07.551567 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4"
Aug 13 01:38:07.551670 kubelet[2739]: E0813 01:38:07.551582 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-79464475b5-bbrtw"
Aug 13 01:38:07.551670 kubelet[2739]: E0813 01:38:07.551591 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-mx5v9"
Aug 13 01:38:07.551670 kubelet[2739]: E0813 01:38:07.551600 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-994jv"
Aug 13 01:38:07.551670 kubelet[2739]: E0813 01:38:07.551608 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-5c47r"
Aug 13 01:38:07.551670 kubelet[2739]: E0813 01:38:07.551616 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-27-175"
Aug 13 01:38:07.551670 kubelet[2739]: E0813 01:38:07.551624 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-kfjpt"
Aug 13 01:38:07.551670 kubelet[2739]: E0813 01:38:07.551634 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-7bj49"
Aug 13 01:38:07.551670 kubelet[2739]: E0813 01:38:07.551642 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-27-175"
Aug 13 01:38:07.551670 kubelet[2739]: E0813 01:38:07.551649 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-27-175"
Aug 13 01:38:07.551670 kubelet[2739]: I0813 01:38:07.551659 2739 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:38:10.409282 systemd[1]: Started sshd@51-172.234.27.175:22-147.75.109.163:47886.service - OpenSSH per-connection server daemon (147.75.109.163:47886).
Aug 13 01:38:10.743040 systemd[1]: Started sshd@52-172.234.27.175:22-103.90.226.175:47042.service - OpenSSH per-connection server daemon (103.90.226.175:47042).
Aug 13 01:38:10.748790 sshd[6330]: Accepted publickey for core from 147.75.109.163 port 47886 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:38:10.750294 sshd-session[6330]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:38:10.759003 systemd-logind[1529]: New session 43 of user core.
Aug 13 01:38:10.765245 systemd[1]: Started session-43.scope - Session 43 of User core.
Aug 13 01:38:10.804840 kubelet[2739]: E0813 01:38:10.804806 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\"\"" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" podUID="56caad57-6b4a-4069-b011-1059db183012"
Aug 13 01:38:11.069030 sshd[6335]: Connection closed by 147.75.109.163 port 47886
Aug 13 01:38:11.070298 sshd-session[6330]: pam_unix(sshd:session): session closed for user core
Aug 13 01:38:11.077102 systemd[1]: sshd@51-172.234.27.175:22-147.75.109.163:47886.service: Deactivated successfully.
Aug 13 01:38:11.081036 systemd[1]: session-43.scope: Deactivated successfully.
Aug 13 01:38:11.081835 systemd-logind[1529]: Session 43 logged out. Waiting for processes to exit.
Aug 13 01:38:11.083429 systemd-logind[1529]: Removed session 43.
Aug 13 01:38:11.797690 kubelet[2739]: E0813 01:38:11.797660 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"
Aug 13 01:38:12.670707 sshd[6333]: Received disconnect from 103.90.226.175 port 47042:11: Bye Bye [preauth]
Aug 13 01:38:12.670707 sshd[6333]: Disconnected from authenticating user root 103.90.226.175 port 47042 [preauth]
Aug 13 01:38:12.675795 systemd[1]: sshd@52-172.234.27.175:22-103.90.226.175:47042.service: Deactivated successfully.
Aug 13 01:38:16.131326 systemd[1]: Started sshd@53-172.234.27.175:22-147.75.109.163:47898.service - OpenSSH per-connection server daemon (147.75.109.163:47898).
Aug 13 01:38:16.471939 sshd[6349]: Accepted publickey for core from 147.75.109.163 port 47898 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:38:16.474185 sshd-session[6349]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:38:16.481116 systemd-logind[1529]: New session 44 of user core.
Aug 13 01:38:16.485255 systemd[1]: Started session-44.scope - Session 44 of User core.
Aug 13 01:38:16.785475 sshd[6351]: Connection closed by 147.75.109.163 port 47898
Aug 13 01:38:16.786787 sshd-session[6349]: pam_unix(sshd:session): session closed for user core
Aug 13 01:38:16.790877 systemd[1]: sshd@53-172.234.27.175:22-147.75.109.163:47898.service: Deactivated successfully.
Aug 13 01:38:16.794395 systemd[1]: session-44.scope: Deactivated successfully.
Aug 13 01:38:16.795627 systemd-logind[1529]: Session 44 logged out. Waiting for processes to exit.
Aug 13 01:38:16.796948 systemd-logind[1529]: Removed session 44.
Aug 13 01:38:17.573242 kubelet[2739]: I0813 01:38:17.573147 2739 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:38:17.573906 kubelet[2739]: I0813 01:38:17.573609 2739 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:38:17.575792 kubelet[2739]: I0813 01:38:17.575771 2739 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:38:17.589202 kubelet[2739]: I0813 01:38:17.589167 2739 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:38:17.589332 kubelet[2739]: I0813 01:38:17.589290 2739 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-85fbc76f96-d5vf4","calico-system/calico-typha-79464475b5-bbrtw","kube-system/coredns-7c65d6cfc9-994jv","kube-system/coredns-7c65d6cfc9-mx5v9","calico-system/calico-node-5c47r","kube-system/kube-controller-manager-172-234-27-175","kube-system/kube-proxy-kfjpt","calico-system/csi-node-driver-7bj49","kube-system/kube-apiserver-172-234-27-175","kube-system/kube-scheduler-172-234-27-175"]
Aug 13 01:38:17.589332 kubelet[2739]: E0813 01:38:17.589320 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4"
Aug 13 01:38:17.589332 kubelet[2739]: E0813 01:38:17.589333 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-79464475b5-bbrtw"
Aug 13 01:38:17.589498 kubelet[2739]: E0813 01:38:17.589343 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-994jv"
Aug 13 01:38:17.589498 kubelet[2739]: E0813 01:38:17.589351 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-mx5v9"
Aug 13 01:38:17.589498 kubelet[2739]: E0813 01:38:17.589359 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-5c47r"
Aug 13 01:38:17.589498 kubelet[2739]: E0813 01:38:17.589366 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-27-175"
Aug 13 01:38:17.589498 kubelet[2739]: E0813 01:38:17.589374 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-kfjpt"
Aug 13 01:38:17.589498 kubelet[2739]: E0813 01:38:17.589384 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-7bj49"
Aug 13 01:38:17.589498 kubelet[2739]: E0813 01:38:17.589391 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-27-175"
Aug 13 01:38:17.589498 kubelet[2739]: E0813 01:38:17.589399 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-27-175"
Aug 13 01:38:17.589498 kubelet[2739]: I0813 01:38:17.589408 2739 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:38:21.853782 systemd[1]: Started sshd@54-172.234.27.175:22-147.75.109.163:39044.service - OpenSSH per-connection server daemon (147.75.109.163:39044).
Aug 13 01:38:22.194964 sshd[6366]: Accepted publickey for core from 147.75.109.163 port 39044 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:38:22.196482 sshd-session[6366]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:38:22.201740 systemd-logind[1529]: New session 45 of user core.
Aug 13 01:38:22.206264 systemd[1]: Started session-45.scope - Session 45 of User core.
Aug 13 01:38:22.512473 sshd[6368]: Connection closed by 147.75.109.163 port 39044
Aug 13 01:38:22.513329 sshd-session[6366]: pam_unix(sshd:session): session closed for user core
Aug 13 01:38:22.517214 systemd[1]: sshd@54-172.234.27.175:22-147.75.109.163:39044.service: Deactivated successfully.
Aug 13 01:38:22.519576 systemd[1]: session-45.scope: Deactivated successfully.
Aug 13 01:38:22.523322 systemd-logind[1529]: Session 45 logged out. Waiting for processes to exit.
Aug 13 01:38:22.526117 systemd-logind[1529]: Removed session 45.
Aug 13 01:38:24.799375 kubelet[2739]: E0813 01:38:24.799321 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\"\"" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" podUID="56caad57-6b4a-4069-b011-1059db183012"
Aug 13 01:38:25.307405 containerd[1544]: time="2025-08-13T01:38:25.307306538Z" level=warning msg="container event discarded" container=df8d1a77736586c1f4aa872648e945c5f97799bd041785ac26d4703233746dff type=CONTAINER_STOPPED_EVENT
Aug 13 01:38:25.363531 containerd[1544]: time="2025-08-13T01:38:25.363451576Z" level=warning msg="container event discarded" container=668529f6fd73ada3cc9287699d05d5bba60cbe442981cb14c0b6de5f7d3dc4a0 type=CONTAINER_STOPPED_EVENT
Aug 13 01:38:25.437879 containerd[1544]: time="2025-08-13T01:38:25.437660844Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b5a29eb80b0bc5089fcefc178e897c50dab7a245f3568439d3b5679ef6257511\" id:\"5e78debda912f74b8932a7f398230b76f31c07c12390878825d6e0664776fee1\" pid:6392 exited_at:{seconds:1755049105 nanos:436664042}"
Aug 13 01:38:25.984008 containerd[1544]: time="2025-08-13T01:38:25.983951669Z" level=warning msg="container event discarded" container=df8d1a77736586c1f4aa872648e945c5f97799bd041785ac26d4703233746dff type=CONTAINER_DELETED_EVENT
Aug 13 01:38:27.574328 systemd[1]: Started sshd@55-172.234.27.175:22-147.75.109.163:39046.service - OpenSSH per-connection server daemon (147.75.109.163:39046).
Aug 13 01:38:27.620005 kubelet[2739]: I0813 01:38:27.619617 2739 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:38:27.620005 kubelet[2739]: I0813 01:38:27.619647 2739 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:38:27.620005 kubelet[2739]: I0813 01:38:27.619751 2739 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-85fbc76f96-d5vf4","calico-system/calico-typha-79464475b5-bbrtw","kube-system/coredns-7c65d6cfc9-994jv","kube-system/coredns-7c65d6cfc9-mx5v9","calico-system/calico-node-5c47r","kube-system/kube-controller-manager-172-234-27-175","kube-system/kube-proxy-kfjpt","calico-system/csi-node-driver-7bj49","kube-system/kube-apiserver-172-234-27-175","kube-system/kube-scheduler-172-234-27-175"]
Aug 13 01:38:27.620005 kubelet[2739]: E0813 01:38:27.619777 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4"
Aug 13 01:38:27.620005 kubelet[2739]: E0813 01:38:27.619794 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-79464475b5-bbrtw"
Aug 13 01:38:27.620005 kubelet[2739]: E0813 01:38:27.619803 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-994jv"
Aug 13 01:38:27.620005 kubelet[2739]: E0813 01:38:27.619811 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-mx5v9"
Aug 13 01:38:27.620005 kubelet[2739]: E0813 01:38:27.619819 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-5c47r"
Aug 13 01:38:27.620005 kubelet[2739]: E0813 01:38:27.619826 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-27-175"
Aug 13 01:38:27.620005 kubelet[2739]: E0813 01:38:27.619834 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-kfjpt"
Aug 13 01:38:27.620005 kubelet[2739]: E0813 01:38:27.619843 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-7bj49"
Aug 13 01:38:27.620005 kubelet[2739]: E0813 01:38:27.619850 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-27-175"
Aug 13 01:38:27.620005 kubelet[2739]: E0813 01:38:27.619858 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-27-175"
Aug 13 01:38:27.620005 kubelet[2739]: I0813 01:38:27.619867 2739 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:38:27.904746 sshd[6405]: Accepted publickey for core from 147.75.109.163 port 39046 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo
Aug 13 01:38:27.906664 sshd-session[6405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:38:27.914796 systemd-logind[1529]: New session 46 of user core.
Aug 13 01:38:27.919269 systemd[1]: Started session-46.scope - Session 46 of User core.
Aug 13 01:38:28.234953 sshd[6407]: Connection closed by 147.75.109.163 port 39046
Aug 13 01:38:28.236322 sshd-session[6405]: pam_unix(sshd:session): session closed for user core
Aug 13 01:38:28.241373 systemd-logind[1529]: Session 46 logged out. Waiting for processes to exit.
Aug 13 01:38:28.242114 systemd[1]: sshd@55-172.234.27.175:22-147.75.109.163:39046.service: Deactivated successfully.
Aug 13 01:38:28.244405 systemd[1]: session-46.scope: Deactivated successfully.
Aug 13 01:38:28.246639 systemd-logind[1529]: Removed session 46.
Aug 13 01:38:28.798157 kubelet[2739]: E0813 01:38:28.797454 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20" Aug 13 01:38:33.296772 systemd[1]: Started sshd@56-172.234.27.175:22-147.75.109.163:33210.service - OpenSSH per-connection server daemon (147.75.109.163:33210). Aug 13 01:38:33.631956 sshd[6421]: Accepted publickey for core from 147.75.109.163 port 33210 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:38:33.633639 sshd-session[6421]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:38:33.639344 systemd-logind[1529]: New session 47 of user core. Aug 13 01:38:33.644254 systemd[1]: Started session-47.scope - Session 47 of User core. Aug 13 01:38:33.942721 sshd[6423]: Connection closed by 147.75.109.163 port 33210 Aug 13 01:38:33.943401 sshd-session[6421]: pam_unix(sshd:session): session closed for user core Aug 13 01:38:33.946530 systemd[1]: sshd@56-172.234.27.175:22-147.75.109.163:33210.service: Deactivated successfully. Aug 13 01:38:33.948474 systemd[1]: session-47.scope: Deactivated successfully. Aug 13 01:38:33.949774 systemd-logind[1529]: Session 47 logged out. Waiting for processes to exit. Aug 13 01:38:33.951906 systemd-logind[1529]: Removed session 47. 
Aug 13 01:38:36.416514 containerd[1544]: time="2025-08-13T01:38:36.416442468Z" level=warning msg="container event discarded" container=668529f6fd73ada3cc9287699d05d5bba60cbe442981cb14c0b6de5f7d3dc4a0 type=CONTAINER_DELETED_EVENT Aug 13 01:38:36.799457 kubelet[2739]: E0813 01:38:36.799323 2739 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\"\"" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" podUID="56caad57-6b4a-4069-b011-1059db183012" Aug 13 01:38:37.648536 kubelet[2739]: I0813 01:38:37.648488 2739 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:38:37.648536 kubelet[2739]: I0813 01:38:37.648522 2739 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:38:37.648828 kubelet[2739]: I0813 01:38:37.648628 2739 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-85fbc76f96-d5vf4","calico-system/calico-typha-79464475b5-bbrtw","kube-system/coredns-7c65d6cfc9-994jv","kube-system/coredns-7c65d6cfc9-mx5v9","calico-system/calico-node-5c47r","kube-system/kube-controller-manager-172-234-27-175","kube-system/kube-proxy-kfjpt","calico-system/csi-node-driver-7bj49","kube-system/kube-apiserver-172-234-27-175","kube-system/kube-scheduler-172-234-27-175"] Aug 13 01:38:37.648828 kubelet[2739]: E0813 01:38:37.648653 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" Aug 13 01:38:37.648828 kubelet[2739]: E0813 01:38:37.648665 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-79464475b5-bbrtw" Aug 13 01:38:37.648828 kubelet[2739]: E0813 01:38:37.648674 2739 
eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-994jv" Aug 13 01:38:37.648828 kubelet[2739]: E0813 01:38:37.648683 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-mx5v9" Aug 13 01:38:37.648828 kubelet[2739]: E0813 01:38:37.648692 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-5c47r" Aug 13 01:38:37.648828 kubelet[2739]: E0813 01:38:37.648699 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-27-175" Aug 13 01:38:37.648828 kubelet[2739]: E0813 01:38:37.648707 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-kfjpt" Aug 13 01:38:37.648828 kubelet[2739]: E0813 01:38:37.648718 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-7bj49" Aug 13 01:38:37.648828 kubelet[2739]: E0813 01:38:37.648726 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-27-175" Aug 13 01:38:37.648828 kubelet[2739]: E0813 01:38:37.648734 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-27-175" Aug 13 01:38:37.648828 kubelet[2739]: I0813 01:38:37.648743 2739 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:38:39.008319 systemd[1]: Started sshd@57-172.234.27.175:22-147.75.109.163:48666.service - OpenSSH per-connection server daemon (147.75.109.163:48666). 
Aug 13 01:38:39.358292 sshd[6435]: Accepted publickey for core from 147.75.109.163 port 48666 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:38:39.359974 sshd-session[6435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:38:39.365186 systemd-logind[1529]: New session 48 of user core. Aug 13 01:38:39.369844 systemd[1]: Started session-48.scope - Session 48 of User core. Aug 13 01:38:39.694400 sshd[6437]: Connection closed by 147.75.109.163 port 48666 Aug 13 01:38:39.695088 sshd-session[6435]: pam_unix(sshd:session): session closed for user core Aug 13 01:38:39.699922 systemd[1]: sshd@57-172.234.27.175:22-147.75.109.163:48666.service: Deactivated successfully. Aug 13 01:38:39.702413 systemd[1]: session-48.scope: Deactivated successfully. Aug 13 01:38:39.703564 systemd-logind[1529]: Session 48 logged out. Waiting for processes to exit. Aug 13 01:38:39.705648 systemd-logind[1529]: Removed session 48. Aug 13 01:38:39.760104 systemd[1]: Started sshd@58-172.234.27.175:22-147.75.109.163:48682.service - OpenSSH per-connection server daemon (147.75.109.163:48682). Aug 13 01:38:40.098300 sshd[6450]: Accepted publickey for core from 147.75.109.163 port 48682 ssh2: RSA SHA256:r4aVwGcRWirUnoa2gT2aTsV0OcegqRzu+Xc09vJuKwo Aug 13 01:38:40.100424 sshd-session[6450]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:38:40.105962 systemd-logind[1529]: New session 49 of user core. Aug 13 01:38:40.111270 systemd[1]: Started session-49.scope - Session 49 of User core. Aug 13 01:38:40.428413 sshd[6452]: Connection closed by 147.75.109.163 port 48682 Aug 13 01:38:40.429571 sshd-session[6450]: pam_unix(sshd:session): session closed for user core Aug 13 01:38:40.438061 systemd[1]: sshd@58-172.234.27.175:22-147.75.109.163:48682.service: Deactivated successfully. Aug 13 01:38:40.441115 systemd[1]: session-49.scope: Deactivated successfully. 
Aug 13 01:38:40.442209 systemd-logind[1529]: Session 49 logged out. Waiting for processes to exit. Aug 13 01:38:40.444356 systemd-logind[1529]: Removed session 49. Aug 13 01:38:47.669171 kubelet[2739]: I0813 01:38:47.669109 2739 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:38:47.669979 kubelet[2739]: I0813 01:38:47.669221 2739 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:38:47.670497 kubelet[2739]: I0813 01:38:47.670466 2739 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:38:47.691386 kubelet[2739]: I0813 01:38:47.691344 2739 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:38:47.691499 kubelet[2739]: I0813 01:38:47.691481 2739 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-85fbc76f96-d5vf4","calico-system/calico-typha-79464475b5-bbrtw","kube-system/coredns-7c65d6cfc9-mx5v9","kube-system/coredns-7c65d6cfc9-994jv","calico-system/calico-node-5c47r","kube-system/kube-controller-manager-172-234-27-175","kube-system/kube-proxy-kfjpt","calico-system/csi-node-driver-7bj49","kube-system/kube-apiserver-172-234-27-175","kube-system/kube-scheduler-172-234-27-175"] Aug 13 01:38:47.691597 kubelet[2739]: E0813 01:38:47.691509 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-85fbc76f96-d5vf4" Aug 13 01:38:47.691597 kubelet[2739]: E0813 01:38:47.691523 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-79464475b5-bbrtw" Aug 13 01:38:47.691597 kubelet[2739]: E0813 01:38:47.691533 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-mx5v9" Aug 13 01:38:47.691597 kubelet[2739]: E0813 01:38:47.691541 2739 eviction_manager.go:598] "Eviction manager: 
cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-994jv" Aug 13 01:38:47.691597 kubelet[2739]: E0813 01:38:47.691549 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-5c47r" Aug 13 01:38:47.691597 kubelet[2739]: E0813 01:38:47.691557 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-234-27-175" Aug 13 01:38:47.691597 kubelet[2739]: E0813 01:38:47.691565 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-kfjpt" Aug 13 01:38:47.691597 kubelet[2739]: E0813 01:38:47.691577 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-7bj49" Aug 13 01:38:47.691597 kubelet[2739]: E0813 01:38:47.691584 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-234-27-175" Aug 13 01:38:47.691597 kubelet[2739]: E0813 01:38:47.691592 2739 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-234-27-175" Aug 13 01:38:47.691597 kubelet[2739]: I0813 01:38:47.691601 2739 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:38:47.797289 kubelet[2739]: E0813 01:38:47.797252 2739 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.9 172.232.0.19 172.232.0.20"