Aug 13 01:11:12.825529 kernel: Linux version 6.12.40-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 21:42:48 -00 2025
Aug 13 01:11:12.825552 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21
Aug 13 01:11:12.825560 kernel: BIOS-provided physical RAM map:
Aug 13 01:11:12.825570 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Aug 13 01:11:12.825575 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Aug 13 01:11:12.825581 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Aug 13 01:11:12.825587 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Aug 13 01:11:12.825593 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Aug 13 01:11:12.826107 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Aug 13 01:11:12.826119 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Aug 13 01:11:12.826126 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Aug 13 01:11:12.826132 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Aug 13 01:11:12.826144 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Aug 13 01:11:12.826150 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Aug 13 01:11:12.826157 kernel: NX (Execute Disable) protection: active
Aug 13 01:11:12.826163 kernel: APIC: Static calls initialized
Aug 13 01:11:12.826169 kernel: SMBIOS 2.8 present.
Aug 13 01:11:12.826178 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Aug 13 01:11:12.826184 kernel: DMI: Memory slots populated: 1/1
Aug 13 01:11:12.826190 kernel: Hypervisor detected: KVM
Aug 13 01:11:12.826196 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 13 01:11:12.826202 kernel: kvm-clock: using sched offset of 5658678094 cycles
Aug 13 01:11:12.826208 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 13 01:11:12.826215 kernel: tsc: Detected 1999.999 MHz processor
Aug 13 01:11:12.826221 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 01:11:12.826228 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 01:11:12.826234 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Aug 13 01:11:12.826243 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Aug 13 01:11:12.826249 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 01:11:12.826255 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Aug 13 01:11:12.826261 kernel: Using GB pages for direct mapping
Aug 13 01:11:12.826268 kernel: ACPI: Early table checksum verification disabled
Aug 13 01:11:12.826274 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Aug 13 01:11:12.826280 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:11:12.826286 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:11:12.826292 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:11:12.826301 kernel: ACPI: FACS 0x000000007FFE0000 000040
Aug 13 01:11:12.826307 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:11:12.826313 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:11:12.826319 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:11:12.826328 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:11:12.826335 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Aug 13 01:11:12.826343 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Aug 13 01:11:12.826350 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Aug 13 01:11:12.826356 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Aug 13 01:11:12.826362 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Aug 13 01:11:12.826369 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Aug 13 01:11:12.826375 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Aug 13 01:11:12.826382 kernel: No NUMA configuration found
Aug 13 01:11:12.826388 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Aug 13 01:11:12.826396 kernel: NODE_DATA(0) allocated [mem 0x17fff6dc0-0x17fffdfff]
Aug 13 01:11:12.826403 kernel: Zone ranges:
Aug 13 01:11:12.826409 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 01:11:12.826415 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Aug 13 01:11:12.826422 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Aug 13 01:11:12.826428 kernel: Device empty
Aug 13 01:11:12.826435 kernel: Movable zone start for each node
Aug 13 01:11:12.826441 kernel: Early memory node ranges
Aug 13 01:11:12.826447 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Aug 13 01:11:12.826454 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Aug 13 01:11:12.826462 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Aug 13 01:11:12.826468 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Aug 13 01:11:12.826474 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 01:11:12.826481 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Aug 13 01:11:12.826487 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Aug 13 01:11:12.826494 kernel: ACPI: PM-Timer IO Port: 0x608
Aug 13 01:11:12.826500 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 13 01:11:12.826507 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 13 01:11:12.826513 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 13 01:11:12.826522 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 13 01:11:12.826528 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 01:11:12.826534 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 13 01:11:12.826541 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 13 01:11:12.826547 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 01:11:12.826554 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 13 01:11:12.826561 kernel: TSC deadline timer available
Aug 13 01:11:12.826567 kernel: CPU topo: Max. logical packages: 1
Aug 13 01:11:12.826574 kernel: CPU topo: Max. logical dies: 1
Aug 13 01:11:12.826582 kernel: CPU topo: Max. dies per package: 1
Aug 13 01:11:12.826588 kernel: CPU topo: Max. threads per core: 1
Aug 13 01:11:12.826595 kernel: CPU topo: Num. cores per package: 2
Aug 13 01:11:12.826601 kernel: CPU topo: Num. threads per package: 2
Aug 13 01:11:12.826607 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Aug 13 01:11:12.826613 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug 13 01:11:12.826620 kernel: kvm-guest: KVM setup pv remote TLB flush
Aug 13 01:11:12.826626 kernel: kvm-guest: setup PV sched yield
Aug 13 01:11:12.826633 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Aug 13 01:11:12.826641 kernel: Booting paravirtualized kernel on KVM
Aug 13 01:11:12.826647 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 01:11:12.826654 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Aug 13 01:11:12.826660 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Aug 13 01:11:12.826666 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Aug 13 01:11:12.826673 kernel: pcpu-alloc: [0] 0 1
Aug 13 01:11:12.826679 kernel: kvm-guest: PV spinlocks enabled
Aug 13 01:11:12.826685 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 13 01:11:12.826693 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21
Aug 13 01:11:12.826701 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 01:11:12.826708 kernel: random: crng init done
Aug 13 01:11:12.826714 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 01:11:12.826720 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 01:11:12.826727 kernel: Fallback order for Node 0: 0
Aug 13 01:11:12.826733 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Aug 13 01:11:12.826739 kernel: Policy zone: Normal
Aug 13 01:11:12.826746 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 01:11:12.826752 kernel: software IO TLB: area num 2.
Aug 13 01:11:12.826760 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 13 01:11:12.826766 kernel: ftrace: allocating 40098 entries in 157 pages
Aug 13 01:11:12.826773 kernel: ftrace: allocated 157 pages with 5 groups
Aug 13 01:11:12.826779 kernel: Dynamic Preempt: voluntary
Aug 13 01:11:12.826785 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 01:11:12.826792 kernel: rcu: RCU event tracing is enabled.
Aug 13 01:11:12.826799 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 13 01:11:12.826806 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 01:11:12.826812 kernel: Rude variant of Tasks RCU enabled.
Aug 13 01:11:12.826820 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 01:11:12.826827 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 01:11:12.826833 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 13 01:11:12.826840 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 01:11:12.826852 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 01:11:12.826861 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 01:11:12.826868 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Aug 13 01:11:12.826874 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 01:11:12.826881 kernel: Console: colour VGA+ 80x25
Aug 13 01:11:12.826888 kernel: printk: legacy console [tty0] enabled
Aug 13 01:11:12.826921 kernel: printk: legacy console [ttyS0] enabled
Aug 13 01:11:12.826928 kernel: ACPI: Core revision 20240827
Aug 13 01:11:12.826938 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Aug 13 01:11:12.826945 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 01:11:12.826951 kernel: x2apic enabled
Aug 13 01:11:12.826958 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 13 01:11:12.826965 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Aug 13 01:11:12.826974 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Aug 13 01:11:12.826980 kernel: kvm-guest: setup PV IPIs
Aug 13 01:11:12.826987 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 13 01:11:12.826994 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns
Aug 13 01:11:12.827000 kernel: Calibrating delay loop (skipped) preset value.. 3999.99 BogoMIPS (lpj=1999999)
Aug 13 01:11:12.827007 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Aug 13 01:11:12.827014 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Aug 13 01:11:12.827020 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Aug 13 01:11:12.827029 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 01:11:12.827035 kernel: Spectre V2 : Mitigation: Retpolines
Aug 13 01:11:12.827042 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 01:11:12.827049 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Aug 13 01:11:12.827056 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 13 01:11:12.827062 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 13 01:11:12.827069 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Aug 13 01:11:12.827076 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Aug 13 01:11:12.827083 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Aug 13 01:11:12.827092 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Aug 13 01:11:12.827098 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 13 01:11:12.827105 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 01:11:12.827111 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 01:11:12.827118 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 01:11:12.827125 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Aug 13 01:11:12.827131 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 01:11:12.827138 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Aug 13 01:11:12.827147 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Aug 13 01:11:12.827154 kernel: Freeing SMP alternatives memory: 32K
Aug 13 01:11:12.827161 kernel: pid_max: default: 32768 minimum: 301
Aug 13 01:11:12.827167 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Aug 13 01:11:12.827174 kernel: landlock: Up and running.
Aug 13 01:11:12.827180 kernel: SELinux: Initializing.
Aug 13 01:11:12.827187 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 01:11:12.827194 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 01:11:12.827200 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Aug 13 01:11:12.827209 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Aug 13 01:11:12.827215 kernel: ... version: 0
Aug 13 01:11:12.827222 kernel: ... bit width: 48
Aug 13 01:11:12.827229 kernel: ... generic registers: 6
Aug 13 01:11:12.827235 kernel: ... value mask: 0000ffffffffffff
Aug 13 01:11:12.827242 kernel: ... max period: 00007fffffffffff
Aug 13 01:11:12.827248 kernel: ... fixed-purpose events: 0
Aug 13 01:11:12.827255 kernel: ... event mask: 000000000000003f
Aug 13 01:11:12.827261 kernel: signal: max sigframe size: 3376
Aug 13 01:11:12.827268 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 01:11:12.827276 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 01:11:12.827283 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Aug 13 01:11:12.827290 kernel: smp: Bringing up secondary CPUs ...
Aug 13 01:11:12.827296 kernel: smpboot: x86: Booting SMP configuration:
Aug 13 01:11:12.827303 kernel: .... node #0, CPUs: #1
Aug 13 01:11:12.827309 kernel: smp: Brought up 1 node, 2 CPUs
Aug 13 01:11:12.827316 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS)
Aug 13 01:11:12.827323 kernel: Memory: 3961048K/4193772K available (14336K kernel code, 2430K rwdata, 9960K rodata, 54444K init, 2524K bss, 227296K reserved, 0K cma-reserved)
Aug 13 01:11:12.827331 kernel: devtmpfs: initialized
Aug 13 01:11:12.827338 kernel: x86/mm: Memory block size: 128MB
Aug 13 01:11:12.827345 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 01:11:12.827351 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 13 01:11:12.827358 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 01:11:12.827365 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 01:11:12.827371 kernel: audit: initializing netlink subsys (disabled)
Aug 13 01:11:12.827378 kernel: audit: type=2000 audit(1755047469.870:1): state=initialized audit_enabled=0 res=1
Aug 13 01:11:12.827385 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 01:11:12.827393 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 01:11:12.827399 kernel: cpuidle: using governor menu
Aug 13 01:11:12.827406 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 01:11:12.827413 kernel: dca service started, version 1.12.1
Aug 13 01:11:12.827419 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Aug 13 01:11:12.827426 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Aug 13 01:11:12.827433 kernel: PCI: Using configuration type 1 for base access
Aug 13 01:11:12.827440 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 01:11:12.827446 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 01:11:12.827455 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 01:11:12.827462 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 01:11:12.827468 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 01:11:12.827475 kernel: ACPI: Added _OSI(Module Device)
Aug 13 01:11:12.827482 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 01:11:12.827488 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 01:11:12.827495 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 01:11:12.827502 kernel: ACPI: Interpreter enabled
Aug 13 01:11:12.827508 kernel: ACPI: PM: (supports S0 S3 S5)
Aug 13 01:11:12.827517 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 01:11:12.827524 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 01:11:12.827530 kernel: PCI: Using E820 reservations for host bridge windows
Aug 13 01:11:12.827537 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Aug 13 01:11:12.827544 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 01:11:12.827736 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 01:11:12.827850 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Aug 13 01:11:12.828088 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Aug 13 01:11:12.828129 kernel: PCI host bridge to bus 0000:00
Aug 13 01:11:12.828300 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 01:11:12.828400 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 01:11:12.828497 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 01:11:12.828592 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Aug 13 01:11:12.828691 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Aug 13 01:11:12.828954 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Aug 13 01:11:12.829066 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 01:11:12.829199 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Aug 13 01:11:12.829360 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Aug 13 01:11:12.829471 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Aug 13 01:11:12.829575 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Aug 13 01:11:12.829680 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Aug 13 01:11:12.829788 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 13 01:11:12.829928 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Aug 13 01:11:12.830041 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Aug 13 01:11:12.830146 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Aug 13 01:11:12.830251 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Aug 13 01:11:12.830369 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Aug 13 01:11:12.830476 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Aug 13 01:11:12.830586 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Aug 13 01:11:12.830693 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Aug 13 01:11:12.832106 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Aug 13 01:11:12.832353 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Aug 13 01:11:12.832496 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Aug 13 01:11:12.832625 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Aug 13 01:11:12.832749 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Aug 13 01:11:12.832855 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Aug 13 01:11:12.836020 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Aug 13 01:11:12.836139 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Aug 13 01:11:12.836151 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 01:11:12.836159 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 01:11:12.836166 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 01:11:12.836173 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 01:11:12.836185 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Aug 13 01:11:12.836193 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Aug 13 01:11:12.836200 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Aug 13 01:11:12.836208 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Aug 13 01:11:12.836215 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Aug 13 01:11:12.836223 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Aug 13 01:11:12.836230 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Aug 13 01:11:12.836237 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Aug 13 01:11:12.836244 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Aug 13 01:11:12.836255 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Aug 13 01:11:12.836262 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Aug 13 01:11:12.836270 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Aug 13 01:11:12.836277 kernel: iommu: Default domain type: Translated
Aug 13 01:11:12.836284 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 01:11:12.836291 kernel: PCI: Using ACPI for IRQ routing
Aug 13 01:11:12.836298 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 01:11:12.836306 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Aug 13 01:11:12.836313 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Aug 13 01:11:12.836432 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Aug 13 01:11:12.836539 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Aug 13 01:11:12.836645 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 13 01:11:12.836655 kernel: vgaarb: loaded
Aug 13 01:11:12.836662 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Aug 13 01:11:12.836670 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Aug 13 01:11:12.836678 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 01:11:12.836685 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 01:11:12.836697 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 01:11:12.836705 kernel: pnp: PnP ACPI init
Aug 13 01:11:12.836832 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Aug 13 01:11:12.836844 kernel: pnp: PnP ACPI: found 5 devices
Aug 13 01:11:12.836852 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 01:11:12.836859 kernel: NET: Registered PF_INET protocol family
Aug 13 01:11:12.836867 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 01:11:12.836874 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 13 01:11:12.836885 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 01:11:12.836910 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 01:11:12.836918 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 13 01:11:12.836926 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 13 01:11:12.836933 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 01:11:12.836941 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 01:11:12.836948 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 01:11:12.836956 kernel: NET: Registered PF_XDP protocol family
Aug 13 01:11:12.837076 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 01:11:12.837182 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 01:11:12.837278 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 01:11:12.837374 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Aug 13 01:11:12.837468 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Aug 13 01:11:12.837563 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Aug 13 01:11:12.837571 kernel: PCI: CLS 0 bytes, default 64
Aug 13 01:11:12.837579 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Aug 13 01:11:12.837587 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Aug 13 01:11:12.837595 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns
Aug 13 01:11:12.837606 kernel: Initialise system trusted keyrings
Aug 13 01:11:12.837613 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 13 01:11:12.837621 kernel: Key type asymmetric registered
Aug 13 01:11:12.837628 kernel: Asymmetric key parser 'x509' registered
Aug 13 01:11:12.837635 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Aug 13 01:11:12.837642 kernel: io scheduler mq-deadline registered
Aug 13 01:11:12.837650 kernel: io scheduler kyber registered
Aug 13 01:11:12.837657 kernel: io scheduler bfq registered
Aug 13 01:11:12.837664 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 01:11:12.837676 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Aug 13 01:11:12.837683 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Aug 13 01:11:12.837690 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 01:11:12.837697 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 01:11:12.837705 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 13 01:11:12.837712 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 13 01:11:12.837720 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 13 01:11:12.837830 kernel: rtc_cmos 00:03: RTC can wake from S4
Aug 13 01:11:12.837855 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 13 01:11:12.840057 kernel: rtc_cmos 00:03: registered as rtc0
Aug 13 01:11:12.840166 kernel: rtc_cmos 00:03: setting system clock to 2025-08-13T01:11:12 UTC (1755047472)
Aug 13 01:11:12.840271 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Aug 13 01:11:12.840282 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Aug 13 01:11:12.840290 kernel: NET: Registered PF_INET6 protocol family
Aug 13 01:11:12.840298 kernel: Segment Routing with IPv6
Aug 13 01:11:12.840305 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 01:11:12.840316 kernel: NET: Registered PF_PACKET protocol family
Aug 13 01:11:12.840323 kernel: Key type dns_resolver registered
Aug 13 01:11:12.840330 kernel: IPI shorthand broadcast: enabled
Aug 13 01:11:12.840337 kernel: sched_clock: Marking stable (2570003022, 210394940)->(2813581246, -33183284)
Aug 13 01:11:12.840345 kernel: registered taskstats version 1
Aug 13 01:11:12.840353 kernel: Loading compiled-in X.509 certificates
Aug 13 01:11:12.840360 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.40-flatcar: dee0b464d3f7f8d09744a2392f69dde258bc95c0'
Aug 13 01:11:12.840368 kernel: Demotion targets for Node 0: null
Aug 13 01:11:12.840375 kernel: Key type .fscrypt registered
Aug 13 01:11:12.840385 kernel: Key type fscrypt-provisioning registered
Aug 13 01:11:12.840393 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 01:11:12.840400 kernel: ima: Allocated hash algorithm: sha1
Aug 13 01:11:12.840408 kernel: ima: No architecture policies found
Aug 13 01:11:12.840415 kernel: clk: Disabling unused clocks
Aug 13 01:11:12.840422 kernel: Warning: unable to open an initial console.
Aug 13 01:11:12.840430 kernel: Freeing unused kernel image (initmem) memory: 54444K
Aug 13 01:11:12.840437 kernel: Write protecting the kernel read-only data: 24576k
Aug 13 01:11:12.840445 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K
Aug 13 01:11:12.840454 kernel: Run /init as init process
Aug 13 01:11:12.840461 kernel: with arguments:
Aug 13 01:11:12.840468 kernel: /init
Aug 13 01:11:12.840476 kernel: with environment:
Aug 13 01:11:12.840483 kernel: HOME=/
Aug 13 01:11:12.840510 kernel: TERM=linux
Aug 13 01:11:12.840521 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 01:11:12.840530 systemd[1]: Successfully made /usr/ read-only.
Aug 13 01:11:12.840542 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 13 01:11:12.840553 systemd[1]: Detected virtualization kvm.
Aug 13 01:11:12.840561 systemd[1]: Detected architecture x86-64.
Aug 13 01:11:12.840568 systemd[1]: Running in initrd.
Aug 13 01:11:12.840576 systemd[1]: No hostname configured, using default hostname.
Aug 13 01:11:12.840584 systemd[1]: Hostname set to .
Aug 13 01:11:12.840592 systemd[1]: Initializing machine ID from random generator.
Aug 13 01:11:12.840600 systemd[1]: Queued start job for default target initrd.target.
Aug 13 01:11:12.840610 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 01:11:12.840618 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 01:11:12.840626 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 01:11:12.840634 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 01:11:12.840642 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 01:11:12.840651 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 01:11:12.840660 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 01:11:12.840670 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 01:11:12.840678 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 01:11:12.840686 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 01:11:12.840694 systemd[1]: Reached target paths.target - Path Units.
Aug 13 01:11:12.840705 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 01:11:12.840712 systemd[1]: Reached target swap.target - Swaps.
Aug 13 01:11:12.840720 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 01:11:12.840727 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 01:11:12.840738 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 01:11:12.840746 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 01:11:12.840961 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Aug 13 01:11:12.840969 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 01:11:12.840977 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 01:11:12.840984 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 01:11:12.840992 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 01:11:12.841004 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 13 01:11:12.841011 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 01:11:12.841019 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 13 01:11:12.841027 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Aug 13 01:11:12.841035 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 01:11:12.841042 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 01:11:12.841050 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 01:11:12.841061 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 01:11:12.841069 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 13 01:11:12.841109 systemd-journald[206]: Collecting audit messages is disabled.
Aug 13 01:11:12.841139 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 01:11:12.841147 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 01:11:12.841155 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 01:11:12.841165 systemd-journald[206]: Journal started
Aug 13 01:11:12.841186 systemd-journald[206]: Runtime Journal (/run/log/journal/1a730ce3bbb744ccacc887746f83c567) is 8M, max 78.5M, 70.5M free.
Aug 13 01:11:12.847000 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 01:11:12.850187 systemd-modules-load[207]: Inserted module 'overlay'
Aug 13 01:11:12.875042 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 01:11:12.887763 systemd-tmpfiles[221]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Aug 13 01:11:12.948423 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 01:11:12.948466 kernel: Bridge firewalling registered
Aug 13 01:11:12.889013 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 01:11:12.891083 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 01:11:12.896928 systemd-modules-load[207]: Inserted module 'br_netfilter'
Aug 13 01:11:12.950864 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 01:11:12.951744 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 01:11:12.953266 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 01:11:12.954545 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 01:11:12.959043 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 01:11:12.962148 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 01:11:12.976367 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 01:11:12.980063 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 01:11:12.981550 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 01:11:12.984039 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 01:11:13.002028 dracut-cmdline[246]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21
Aug 13 01:11:13.020951 systemd-resolved[245]: Positive Trust Anchors:
Aug 13 01:11:13.020963 systemd-resolved[245]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 01:11:13.020989 systemd-resolved[245]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 01:11:13.026854 systemd-resolved[245]: Defaulting to hostname 'linux'.
Aug 13 01:11:13.027977 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 01:11:13.028793 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 01:11:13.091942 kernel: SCSI subsystem initialized
Aug 13 01:11:13.099926 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 01:11:13.110097 kernel: iscsi: registered transport (tcp)
Aug 13 01:11:13.129445 kernel: iscsi: registered transport (qla4xxx)
Aug 13 01:11:13.129468 kernel: QLogic iSCSI HBA Driver
Aug 13 01:11:13.149568 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 13 01:11:13.166743 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 01:11:13.169161 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 13 01:11:13.212929 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 01:11:13.214842 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 01:11:13.260924 kernel: raid6: avx2x4 gen() 31715 MB/s
Aug 13 01:11:13.278920 kernel: raid6: avx2x2 gen() 30855 MB/s
Aug 13 01:11:13.297559 kernel: raid6: avx2x1 gen() 22198 MB/s
Aug 13 01:11:13.297583 kernel: raid6: using algorithm avx2x4 gen() 31715 MB/s
Aug 13 01:11:13.316450 kernel: raid6: .... xor() 4763 MB/s, rmw enabled
Aug 13 01:11:13.316487 kernel: raid6: using avx2x2 recovery algorithm
Aug 13 01:11:13.334923 kernel: xor: automatically using best checksumming function avx
Aug 13 01:11:13.467943 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 01:11:13.474776 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 01:11:13.476890 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 01:11:13.505323 systemd-udevd[456]: Using default interface naming scheme 'v255'.
Aug 13 01:11:13.510246 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 01:11:13.513151 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 01:11:13.537198 dracut-pre-trigger[462]: rd.md=0: removing MD RAID activation
Aug 13 01:11:13.560087 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 01:11:13.562526 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 01:11:13.624029 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 01:11:13.627808 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 01:11:13.691947 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
Aug 13 01:11:13.700211 kernel: cryptd: max_cpu_qlen set to 1000
Aug 13 01:11:13.712935 kernel: scsi host0: Virtio SCSI HBA
Aug 13 01:11:13.723924 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Aug 13 01:11:13.724506 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 01:11:13.724602 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 01:11:13.727373 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 01:11:13.730357 kernel: AES CTR mode by8 optimization enabled
Aug 13 01:11:13.736631 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 01:11:13.858201 kernel: libata version 3.00 loaded.
Aug 13 01:11:13.857029 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Aug 13 01:11:13.879000 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Aug 13 01:11:13.931994 kernel: ahci 0000:00:1f.2: version 3.0
Aug 13 01:11:13.934977 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Aug 13 01:11:13.935975 kernel: sd 0:0:0:0: Power-on or device reset occurred
Aug 13 01:11:13.936153 kernel: sd 0:0:0:0: [sda] 9297920 512-byte logical blocks: (4.76 GB/4.43 GiB)
Aug 13 01:11:13.936290 kernel: sd 0:0:0:0: [sda] Write Protect is off
Aug 13 01:11:13.936426 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Aug 13 01:11:13.936556 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Aug 13 01:11:13.938151 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Aug 13 01:11:13.938304 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Aug 13 01:11:13.938432 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Aug 13 01:11:13.940920 kernel: scsi host1: ahci
Aug 13 01:11:13.942963 kernel: scsi host2: ahci
Aug 13 01:11:13.943439 kernel: scsi host3: ahci
Aug 13 01:11:13.943917 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 01:11:13.943977 kernel: GPT:9289727 != 9297919
Aug 13 01:11:13.944177 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 01:11:13.944189 kernel: GPT:9289727 != 9297919
Aug 13 01:11:13.944198 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 01:11:13.944207 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 01:11:13.944216 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Aug 13 01:11:13.946949 kernel: scsi host4: ahci
Aug 13 01:11:13.948934 kernel: scsi host5: ahci
Aug 13 01:11:13.951934 kernel: scsi host6: ahci
Aug 13 01:11:13.952094 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 lpm-pol 0
Aug 13 01:11:13.952107 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 lpm-pol 0
Aug 13 01:11:13.952116 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 lpm-pol 0
Aug 13 01:11:13.952125 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 lpm-pol 0
Aug 13 01:11:13.952134 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 lpm-pol 0
Aug 13 01:11:13.952148 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 lpm-pol 0
Aug 13 01:11:14.023672 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Aug 13 01:11:14.059824 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 01:11:14.073919 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Aug 13 01:11:14.082068 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Aug 13 01:11:14.088790 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Aug 13 01:11:14.089431 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Aug 13 01:11:14.092450 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 01:11:14.118813 disk-uuid[624]: Primary Header is updated.
Aug 13 01:11:14.118813 disk-uuid[624]: Secondary Entries is updated.
Aug 13 01:11:14.118813 disk-uuid[624]: Secondary Header is updated.
Aug 13 01:11:14.126926 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 01:11:14.139931 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 01:11:14.267651 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Aug 13 01:11:14.267715 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Aug 13 01:11:14.267726 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Aug 13 01:11:14.267735 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Aug 13 01:11:14.267744 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Aug 13 01:11:14.267753 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Aug 13 01:11:14.291706 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 01:11:14.313684 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 01:11:14.314292 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 01:11:14.315554 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 01:11:14.317407 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 01:11:14.333324 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 01:11:15.140934 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 01:11:15.141380 disk-uuid[625]: The operation has completed successfully.
Aug 13 01:11:15.198241 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 01:11:15.198371 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 01:11:15.219261 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 01:11:15.235505 sh[652]: Success
Aug 13 01:11:15.253301 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 01:11:15.253335 kernel: device-mapper: uevent: version 1.0.3
Aug 13 01:11:15.253949 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Aug 13 01:11:15.264935 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Aug 13 01:11:15.310781 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 01:11:15.314970 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 01:11:15.323004 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 01:11:15.336918 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Aug 13 01:11:15.336955 kernel: BTRFS: device fsid 0c0338fb-9434-41c1-99a2-737cbe2351c4 devid 1 transid 44 /dev/mapper/usr (254:0) scanned by mount (664)
Aug 13 01:11:15.340104 kernel: BTRFS info (device dm-0): first mount of filesystem 0c0338fb-9434-41c1-99a2-737cbe2351c4
Aug 13 01:11:15.340139 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 13 01:11:15.342803 kernel: BTRFS info (device dm-0): using free-space-tree
Aug 13 01:11:15.350252 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 01:11:15.351202 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Aug 13 01:11:15.352079 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 01:11:15.352738 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 01:11:15.355188 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 01:11:15.385928 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (697)
Aug 13 01:11:15.392839 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 01:11:15.392869 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 01:11:15.392880 kernel: BTRFS info (device sda6): using free-space-tree
Aug 13 01:11:15.401977 kernel: BTRFS info (device sda6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 01:11:15.403247 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 01:11:15.404655 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 01:11:15.484592 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 01:11:15.488182 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 01:11:15.513396 ignition[758]: Ignition 2.21.0
Aug 13 01:11:15.514087 ignition[758]: Stage: fetch-offline
Aug 13 01:11:15.514123 ignition[758]: no configs at "/usr/lib/ignition/base.d"
Aug 13 01:11:15.514132 ignition[758]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:11:15.514211 ignition[758]: parsed url from cmdline: ""
Aug 13 01:11:15.514214 ignition[758]: no config URL provided
Aug 13 01:11:15.514219 ignition[758]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 01:11:15.514227 ignition[758]: no config at "/usr/lib/ignition/user.ign"
Aug 13 01:11:15.514231 ignition[758]: failed to fetch config: resource requires networking
Aug 13 01:11:15.520621 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 01:11:15.514373 ignition[758]: Ignition finished successfully
Aug 13 01:11:15.526646 systemd-networkd[839]: lo: Link UP
Aug 13 01:11:15.526657 systemd-networkd[839]: lo: Gained carrier
Aug 13 01:11:15.528086 systemd-networkd[839]: Enumeration completed
Aug 13 01:11:15.528182 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 01:11:15.528834 systemd[1]: Reached target network.target - Network.
Aug 13 01:11:15.529209 systemd-networkd[839]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 01:11:15.529213 systemd-networkd[839]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 01:11:15.530462 systemd-networkd[839]: eth0: Link UP
Aug 13 01:11:15.532088 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Aug 13 01:11:15.532965 systemd-networkd[839]: eth0: Gained carrier
Aug 13 01:11:15.532976 systemd-networkd[839]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 01:11:15.552642 ignition[843]: Ignition 2.21.0
Aug 13 01:11:15.552665 ignition[843]: Stage: fetch
Aug 13 01:11:15.552998 ignition[843]: no configs at "/usr/lib/ignition/base.d"
Aug 13 01:11:15.553013 ignition[843]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:11:15.553196 ignition[843]: parsed url from cmdline: ""
Aug 13 01:11:15.553204 ignition[843]: no config URL provided
Aug 13 01:11:15.553210 ignition[843]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 01:11:15.553219 ignition[843]: no config at "/usr/lib/ignition/user.ign"
Aug 13 01:11:15.553295 ignition[843]: PUT http://169.254.169.254/v1/token: attempt #1
Aug 13 01:11:15.553827 ignition[843]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Aug 13 01:11:15.754019 ignition[843]: PUT http://169.254.169.254/v1/token: attempt #2
Aug 13 01:11:15.754190 ignition[843]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Aug 13 01:11:16.063984 systemd-networkd[839]: eth0: DHCPv4 address 172.233.214.103/24, gateway 172.233.214.1 acquired from 23.40.197.129
Aug 13 01:11:16.155356 ignition[843]: PUT http://169.254.169.254/v1/token: attempt #3
Aug 13 01:11:16.276535 ignition[843]: PUT result: OK
Aug 13 01:11:16.276642 ignition[843]: GET http://169.254.169.254/v1/user-data: attempt #1
Aug 13 01:11:16.407506 ignition[843]: GET result: OK
Aug 13 01:11:16.407648 ignition[843]: parsing config with SHA512: f17428d8a5176cdedb815e3aadbbec3c9c60c19182a5724943555a085d895f4b6c1b969b31d1de59c9e2298fa966e4eddd69c4641acc8fcd73e6dd00b9ca90a1
Aug 13 01:11:16.411608 unknown[843]: fetched base config from "system"
Aug 13 01:11:16.411617 unknown[843]: fetched base config from "system"
Aug 13 01:11:16.412396 ignition[843]: fetch: fetch complete
Aug 13 01:11:16.411622 unknown[843]: fetched user config from "akamai"
Aug 13 01:11:16.412402 ignition[843]: fetch: fetch passed
Aug 13 01:11:16.412443 ignition[843]: Ignition finished successfully
Aug 13 01:11:16.416694 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Aug 13 01:11:16.441085 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 01:11:16.480077 ignition[850]: Ignition 2.21.0
Aug 13 01:11:16.480737 ignition[850]: Stage: kargs
Aug 13 01:11:16.480885 ignition[850]: no configs at "/usr/lib/ignition/base.d"
Aug 13 01:11:16.481106 ignition[850]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:11:16.484130 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 13 01:11:16.482195 ignition[850]: kargs: kargs passed
Aug 13 01:11:16.482237 ignition[850]: Ignition finished successfully
Aug 13 01:11:16.487211 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 13 01:11:16.510988 ignition[856]: Ignition 2.21.0
Aug 13 01:11:16.511000 ignition[856]: Stage: disks
Aug 13 01:11:16.511144 ignition[856]: no configs at "/usr/lib/ignition/base.d"
Aug 13 01:11:16.511154 ignition[856]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:11:16.513749 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 01:11:16.511990 ignition[856]: disks: disks passed
Aug 13 01:11:16.514774 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 01:11:16.512030 ignition[856]: Ignition finished successfully
Aug 13 01:11:16.515628 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 01:11:16.516811 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 01:11:16.518414 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 01:11:16.519421 systemd[1]: Reached target basic.target - Basic System.
Aug 13 01:11:16.521554 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 01:11:16.563261 systemd-fsck[864]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Aug 13 01:11:16.565010 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 01:11:16.567637 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 01:11:16.671925 kernel: EXT4-fs (sda9): mounted filesystem 069caac6-7833-4acd-8940-01a7ff7d1281 r/w with ordered data mode. Quota mode: none.
Aug 13 01:11:16.672407 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 01:11:16.673474 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 01:11:16.675351 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 01:11:16.677961 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 01:11:16.679596 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 13 01:11:16.680220 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 01:11:16.680244 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 01:11:16.684859 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 01:11:16.686946 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 01:11:16.694260 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (872)
Aug 13 01:11:16.694285 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 01:11:16.698033 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 01:11:16.698055 kernel: BTRFS info (device sda6): using free-space-tree
Aug 13 01:11:16.704314 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 01:11:16.731745 initrd-setup-root[896]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 01:11:16.736505 initrd-setup-root[903]: cut: /sysroot/etc/group: No such file or directory
Aug 13 01:11:16.741636 initrd-setup-root[910]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 01:11:16.745534 initrd-setup-root[917]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 01:11:16.825328 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 01:11:16.827503 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 01:11:16.829949 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 01:11:16.838815 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 01:11:16.842121 kernel: BTRFS info (device sda6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 01:11:16.856396 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 01:11:16.865599 ignition[986]: INFO : Ignition 2.21.0 Aug 13 01:11:16.866924 ignition[986]: INFO : Stage: mount Aug 13 01:11:16.866924 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 01:11:16.866924 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:11:16.870041 ignition[986]: INFO : mount: mount passed Aug 13 01:11:16.870041 ignition[986]: INFO : Ignition finished successfully Aug 13 01:11:16.868435 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 13 01:11:16.870582 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 13 01:11:17.387060 systemd-networkd[839]: eth0: Gained IPv6LL Aug 13 01:11:17.674034 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 01:11:17.693960 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (997) Aug 13 01:11:17.698267 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 01:11:17.698288 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 01:11:17.698299 kernel: BTRFS info (device sda6): using free-space-tree Aug 13 01:11:17.705727 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 01:11:17.731210 ignition[1014]: INFO : Ignition 2.21.0 Aug 13 01:11:17.731210 ignition[1014]: INFO : Stage: files Aug 13 01:11:17.732397 ignition[1014]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 01:11:17.732397 ignition[1014]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:11:17.732397 ignition[1014]: DEBUG : files: compiled without relabeling support, skipping Aug 13 01:11:17.734503 ignition[1014]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 01:11:17.734503 ignition[1014]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 01:11:17.737108 ignition[1014]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 01:11:17.738115 ignition[1014]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 01:11:17.738115 ignition[1014]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 01:11:17.737592 unknown[1014]: wrote ssh authorized keys file for user: core Aug 13 01:11:17.740332 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Aug 13 01:11:17.740332 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Aug 13 01:11:18.035819 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 13 01:11:22.906080 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Aug 13 01:11:22.906080 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Aug 13 01:11:22.908470 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 01:11:22.908470 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 01:11:22.908470 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file 
"/sysroot/home/core/nginx.yaml" Aug 13 01:11:22.908470 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 01:11:22.908470 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 01:11:22.908470 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 01:11:22.908470 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 01:11:22.908470 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 01:11:22.915084 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 01:11:22.915084 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 01:11:22.915084 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 01:11:22.915084 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 01:11:22.915084 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Aug 13 01:11:23.315784 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Aug 13 01:11:23.710436 ignition[1014]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 01:11:23.710436 ignition[1014]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Aug 13 01:11:23.712737 ignition[1014]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 01:11:23.713960 ignition[1014]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 01:11:23.713960 ignition[1014]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Aug 13 01:11:23.713960 ignition[1014]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Aug 13 01:11:23.713960 ignition[1014]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Aug 13 01:11:23.713960 ignition[1014]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Aug 13 01:11:23.713960 ignition[1014]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Aug 13 01:11:23.713960 ignition[1014]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Aug 13 01:11:23.713960 ignition[1014]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 01:11:23.713960 
ignition[1014]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 01:11:23.725946 ignition[1014]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 01:11:23.725946 ignition[1014]: INFO : files: files passed Aug 13 01:11:23.725946 ignition[1014]: INFO : Ignition finished successfully Aug 13 01:11:23.716786 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 13 01:11:23.721399 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 13 01:11:23.726846 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 13 01:11:23.735917 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 01:11:23.736660 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 13 01:11:23.745461 initrd-setup-root-after-ignition[1044]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 01:11:23.745461 initrd-setup-root-after-ignition[1044]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 13 01:11:23.748045 initrd-setup-root-after-ignition[1048]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 01:11:23.747549 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 01:11:23.748973 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 13 01:11:23.750721 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 13 01:11:23.789670 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 01:11:23.789805 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 13 01:11:23.791391 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 13 01:11:23.792250 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 13 01:11:23.793503 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 13 01:11:23.794266 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 13 01:11:23.812481 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 01:11:23.814286 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 13 01:11:23.834640 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 13 01:11:23.836018 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 01:11:23.837362 systemd[1]: Stopped target timers.target - Timer Units. Aug 13 01:11:23.838143 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 01:11:23.838239 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 01:11:23.839644 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 13 01:11:23.840375 systemd[1]: Stopped target basic.target - Basic System. Aug 13 01:11:23.841602 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 13 01:11:23.842660 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 01:11:23.843752 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 13 01:11:23.845256 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
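Every operation the files stage just reported, the written files, the /etc/extensions/kubernetes.raw link, the prepare-helm.service unit and its preset, is driven by the rendered Ignition config rather than by anything baked into the image. As a rough illustration, here is a config fragment of that shape built as a Python dict and serialized to JSON; the field layout follows the Ignition v3 schema as I understand it, and the inline contents are placeholders rather than the actual payloads used on this machine.

    import json

    config = {
        "ignition": {"version": "3.4.0"},
        "storage": {
            "files": [
                {
                    "path": "/home/core/nginx.yaml",
                    "mode": 0o644,
                    # Placeholder data: URL; the real manifest is not shown in the log.
                    "contents": {"source": "data:,apiVersion%3A%20v1%0A"},
                }
            ],
            "links": [
                {
                    "path": "/etc/extensions/kubernetes.raw",
                    "target": "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw",
                    "hard": False,
                }
            ],
        },
        "systemd": {
            "units": [
                {
                    "name": "prepare-helm.service",
                    "enabled": True,
                    # Unit body is a stand-in, not the unit Ignition actually wrote here.
                    "contents": "[Unit]\nDescription=Unpack helm to /opt/bin\n"
                                "[Service]\nType=oneshot\nExecStart=/usr/bin/true\n"
                                "[Install]\nWantedBy=multi-user.target\n",
                }
            ]
        },
    }

    print(json.dumps(config, indent=2))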
Aug 13 01:11:23.846573 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 13 01:11:23.847851 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 01:11:23.849441 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 13 01:11:23.850705 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 13 01:11:23.852270 systemd[1]: Stopped target swap.target - Swaps. Aug 13 01:11:23.853587 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 01:11:23.853719 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 13 01:11:23.854967 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 13 01:11:23.855924 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 01:11:23.857218 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 13 01:11:23.857309 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 01:11:23.858434 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 01:11:23.858532 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 13 01:11:23.860135 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 01:11:23.860285 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 01:11:23.861023 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 01:11:23.861152 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 13 01:11:23.863973 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 13 01:11:23.865356 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 01:11:23.865466 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 01:11:23.868559 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 13 01:11:23.870725 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 01:11:23.870835 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 01:11:23.871957 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 01:11:23.872066 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 01:11:23.878012 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 01:11:23.878117 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 13 01:11:23.894098 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 01:11:23.896083 ignition[1068]: INFO : Ignition 2.21.0 Aug 13 01:11:23.896083 ignition[1068]: INFO : Stage: umount Aug 13 01:11:23.896083 ignition[1068]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 01:11:23.896083 ignition[1068]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:11:23.920965 ignition[1068]: INFO : umount: umount passed Aug 13 01:11:23.920965 ignition[1068]: INFO : Ignition finished successfully Aug 13 01:11:23.901765 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 01:11:23.901912 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 13 01:11:23.920513 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 01:11:23.920613 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 13 01:11:23.922537 systemd[1]: ignition-disks.service: Deactivated successfully. 
Aug 13 01:11:23.922589 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 13 01:11:23.923620 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 01:11:23.923672 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 13 01:11:23.924614 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 13 01:11:23.924662 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 13 01:11:23.925641 systemd[1]: Stopped target network.target - Network. Aug 13 01:11:23.926607 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 01:11:23.926656 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 01:11:23.927681 systemd[1]: Stopped target paths.target - Path Units. Aug 13 01:11:23.928699 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 01:11:23.932237 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 01:11:23.932862 systemd[1]: Stopped target slices.target - Slice Units. Aug 13 01:11:23.934056 systemd[1]: Stopped target sockets.target - Socket Units. Aug 13 01:11:23.935114 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 01:11:23.935154 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 01:11:23.936296 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 01:11:23.936335 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 01:11:23.937571 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 01:11:23.937621 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 13 01:11:23.938603 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 13 01:11:23.938646 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 13 01:11:23.939813 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 01:11:23.940068 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 01:11:23.941180 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 13 01:11:23.942285 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 13 01:11:23.945935 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 01:11:23.946073 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 13 01:11:23.949543 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Aug 13 01:11:23.949800 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 13 01:11:23.949848 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 01:11:23.953067 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Aug 13 01:11:23.953964 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 01:11:23.954106 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 13 01:11:23.955764 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Aug 13 01:11:23.956298 systemd[1]: Stopped target network-pre.target - Preparation for Network. Aug 13 01:11:23.957437 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 01:11:23.957487 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 13 01:11:23.959490 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Aug 13 01:11:23.961250 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 01:11:23.961301 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 01:11:23.963859 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 01:11:23.963923 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 01:11:23.965564 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 01:11:23.965610 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 13 01:11:23.966388 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 01:11:23.970791 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 13 01:11:23.972533 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 01:11:23.972737 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 01:11:23.975521 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 01:11:23.975579 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 13 01:11:23.977137 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 01:11:23.977171 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 01:11:23.977680 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 01:11:23.977726 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 13 01:11:23.978760 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 01:11:23.978807 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 13 01:11:23.980010 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 01:11:23.980062 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 01:11:23.982685 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 13 01:11:23.984755 systemd[1]: systemd-network-generator.service: Deactivated successfully. Aug 13 01:11:23.984810 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 01:11:23.987781 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 01:11:23.987830 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 01:11:23.989079 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 01:11:23.989125 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 01:11:23.990867 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 01:11:23.993019 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 01:11:23.997811 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 01:11:23.997941 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 13 01:11:23.999806 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 13 01:11:24.001116 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 01:11:24.033681 systemd[1]: Switching root. Aug 13 01:11:24.063698 systemd-journald[206]: Journal stopped Aug 13 01:11:25.113571 systemd-journald[206]: Received SIGTERM from PID 1 (systemd). 
Aug 13 01:11:25.113598 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 01:11:25.113610 kernel: SELinux: policy capability open_perms=1 Aug 13 01:11:25.113622 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 01:11:25.113631 kernel: SELinux: policy capability always_check_network=0 Aug 13 01:11:25.113639 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 01:11:25.113648 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 01:11:25.113657 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 01:11:25.113666 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 01:11:25.113674 kernel: SELinux: policy capability userspace_initial_context=0 Aug 13 01:11:25.113685 kernel: audit: type=1403 audit(1755047484.203:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 01:11:25.113694 systemd[1]: Successfully loaded SELinux policy in 58.097ms. Aug 13 01:11:25.113705 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 10.863ms. Aug 13 01:11:25.113715 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 13 01:11:25.113726 systemd[1]: Detected virtualization kvm. Aug 13 01:11:25.113737 systemd[1]: Detected architecture x86-64. Aug 13 01:11:25.113746 systemd[1]: Detected first boot. Aug 13 01:11:25.113756 systemd[1]: Initializing machine ID from random generator. Aug 13 01:11:25.113766 zram_generator::config[1112]: No configuration found. Aug 13 01:11:25.113776 kernel: Guest personality initialized and is inactive Aug 13 01:11:25.113785 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Aug 13 01:11:25.113793 kernel: Initialized host personality Aug 13 01:11:25.113804 kernel: NET: Registered PF_VSOCK protocol family Aug 13 01:11:25.113813 systemd[1]: Populated /etc with preset unit settings. Aug 13 01:11:25.113824 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Aug 13 01:11:25.113833 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 13 01:11:25.113843 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 13 01:11:25.113852 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 13 01:11:25.113862 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 13 01:11:25.113873 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 13 01:11:25.113883 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 13 01:11:25.116354 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 13 01:11:25.116375 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 13 01:11:25.116386 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 13 01:11:25.116396 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 13 01:11:25.116406 systemd[1]: Created slice user.slice - User and Session Slice. Aug 13 01:11:25.116419 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
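The systemd banner above encodes its compile-time options as a single +/- feature string. Splitting that string into enabled and disabled sets makes it easier to diff two builds; a short sketch using the exact string from this boot:

    FEATURES = ("+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT "
                "-GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN "
                "+IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK "
                "+PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ "
                "+ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE")

    def split_features(flags: str):
        """Split a systemd feature string into (enabled, disabled) name sets."""
        enabled = {f[1:] for f in flags.split() if f.startswith("+")}
        disabled = {f[1:] for f in flags.split() if f.startswith("-")}
        return enabled, disabled

    enabled, disabled = split_features(FEATURES)
    print(sorted(disabled))  # ACL, APPARMOR, BPF_FRAMEWORK, FIDO2, ...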
Aug 13 01:11:25.116429 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 01:11:25.116438 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 13 01:11:25.116448 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 13 01:11:25.116461 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 13 01:11:25.116471 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 01:11:25.116481 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 13 01:11:25.116491 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 01:11:25.116502 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 01:11:25.116512 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 13 01:11:25.116522 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 13 01:11:25.116531 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 13 01:11:25.116541 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 13 01:11:25.116551 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 01:11:25.116561 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 01:11:25.116570 systemd[1]: Reached target slices.target - Slice Units. Aug 13 01:11:25.116582 systemd[1]: Reached target swap.target - Swaps. Aug 13 01:11:25.116591 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 13 01:11:25.116602 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 13 01:11:25.116611 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Aug 13 01:11:25.116621 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 01:11:25.116633 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 01:11:25.116643 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 01:11:25.116653 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 13 01:11:25.116662 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 13 01:11:25.116673 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 13 01:11:25.116683 systemd[1]: Mounting media.mount - External Media Directory... Aug 13 01:11:25.116692 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:11:25.116702 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 13 01:11:25.116714 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 13 01:11:25.116723 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 13 01:11:25.116733 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 01:11:25.116743 systemd[1]: Reached target machines.target - Containers. Aug 13 01:11:25.116753 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Aug 13 01:11:25.116763 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 01:11:25.116773 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 01:11:25.116783 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 13 01:11:25.116794 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 01:11:25.116804 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 01:11:25.116814 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 01:11:25.116823 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 13 01:11:25.116833 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 01:11:25.116843 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 01:11:25.116853 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 13 01:11:25.116863 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 13 01:11:25.116872 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 13 01:11:25.116884 systemd[1]: Stopped systemd-fsck-usr.service. Aug 13 01:11:25.118966 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 01:11:25.118985 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 01:11:25.118996 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 01:11:25.119006 kernel: loop: module loaded Aug 13 01:11:25.119016 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 01:11:25.119026 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 13 01:11:25.119036 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Aug 13 01:11:25.119050 kernel: fuse: init (API version 7.41) Aug 13 01:11:25.119059 kernel: ACPI: bus type drm_connector registered Aug 13 01:11:25.119069 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 01:11:25.119078 systemd[1]: verity-setup.service: Deactivated successfully. Aug 13 01:11:25.119088 systemd[1]: Stopped verity-setup.service. Aug 13 01:11:25.119098 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:11:25.119108 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 13 01:11:25.119118 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 13 01:11:25.119155 systemd-journald[1200]: Collecting audit messages is disabled. Aug 13 01:11:25.119176 systemd[1]: Mounted media.mount - External Media Directory. Aug 13 01:11:25.119187 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 13 01:11:25.119197 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 13 01:11:25.119207 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 13 01:11:25.119219 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Aug 13 01:11:25.119229 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 01:11:25.119238 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 01:11:25.119248 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 13 01:11:25.119258 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 01:11:25.119268 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 01:11:25.119278 systemd-journald[1200]: Journal started Aug 13 01:11:25.119298 systemd-journald[1200]: Runtime Journal (/run/log/journal/a3b7200ea9394575ae0de220ca465009) is 8M, max 78.5M, 70.5M free. Aug 13 01:11:24.762527 systemd[1]: Queued start job for default target multi-user.target. Aug 13 01:11:24.788280 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Aug 13 01:11:24.788770 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 13 01:11:25.123098 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 01:11:25.122346 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 01:11:25.122535 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 01:11:25.124481 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 01:11:25.124691 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 01:11:25.125517 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 01:11:25.125710 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 13 01:11:25.126526 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 01:11:25.126709 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 01:11:25.128585 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 01:11:25.129564 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 01:11:25.130499 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 13 01:11:25.131495 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Aug 13 01:11:25.146773 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 01:11:25.150994 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 13 01:11:25.155058 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 13 01:11:25.155726 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 01:11:25.155784 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 01:11:25.158199 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Aug 13 01:11:25.179004 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 13 01:11:25.180251 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 01:11:25.182099 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 13 01:11:25.184141 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 13 01:11:25.184682 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). 
Aug 13 01:11:25.186996 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 13 01:11:25.188326 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 01:11:25.195323 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 01:11:25.198079 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 13 01:11:25.208991 systemd-journald[1200]: Time spent on flushing to /var/log/journal/a3b7200ea9394575ae0de220ca465009 is 50.642ms for 992 entries. Aug 13 01:11:25.208991 systemd-journald[1200]: System Journal (/var/log/journal/a3b7200ea9394575ae0de220ca465009) is 8M, max 195.6M, 187.6M free. Aug 13 01:11:25.281452 systemd-journald[1200]: Received client request to flush runtime journal. Aug 13 01:11:25.281509 kernel: loop0: detected capacity change from 0 to 146240 Aug 13 01:11:25.202349 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 13 01:11:25.211592 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 01:11:25.212556 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 13 01:11:25.287082 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 01:11:25.213195 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 13 01:11:25.236219 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 13 01:11:25.237071 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 13 01:11:25.242301 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Aug 13 01:11:25.269268 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 01:11:25.288583 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 13 01:11:25.292670 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Aug 13 01:11:25.309928 kernel: loop1: detected capacity change from 0 to 8 Aug 13 01:11:25.318172 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 13 01:11:25.320693 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 01:11:25.327941 kernel: loop2: detected capacity change from 0 to 229808 Aug 13 01:11:25.363893 systemd-tmpfiles[1255]: ACLs are not supported, ignoring. Aug 13 01:11:25.370127 kernel: loop3: detected capacity change from 0 to 113872 Aug 13 01:11:25.365638 systemd-tmpfiles[1255]: ACLs are not supported, ignoring. Aug 13 01:11:25.372393 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 01:11:25.402132 kernel: loop4: detected capacity change from 0 to 146240 Aug 13 01:11:25.422919 kernel: loop5: detected capacity change from 0 to 8 Aug 13 01:11:25.429247 kernel: loop6: detected capacity change from 0 to 229808 Aug 13 01:11:25.452061 kernel: loop7: detected capacity change from 0 to 113872 Aug 13 01:11:25.470825 (sd-merge)[1260]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. Aug 13 01:11:25.472581 (sd-merge)[1260]: Merged extensions into '/usr'. Aug 13 01:11:25.478698 systemd[1]: Reload requested from client PID 1237 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 01:11:25.478791 systemd[1]: Reloading... Aug 13 01:11:25.551922 zram_generator::config[1288]: No configuration found. 
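The (sd-merge) messages above show systemd-sysext overlaying four extension images onto /usr: containerd-flatcar, docker-flatcar, kubernetes and oem-akamai, the kubernetes one matching the kubernetes.raw link Ignition created earlier. A hedged sketch of how one might enumerate what would be merged; the search directories are the conventional sysext locations and are an assumption of the sketch, not something printed in the log.

    import os

    # Conventional systemd-sysext search path (assumption of this sketch).
    SEARCH_DIRS = ["/etc/extensions", "/run/extensions", "/var/lib/extensions"]

    def list_extension_images():
        """Yield (name, resolved_path) for every entry found in the search path."""
        for d in SEARCH_DIRS:
            if not os.path.isdir(d):
                continue
            for entry in sorted(os.scandir(d), key=lambda e: e.name):
                # e.g. kubernetes.raw -> /opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw
                yield entry.name, os.path.realpath(entry.path)

    for name, target in list_extension_images():
        print(f"{name:30} -> {target}")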
Aug 13 01:11:25.694046 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:11:25.722968 ldconfig[1232]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 01:11:25.779028 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 01:11:25.779404 systemd[1]: Reloading finished in 300 ms. Aug 13 01:11:25.796317 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 01:11:25.797400 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 01:11:25.812036 systemd[1]: Starting ensure-sysext.service... Aug 13 01:11:25.815291 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 01:11:25.841041 systemd[1]: Reload requested from client PID 1329 ('systemctl') (unit ensure-sysext.service)... Aug 13 01:11:25.841114 systemd[1]: Reloading... Aug 13 01:11:25.843257 systemd-tmpfiles[1330]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Aug 13 01:11:25.843295 systemd-tmpfiles[1330]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Aug 13 01:11:25.843565 systemd-tmpfiles[1330]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 01:11:25.843797 systemd-tmpfiles[1330]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 01:11:25.844646 systemd-tmpfiles[1330]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 01:11:25.844873 systemd-tmpfiles[1330]: ACLs are not supported, ignoring. Aug 13 01:11:25.847974 systemd-tmpfiles[1330]: ACLs are not supported, ignoring. Aug 13 01:11:25.855322 systemd-tmpfiles[1330]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 01:11:25.855422 systemd-tmpfiles[1330]: Skipping /boot Aug 13 01:11:25.870315 systemd-tmpfiles[1330]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 01:11:25.870381 systemd-tmpfiles[1330]: Skipping /boot Aug 13 01:11:25.919913 zram_generator::config[1366]: No configuration found. Aug 13 01:11:25.990521 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:11:26.056016 systemd[1]: Reloading finished in 214 ms. Aug 13 01:11:26.074628 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 13 01:11:26.084360 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 01:11:26.092820 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 01:11:26.096419 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 01:11:26.107233 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 01:11:26.111080 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 01:11:26.113179 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 01:11:26.117968 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... 
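The systemd-tmpfiles warnings above ("Duplicate line for path ..., ignoring") mean the same path is declared by more than one tmpfiles.d line in the merged configuration; they are harmless but easy to reproduce offline. A deliberately simplified sketch that scans for such duplicates; the directory list and the "second field is the path" parsing are assumptions that ignore override precedence and specifier expansion.

    import glob
    import os
    from collections import defaultdict

    # Typical tmpfiles.d locations (assumption; real precedence rules are richer).
    TMPFILES_DIRS = ["/etc/tmpfiles.d", "/run/tmpfiles.d", "/usr/lib/tmpfiles.d"]

    def find_duplicate_paths():
        seen = defaultdict(list)  # path -> [(conf file, line number), ...]
        for d in TMPFILES_DIRS:
            for conf in sorted(glob.glob(os.path.join(d, "*.conf"))):
                with open(conf, encoding="utf-8", errors="replace") as fh:
                    for lineno, line in enumerate(fh, 1):
                        line = line.strip()
                        if not line or line.startswith("#"):
                            continue
                        fields = line.split()
                        if len(fields) >= 2:
                            seen[fields[1]].append((conf, lineno))
        return {path: locs for path, locs in seen.items() if len(locs) > 1}

    for path, locs in find_duplicate_paths().items():
        print(f"duplicate entries for {path}: {locs}")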
Aug 13 01:11:26.121285 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:11:26.121442 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 01:11:26.124802 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 01:11:26.127877 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 01:11:26.136101 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 01:11:26.137219 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 01:11:26.137320 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 01:11:26.137408 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:11:26.141974 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:11:26.142286 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 01:11:26.142423 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 01:11:26.142494 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 01:11:26.146224 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 01:11:26.146819 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:11:26.147511 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 01:11:26.148962 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 01:11:26.149890 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 01:11:26.150796 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 01:11:26.164443 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:11:26.164678 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 01:11:26.168163 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 01:11:26.173144 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 01:11:26.175183 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 01:11:26.177051 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 01:11:26.177195 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Aug 13 01:11:26.177348 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:11:26.180038 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 01:11:26.181104 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 01:11:26.181301 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 01:11:26.195748 systemd-udevd[1406]: Using default interface naming scheme 'v255'. Aug 13 01:11:26.195834 systemd[1]: Finished ensure-sysext.service. Aug 13 01:11:26.196731 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 01:11:26.208367 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 13 01:11:26.211072 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 01:11:26.222795 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 01:11:26.224059 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 01:11:26.226663 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 01:11:26.229004 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 01:11:26.229231 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 01:11:26.230685 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 01:11:26.237499 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 01:11:26.237811 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 01:11:26.249995 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 01:11:26.257242 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 01:11:26.263227 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 01:11:26.264956 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 01:11:26.266867 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 01:11:26.275930 augenrules[1459]: No rules Aug 13 01:11:26.276172 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 01:11:26.277215 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 01:11:26.280383 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 01:11:26.436768 systemd-networkd[1449]: lo: Link UP Aug 13 01:11:26.436782 systemd-networkd[1449]: lo: Gained carrier Aug 13 01:11:26.438273 systemd-networkd[1449]: Enumeration completed Aug 13 01:11:26.438350 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 01:11:26.439244 systemd-networkd[1449]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 01:11:26.439249 systemd-networkd[1449]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
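With eth0 now being configured from zz-default.network, the interface will shortly reacquire the DHCPv4 lease already logged during the fetch stage (172.233.214.103/24, gateway 172.233.214.1, from server 23.40.197.129). A tiny sanity check of those values with the standard ipaddress module, the kind of assertion a provisioning script might run:

    import ipaddress

    # Values taken from the DHCPv4 lease logged on this boot.
    lease = ipaddress.ip_interface("172.233.214.103/24")
    gateway = ipaddress.ip_address("172.233.214.1")
    dhcp_server = ipaddress.ip_address("23.40.197.129")

    assert gateway in lease.network          # gateway is on-link
    assert dhcp_server not in lease.network  # lease was handed out from off-link
    print(f"network={lease.network} usable_hosts={lease.network.num_addresses - 2}")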
Aug 13 01:11:26.440837 systemd-networkd[1449]: eth0: Link UP Aug 13 01:11:26.441280 systemd-networkd[1449]: eth0: Gained carrier Aug 13 01:11:26.441294 systemd-networkd[1449]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 01:11:26.443411 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Aug 13 01:11:26.449113 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 01:11:26.449923 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 13 01:11:26.496588 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Aug 13 01:11:26.536915 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 01:11:26.557007 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 13 01:11:26.557653 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 01:11:26.561019 systemd-resolved[1405]: Positive Trust Anchors: Aug 13 01:11:26.561255 systemd-resolved[1405]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 01:11:26.561321 systemd-resolved[1405]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 01:11:26.562935 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Aug 13 01:11:26.566450 systemd-resolved[1405]: Defaulting to hostname 'linux'. Aug 13 01:11:26.570369 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 01:11:26.571388 systemd[1]: Reached target network.target - Network. Aug 13 01:11:26.571916 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 01:11:26.572463 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 01:11:26.573076 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 13 01:11:26.573986 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 01:11:26.574546 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Aug 13 01:11:26.575477 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 13 01:11:26.576158 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 01:11:26.576964 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 01:11:26.577716 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 01:11:26.577750 systemd[1]: Reached target paths.target - Path Units. Aug 13 01:11:26.578347 systemd[1]: Reached target timers.target - Timer Units. Aug 13 01:11:26.580110 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. 
Aug 13 01:11:26.582961 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 01:11:26.587961 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Aug 13 01:11:26.588877 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Aug 13 01:11:26.589836 systemd[1]: Reached target ssh-access.target - SSH Access Available. Aug 13 01:11:26.601666 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 13 01:11:26.603353 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Aug 13 01:11:26.605928 kernel: ACPI: button: Power Button [PWRF] Aug 13 01:11:26.606138 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 01:11:26.607186 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 01:11:26.608184 systemd[1]: Reached target basic.target - Basic System. Aug 13 01:11:26.609138 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 13 01:11:26.609174 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 13 01:11:26.611325 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 01:11:26.615250 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 13 01:11:26.618045 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 13 01:11:26.622491 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 13 01:11:26.631085 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 01:11:26.635560 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 01:11:26.636304 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 01:11:26.642222 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Aug 13 01:11:26.647256 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 01:11:26.654050 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 13 01:11:26.682222 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Aug 13 01:11:26.682506 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Aug 13 01:11:26.692171 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 13 01:11:26.698147 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 01:11:26.718920 jq[1511]: false Aug 13 01:11:26.715481 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 13 01:11:26.718822 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 01:11:26.720483 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 01:11:26.724239 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 01:11:26.730687 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... 
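Among the units starting above is prepare-helm.service ("Unpack helm to /opt/bin"), which presumably consumes the helm-v3.17.3-linux-amd64.tar.gz that the Ignition files stage downloaded to /opt earlier. The unit's actual commands are not shown in the log; as a stand-in, here is a short Python sketch of the same idea, extracting the helm binary from the upstream tarball layout (linux-amd64/helm) into /opt/bin.

    import os
    import tarfile

    TARBALL = "/opt/helm-v3.17.3-linux-amd64.tar.gz"  # written by the Ignition files stage
    DEST = "/opt/bin"

    def unpack_helm():
        """Extract the helm binary from the release tarball into /opt/bin."""
        os.makedirs(DEST, exist_ok=True)
        with tarfile.open(TARBALL) as tar:
            member = tar.getmember("linux-amd64/helm")
            member.name = "helm"  # drop the leading directory component
            tar.extract(member, DEST)
        os.chmod(os.path.join(DEST, "helm"), 0o755)

    if __name__ == "__main__":
        unpack_helm()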
Aug 13 01:11:26.741359 google_oslogin_nss_cache[1513]: oslogin_cache_refresh[1513]: Refreshing passwd entry cache Aug 13 01:11:26.739763 oslogin_cache_refresh[1513]: Refreshing passwd entry cache Aug 13 01:11:26.745458 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 01:11:26.746492 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 01:11:26.748309 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 01:11:26.751631 update_engine[1524]: I20250813 01:11:26.751567 1524 main.cc:92] Flatcar Update Engine starting Aug 13 01:11:26.757982 google_oslogin_nss_cache[1513]: oslogin_cache_refresh[1513]: Failure getting users, quitting Aug 13 01:11:26.755795 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 01:11:26.763261 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 13 01:11:26.769921 oslogin_cache_refresh[1513]: Failure getting users, quitting Aug 13 01:11:26.771106 google_oslogin_nss_cache[1513]: oslogin_cache_refresh[1513]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Aug 13 01:11:26.771106 google_oslogin_nss_cache[1513]: oslogin_cache_refresh[1513]: Refreshing group entry cache Aug 13 01:11:26.771106 google_oslogin_nss_cache[1513]: oslogin_cache_refresh[1513]: Failure getting groups, quitting Aug 13 01:11:26.771106 google_oslogin_nss_cache[1513]: oslogin_cache_refresh[1513]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 13 01:11:26.769944 oslogin_cache_refresh[1513]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Aug 13 01:11:26.769983 oslogin_cache_refresh[1513]: Refreshing group entry cache Aug 13 01:11:26.770609 oslogin_cache_refresh[1513]: Failure getting groups, quitting Aug 13 01:11:26.770617 oslogin_cache_refresh[1513]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 13 01:11:26.771935 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Aug 13 01:11:26.777226 jq[1526]: true Aug 13 01:11:26.785087 extend-filesystems[1512]: Found /dev/sda6 Aug 13 01:11:26.786107 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Aug 13 01:11:26.792107 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Aug 13 01:11:26.793623 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 01:11:26.794973 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 13 01:11:26.808549 extend-filesystems[1512]: Found /dev/sda9 Aug 13 01:11:26.810168 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 13 01:11:26.815296 jq[1544]: true Aug 13 01:11:26.819734 extend-filesystems[1512]: Checking size of /dev/sda9 Aug 13 01:11:26.824131 tar[1539]: linux-amd64/LICENSE Aug 13 01:11:26.824568 tar[1539]: linux-amd64/helm Aug 13 01:11:26.826445 (ntainerd)[1550]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 01:11:26.867503 extend-filesystems[1512]: Resized partition /dev/sda9 Aug 13 01:11:26.888458 extend-filesystems[1576]: resize2fs 1.47.2 (1-Jan-2025) Aug 13 01:11:26.893723 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Aug 13 01:11:26.899931 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 555003 blocks Aug 13 01:11:26.897725 dbus-daemon[1509]: [system] SELinux support is enabled Aug 13 01:11:26.898120 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 01:11:26.904678 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 01:11:26.904709 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 01:11:26.905974 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 01:11:26.905990 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 01:11:26.910937 kernel: EXT4-fs (sda9): resized filesystem to 555003 Aug 13 01:11:26.920087 coreos-metadata[1508]: Aug 13 01:11:26.919 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Aug 13 01:11:26.925328 extend-filesystems[1576]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Aug 13 01:11:26.925328 extend-filesystems[1576]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 13 01:11:26.925328 extend-filesystems[1576]: The filesystem on /dev/sda9 is now 555003 (4k) blocks long. Aug 13 01:11:26.932684 extend-filesystems[1512]: Resized filesystem in /dev/sda9 Aug 13 01:11:26.927732 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 01:11:26.928086 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 01:11:26.949414 systemd[1]: Started update-engine.service - Update Engine. Aug 13 01:11:26.950703 update_engine[1524]: I20250813 01:11:26.950650 1524 update_check_scheduler.cc:74] Next update check in 9m47s Aug 13 01:11:26.955817 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 13 01:11:26.964925 bash[1588]: Updated "/home/core/.ssh/authorized_keys" Aug 13 01:11:26.965328 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 01:11:26.972614 systemd[1]: Starting sshkeys.service... Aug 13 01:11:26.980787 sshd_keygen[1563]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 01:11:26.983004 systemd-networkd[1449]: eth0: DHCPv4 address 172.233.214.103/24, gateway 172.233.214.1 acquired from 23.40.197.129 Aug 13 01:11:26.983227 dbus-daemon[1509]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1449 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Aug 13 01:11:26.987003 systemd-timesyncd[1432]: Network configuration changed, trying to establish connection. Aug 13 01:11:26.989507 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Aug 13 01:11:27.027257 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 01:11:27.030511 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 13 01:11:27.037675 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Aug 13 01:11:27.107854 systemd-timesyncd[1432]: Contacted time server 162.159.200.123:123 (0.flatcar.pool.ntp.org). 
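The extend-filesystems.service entries above grow the ext4 filesystem on /dev/sda9 online, from 553472 to 555003 4k blocks, while it is mounted at /. A minimal sketch of the equivalent manual steps, assuming (as the log shows) an ext4 root on /dev/sda9:

  lsblk /dev/sda                 # confirm sda9 is the partition holding /
  resize2fs /dev/sda9            # grows the filesystem to fill the partition; ext4 supports this while mounted
  df -h /                        # verify the new size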
Aug 13 01:11:27.108137 systemd-timesyncd[1432]: Initial clock synchronization to Wed 2025-08-13 01:11:27.120605 UTC. Aug 13 01:11:27.119996 kernel: EDAC MC: Ver: 3.0.0 Aug 13 01:11:27.125941 systemd-logind[1522]: Watching system buttons on /dev/input/event2 (Power Button) Aug 13 01:11:27.125969 systemd-logind[1522]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 01:11:27.129078 systemd-logind[1522]: New seat seat0. Aug 13 01:11:27.129338 locksmithd[1589]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 01:11:27.133196 systemd[1]: Started systemd-logind.service - User Login Management. Aug 13 01:11:27.143604 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 01:11:27.149271 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 01:11:27.164155 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 01:11:27.164421 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 01:11:27.169350 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 01:11:27.210298 coreos-metadata[1606]: Aug 13 01:11:27.210 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Aug 13 01:11:27.214834 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 01:11:27.218440 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 01:11:27.222057 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 13 01:11:27.222758 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 01:11:27.236560 containerd[1550]: time="2025-08-13T01:11:27Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Aug 13 01:11:27.256779 containerd[1550]: time="2025-08-13T01:11:27.256753503Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Aug 13 01:11:27.279419 containerd[1550]: time="2025-08-13T01:11:27.279031054Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.79µs" Aug 13 01:11:27.279419 containerd[1550]: time="2025-08-13T01:11:27.279054005Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Aug 13 01:11:27.279419 containerd[1550]: time="2025-08-13T01:11:27.279070245Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Aug 13 01:11:27.279419 containerd[1550]: time="2025-08-13T01:11:27.279233975Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Aug 13 01:11:27.279419 containerd[1550]: time="2025-08-13T01:11:27.279248395Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Aug 13 01:11:27.279419 containerd[1550]: time="2025-08-13T01:11:27.279268165Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 13 01:11:27.279419 containerd[1550]: time="2025-08-13T01:11:27.279332015Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 13 01:11:27.279419 containerd[1550]: time="2025-08-13T01:11:27.279343385Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 
Aug 13 01:11:27.281200 containerd[1550]: time="2025-08-13T01:11:27.280443135Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 13 01:11:27.281200 containerd[1550]: time="2025-08-13T01:11:27.280461725Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 13 01:11:27.281200 containerd[1550]: time="2025-08-13T01:11:27.280473395Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 13 01:11:27.281200 containerd[1550]: time="2025-08-13T01:11:27.280480915Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Aug 13 01:11:27.281200 containerd[1550]: time="2025-08-13T01:11:27.280572535Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Aug 13 01:11:27.281200 containerd[1550]: time="2025-08-13T01:11:27.280781585Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 13 01:11:27.281200 containerd[1550]: time="2025-08-13T01:11:27.280810665Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 13 01:11:27.281200 containerd[1550]: time="2025-08-13T01:11:27.280819725Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Aug 13 01:11:27.281200 containerd[1550]: time="2025-08-13T01:11:27.280846855Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Aug 13 01:11:27.281200 containerd[1550]: time="2025-08-13T01:11:27.281014915Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Aug 13 01:11:27.281200 containerd[1550]: time="2025-08-13T01:11:27.281076696Z" level=info msg="metadata content store policy set" policy=shared Aug 13 01:11:27.285917 containerd[1550]: time="2025-08-13T01:11:27.285053858Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Aug 13 01:11:27.285917 containerd[1550]: time="2025-08-13T01:11:27.285089258Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Aug 13 01:11:27.285917 containerd[1550]: time="2025-08-13T01:11:27.285101708Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Aug 13 01:11:27.285917 containerd[1550]: time="2025-08-13T01:11:27.285129778Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Aug 13 01:11:27.285917 containerd[1550]: time="2025-08-13T01:11:27.285140348Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Aug 13 01:11:27.285917 containerd[1550]: time="2025-08-13T01:11:27.285148658Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Aug 13 01:11:27.285917 containerd[1550]: time="2025-08-13T01:11:27.285159118Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Aug 13 
01:11:27.285917 containerd[1550]: time="2025-08-13T01:11:27.285168968Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Aug 13 01:11:27.285917 containerd[1550]: time="2025-08-13T01:11:27.285177438Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Aug 13 01:11:27.285917 containerd[1550]: time="2025-08-13T01:11:27.285189808Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Aug 13 01:11:27.285917 containerd[1550]: time="2025-08-13T01:11:27.285197688Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Aug 13 01:11:27.285917 containerd[1550]: time="2025-08-13T01:11:27.285207428Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Aug 13 01:11:27.285917 containerd[1550]: time="2025-08-13T01:11:27.285305308Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Aug 13 01:11:27.285917 containerd[1550]: time="2025-08-13T01:11:27.285322508Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Aug 13 01:11:27.286150 containerd[1550]: time="2025-08-13T01:11:27.285335178Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Aug 13 01:11:27.286150 containerd[1550]: time="2025-08-13T01:11:27.285344138Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Aug 13 01:11:27.286150 containerd[1550]: time="2025-08-13T01:11:27.285353968Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Aug 13 01:11:27.286150 containerd[1550]: time="2025-08-13T01:11:27.285362968Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Aug 13 01:11:27.286150 containerd[1550]: time="2025-08-13T01:11:27.285371588Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Aug 13 01:11:27.286150 containerd[1550]: time="2025-08-13T01:11:27.285382618Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Aug 13 01:11:27.286150 containerd[1550]: time="2025-08-13T01:11:27.285394638Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Aug 13 01:11:27.286150 containerd[1550]: time="2025-08-13T01:11:27.285406788Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Aug 13 01:11:27.286150 containerd[1550]: time="2025-08-13T01:11:27.285415928Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Aug 13 01:11:27.286150 containerd[1550]: time="2025-08-13T01:11:27.285474178Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Aug 13 01:11:27.286150 containerd[1550]: time="2025-08-13T01:11:27.285488228Z" level=info msg="Start snapshots syncer" Aug 13 01:11:27.286150 containerd[1550]: time="2025-08-13T01:11:27.285506868Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Aug 13 01:11:27.286334 containerd[1550]: time="2025-08-13T01:11:27.285684358Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Aug 13 01:11:27.286334 containerd[1550]: time="2025-08-13T01:11:27.285722118Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Aug 13 01:11:27.286892 containerd[1550]: time="2025-08-13T01:11:27.286873548Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Aug 13 01:11:27.287095 containerd[1550]: time="2025-08-13T01:11:27.287077279Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Aug 13 01:11:27.287952 containerd[1550]: time="2025-08-13T01:11:27.287935059Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Aug 13 01:11:27.288111 containerd[1550]: time="2025-08-13T01:11:27.288001599Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Aug 13 01:11:27.288111 containerd[1550]: time="2025-08-13T01:11:27.288017359Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Aug 13 01:11:27.288111 containerd[1550]: time="2025-08-13T01:11:27.288028239Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Aug 13 01:11:27.288111 containerd[1550]: time="2025-08-13T01:11:27.288036879Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Aug 13 01:11:27.288111 containerd[1550]: time="2025-08-13T01:11:27.288046019Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Aug 13 01:11:27.288111 containerd[1550]: time="2025-08-13T01:11:27.288065709Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Aug 13 01:11:27.288111 containerd[1550]: 
time="2025-08-13T01:11:27.288074579Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Aug 13 01:11:27.288111 containerd[1550]: time="2025-08-13T01:11:27.288083359Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Aug 13 01:11:27.288450 containerd[1550]: time="2025-08-13T01:11:27.288265569Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 13 01:11:27.288450 containerd[1550]: time="2025-08-13T01:11:27.288285469Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 13 01:11:27.288450 containerd[1550]: time="2025-08-13T01:11:27.288293199Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 13 01:11:27.288450 containerd[1550]: time="2025-08-13T01:11:27.288301029Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 13 01:11:27.288450 containerd[1550]: time="2025-08-13T01:11:27.288350839Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Aug 13 01:11:27.288450 containerd[1550]: time="2025-08-13T01:11:27.288359599Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Aug 13 01:11:27.288450 containerd[1550]: time="2025-08-13T01:11:27.288368429Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Aug 13 01:11:27.288450 containerd[1550]: time="2025-08-13T01:11:27.288384289Z" level=info msg="runtime interface created" Aug 13 01:11:27.288450 containerd[1550]: time="2025-08-13T01:11:27.288389239Z" level=info msg="created NRI interface" Aug 13 01:11:27.288450 containerd[1550]: time="2025-08-13T01:11:27.288395729Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Aug 13 01:11:27.288450 containerd[1550]: time="2025-08-13T01:11:27.288404899Z" level=info msg="Connect containerd service" Aug 13 01:11:27.288450 containerd[1550]: time="2025-08-13T01:11:27.288423629Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 01:11:27.289384 containerd[1550]: time="2025-08-13T01:11:27.289364070Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 01:11:27.368314 coreos-metadata[1606]: Aug 13 01:11:27.365 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Aug 13 01:11:27.493573 containerd[1550]: time="2025-08-13T01:11:27.493538962Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Aug 13 01:11:27.494407 containerd[1550]: time="2025-08-13T01:11:27.493977732Z" level=info msg="Start subscribing containerd event" Aug 13 01:11:27.494407 containerd[1550]: time="2025-08-13T01:11:27.494077792Z" level=info msg="Start recovering state" Aug 13 01:11:27.494407 containerd[1550]: time="2025-08-13T01:11:27.494173312Z" level=info msg="Start event monitor" Aug 13 01:11:27.494407 containerd[1550]: time="2025-08-13T01:11:27.494186822Z" level=info msg="Start cni network conf syncer for default" Aug 13 01:11:27.494407 containerd[1550]: time="2025-08-13T01:11:27.494193792Z" level=info msg="Start streaming server" Aug 13 01:11:27.494407 containerd[1550]: time="2025-08-13T01:11:27.494201982Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Aug 13 01:11:27.494407 containerd[1550]: time="2025-08-13T01:11:27.494208592Z" level=info msg="runtime interface starting up..." Aug 13 01:11:27.494407 containerd[1550]: time="2025-08-13T01:11:27.494214392Z" level=info msg="starting plugins..." Aug 13 01:11:27.494407 containerd[1550]: time="2025-08-13T01:11:27.494226112Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Aug 13 01:11:27.495021 containerd[1550]: time="2025-08-13T01:11:27.495006022Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 01:11:27.495242 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 01:11:27.497094 containerd[1550]: time="2025-08-13T01:11:27.496965923Z" level=info msg="containerd successfully booted in 0.260976s" Aug 13 01:11:27.509652 dbus-daemon[1509]: [system] Successfully activated service 'org.freedesktop.hostname1' Aug 13 01:11:27.512972 dbus-daemon[1509]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1597 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Aug 13 01:11:27.526616 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Aug 13 01:11:27.527491 coreos-metadata[1606]: Aug 13 01:11:27.527 INFO Fetch successful Aug 13 01:11:27.534358 systemd[1]: Starting polkit.service - Authorization Manager... Aug 13 01:11:27.590109 update-ssh-keys[1651]: Updated "/home/core/.ssh/authorized_keys" Aug 13 01:11:27.590685 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 13 01:11:27.602183 systemd[1]: Finished sshkeys.service. Aug 13 01:11:27.604874 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 01:11:27.644880 polkitd[1650]: Started polkitd version 126 Aug 13 01:11:27.648722 polkitd[1650]: Loading rules from directory /etc/polkit-1/rules.d Aug 13 01:11:27.649273 polkitd[1650]: Loading rules from directory /run/polkit-1/rules.d Aug 13 01:11:27.649356 polkitd[1650]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Aug 13 01:11:27.649591 polkitd[1650]: Loading rules from directory /usr/local/share/polkit-1/rules.d Aug 13 01:11:27.649649 polkitd[1650]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Aug 13 01:11:27.649734 polkitd[1650]: Loading rules from directory /usr/share/polkit-1/rules.d Aug 13 01:11:27.650360 polkitd[1650]: Finished loading, compiling and executing 2 rules Aug 13 01:11:27.650596 systemd[1]: Started polkit.service - Authorization Manager. 
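The containerd error above ("no network config found in /etc/cni/net.d") is expected at this stage: no CNI add-on has installed a network config yet, so the CRI plugin defers pod networking. A sketch of the kind of conflist that would clear the error, assuming the standard bridge, host-local and portmap plugins are present under /opt/cni/bin (the binDir and confDir paths come from the CRI config dumped earlier in the log; the file name and the 10.88.0.0/16 subnet are illustrative assumptions):

  cat >/etc/cni/net.d/10-example.conflist <<'EOF'
  {
    "cniVersion": "0.4.0",
    "name": "example-net",
    "plugins": [
      { "type": "bridge", "bridge": "cni0", "isGateway": true, "ipMasq": true,
        "ipam": { "type": "host-local", "subnet": "10.88.0.0/16" } },
      { "type": "portmap", "capabilities": { "portMappings": true } }
    ]
  }
  EOF

In a cluster bootstrap like the one recorded here, a CNI add-on normally writes its own config once the control plane is up, so the error clears without manual intervention.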
Aug 13 01:11:27.652369 dbus-daemon[1509]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Aug 13 01:11:27.652788 polkitd[1650]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Aug 13 01:11:27.665409 systemd-hostnamed[1597]: Hostname set to <172-233-214-103> (transient) Aug 13 01:11:27.665505 systemd-resolved[1405]: System hostname changed to '172-233-214-103'. Aug 13 01:11:27.791770 tar[1539]: linux-amd64/README.md Aug 13 01:11:27.813003 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 01:11:27.930516 coreos-metadata[1508]: Aug 13 01:11:27.930 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Aug 13 01:11:28.032570 coreos-metadata[1508]: Aug 13 01:11:28.032 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Aug 13 01:11:28.057032 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 01:11:28.058875 systemd[1]: Started sshd@0-172.233.214.103:22-147.75.109.163:58730.service - OpenSSH per-connection server daemon (147.75.109.163:58730). Aug 13 01:11:28.330818 coreos-metadata[1508]: Aug 13 01:11:28.330 INFO Fetch successful Aug 13 01:11:28.331000 coreos-metadata[1508]: Aug 13 01:11:28.330 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Aug 13 01:11:28.395147 systemd-networkd[1449]: eth0: Gained IPv6LL Aug 13 01:11:28.397849 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 01:11:28.399257 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 01:11:28.406514 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:11:28.410010 sshd[1670]: Accepted publickey for core from 147.75.109.163 port 58730 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:11:28.412251 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 01:11:28.412713 sshd-session[1670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:11:28.428373 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 01:11:28.431646 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 01:11:28.442965 systemd-logind[1522]: New session 1 of user core. Aug 13 01:11:28.449223 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 01:11:28.456916 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 01:11:28.461970 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 01:11:28.472211 (systemd)[1686]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:11:28.475018 systemd-logind[1522]: New session c1 of user core. Aug 13 01:11:28.594588 systemd[1686]: Queued start job for default target default.target. Aug 13 01:11:28.603313 systemd[1686]: Created slice app.slice - User Application Slice. Aug 13 01:11:28.603514 systemd[1686]: Reached target paths.target - Paths. Aug 13 01:11:28.603633 systemd[1686]: Reached target timers.target - Timers. Aug 13 01:11:28.605141 systemd[1686]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 01:11:28.618928 systemd[1686]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 01:11:28.618983 systemd[1686]: Reached target sockets.target - Sockets. Aug 13 01:11:28.619148 systemd[1686]: Reached target basic.target - Basic System. Aug 13 01:11:28.619198 systemd[1686]: Reached target default.target - Main User Target. 
Aug 13 01:11:28.619229 systemd[1686]: Startup finished in 136ms. Aug 13 01:11:28.619566 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 01:11:28.626016 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 01:11:28.635144 coreos-metadata[1508]: Aug 13 01:11:28.635 INFO Fetch successful Aug 13 01:11:28.725930 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 13 01:11:28.726916 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 01:11:28.886967 systemd[1]: Started sshd@1-172.233.214.103:22-147.75.109.163:45642.service - OpenSSH per-connection server daemon (147.75.109.163:45642). Aug 13 01:11:29.223822 sshd[1716]: Accepted publickey for core from 147.75.109.163 port 45642 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:11:29.226034 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:11:29.232212 systemd-logind[1522]: New session 2 of user core. Aug 13 01:11:29.242415 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 01:11:29.251721 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:11:29.252621 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 01:11:29.253878 systemd[1]: Startup finished in 2.649s (kernel) + 11.554s (initrd) + 5.105s (userspace) = 19.309s. Aug 13 01:11:29.263557 (kubelet)[1724]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 01:11:29.474373 sshd[1722]: Connection closed by 147.75.109.163 port 45642 Aug 13 01:11:29.475164 sshd-session[1716]: pam_unix(sshd:session): session closed for user core Aug 13 01:11:29.479528 systemd-logind[1522]: Session 2 logged out. Waiting for processes to exit. Aug 13 01:11:29.480482 systemd[1]: sshd@1-172.233.214.103:22-147.75.109.163:45642.service: Deactivated successfully. Aug 13 01:11:29.482413 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 01:11:29.487355 systemd-logind[1522]: Removed session 2. Aug 13 01:11:29.531720 systemd[1]: Started sshd@2-172.233.214.103:22-147.75.109.163:45652.service - OpenSSH per-connection server daemon (147.75.109.163:45652). Aug 13 01:11:29.759970 kubelet[1724]: E0813 01:11:29.759570 1724 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:11:29.762533 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:11:29.762720 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 01:11:29.763088 systemd[1]: kubelet.service: Consumed 827ms CPU time, 267.2M memory peak. Aug 13 01:11:29.852040 sshd[1738]: Accepted publickey for core from 147.75.109.163 port 45652 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:11:29.853293 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:11:29.858089 systemd-logind[1522]: New session 3 of user core. Aug 13 01:11:29.863003 systemd[1]: Started session-3.scope - Session 3 of User core. 
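The kubelet failure above is the usual pre-bootstrap state: kubelet.service starts before anything has written /var/lib/kubelet/config.yaml, exits with status 1, and systemd keeps restarting it until the file appears (the restart counter shows up later in this log). A short sketch of how that state is typically inspected, assuming a kubeadm-managed node, which the later kubelet flags and the 6443 API endpoint suggest:

  systemctl status kubelet --no-pager           # shows the exit-code failure and the restart counter
  journalctl -u kubelet -b --no-pager | tail    # the "config.yaml ... no such file or directory" error seen above
  ls /var/lib/kubelet/config.yaml               # present only after kubeadm init or kubeadm join has run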
Aug 13 01:11:30.089603 sshd[1742]: Connection closed by 147.75.109.163 port 45652 Aug 13 01:11:30.090630 sshd-session[1738]: pam_unix(sshd:session): session closed for user core Aug 13 01:11:30.099509 systemd[1]: sshd@2-172.233.214.103:22-147.75.109.163:45652.service: Deactivated successfully. Aug 13 01:11:30.101515 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 01:11:30.102248 systemd-logind[1522]: Session 3 logged out. Waiting for processes to exit. Aug 13 01:11:30.103578 systemd-logind[1522]: Removed session 3. Aug 13 01:11:30.153349 systemd[1]: Started sshd@3-172.233.214.103:22-147.75.109.163:45660.service - OpenSSH per-connection server daemon (147.75.109.163:45660). Aug 13 01:11:30.496861 sshd[1748]: Accepted publickey for core from 147.75.109.163 port 45660 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:11:30.498922 sshd-session[1748]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:11:30.504634 systemd-logind[1522]: New session 4 of user core. Aug 13 01:11:30.516041 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 01:11:30.746970 sshd[1750]: Connection closed by 147.75.109.163 port 45660 Aug 13 01:11:30.747519 sshd-session[1748]: pam_unix(sshd:session): session closed for user core Aug 13 01:11:30.751738 systemd[1]: sshd@3-172.233.214.103:22-147.75.109.163:45660.service: Deactivated successfully. Aug 13 01:11:30.753500 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 01:11:30.754622 systemd-logind[1522]: Session 4 logged out. Waiting for processes to exit. Aug 13 01:11:30.755595 systemd-logind[1522]: Removed session 4. Aug 13 01:11:30.821316 systemd[1]: Started sshd@4-172.233.214.103:22-147.75.109.163:45666.service - OpenSSH per-connection server daemon (147.75.109.163:45666). Aug 13 01:11:31.171761 sshd[1756]: Accepted publickey for core from 147.75.109.163 port 45666 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:11:31.173124 sshd-session[1756]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:11:31.177433 systemd-logind[1522]: New session 5 of user core. Aug 13 01:11:31.182987 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 01:11:31.383235 sudo[1759]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 01:11:31.383515 sudo[1759]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 01:11:31.400700 sudo[1759]: pam_unix(sudo:session): session closed for user root Aug 13 01:11:31.453116 sshd[1758]: Connection closed by 147.75.109.163 port 45666 Aug 13 01:11:31.453939 sshd-session[1756]: pam_unix(sshd:session): session closed for user core Aug 13 01:11:31.458285 systemd-logind[1522]: Session 5 logged out. Waiting for processes to exit. Aug 13 01:11:31.459172 systemd[1]: sshd@4-172.233.214.103:22-147.75.109.163:45666.service: Deactivated successfully. Aug 13 01:11:31.460858 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 01:11:31.462418 systemd-logind[1522]: Removed session 5. Aug 13 01:11:31.520878 systemd[1]: Started sshd@5-172.233.214.103:22-147.75.109.163:45678.service - OpenSSH per-connection server daemon (147.75.109.163:45678). 
Aug 13 01:11:31.860371 sshd[1765]: Accepted publickey for core from 147.75.109.163 port 45678 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:11:31.861387 sshd-session[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:11:31.865696 systemd-logind[1522]: New session 6 of user core. Aug 13 01:11:31.872987 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 13 01:11:32.061741 sudo[1769]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 01:11:32.062036 sudo[1769]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 01:11:32.067411 sudo[1769]: pam_unix(sudo:session): session closed for user root Aug 13 01:11:32.073813 sudo[1768]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Aug 13 01:11:32.074169 sudo[1768]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 01:11:32.087593 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 01:11:32.123296 augenrules[1791]: No rules Aug 13 01:11:32.124160 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 01:11:32.124428 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 01:11:32.125996 sudo[1768]: pam_unix(sudo:session): session closed for user root Aug 13 01:11:32.177709 sshd[1767]: Connection closed by 147.75.109.163 port 45678 Aug 13 01:11:32.178300 sshd-session[1765]: pam_unix(sshd:session): session closed for user core Aug 13 01:11:32.181770 systemd-logind[1522]: Session 6 logged out. Waiting for processes to exit. Aug 13 01:11:32.182032 systemd[1]: sshd@5-172.233.214.103:22-147.75.109.163:45678.service: Deactivated successfully. Aug 13 01:11:32.183866 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 01:11:32.186566 systemd-logind[1522]: Removed session 6. Aug 13 01:11:32.238201 systemd[1]: Started sshd@6-172.233.214.103:22-147.75.109.163:45692.service - OpenSSH per-connection server daemon (147.75.109.163:45692). Aug 13 01:11:32.589629 sshd[1800]: Accepted publickey for core from 147.75.109.163 port 45692 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:11:32.591119 sshd-session[1800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:11:32.594943 systemd-logind[1522]: New session 7 of user core. Aug 13 01:11:32.603990 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 01:11:32.785720 sudo[1803]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 01:11:32.786019 sudo[1803]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 01:11:33.028146 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 13 01:11:33.042153 (dockerd)[1821]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 01:11:33.196734 dockerd[1821]: time="2025-08-13T01:11:33.196686692Z" level=info msg="Starting up" Aug 13 01:11:33.197924 dockerd[1821]: time="2025-08-13T01:11:33.197887281Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Aug 13 01:11:33.272235 dockerd[1821]: time="2025-08-13T01:11:33.272190176Z" level=info msg="Loading containers: start." 
Aug 13 01:11:33.281921 kernel: Initializing XFRM netlink socket Aug 13 01:11:33.479563 systemd-networkd[1449]: docker0: Link UP Aug 13 01:11:33.482339 dockerd[1821]: time="2025-08-13T01:11:33.482306402Z" level=info msg="Loading containers: done." Aug 13 01:11:33.495086 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3199313596-merged.mount: Deactivated successfully. Aug 13 01:11:33.496772 dockerd[1821]: time="2025-08-13T01:11:33.496740409Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 01:11:33.496825 dockerd[1821]: time="2025-08-13T01:11:33.496796301Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Aug 13 01:11:33.496928 dockerd[1821]: time="2025-08-13T01:11:33.496889795Z" level=info msg="Initializing buildkit" Aug 13 01:11:33.517385 dockerd[1821]: time="2025-08-13T01:11:33.517361336Z" level=info msg="Completed buildkit initialization" Aug 13 01:11:33.523181 dockerd[1821]: time="2025-08-13T01:11:33.523154175Z" level=info msg="Daemon has completed initialization" Aug 13 01:11:33.523489 dockerd[1821]: time="2025-08-13T01:11:33.523459683Z" level=info msg="API listen on /run/docker.sock" Aug 13 01:11:33.523565 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 01:11:34.182561 containerd[1550]: time="2025-08-13T01:11:34.182311995Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\"" Aug 13 01:11:34.951672 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2835729885.mount: Deactivated successfully. Aug 13 01:11:35.835349 containerd[1550]: time="2025-08-13T01:11:35.835303986Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:11:35.836169 containerd[1550]: time="2025-08-13T01:11:35.836111019Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.3: active requests=0, bytes read=30078237" Aug 13 01:11:35.836770 containerd[1550]: time="2025-08-13T01:11:35.836744983Z" level=info msg="ImageCreate event name:\"sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:11:35.838922 containerd[1550]: time="2025-08-13T01:11:35.838585094Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:11:35.839526 containerd[1550]: time="2025-08-13T01:11:35.839346653Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.3\" with image id \"sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816\", size \"30075037\" in 1.656999949s" Aug 13 01:11:35.839526 containerd[1550]: time="2025-08-13T01:11:35.839376098Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\" returns image reference \"sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a\"" Aug 13 01:11:35.839859 containerd[1550]: time="2025-08-13T01:11:35.839834362Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\"" Aug 13 
01:11:37.308808 containerd[1550]: time="2025-08-13T01:11:37.308745393Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:11:37.309926 containerd[1550]: time="2025-08-13T01:11:37.309610441Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.3: active requests=0, bytes read=26019361" Aug 13 01:11:37.310608 containerd[1550]: time="2025-08-13T01:11:37.310566932Z" level=info msg="ImageCreate event name:\"sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:11:37.312664 containerd[1550]: time="2025-08-13T01:11:37.312639773Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:11:37.313503 containerd[1550]: time="2025-08-13T01:11:37.313463533Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.3\" with image id \"sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90\", size \"27646922\" in 1.473601257s" Aug 13 01:11:37.313553 containerd[1550]: time="2025-08-13T01:11:37.313503771Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\" returns image reference \"sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660\"" Aug 13 01:11:37.314398 containerd[1550]: time="2025-08-13T01:11:37.314362898Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\"" Aug 13 01:11:38.655417 containerd[1550]: time="2025-08-13T01:11:38.655351058Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:11:38.656481 containerd[1550]: time="2025-08-13T01:11:38.656224085Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.3: active requests=0, bytes read=20155013" Aug 13 01:11:38.657277 containerd[1550]: time="2025-08-13T01:11:38.657240643Z" level=info msg="ImageCreate event name:\"sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:11:38.659335 containerd[1550]: time="2025-08-13T01:11:38.659300332Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:11:38.660153 containerd[1550]: time="2025-08-13T01:11:38.660130462Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.3\" with image id \"sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868\", size \"21782592\" in 1.345681245s" Aug 13 01:11:38.660223 containerd[1550]: time="2025-08-13T01:11:38.660209735Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\" returns image reference \"sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87\"" Aug 13 01:11:38.661158 containerd[1550]: 
time="2025-08-13T01:11:38.661121799Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\"" Aug 13 01:11:40.012588 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1860133460.mount: Deactivated successfully. Aug 13 01:11:40.014416 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 01:11:40.019032 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:11:40.237231 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:11:40.245289 (kubelet)[2102]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 01:11:40.279402 kubelet[2102]: E0813 01:11:40.279293 2102 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:11:40.284565 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:11:40.284734 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 01:11:40.285382 systemd[1]: kubelet.service: Consumed 198ms CPU time, 110.3M memory peak. Aug 13 01:11:40.471692 containerd[1550]: time="2025-08-13T01:11:40.471610982Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:11:40.472451 containerd[1550]: time="2025-08-13T01:11:40.472411548Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.3: active requests=0, bytes read=31892666" Aug 13 01:11:40.473120 containerd[1550]: time="2025-08-13T01:11:40.473064870Z" level=info msg="ImageCreate event name:\"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:11:40.474492 containerd[1550]: time="2025-08-13T01:11:40.474473252Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:11:40.475218 containerd[1550]: time="2025-08-13T01:11:40.474987863Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.3\" with image id \"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\", repo tag \"registry.k8s.io/kube-proxy:v1.33.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\", size \"31891685\" in 1.813838682s" Aug 13 01:11:40.475218 containerd[1550]: time="2025-08-13T01:11:40.475033750Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\" returns image reference \"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\"" Aug 13 01:11:40.475847 containerd[1550]: time="2025-08-13T01:11:40.475809868Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Aug 13 01:11:41.299493 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1144055400.mount: Deactivated successfully. 
Aug 13 01:11:42.044513 containerd[1550]: time="2025-08-13T01:11:42.044425872Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:11:42.045288 containerd[1550]: time="2025-08-13T01:11:42.045258844Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Aug 13 01:11:42.045831 containerd[1550]: time="2025-08-13T01:11:42.045808402Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:11:42.048056 containerd[1550]: time="2025-08-13T01:11:42.048011410Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:11:42.049175 containerd[1550]: time="2025-08-13T01:11:42.049047758Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 1.573110764s" Aug 13 01:11:42.049175 containerd[1550]: time="2025-08-13T01:11:42.049075047Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Aug 13 01:11:42.050207 containerd[1550]: time="2025-08-13T01:11:42.050184878Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 01:11:42.752463 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1836570268.mount: Deactivated successfully. 
Aug 13 01:11:42.756364 containerd[1550]: time="2025-08-13T01:11:42.756313899Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 01:11:42.757016 containerd[1550]: time="2025-08-13T01:11:42.756998342Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Aug 13 01:11:42.757606 containerd[1550]: time="2025-08-13T01:11:42.757565706Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 01:11:42.759190 containerd[1550]: time="2025-08-13T01:11:42.759155544Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 01:11:42.760406 containerd[1550]: time="2025-08-13T01:11:42.759806197Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 709.573094ms" Aug 13 01:11:42.760406 containerd[1550]: time="2025-08-13T01:11:42.759832866Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 01:11:42.760469 containerd[1550]: time="2025-08-13T01:11:42.760415635Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Aug 13 01:11:43.575062 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount760336675.mount: Deactivated successfully. 
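The PullImage lines in this stretch are containerd fetching the Kubernetes control-plane images (kube-apiserver, kube-controller-manager, kube-scheduler, kube-proxy, coredns, pause, etcd) one by one. A sketch of the equivalent manual pre-pull, assuming crictl is pointed at the containerd CRI socket shown earlier in the log (the endpoint value is taken from that socket path, not from any crictl config on this host):

  crictl --runtime-endpoint unix:///run/containerd/containerd.sock pull registry.k8s.io/etcd:3.5.21-0
  crictl --runtime-endpoint unix:///run/containerd/containerd.sock images | grep registry.k8s.io
  # kubeadm can also pre-pull the whole set at once: kubeadm config images pull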
Aug 13 01:11:44.942318 containerd[1550]: time="2025-08-13T01:11:44.942246759Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:11:44.943515 containerd[1550]: time="2025-08-13T01:11:44.943486543Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247175" Aug 13 01:11:44.944604 containerd[1550]: time="2025-08-13T01:11:44.944202009Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:11:44.946555 containerd[1550]: time="2025-08-13T01:11:44.946520843Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:11:44.947521 containerd[1550]: time="2025-08-13T01:11:44.947485748Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 2.187048516s" Aug 13 01:11:44.947602 containerd[1550]: time="2025-08-13T01:11:44.947586807Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Aug 13 01:11:47.883867 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:11:47.884026 systemd[1]: kubelet.service: Consumed 198ms CPU time, 110.3M memory peak. Aug 13 01:11:47.892815 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:11:47.909844 systemd[1]: Reload requested from client PID 2248 ('systemctl') (unit session-7.scope)... Aug 13 01:11:47.909861 systemd[1]: Reloading... Aug 13 01:11:48.057921 zram_generator::config[2303]: No configuration found. Aug 13 01:11:48.123660 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:11:48.223289 systemd[1]: Reloading finished in 313 ms. Aug 13 01:11:48.278240 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 01:11:48.278328 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 01:11:48.278602 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:11:48.278652 systemd[1]: kubelet.service: Consumed 134ms CPU time, 98.3M memory peak. Aug 13 01:11:48.279855 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:11:48.436026 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:11:48.444125 (kubelet)[2346]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 01:11:48.474839 kubelet[2346]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:11:48.474839 kubelet[2346]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Aug 13 01:11:48.474839 kubelet[2346]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:11:48.475154 kubelet[2346]: I0813 01:11:48.475037 2346 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 01:11:48.825886 kubelet[2346]: I0813 01:11:48.825804 2346 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Aug 13 01:11:48.825886 kubelet[2346]: I0813 01:11:48.825827 2346 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 01:11:48.826153 kubelet[2346]: I0813 01:11:48.826021 2346 server.go:956] "Client rotation is on, will bootstrap in background" Aug 13 01:11:48.855214 kubelet[2346]: E0813 01:11:48.855180 2346 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.233.214.103:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.233.214.103:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Aug 13 01:11:48.859263 kubelet[2346]: I0813 01:11:48.859240 2346 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 01:11:48.867287 kubelet[2346]: I0813 01:11:48.867265 2346 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 13 01:11:48.871613 kubelet[2346]: I0813 01:11:48.871593 2346 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 01:11:48.872031 kubelet[2346]: I0813 01:11:48.872007 2346 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 01:11:48.872173 kubelet[2346]: I0813 01:11:48.872025 2346 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-233-214-103","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 01:11:48.872173 kubelet[2346]: I0813 01:11:48.872172 2346 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 01:11:48.872303 kubelet[2346]: I0813 01:11:48.872180 2346 container_manager_linux.go:303] "Creating device plugin manager" Aug 13 01:11:48.873040 kubelet[2346]: I0813 01:11:48.873018 2346 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:11:48.875811 kubelet[2346]: I0813 01:11:48.875784 2346 kubelet.go:480] "Attempting to sync node with API server" Aug 13 01:11:48.875811 kubelet[2346]: I0813 01:11:48.875804 2346 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 01:11:48.878556 kubelet[2346]: I0813 01:11:48.878533 2346 kubelet.go:386] "Adding apiserver pod source" Aug 13 01:11:48.880512 kubelet[2346]: I0813 01:11:48.880488 2346 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 01:11:48.885837 kubelet[2346]: E0813 01:11:48.885713 2346 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.233.214.103:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-233-214-103&limit=500&resourceVersion=0\": dial tcp 172.233.214.103:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Aug 13 01:11:48.886252 kubelet[2346]: E0813 01:11:48.886034 2346 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.233.214.103:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.233.214.103:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.Service" Aug 13 01:11:48.886252 kubelet[2346]: I0813 01:11:48.886122 2346 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Aug 13 01:11:48.886538 kubelet[2346]: I0813 01:11:48.886510 2346 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Aug 13 01:11:48.887497 kubelet[2346]: W0813 01:11:48.887472 2346 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 01:11:48.891268 kubelet[2346]: I0813 01:11:48.891249 2346 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 01:11:48.891323 kubelet[2346]: I0813 01:11:48.891290 2346 server.go:1289] "Started kubelet" Aug 13 01:11:48.893506 kubelet[2346]: I0813 01:11:48.893468 2346 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 01:11:48.894194 kubelet[2346]: I0813 01:11:48.894177 2346 server.go:317] "Adding debug handlers to kubelet server" Aug 13 01:11:48.896065 kubelet[2346]: I0813 01:11:48.895099 2346 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 01:11:48.896065 kubelet[2346]: I0813 01:11:48.895848 2346 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 01:11:48.897924 kubelet[2346]: E0813 01:11:48.895945 2346 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.233.214.103:6443/api/v1/namespaces/default/events\": dial tcp 172.233.214.103:6443: connect: connection refused" event="&Event{ObjectMeta:{172-233-214-103.185b2e692365f936 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-233-214-103,UID:172-233-214-103,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-233-214-103,},FirstTimestamp:2025-08-13 01:11:48.89126943 +0000 UTC m=+0.443560100,LastTimestamp:2025-08-13 01:11:48.89126943 +0000 UTC m=+0.443560100,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-233-214-103,}" Aug 13 01:11:48.898418 kubelet[2346]: I0813 01:11:48.898400 2346 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 01:11:48.899717 kubelet[2346]: I0813 01:11:48.899702 2346 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 01:11:48.901380 kubelet[2346]: I0813 01:11:48.901367 2346 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 01:11:48.902559 kubelet[2346]: E0813 01:11:48.902532 2346 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-233-214-103\" not found" Aug 13 01:11:48.902950 kubelet[2346]: I0813 01:11:48.902733 2346 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 01:11:48.902999 kubelet[2346]: I0813 01:11:48.902984 2346 reconciler.go:26] "Reconciler: start to sync state" Aug 13 01:11:48.905130 kubelet[2346]: E0813 01:11:48.905115 2346 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 01:11:48.905364 kubelet[2346]: E0813 01:11:48.905348 2346 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.233.214.103:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.233.214.103:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Aug 13 01:11:48.905486 kubelet[2346]: E0813 01:11:48.905461 2346 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.233.214.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-233-214-103?timeout=10s\": dial tcp 172.233.214.103:6443: connect: connection refused" interval="200ms" Aug 13 01:11:48.905930 kubelet[2346]: I0813 01:11:48.905912 2346 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 01:11:48.907565 kubelet[2346]: I0813 01:11:48.907551 2346 factory.go:223] Registration of the containerd container factory successfully Aug 13 01:11:48.907629 kubelet[2346]: I0813 01:11:48.907620 2346 factory.go:223] Registration of the systemd container factory successfully Aug 13 01:11:48.922307 kubelet[2346]: I0813 01:11:48.922287 2346 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 01:11:48.922307 kubelet[2346]: I0813 01:11:48.922301 2346 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 01:11:48.922381 kubelet[2346]: I0813 01:11:48.922315 2346 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:11:48.928487 kubelet[2346]: I0813 01:11:48.927967 2346 policy_none.go:49] "None policy: Start" Aug 13 01:11:48.928487 kubelet[2346]: I0813 01:11:48.927984 2346 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 01:11:48.928487 kubelet[2346]: I0813 01:11:48.927994 2346 state_mem.go:35] "Initializing new in-memory state store" Aug 13 01:11:48.931198 kubelet[2346]: I0813 01:11:48.931169 2346 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Aug 13 01:11:48.932866 kubelet[2346]: I0813 01:11:48.932844 2346 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Aug 13 01:11:48.932866 kubelet[2346]: I0813 01:11:48.932864 2346 status_manager.go:230] "Starting to sync pod status with apiserver" Aug 13 01:11:48.933066 kubelet[2346]: I0813 01:11:48.932880 2346 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 13 01:11:48.933066 kubelet[2346]: I0813 01:11:48.932886 2346 kubelet.go:2436] "Starting kubelet main sync loop" Aug 13 01:11:48.933066 kubelet[2346]: E0813 01:11:48.933018 2346 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 01:11:48.937329 kubelet[2346]: E0813 01:11:48.937107 2346 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.233.214.103:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.233.214.103:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Aug 13 01:11:48.940411 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
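The "Creating Container Manager object based on Node Config" entry above carries the kubelet's hard eviction thresholds (memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%). The Python sketch below only illustrates how thresholds of that shape are compared against observed node stats; it is not kubelet's eviction manager, and the sample stats in it are hypothetical.

# Illustrative sketch of evaluating the HardEvictionThresholds logged above.
# Not kubelet code; the observed/capacity numbers are made up.
MI = 1024 * 1024

# (signal, absolute quantity in bytes or None, fraction of capacity or None)
THRESHOLDS = [
    ("memory.available",   100 * MI, None),
    ("nodefs.available",   None,     0.10),
    ("nodefs.inodesFree",  None,     0.05),
    ("imagefs.available",  None,     0.15),
    ("imagefs.inodesFree", None,     0.05),
]

def threshold_met(observed, capacity, quantity, percentage):
    """A signal triggers eviction when the observed value drops below the
    absolute quantity, or below percentage * capacity."""
    limit = quantity if quantity is not None else percentage * capacity
    return observed < limit

# Hypothetical node stats: {signal: (observed, capacity)}
stats = {
    "memory.available":   (80 * MI, 4096 * MI),          # below 100Mi -> triggers
    "nodefs.available":   (30 * 1024 * MI, 100 * 1024 * MI),
    "nodefs.inodesFree":  (900_000, 6_000_000),
    "imagefs.available":  (50 * 1024 * MI, 100 * 1024 * MI),
    "imagefs.inodesFree": (5_500_000, 6_000_000),
}

for signal, quantity, percentage in THRESHOLDS:
    observed, capacity = stats[signal]
    if threshold_met(observed, capacity, quantity, percentage):
        print(f"{signal}: below hard eviction threshold, eviction manager would act")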
Aug 13 01:11:48.958503 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 01:11:48.962292 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 13 01:11:48.968697 kubelet[2346]: E0813 01:11:48.968496 2346 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Aug 13 01:11:48.970055 kubelet[2346]: I0813 01:11:48.969802 2346 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 01:11:48.970055 kubelet[2346]: I0813 01:11:48.969815 2346 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 01:11:48.970788 kubelet[2346]: I0813 01:11:48.970778 2346 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 01:11:48.971375 kubelet[2346]: E0813 01:11:48.971357 2346 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 01:11:48.971830 kubelet[2346]: E0813 01:11:48.971427 2346 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-233-214-103\" not found" Aug 13 01:11:49.045944 systemd[1]: Created slice kubepods-burstable-pod2dd8b161f65c7eea6bb980d72abd4859.slice - libcontainer container kubepods-burstable-pod2dd8b161f65c7eea6bb980d72abd4859.slice. Aug 13 01:11:49.055632 kubelet[2346]: E0813 01:11:49.055609 2346 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-214-103\" not found" node="172-233-214-103" Aug 13 01:11:49.058572 systemd[1]: Created slice kubepods-burstable-podad1f662ad81e3fd9d79b36c930a10036.slice - libcontainer container kubepods-burstable-podad1f662ad81e3fd9d79b36c930a10036.slice. Aug 13 01:11:49.060611 kubelet[2346]: E0813 01:11:49.060484 2346 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-214-103\" not found" node="172-233-214-103" Aug 13 01:11:49.063081 systemd[1]: Created slice kubepods-burstable-podf6270b9e0b1c30e3a700e1f4d357d8ec.slice - libcontainer container kubepods-burstable-podf6270b9e0b1c30e3a700e1f4d357d8ec.slice. 
Aug 13 01:11:49.064583 kubelet[2346]: E0813 01:11:49.064554 2346 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-214-103\" not found" node="172-233-214-103" Aug 13 01:11:49.071415 kubelet[2346]: I0813 01:11:49.071404 2346 kubelet_node_status.go:75] "Attempting to register node" node="172-233-214-103" Aug 13 01:11:49.071637 kubelet[2346]: E0813 01:11:49.071609 2346 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.233.214.103:6443/api/v1/nodes\": dial tcp 172.233.214.103:6443: connect: connection refused" node="172-233-214-103" Aug 13 01:11:49.106151 kubelet[2346]: E0813 01:11:49.106096 2346 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.233.214.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-233-214-103?timeout=10s\": dial tcp 172.233.214.103:6443: connect: connection refused" interval="400ms" Aug 13 01:11:49.204605 kubelet[2346]: I0813 01:11:49.204561 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2dd8b161f65c7eea6bb980d72abd4859-ca-certs\") pod \"kube-apiserver-172-233-214-103\" (UID: \"2dd8b161f65c7eea6bb980d72abd4859\") " pod="kube-system/kube-apiserver-172-233-214-103" Aug 13 01:11:49.204605 kubelet[2346]: I0813 01:11:49.204586 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ad1f662ad81e3fd9d79b36c930a10036-flexvolume-dir\") pod \"kube-controller-manager-172-233-214-103\" (UID: \"ad1f662ad81e3fd9d79b36c930a10036\") " pod="kube-system/kube-controller-manager-172-233-214-103" Aug 13 01:11:49.204605 kubelet[2346]: I0813 01:11:49.204603 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ad1f662ad81e3fd9d79b36c930a10036-k8s-certs\") pod \"kube-controller-manager-172-233-214-103\" (UID: \"ad1f662ad81e3fd9d79b36c930a10036\") " pod="kube-system/kube-controller-manager-172-233-214-103" Aug 13 01:11:49.204605 kubelet[2346]: I0813 01:11:49.204617 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ad1f662ad81e3fd9d79b36c930a10036-kubeconfig\") pod \"kube-controller-manager-172-233-214-103\" (UID: \"ad1f662ad81e3fd9d79b36c930a10036\") " pod="kube-system/kube-controller-manager-172-233-214-103" Aug 13 01:11:49.204605 kubelet[2346]: I0813 01:11:49.204630 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ad1f662ad81e3fd9d79b36c930a10036-usr-share-ca-certificates\") pod \"kube-controller-manager-172-233-214-103\" (UID: \"ad1f662ad81e3fd9d79b36c930a10036\") " pod="kube-system/kube-controller-manager-172-233-214-103" Aug 13 01:11:49.204972 kubelet[2346]: I0813 01:11:49.204642 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f6270b9e0b1c30e3a700e1f4d357d8ec-kubeconfig\") pod \"kube-scheduler-172-233-214-103\" (UID: \"f6270b9e0b1c30e3a700e1f4d357d8ec\") " pod="kube-system/kube-scheduler-172-233-214-103" Aug 13 01:11:49.204972 kubelet[2346]: I0813 01:11:49.204655 2346 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2dd8b161f65c7eea6bb980d72abd4859-k8s-certs\") pod \"kube-apiserver-172-233-214-103\" (UID: \"2dd8b161f65c7eea6bb980d72abd4859\") " pod="kube-system/kube-apiserver-172-233-214-103" Aug 13 01:11:49.204972 kubelet[2346]: I0813 01:11:49.204669 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2dd8b161f65c7eea6bb980d72abd4859-usr-share-ca-certificates\") pod \"kube-apiserver-172-233-214-103\" (UID: \"2dd8b161f65c7eea6bb980d72abd4859\") " pod="kube-system/kube-apiserver-172-233-214-103" Aug 13 01:11:49.204972 kubelet[2346]: I0813 01:11:49.204681 2346 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ad1f662ad81e3fd9d79b36c930a10036-ca-certs\") pod \"kube-controller-manager-172-233-214-103\" (UID: \"ad1f662ad81e3fd9d79b36c930a10036\") " pod="kube-system/kube-controller-manager-172-233-214-103" Aug 13 01:11:49.273154 kubelet[2346]: I0813 01:11:49.273091 2346 kubelet_node_status.go:75] "Attempting to register node" node="172-233-214-103" Aug 13 01:11:49.273394 kubelet[2346]: E0813 01:11:49.273371 2346 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.233.214.103:6443/api/v1/nodes\": dial tcp 172.233.214.103:6443: connect: connection refused" node="172-233-214-103" Aug 13 01:11:49.356985 kubelet[2346]: E0813 01:11:49.356916 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:11:49.357512 containerd[1550]: time="2025-08-13T01:11:49.357487342Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-233-214-103,Uid:2dd8b161f65c7eea6bb980d72abd4859,Namespace:kube-system,Attempt:0,}" Aug 13 01:11:49.361135 kubelet[2346]: E0813 01:11:49.361101 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:11:49.361869 containerd[1550]: time="2025-08-13T01:11:49.361586143Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-233-214-103,Uid:ad1f662ad81e3fd9d79b36c930a10036,Namespace:kube-system,Attempt:0,}" Aug 13 01:11:49.365249 kubelet[2346]: E0813 01:11:49.365224 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:11:49.365553 containerd[1550]: time="2025-08-13T01:11:49.365525481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-233-214-103,Uid:f6270b9e0b1c30e3a700e1f4d357d8ec,Namespace:kube-system,Attempt:0,}" Aug 13 01:11:49.391180 containerd[1550]: time="2025-08-13T01:11:49.391010701Z" level=info msg="connecting to shim 3ba1029d91379fdc42075b6f50d5e113a9faa23d7721323205dcca36b4e6d87e" address="unix:///run/containerd/s/45352f49eeff269843ece8625a31d94897672c803c2e6c4f201efa29d11ff8c5" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:11:49.405391 containerd[1550]: time="2025-08-13T01:11:49.405342015Z" level=info msg="connecting to shim a81325bae0f4ca129126a3448b2a21b3f1d04a81b9c063a8cf754bb2db92b5d7" 
address="unix:///run/containerd/s/f2bb2e04c1515cbe1f71966def45d0d14fcfd84231f6ef844f20f5dd5e1084f8" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:11:49.428134 systemd[1]: Started cri-containerd-3ba1029d91379fdc42075b6f50d5e113a9faa23d7721323205dcca36b4e6d87e.scope - libcontainer container 3ba1029d91379fdc42075b6f50d5e113a9faa23d7721323205dcca36b4e6d87e. Aug 13 01:11:49.434337 containerd[1550]: time="2025-08-13T01:11:49.434181671Z" level=info msg="connecting to shim 69422833489f143c5ab0ead4825cb997ea65d2f1bd3d74c6635cf58a4b5f493d" address="unix:///run/containerd/s/6c14c4774adcc9172946ebff5975d65b65bb6f4da8d24354f72e35abc0648e23" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:11:49.456052 systemd[1]: Started cri-containerd-a81325bae0f4ca129126a3448b2a21b3f1d04a81b9c063a8cf754bb2db92b5d7.scope - libcontainer container a81325bae0f4ca129126a3448b2a21b3f1d04a81b9c063a8cf754bb2db92b5d7. Aug 13 01:11:49.468254 systemd[1]: Started cri-containerd-69422833489f143c5ab0ead4825cb997ea65d2f1bd3d74c6635cf58a4b5f493d.scope - libcontainer container 69422833489f143c5ab0ead4825cb997ea65d2f1bd3d74c6635cf58a4b5f493d. Aug 13 01:11:49.500192 containerd[1550]: time="2025-08-13T01:11:49.500150113Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-233-214-103,Uid:2dd8b161f65c7eea6bb980d72abd4859,Namespace:kube-system,Attempt:0,} returns sandbox id \"3ba1029d91379fdc42075b6f50d5e113a9faa23d7721323205dcca36b4e6d87e\"" Aug 13 01:11:49.501542 kubelet[2346]: E0813 01:11:49.501489 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:11:49.501542 kubelet[2346]: E0813 01:11:49.506417 2346 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.233.214.103:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-233-214-103?timeout=10s\": dial tcp 172.233.214.103:6443: connect: connection refused" interval="800ms" Aug 13 01:11:49.508818 containerd[1550]: time="2025-08-13T01:11:49.508797777Z" level=info msg="CreateContainer within sandbox \"3ba1029d91379fdc42075b6f50d5e113a9faa23d7721323205dcca36b4e6d87e\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 01:11:49.515855 containerd[1550]: time="2025-08-13T01:11:49.515825075Z" level=info msg="Container 5620fab08ad904335078b51fa0df0fee22c83b5de80a53cce6f3990cfc2e3a63: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:11:49.520668 containerd[1550]: time="2025-08-13T01:11:49.520647757Z" level=info msg="CreateContainer within sandbox \"3ba1029d91379fdc42075b6f50d5e113a9faa23d7721323205dcca36b4e6d87e\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"5620fab08ad904335078b51fa0df0fee22c83b5de80a53cce6f3990cfc2e3a63\"" Aug 13 01:11:49.521707 containerd[1550]: time="2025-08-13T01:11:49.521081867Z" level=info msg="StartContainer for \"5620fab08ad904335078b51fa0df0fee22c83b5de80a53cce6f3990cfc2e3a63\"" Aug 13 01:11:49.522610 containerd[1550]: time="2025-08-13T01:11:49.522591400Z" level=info msg="connecting to shim 5620fab08ad904335078b51fa0df0fee22c83b5de80a53cce6f3990cfc2e3a63" address="unix:///run/containerd/s/45352f49eeff269843ece8625a31d94897672c803c2e6c4f201efa29d11ff8c5" protocol=ttrpc version=3 Aug 13 01:11:49.553026 systemd[1]: Started cri-containerd-5620fab08ad904335078b51fa0df0fee22c83b5de80a53cce6f3990cfc2e3a63.scope - libcontainer container 
5620fab08ad904335078b51fa0df0fee22c83b5de80a53cce6f3990cfc2e3a63. Aug 13 01:11:49.557505 containerd[1550]: time="2025-08-13T01:11:49.557478800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-233-214-103,Uid:ad1f662ad81e3fd9d79b36c930a10036,Namespace:kube-system,Attempt:0,} returns sandbox id \"a81325bae0f4ca129126a3448b2a21b3f1d04a81b9c063a8cf754bb2db92b5d7\"" Aug 13 01:11:49.558345 kubelet[2346]: E0813 01:11:49.558319 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:11:49.561372 containerd[1550]: time="2025-08-13T01:11:49.561353705Z" level=info msg="CreateContainer within sandbox \"a81325bae0f4ca129126a3448b2a21b3f1d04a81b9c063a8cf754bb2db92b5d7\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 01:11:49.562551 containerd[1550]: time="2025-08-13T01:11:49.562511375Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-233-214-103,Uid:f6270b9e0b1c30e3a700e1f4d357d8ec,Namespace:kube-system,Attempt:0,} returns sandbox id \"69422833489f143c5ab0ead4825cb997ea65d2f1bd3d74c6635cf58a4b5f493d\"" Aug 13 01:11:49.563527 kubelet[2346]: E0813 01:11:49.563513 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:11:49.566346 containerd[1550]: time="2025-08-13T01:11:49.566327087Z" level=info msg="Container adf1b5f86017b32e545fa339781f00af34c441e58ec4dd6b6df57708b8aa50f8: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:11:49.581914 containerd[1550]: time="2025-08-13T01:11:49.581857520Z" level=info msg="CreateContainer within sandbox \"69422833489f143c5ab0ead4825cb997ea65d2f1bd3d74c6635cf58a4b5f493d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 01:11:49.586751 containerd[1550]: time="2025-08-13T01:11:49.586721430Z" level=info msg="CreateContainer within sandbox \"a81325bae0f4ca129126a3448b2a21b3f1d04a81b9c063a8cf754bb2db92b5d7\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"adf1b5f86017b32e545fa339781f00af34c441e58ec4dd6b6df57708b8aa50f8\"" Aug 13 01:11:49.588350 containerd[1550]: time="2025-08-13T01:11:49.588330954Z" level=info msg="StartContainer for \"adf1b5f86017b32e545fa339781f00af34c441e58ec4dd6b6df57708b8aa50f8\"" Aug 13 01:11:49.591987 containerd[1550]: time="2025-08-13T01:11:49.591961987Z" level=info msg="connecting to shim adf1b5f86017b32e545fa339781f00af34c441e58ec4dd6b6df57708b8aa50f8" address="unix:///run/containerd/s/f2bb2e04c1515cbe1f71966def45d0d14fcfd84231f6ef844f20f5dd5e1084f8" protocol=ttrpc version=3 Aug 13 01:11:49.594334 containerd[1550]: time="2025-08-13T01:11:49.594235250Z" level=info msg="Container a827b4fab1567fd044afa9fef69027605ada1093d1778f4ec69b61f82c013c30: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:11:49.600877 containerd[1550]: time="2025-08-13T01:11:49.600856364Z" level=info msg="CreateContainer within sandbox \"69422833489f143c5ab0ead4825cb997ea65d2f1bd3d74c6635cf58a4b5f493d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"a827b4fab1567fd044afa9fef69027605ada1093d1778f4ec69b61f82c013c30\"" Aug 13 01:11:49.601249 containerd[1550]: time="2025-08-13T01:11:49.601232722Z" level=info msg="StartContainer for 
\"a827b4fab1567fd044afa9fef69027605ada1093d1778f4ec69b61f82c013c30\"" Aug 13 01:11:49.602265 containerd[1550]: time="2025-08-13T01:11:49.602174698Z" level=info msg="connecting to shim a827b4fab1567fd044afa9fef69027605ada1093d1778f4ec69b61f82c013c30" address="unix:///run/containerd/s/6c14c4774adcc9172946ebff5975d65b65bb6f4da8d24354f72e35abc0648e23" protocol=ttrpc version=3 Aug 13 01:11:49.623230 systemd[1]: Started cri-containerd-a827b4fab1567fd044afa9fef69027605ada1093d1778f4ec69b61f82c013c30.scope - libcontainer container a827b4fab1567fd044afa9fef69027605ada1093d1778f4ec69b61f82c013c30. Aug 13 01:11:49.631461 systemd[1]: Started cri-containerd-adf1b5f86017b32e545fa339781f00af34c441e58ec4dd6b6df57708b8aa50f8.scope - libcontainer container adf1b5f86017b32e545fa339781f00af34c441e58ec4dd6b6df57708b8aa50f8. Aug 13 01:11:49.639234 containerd[1550]: time="2025-08-13T01:11:49.639213004Z" level=info msg="StartContainer for \"5620fab08ad904335078b51fa0df0fee22c83b5de80a53cce6f3990cfc2e3a63\" returns successfully" Aug 13 01:11:49.675427 kubelet[2346]: I0813 01:11:49.675405 2346 kubelet_node_status.go:75] "Attempting to register node" node="172-233-214-103" Aug 13 01:11:49.676104 kubelet[2346]: E0813 01:11:49.676082 2346 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.233.214.103:6443/api/v1/nodes\": dial tcp 172.233.214.103:6443: connect: connection refused" node="172-233-214-103" Aug 13 01:11:49.713712 containerd[1550]: time="2025-08-13T01:11:49.713645702Z" level=info msg="StartContainer for \"a827b4fab1567fd044afa9fef69027605ada1093d1778f4ec69b61f82c013c30\" returns successfully" Aug 13 01:11:49.727805 containerd[1550]: time="2025-08-13T01:11:49.727773934Z" level=info msg="StartContainer for \"adf1b5f86017b32e545fa339781f00af34c441e58ec4dd6b6df57708b8aa50f8\" returns successfully" Aug 13 01:11:49.953416 kubelet[2346]: E0813 01:11:49.951561 2346 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-214-103\" not found" node="172-233-214-103" Aug 13 01:11:49.953416 kubelet[2346]: E0813 01:11:49.953014 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:11:49.957081 kubelet[2346]: E0813 01:11:49.956972 2346 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-214-103\" not found" node="172-233-214-103" Aug 13 01:11:49.957081 kubelet[2346]: E0813 01:11:49.957039 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:11:49.958406 kubelet[2346]: E0813 01:11:49.958394 2346 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-214-103\" not found" node="172-233-214-103" Aug 13 01:11:49.958629 kubelet[2346]: E0813 01:11:49.958600 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:11:50.481639 kubelet[2346]: I0813 01:11:50.481586 2346 kubelet_node_status.go:75] "Attempting to register node" node="172-233-214-103" Aug 13 01:11:50.963519 kubelet[2346]: E0813 01:11:50.962377 2346 kubelet.go:3305] "No need to create a mirror pod, since 
failed to get node info from the cluster" err="node \"172-233-214-103\" not found" node="172-233-214-103" Aug 13 01:11:50.963519 kubelet[2346]: E0813 01:11:50.962520 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:11:50.963519 kubelet[2346]: E0813 01:11:50.962756 2346 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-233-214-103\" not found" node="172-233-214-103" Aug 13 01:11:50.963519 kubelet[2346]: E0813 01:11:50.962836 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:11:51.275321 kubelet[2346]: E0813 01:11:51.274653 2346 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-233-214-103\" not found" node="172-233-214-103" Aug 13 01:11:51.466727 kubelet[2346]: I0813 01:11:51.466667 2346 kubelet_node_status.go:78] "Successfully registered node" node="172-233-214-103" Aug 13 01:11:51.503801 kubelet[2346]: I0813 01:11:51.503750 2346 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-233-214-103" Aug 13 01:11:51.509752 kubelet[2346]: E0813 01:11:51.509710 2346 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-233-214-103\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-233-214-103" Aug 13 01:11:51.509752 kubelet[2346]: I0813 01:11:51.509735 2346 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-233-214-103" Aug 13 01:11:51.511327 kubelet[2346]: E0813 01:11:51.511293 2346 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-233-214-103\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-233-214-103" Aug 13 01:11:51.511327 kubelet[2346]: I0813 01:11:51.511317 2346 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-233-214-103" Aug 13 01:11:51.512713 kubelet[2346]: E0813 01:11:51.512659 2346 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-233-214-103\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-233-214-103" Aug 13 01:11:51.887696 kubelet[2346]: I0813 01:11:51.887641 2346 apiserver.go:52] "Watching apiserver" Aug 13 01:11:51.903642 kubelet[2346]: I0813 01:11:51.903621 2346 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 01:11:51.961122 kubelet[2346]: I0813 01:11:51.961019 2346 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-233-214-103" Aug 13 01:11:51.962081 kubelet[2346]: E0813 01:11:51.962066 2346 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-233-214-103\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-233-214-103" Aug 13 01:11:51.962265 kubelet[2346]: E0813 01:11:51.962249 2346 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 
01:11:53.388531 systemd[1]: Reload requested from client PID 2627 ('systemctl') (unit session-7.scope)... Aug 13 01:11:53.388546 systemd[1]: Reloading... Aug 13 01:11:53.484931 zram_generator::config[2671]: No configuration found. Aug 13 01:11:53.576615 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:11:53.682063 systemd[1]: Reloading finished in 293 ms. Aug 13 01:11:53.719057 kubelet[2346]: I0813 01:11:53.718979 2346 dynamic_cafile_content.go:175] "Shutting down controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 01:11:53.719450 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:11:53.747058 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 01:11:53.747728 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:11:53.747879 systemd[1]: kubelet.service: Consumed 800ms CPU time, 131.5M memory peak. Aug 13 01:11:53.750041 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:11:53.944148 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:11:53.956205 (kubelet)[2722]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 01:11:54.003083 kubelet[2722]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:11:54.003083 kubelet[2722]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 01:11:54.003083 kubelet[2722]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:11:54.003083 kubelet[2722]: I0813 01:11:54.002495 2722 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 01:11:54.009354 kubelet[2722]: I0813 01:11:54.009334 2722 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Aug 13 01:11:54.009427 kubelet[2722]: I0813 01:11:54.009417 2722 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 01:11:54.009669 kubelet[2722]: I0813 01:11:54.009657 2722 server.go:956] "Client rotation is on, will bootstrap in background" Aug 13 01:11:54.010803 kubelet[2722]: I0813 01:11:54.010786 2722 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Aug 13 01:11:54.012940 kubelet[2722]: I0813 01:11:54.012910 2722 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 01:11:54.017001 kubelet[2722]: I0813 01:11:54.016984 2722 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 13 01:11:54.020688 kubelet[2722]: I0813 01:11:54.020659 2722 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 01:11:54.021056 kubelet[2722]: I0813 01:11:54.021027 2722 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 01:11:54.021269 kubelet[2722]: I0813 01:11:54.021053 2722 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-233-214-103","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 01:11:54.021337 kubelet[2722]: I0813 01:11:54.021276 2722 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 01:11:54.021337 kubelet[2722]: I0813 01:11:54.021287 2722 container_manager_linux.go:303] "Creating device plugin manager" Aug 13 01:11:54.021337 kubelet[2722]: I0813 01:11:54.021326 2722 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:11:54.021505 kubelet[2722]: I0813 01:11:54.021486 2722 kubelet.go:480] "Attempting to sync node with API server" Aug 13 01:11:54.021505 kubelet[2722]: I0813 01:11:54.021505 2722 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 01:11:54.021547 kubelet[2722]: I0813 01:11:54.021525 2722 kubelet.go:386] "Adding apiserver pod source" Aug 13 01:11:54.021547 kubelet[2722]: I0813 01:11:54.021538 2722 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 01:11:54.025356 kubelet[2722]: I0813 01:11:54.025287 2722 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Aug 13 01:11:54.026256 kubelet[2722]: I0813 01:11:54.026224 2722 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Aug 13 01:11:54.031227 kubelet[2722]: I0813 01:11:54.031207 2722 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 01:11:54.031291 kubelet[2722]: I0813 01:11:54.031244 2722 server.go:1289] "Started kubelet" Aug 13 01:11:54.033165 kubelet[2722]: I0813 01:11:54.032983 2722 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 01:11:54.037944 kubelet[2722]: I0813 
01:11:54.037861 2722 server.go:317] "Adding debug handlers to kubelet server" Aug 13 01:11:54.038586 kubelet[2722]: I0813 01:11:54.037178 2722 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 01:11:54.049927 kubelet[2722]: I0813 01:11:54.048402 2722 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 01:11:54.050115 kubelet[2722]: I0813 01:11:54.033275 2722 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 01:11:54.050922 kubelet[2722]: I0813 01:11:54.050274 2722 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 01:11:54.050922 kubelet[2722]: I0813 01:11:54.050339 2722 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 01:11:54.050922 kubelet[2722]: E0813 01:11:54.050461 2722 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-233-214-103\" not found" Aug 13 01:11:54.052580 kubelet[2722]: I0813 01:11:54.052006 2722 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 01:11:54.052580 kubelet[2722]: I0813 01:11:54.052104 2722 reconciler.go:26] "Reconciler: start to sync state" Aug 13 01:11:54.055085 kubelet[2722]: I0813 01:11:54.055003 2722 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Aug 13 01:11:54.056534 kubelet[2722]: I0813 01:11:54.056509 2722 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Aug 13 01:11:54.056571 kubelet[2722]: I0813 01:11:54.056537 2722 status_manager.go:230] "Starting to sync pod status with apiserver" Aug 13 01:11:54.056571 kubelet[2722]: I0813 01:11:54.056552 2722 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 13 01:11:54.056571 kubelet[2722]: I0813 01:11:54.056558 2722 kubelet.go:2436] "Starting kubelet main sync loop" Aug 13 01:11:54.056688 kubelet[2722]: E0813 01:11:54.056595 2722 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 01:11:54.063151 kubelet[2722]: E0813 01:11:54.063132 2722 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 01:11:54.063709 kubelet[2722]: I0813 01:11:54.063694 2722 factory.go:223] Registration of the containerd container factory successfully Aug 13 01:11:54.063762 kubelet[2722]: I0813 01:11:54.063754 2722 factory.go:223] Registration of the systemd container factory successfully Aug 13 01:11:54.063870 kubelet[2722]: I0813 01:11:54.063853 2722 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 01:11:54.108143 kubelet[2722]: I0813 01:11:54.108087 2722 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 01:11:54.108216 kubelet[2722]: I0813 01:11:54.108206 2722 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 01:11:54.108270 kubelet[2722]: I0813 01:11:54.108262 2722 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:11:54.108402 kubelet[2722]: I0813 01:11:54.108390 2722 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 01:11:54.108459 kubelet[2722]: I0813 01:11:54.108443 2722 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 01:11:54.108496 kubelet[2722]: I0813 01:11:54.108490 2722 policy_none.go:49] "None policy: Start" Aug 13 01:11:54.108528 kubelet[2722]: I0813 01:11:54.108522 2722 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 01:11:54.108570 kubelet[2722]: I0813 01:11:54.108562 2722 state_mem.go:35] "Initializing new in-memory state store" Aug 13 01:11:54.108698 kubelet[2722]: I0813 01:11:54.108689 2722 state_mem.go:75] "Updated machine memory state" Aug 13 01:11:54.112645 kubelet[2722]: E0813 01:11:54.112621 2722 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Aug 13 01:11:54.112844 kubelet[2722]: I0813 01:11:54.112742 2722 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 01:11:54.112844 kubelet[2722]: I0813 01:11:54.112756 2722 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 01:11:54.113298 kubelet[2722]: I0813 01:11:54.113279 2722 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 01:11:54.114366 kubelet[2722]: E0813 01:11:54.114297 2722 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Aug 13 01:11:54.157919 kubelet[2722]: I0813 01:11:54.157886 2722 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-233-214-103" Aug 13 01:11:54.158032 kubelet[2722]: I0813 01:11:54.158004 2722 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-233-214-103" Aug 13 01:11:54.158147 kubelet[2722]: I0813 01:11:54.157914 2722 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-233-214-103" Aug 13 01:11:54.215707 kubelet[2722]: I0813 01:11:54.215569 2722 kubelet_node_status.go:75] "Attempting to register node" node="172-233-214-103" Aug 13 01:11:54.222425 kubelet[2722]: I0813 01:11:54.222377 2722 kubelet_node_status.go:124] "Node was previously registered" node="172-233-214-103" Aug 13 01:11:54.222573 kubelet[2722]: I0813 01:11:54.222445 2722 kubelet_node_status.go:78] "Successfully registered node" node="172-233-214-103" Aug 13 01:11:54.353944 kubelet[2722]: I0813 01:11:54.353893 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2dd8b161f65c7eea6bb980d72abd4859-k8s-certs\") pod \"kube-apiserver-172-233-214-103\" (UID: \"2dd8b161f65c7eea6bb980d72abd4859\") " pod="kube-system/kube-apiserver-172-233-214-103" Aug 13 01:11:54.353944 kubelet[2722]: I0813 01:11:54.353947 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2dd8b161f65c7eea6bb980d72abd4859-usr-share-ca-certificates\") pod \"kube-apiserver-172-233-214-103\" (UID: \"2dd8b161f65c7eea6bb980d72abd4859\") " pod="kube-system/kube-apiserver-172-233-214-103" Aug 13 01:11:54.353944 kubelet[2722]: I0813 01:11:54.353964 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ad1f662ad81e3fd9d79b36c930a10036-ca-certs\") pod \"kube-controller-manager-172-233-214-103\" (UID: \"ad1f662ad81e3fd9d79b36c930a10036\") " pod="kube-system/kube-controller-manager-172-233-214-103" Aug 13 01:11:54.354209 kubelet[2722]: I0813 01:11:54.353980 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ad1f662ad81e3fd9d79b36c930a10036-usr-share-ca-certificates\") pod \"kube-controller-manager-172-233-214-103\" (UID: \"ad1f662ad81e3fd9d79b36c930a10036\") " pod="kube-system/kube-controller-manager-172-233-214-103" Aug 13 01:11:54.354209 kubelet[2722]: I0813 01:11:54.353997 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2dd8b161f65c7eea6bb980d72abd4859-ca-certs\") pod \"kube-apiserver-172-233-214-103\" (UID: \"2dd8b161f65c7eea6bb980d72abd4859\") " pod="kube-system/kube-apiserver-172-233-214-103" Aug 13 01:11:54.354209 kubelet[2722]: I0813 01:11:54.354010 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ad1f662ad81e3fd9d79b36c930a10036-flexvolume-dir\") pod \"kube-controller-manager-172-233-214-103\" (UID: \"ad1f662ad81e3fd9d79b36c930a10036\") " pod="kube-system/kube-controller-manager-172-233-214-103" Aug 13 01:11:54.354209 kubelet[2722]: I0813 01:11:54.354023 2722 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ad1f662ad81e3fd9d79b36c930a10036-k8s-certs\") pod \"kube-controller-manager-172-233-214-103\" (UID: \"ad1f662ad81e3fd9d79b36c930a10036\") " pod="kube-system/kube-controller-manager-172-233-214-103" Aug 13 01:11:54.354209 kubelet[2722]: I0813 01:11:54.354040 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ad1f662ad81e3fd9d79b36c930a10036-kubeconfig\") pod \"kube-controller-manager-172-233-214-103\" (UID: \"ad1f662ad81e3fd9d79b36c930a10036\") " pod="kube-system/kube-controller-manager-172-233-214-103" Aug 13 01:11:54.354335 kubelet[2722]: I0813 01:11:54.354054 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/f6270b9e0b1c30e3a700e1f4d357d8ec-kubeconfig\") pod \"kube-scheduler-172-233-214-103\" (UID: \"f6270b9e0b1c30e3a700e1f4d357d8ec\") " pod="kube-system/kube-scheduler-172-233-214-103" Aug 13 01:11:54.466236 kubelet[2722]: E0813 01:11:54.464417 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:11:54.466236 kubelet[2722]: E0813 01:11:54.464461 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:11:54.466236 kubelet[2722]: E0813 01:11:54.466029 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:11:55.033916 kubelet[2722]: I0813 01:11:55.033827 2722 apiserver.go:52] "Watching apiserver" Aug 13 01:11:55.052613 kubelet[2722]: I0813 01:11:55.052584 2722 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 01:11:55.087247 kubelet[2722]: I0813 01:11:55.087215 2722 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-233-214-103" Aug 13 01:11:55.087416 kubelet[2722]: I0813 01:11:55.087404 2722 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-233-214-103" Aug 13 01:11:55.087597 kubelet[2722]: I0813 01:11:55.087586 2722 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-233-214-103" Aug 13 01:11:55.092204 kubelet[2722]: E0813 01:11:55.092183 2722 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-233-214-103\" already exists" pod="kube-system/kube-apiserver-172-233-214-103" Aug 13 01:11:55.092485 kubelet[2722]: E0813 01:11:55.092436 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:11:55.095268 kubelet[2722]: E0813 01:11:55.095253 2722 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-233-214-103\" already exists" pod="kube-system/kube-controller-manager-172-233-214-103" Aug 13 01:11:55.095547 kubelet[2722]: E0813 01:11:55.095352 2722 kubelet.go:3311] "Failed creating a mirror pod" err="pods 
\"kube-scheduler-172-233-214-103\" already exists" pod="kube-system/kube-scheduler-172-233-214-103" Aug 13 01:11:55.095596 kubelet[2722]: E0813 01:11:55.095585 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:11:55.096095 kubelet[2722]: E0813 01:11:55.096065 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:11:55.105461 kubelet[2722]: I0813 01:11:55.105385 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-233-214-103" podStartSLOduration=1.105358951 podStartE2EDuration="1.105358951s" podCreationTimestamp="2025-08-13 01:11:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:11:55.104983538 +0000 UTC m=+1.142912837" watchObservedRunningTime="2025-08-13 01:11:55.105358951 +0000 UTC m=+1.143288250" Aug 13 01:11:55.110431 kubelet[2722]: I0813 01:11:55.110362 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-233-214-103" podStartSLOduration=1.110357646 podStartE2EDuration="1.110357646s" podCreationTimestamp="2025-08-13 01:11:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:11:55.110256432 +0000 UTC m=+1.148185741" watchObservedRunningTime="2025-08-13 01:11:55.110357646 +0000 UTC m=+1.148286945" Aug 13 01:11:55.115097 kubelet[2722]: I0813 01:11:55.114829 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-233-214-103" podStartSLOduration=1.114820635 podStartE2EDuration="1.114820635s" podCreationTimestamp="2025-08-13 01:11:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:11:55.114699678 +0000 UTC m=+1.152628987" watchObservedRunningTime="2025-08-13 01:11:55.114820635 +0000 UTC m=+1.152749934" Aug 13 01:11:56.088780 kubelet[2722]: E0813 01:11:56.088315 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:11:56.088780 kubelet[2722]: E0813 01:11:56.088598 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:11:56.090875 kubelet[2722]: E0813 01:11:56.090861 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:11:57.091211 kubelet[2722]: E0813 01:11:57.091161 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:11:57.697427 systemd[1]: systemd-hostnamed.service: Deactivated successfully. 
Aug 13 01:11:58.667831 kubelet[2722]: I0813 01:11:58.667804 2722 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 01:11:58.668201 containerd[1550]: time="2025-08-13T01:11:58.668151195Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 01:11:58.668397 kubelet[2722]: I0813 01:11:58.668267 2722 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 01:11:58.700316 systemd[1]: Created slice kubepods-besteffort-pod770e1d82_b02e_4a9d_a204_9d60b463cda1.slice - libcontainer container kubepods-besteffort-pod770e1d82_b02e_4a9d_a204_9d60b463cda1.slice. Aug 13 01:11:58.778923 kubelet[2722]: I0813 01:11:58.778879 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4sk74\" (UniqueName: \"kubernetes.io/projected/770e1d82-b02e-4a9d-a204-9d60b463cda1-kube-api-access-4sk74\") pod \"kube-proxy-tb5sq\" (UID: \"770e1d82-b02e-4a9d-a204-9d60b463cda1\") " pod="kube-system/kube-proxy-tb5sq" Aug 13 01:11:58.779013 kubelet[2722]: I0813 01:11:58.778925 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/770e1d82-b02e-4a9d-a204-9d60b463cda1-kube-proxy\") pod \"kube-proxy-tb5sq\" (UID: \"770e1d82-b02e-4a9d-a204-9d60b463cda1\") " pod="kube-system/kube-proxy-tb5sq" Aug 13 01:11:58.779013 kubelet[2722]: I0813 01:11:58.778944 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/770e1d82-b02e-4a9d-a204-9d60b463cda1-xtables-lock\") pod \"kube-proxy-tb5sq\" (UID: \"770e1d82-b02e-4a9d-a204-9d60b463cda1\") " pod="kube-system/kube-proxy-tb5sq" Aug 13 01:11:58.779013 kubelet[2722]: I0813 01:11:58.778979 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/770e1d82-b02e-4a9d-a204-9d60b463cda1-lib-modules\") pod \"kube-proxy-tb5sq\" (UID: \"770e1d82-b02e-4a9d-a204-9d60b463cda1\") " pod="kube-system/kube-proxy-tb5sq" Aug 13 01:11:58.888037 kubelet[2722]: E0813 01:11:58.887965 2722 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Aug 13 01:11:58.888037 kubelet[2722]: E0813 01:11:58.888017 2722 projected.go:194] Error preparing data for projected volume kube-api-access-4sk74 for pod kube-system/kube-proxy-tb5sq: configmap "kube-root-ca.crt" not found Aug 13 01:11:58.888037 kubelet[2722]: E0813 01:11:58.888069 2722 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/770e1d82-b02e-4a9d-a204-9d60b463cda1-kube-api-access-4sk74 podName:770e1d82-b02e-4a9d-a204-9d60b463cda1 nodeName:}" failed. No retries permitted until 2025-08-13 01:11:59.388052308 +0000 UTC m=+5.425981617 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-4sk74" (UniqueName: "kubernetes.io/projected/770e1d82-b02e-4a9d-a204-9d60b463cda1-kube-api-access-4sk74") pod "kube-proxy-tb5sq" (UID: "770e1d82-b02e-4a9d-a204-9d60b463cda1") : configmap "kube-root-ca.crt" not found Aug 13 01:11:59.483713 kubelet[2722]: E0813 01:11:59.483423 2722 projected.go:289] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Aug 13 01:11:59.483713 kubelet[2722]: E0813 01:11:59.483457 2722 projected.go:194] Error preparing data for projected volume kube-api-access-4sk74 for pod kube-system/kube-proxy-tb5sq: configmap "kube-root-ca.crt" not found Aug 13 01:11:59.483713 kubelet[2722]: E0813 01:11:59.483516 2722 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/770e1d82-b02e-4a9d-a204-9d60b463cda1-kube-api-access-4sk74 podName:770e1d82-b02e-4a9d-a204-9d60b463cda1 nodeName:}" failed. No retries permitted until 2025-08-13 01:12:00.483503253 +0000 UTC m=+6.521432562 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-4sk74" (UniqueName: "kubernetes.io/projected/770e1d82-b02e-4a9d-a204-9d60b463cda1-kube-api-access-4sk74") pod "kube-proxy-tb5sq" (UID: "770e1d82-b02e-4a9d-a204-9d60b463cda1") : configmap "kube-root-ca.crt" not found Aug 13 01:11:59.874166 systemd[1]: Created slice kubepods-besteffort-pod8cfc1c94_5c1f_4ff5_8a0c_a47150661a4c.slice - libcontainer container kubepods-besteffort-pod8cfc1c94_5c1f_4ff5_8a0c_a47150661a4c.slice. Aug 13 01:11:59.884693 kubelet[2722]: I0813 01:11:59.884656 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zpp8r\" (UniqueName: \"kubernetes.io/projected/8cfc1c94-5c1f-4ff5-8a0c-a47150661a4c-kube-api-access-zpp8r\") pod \"tigera-operator-747864d56d-hvgq2\" (UID: \"8cfc1c94-5c1f-4ff5-8a0c-a47150661a4c\") " pod="tigera-operator/tigera-operator-747864d56d-hvgq2" Aug 13 01:11:59.884986 kubelet[2722]: I0813 01:11:59.884696 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8cfc1c94-5c1f-4ff5-8a0c-a47150661a4c-var-lib-calico\") pod \"tigera-operator-747864d56d-hvgq2\" (UID: \"8cfc1c94-5c1f-4ff5-8a0c-a47150661a4c\") " pod="tigera-operator/tigera-operator-747864d56d-hvgq2" Aug 13 01:12:00.177676 containerd[1550]: time="2025-08-13T01:12:00.177592947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-hvgq2,Uid:8cfc1c94-5c1f-4ff5-8a0c-a47150661a4c,Namespace:tigera-operator,Attempt:0,}" Aug 13 01:12:00.202450 containerd[1550]: time="2025-08-13T01:12:00.202033377Z" level=info msg="connecting to shim 42adf78598739d12d2f5ef7d6fd4bbb4703c1d7f7a7cc89b6747188980e70497" address="unix:///run/containerd/s/051432ac8c43703fb9bf4a0646d2d7112856e4877d22df1c9ac115099de53d0f" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:12:00.230017 systemd[1]: Started cri-containerd-42adf78598739d12d2f5ef7d6fd4bbb4703c1d7f7a7cc89b6747188980e70497.scope - libcontainer container 42adf78598739d12d2f5ef7d6fd4bbb4703c1d7f7a7cc89b6747188980e70497. 
Aug 13 01:12:00.268924 containerd[1550]: time="2025-08-13T01:12:00.268865585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-hvgq2,Uid:8cfc1c94-5c1f-4ff5-8a0c-a47150661a4c,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"42adf78598739d12d2f5ef7d6fd4bbb4703c1d7f7a7cc89b6747188980e70497\"" Aug 13 01:12:00.272266 containerd[1550]: time="2025-08-13T01:12:00.272214088Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 01:12:00.508645 kubelet[2722]: E0813 01:12:00.508558 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:12:00.509024 containerd[1550]: time="2025-08-13T01:12:00.509003664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tb5sq,Uid:770e1d82-b02e-4a9d-a204-9d60b463cda1,Namespace:kube-system,Attempt:0,}" Aug 13 01:12:00.532418 containerd[1550]: time="2025-08-13T01:12:00.532395987Z" level=info msg="connecting to shim d48c69dbba2bdff219723ad14648f4bc7dc3ea2c8a8efa21048893f4f70ca9d9" address="unix:///run/containerd/s/dbac02b561fb90628a3190e021db5a5a5cfedf1bf52e35dba234c22a621f116c" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:12:00.558023 systemd[1]: Started cri-containerd-d48c69dbba2bdff219723ad14648f4bc7dc3ea2c8a8efa21048893f4f70ca9d9.scope - libcontainer container d48c69dbba2bdff219723ad14648f4bc7dc3ea2c8a8efa21048893f4f70ca9d9. Aug 13 01:12:00.585882 containerd[1550]: time="2025-08-13T01:12:00.585854707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-tb5sq,Uid:770e1d82-b02e-4a9d-a204-9d60b463cda1,Namespace:kube-system,Attempt:0,} returns sandbox id \"d48c69dbba2bdff219723ad14648f4bc7dc3ea2c8a8efa21048893f4f70ca9d9\"" Aug 13 01:12:00.587026 kubelet[2722]: E0813 01:12:00.587000 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:12:00.592572 containerd[1550]: time="2025-08-13T01:12:00.592434160Z" level=info msg="CreateContainer within sandbox \"d48c69dbba2bdff219723ad14648f4bc7dc3ea2c8a8efa21048893f4f70ca9d9\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 01:12:00.605373 containerd[1550]: time="2025-08-13T01:12:00.605339620Z" level=info msg="Container d4453309d9b381ba2cb4f9b0fbe1f6fc972e542a1d59193e86118d95825b0320: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:12:00.610241 containerd[1550]: time="2025-08-13T01:12:00.610207578Z" level=info msg="CreateContainer within sandbox \"d48c69dbba2bdff219723ad14648f4bc7dc3ea2c8a8efa21048893f4f70ca9d9\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"d4453309d9b381ba2cb4f9b0fbe1f6fc972e542a1d59193e86118d95825b0320\"" Aug 13 01:12:00.610978 containerd[1550]: time="2025-08-13T01:12:00.610762615Z" level=info msg="StartContainer for \"d4453309d9b381ba2cb4f9b0fbe1f6fc972e542a1d59193e86118d95825b0320\"" Aug 13 01:12:00.612724 containerd[1550]: time="2025-08-13T01:12:00.612703383Z" level=info msg="connecting to shim d4453309d9b381ba2cb4f9b0fbe1f6fc972e542a1d59193e86118d95825b0320" address="unix:///run/containerd/s/dbac02b561fb90628a3190e021db5a5a5cfedf1bf52e35dba234c22a621f116c" protocol=ttrpc version=3 Aug 13 01:12:00.634025 systemd[1]: Started cri-containerd-d4453309d9b381ba2cb4f9b0fbe1f6fc972e542a1d59193e86118d95825b0320.scope - libcontainer container 
d4453309d9b381ba2cb4f9b0fbe1f6fc972e542a1d59193e86118d95825b0320. Aug 13 01:12:00.674353 containerd[1550]: time="2025-08-13T01:12:00.674323078Z" level=info msg="StartContainer for \"d4453309d9b381ba2cb4f9b0fbe1f6fc972e542a1d59193e86118d95825b0320\" returns successfully" Aug 13 01:12:01.098336 kubelet[2722]: E0813 01:12:01.098275 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:12:01.236141 kubelet[2722]: E0813 01:12:01.236107 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:12:01.249642 kubelet[2722]: I0813 01:12:01.249431 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-tb5sq" podStartSLOduration=3.249416865 podStartE2EDuration="3.249416865s" podCreationTimestamp="2025-08-13 01:11:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:12:01.118174283 +0000 UTC m=+7.156103592" watchObservedRunningTime="2025-08-13 01:12:01.249416865 +0000 UTC m=+7.287346174" Aug 13 01:12:01.424310 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2428981422.mount: Deactivated successfully. Aug 13 01:12:01.913491 containerd[1550]: time="2025-08-13T01:12:01.913332137Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:12:01.914392 containerd[1550]: time="2025-08-13T01:12:01.914225962Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Aug 13 01:12:01.914873 containerd[1550]: time="2025-08-13T01:12:01.914843522Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:12:01.916567 containerd[1550]: time="2025-08-13T01:12:01.916530664Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:12:01.917139 containerd[1550]: time="2025-08-13T01:12:01.917110350Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 1.64487246s" Aug 13 01:12:01.917225 containerd[1550]: time="2025-08-13T01:12:01.917202458Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 01:12:01.921091 containerd[1550]: time="2025-08-13T01:12:01.921069590Z" level=info msg="CreateContainer within sandbox \"42adf78598739d12d2f5ef7d6fd4bbb4703c1d7f7a7cc89b6747188980e70497\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 13 01:12:01.927648 containerd[1550]: time="2025-08-13T01:12:01.927588915Z" level=info msg="Container b83d995895675db204a196b526eb0e4b20615732507801c51c0045b4f35997dd: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:12:01.935727 
containerd[1550]: time="2025-08-13T01:12:01.935691092Z" level=info msg="CreateContainer within sandbox \"42adf78598739d12d2f5ef7d6fd4bbb4703c1d7f7a7cc89b6747188980e70497\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"b83d995895675db204a196b526eb0e4b20615732507801c51c0045b4f35997dd\"" Aug 13 01:12:01.937100 containerd[1550]: time="2025-08-13T01:12:01.937023370Z" level=info msg="StartContainer for \"b83d995895675db204a196b526eb0e4b20615732507801c51c0045b4f35997dd\"" Aug 13 01:12:01.938337 containerd[1550]: time="2025-08-13T01:12:01.938310923Z" level=info msg="connecting to shim b83d995895675db204a196b526eb0e4b20615732507801c51c0045b4f35997dd" address="unix:///run/containerd/s/051432ac8c43703fb9bf4a0646d2d7112856e4877d22df1c9ac115099de53d0f" protocol=ttrpc version=3 Aug 13 01:12:01.958039 systemd[1]: Started cri-containerd-b83d995895675db204a196b526eb0e4b20615732507801c51c0045b4f35997dd.scope - libcontainer container b83d995895675db204a196b526eb0e4b20615732507801c51c0045b4f35997dd. Aug 13 01:12:01.987658 containerd[1550]: time="2025-08-13T01:12:01.987622205Z" level=info msg="StartContainer for \"b83d995895675db204a196b526eb0e4b20615732507801c51c0045b4f35997dd\" returns successfully" Aug 13 01:12:02.106112 kubelet[2722]: E0813 01:12:02.105872 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:12:02.123943 kubelet[2722]: I0813 01:12:02.123879 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-hvgq2" podStartSLOduration=1.476161891 podStartE2EDuration="3.123866374s" podCreationTimestamp="2025-08-13 01:11:59 +0000 UTC" firstStartedPulling="2025-08-13 01:12:00.270400542 +0000 UTC m=+6.308329851" lastFinishedPulling="2025-08-13 01:12:01.918105025 +0000 UTC m=+7.956034334" observedRunningTime="2025-08-13 01:12:02.113560447 +0000 UTC m=+8.151489756" watchObservedRunningTime="2025-08-13 01:12:02.123866374 +0000 UTC m=+8.161795683" Aug 13 01:12:04.584515 kubelet[2722]: E0813 01:12:04.584464 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:12:04.623159 kubelet[2722]: E0813 01:12:04.623127 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:12:05.110922 kubelet[2722]: E0813 01:12:05.110477 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:12:07.545477 sudo[1803]: pam_unix(sudo:session): session closed for user root Aug 13 01:12:07.596711 sshd[1802]: Connection closed by 147.75.109.163 port 45692 Aug 13 01:12:07.598741 sshd-session[1800]: pam_unix(sshd:session): session closed for user core Aug 13 01:12:07.603336 systemd[1]: sshd@6-172.233.214.103:22-147.75.109.163:45692.service: Deactivated successfully. Aug 13 01:12:07.608130 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 01:12:07.608463 systemd[1]: session-7.scope: Consumed 4.742s CPU time, 234.2M memory peak. Aug 13 01:12:07.610794 systemd-logind[1522]: Session 7 logged out. Waiting for processes to exit. 
Aug 13 01:12:07.615324 systemd-logind[1522]: Removed session 7. Aug 13 01:12:10.455219 systemd[1]: Created slice kubepods-besteffort-pod06b97c70_02ae_4c94_aac3_d4d2a75f96ae.slice - libcontainer container kubepods-besteffort-pod06b97c70_02ae_4c94_aac3_d4d2a75f96ae.slice. Aug 13 01:12:10.556593 kubelet[2722]: I0813 01:12:10.556544 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/06b97c70-02ae-4c94-aac3-d4d2a75f96ae-tigera-ca-bundle\") pod \"calico-typha-55bf5cd98c-8lqpc\" (UID: \"06b97c70-02ae-4c94-aac3-d4d2a75f96ae\") " pod="calico-system/calico-typha-55bf5cd98c-8lqpc" Aug 13 01:12:10.557127 kubelet[2722]: I0813 01:12:10.556961 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh842\" (UniqueName: \"kubernetes.io/projected/06b97c70-02ae-4c94-aac3-d4d2a75f96ae-kube-api-access-mh842\") pod \"calico-typha-55bf5cd98c-8lqpc\" (UID: \"06b97c70-02ae-4c94-aac3-d4d2a75f96ae\") " pod="calico-system/calico-typha-55bf5cd98c-8lqpc" Aug 13 01:12:10.557127 kubelet[2722]: I0813 01:12:10.556986 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/06b97c70-02ae-4c94-aac3-d4d2a75f96ae-typha-certs\") pod \"calico-typha-55bf5cd98c-8lqpc\" (UID: \"06b97c70-02ae-4c94-aac3-d4d2a75f96ae\") " pod="calico-system/calico-typha-55bf5cd98c-8lqpc" Aug 13 01:12:10.735076 systemd[1]: Created slice kubepods-besteffort-pod3c0f3b86_7d63_44df_843e_763eb95a8b94.slice - libcontainer container kubepods-besteffort-pod3c0f3b86_7d63_44df_843e_763eb95a8b94.slice. Aug 13 01:12:10.758940 kubelet[2722]: I0813 01:12:10.758867 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/3c0f3b86-7d63-44df-843e-763eb95a8b94-flexvol-driver-host\") pod \"calico-node-hq29b\" (UID: \"3c0f3b86-7d63-44df-843e-763eb95a8b94\") " pod="calico-system/calico-node-hq29b" Aug 13 01:12:10.759206 kubelet[2722]: I0813 01:12:10.759057 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/3c0f3b86-7d63-44df-843e-763eb95a8b94-cni-net-dir\") pod \"calico-node-hq29b\" (UID: \"3c0f3b86-7d63-44df-843e-763eb95a8b94\") " pod="calico-system/calico-node-hq29b" Aug 13 01:12:10.759206 kubelet[2722]: I0813 01:12:10.759087 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/3c0f3b86-7d63-44df-843e-763eb95a8b94-cni-bin-dir\") pod \"calico-node-hq29b\" (UID: \"3c0f3b86-7d63-44df-843e-763eb95a8b94\") " pod="calico-system/calico-node-hq29b" Aug 13 01:12:10.759206 kubelet[2722]: I0813 01:12:10.759102 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3c0f3b86-7d63-44df-843e-763eb95a8b94-lib-modules\") pod \"calico-node-hq29b\" (UID: \"3c0f3b86-7d63-44df-843e-763eb95a8b94\") " pod="calico-system/calico-node-hq29b" Aug 13 01:12:10.759206 kubelet[2722]: I0813 01:12:10.759115 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3c0f3b86-7d63-44df-843e-763eb95a8b94-xtables-lock\") pod \"calico-node-hq29b\" 
(UID: \"3c0f3b86-7d63-44df-843e-763eb95a8b94\") " pod="calico-system/calico-node-hq29b" Aug 13 01:12:10.759206 kubelet[2722]: I0813 01:12:10.759130 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/3c0f3b86-7d63-44df-843e-763eb95a8b94-cni-log-dir\") pod \"calico-node-hq29b\" (UID: \"3c0f3b86-7d63-44df-843e-763eb95a8b94\") " pod="calico-system/calico-node-hq29b" Aug 13 01:12:10.759354 kubelet[2722]: I0813 01:12:10.759143 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/3c0f3b86-7d63-44df-843e-763eb95a8b94-node-certs\") pod \"calico-node-hq29b\" (UID: \"3c0f3b86-7d63-44df-843e-763eb95a8b94\") " pod="calico-system/calico-node-hq29b" Aug 13 01:12:10.759354 kubelet[2722]: I0813 01:12:10.759158 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/3c0f3b86-7d63-44df-843e-763eb95a8b94-var-run-calico\") pod \"calico-node-hq29b\" (UID: \"3c0f3b86-7d63-44df-843e-763eb95a8b94\") " pod="calico-system/calico-node-hq29b" Aug 13 01:12:10.759354 kubelet[2722]: I0813 01:12:10.759171 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mq7j6\" (UniqueName: \"kubernetes.io/projected/3c0f3b86-7d63-44df-843e-763eb95a8b94-kube-api-access-mq7j6\") pod \"calico-node-hq29b\" (UID: \"3c0f3b86-7d63-44df-843e-763eb95a8b94\") " pod="calico-system/calico-node-hq29b" Aug 13 01:12:10.759354 kubelet[2722]: I0813 01:12:10.759223 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/3c0f3b86-7d63-44df-843e-763eb95a8b94-policysync\") pod \"calico-node-hq29b\" (UID: \"3c0f3b86-7d63-44df-843e-763eb95a8b94\") " pod="calico-system/calico-node-hq29b" Aug 13 01:12:10.759354 kubelet[2722]: I0813 01:12:10.759260 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c0f3b86-7d63-44df-843e-763eb95a8b94-tigera-ca-bundle\") pod \"calico-node-hq29b\" (UID: \"3c0f3b86-7d63-44df-843e-763eb95a8b94\") " pod="calico-system/calico-node-hq29b" Aug 13 01:12:10.759650 kubelet[2722]: I0813 01:12:10.759287 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3c0f3b86-7d63-44df-843e-763eb95a8b94-var-lib-calico\") pod \"calico-node-hq29b\" (UID: \"3c0f3b86-7d63-44df-843e-763eb95a8b94\") " pod="calico-system/calico-node-hq29b" Aug 13 01:12:10.761068 kubelet[2722]: E0813 01:12:10.761048 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:12:10.761913 containerd[1550]: time="2025-08-13T01:12:10.761789940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55bf5cd98c-8lqpc,Uid:06b97c70-02ae-4c94-aac3-d4d2a75f96ae,Namespace:calico-system,Attempt:0,}" Aug 13 01:12:10.779126 containerd[1550]: time="2025-08-13T01:12:10.779063920Z" level=info msg="connecting to shim f47ba15e1c906c0e4a4e96b52a0a92b5bd9b6708de325076d6845d41ce57c618" 
address="unix:///run/containerd/s/6fd0001e03eba1bae5630b19c1b1323f76fa75e1c9c13e378ffa0ccef3fcc55f" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:12:10.806171 systemd[1]: Started cri-containerd-f47ba15e1c906c0e4a4e96b52a0a92b5bd9b6708de325076d6845d41ce57c618.scope - libcontainer container f47ba15e1c906c0e4a4e96b52a0a92b5bd9b6708de325076d6845d41ce57c618. Aug 13 01:12:10.863130 kubelet[2722]: E0813 01:12:10.863096 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:12:10.863707 kubelet[2722]: W0813 01:12:10.863119 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:12:10.863747 kubelet[2722]: E0813 01:12:10.863707 2722 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:12:10.865421 kubelet[2722]: E0813 01:12:10.865390 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:12:10.865421 kubelet[2722]: W0813 01:12:10.865411 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:12:10.865421 kubelet[2722]: E0813 01:12:10.865423 2722 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:12:10.871168 kubelet[2722]: E0813 01:12:10.869850 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:12:10.871168 kubelet[2722]: W0813 01:12:10.869864 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:12:10.871168 kubelet[2722]: E0813 01:12:10.869874 2722 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:12:10.871168 kubelet[2722]: E0813 01:12:10.870515 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:12:10.871168 kubelet[2722]: W0813 01:12:10.870523 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:12:10.871168 kubelet[2722]: E0813 01:12:10.870533 2722 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:12:10.872067 kubelet[2722]: E0813 01:12:10.872043 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:12:10.872067 kubelet[2722]: W0813 01:12:10.872060 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:12:10.872137 kubelet[2722]: E0813 01:12:10.872090 2722 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:12:10.872745 kubelet[2722]: E0813 01:12:10.872322 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:12:10.872745 kubelet[2722]: W0813 01:12:10.872335 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:12:10.872745 kubelet[2722]: E0813 01:12:10.872343 2722 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:12:10.872993 kubelet[2722]: E0813 01:12:10.872971 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:12:10.872993 kubelet[2722]: W0813 01:12:10.872986 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:12:10.872993 kubelet[2722]: E0813 01:12:10.872996 2722 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:12:10.873494 kubelet[2722]: E0813 01:12:10.873470 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:12:10.873494 kubelet[2722]: W0813 01:12:10.873487 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:12:10.873628 kubelet[2722]: E0813 01:12:10.873607 2722 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:12:10.874725 kubelet[2722]: E0813 01:12:10.874702 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:12:10.874725 kubelet[2722]: W0813 01:12:10.874718 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:12:10.874725 kubelet[2722]: E0813 01:12:10.874727 2722 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:12:10.875916 kubelet[2722]: E0813 01:12:10.875062 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:12:10.875916 kubelet[2722]: W0813 01:12:10.875070 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:12:10.875916 kubelet[2722]: E0813 01:12:10.875077 2722 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:12:10.875916 kubelet[2722]: E0813 01:12:10.875387 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:12:10.875916 kubelet[2722]: W0813 01:12:10.875395 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:12:10.875916 kubelet[2722]: E0813 01:12:10.875403 2722 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:12:10.875916 kubelet[2722]: E0813 01:12:10.875844 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:12:10.875916 kubelet[2722]: W0813 01:12:10.875852 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:12:10.875916 kubelet[2722]: E0813 01:12:10.875859 2722 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:12:10.878041 kubelet[2722]: E0813 01:12:10.876102 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:12:10.878041 kubelet[2722]: W0813 01:12:10.876137 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:12:10.878041 kubelet[2722]: E0813 01:12:10.876145 2722 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:12:10.878041 kubelet[2722]: E0813 01:12:10.876595 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:12:10.878041 kubelet[2722]: W0813 01:12:10.876603 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:12:10.878041 kubelet[2722]: E0813 01:12:10.876610 2722 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:12:10.886714 containerd[1550]: time="2025-08-13T01:12:10.886668130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-55bf5cd98c-8lqpc,Uid:06b97c70-02ae-4c94-aac3-d4d2a75f96ae,Namespace:calico-system,Attempt:0,} returns sandbox id \"f47ba15e1c906c0e4a4e96b52a0a92b5bd9b6708de325076d6845d41ce57c618\"" Aug 13 01:12:10.887759 kubelet[2722]: E0813 01:12:10.887742 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:12:10.889650 kubelet[2722]: E0813 01:12:10.889579 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:12:10.889650 kubelet[2722]: W0813 01:12:10.889595 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:12:10.889650 kubelet[2722]: E0813 01:12:10.889604 2722 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:12:10.889881 containerd[1550]: time="2025-08-13T01:12:10.889834431Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 13 01:12:10.976935 kubelet[2722]: E0813 01:12:10.976454 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l7lv4" podUID="6b834979-32a4-464b-9898-ef87b1042a9e" Aug 13 01:12:11.042198 containerd[1550]: time="2025-08-13T01:12:11.042078129Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hq29b,Uid:3c0f3b86-7d63-44df-843e-763eb95a8b94,Namespace:calico-system,Attempt:0,}" Aug 13 01:12:11.048938 kubelet[2722]: E0813 01:12:11.048246 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:12:11.048938 kubelet[2722]: W0813 01:12:11.048262 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:12:11.048938 kubelet[2722]: E0813 01:12:11.048288 2722 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:12:11.049312 kubelet[2722]: E0813 01:12:11.049253 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:12:11.049312 kubelet[2722]: W0813 01:12:11.049265 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:12:11.049312 kubelet[2722]: E0813 01:12:11.049275 2722 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:12:11.049643 kubelet[2722]: E0813 01:12:11.049586 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:12:11.049643 kubelet[2722]: W0813 01:12:11.049597 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:12:11.049643 kubelet[2722]: E0813 01:12:11.049605 2722 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:12:11.050453 kubelet[2722]: E0813 01:12:11.049940 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:12:11.050453 kubelet[2722]: W0813 01:12:11.049951 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:12:11.050453 kubelet[2722]: E0813 01:12:11.049960 2722 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:12:11.050816 kubelet[2722]: E0813 01:12:11.050738 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:12:11.051092 kubelet[2722]: W0813 01:12:11.051064 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:12:11.051092 kubelet[2722]: E0813 01:12:11.051082 2722 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:12:11.051857 kubelet[2722]: E0813 01:12:11.051829 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:12:11.051857 kubelet[2722]: W0813 01:12:11.051851 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:12:11.051857 kubelet[2722]: E0813 01:12:11.051860 2722 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:12:11.053238 kubelet[2722]: E0813 01:12:11.053041 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:12:11.053238 kubelet[2722]: W0813 01:12:11.053072 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:12:11.053238 kubelet[2722]: E0813 01:12:11.053097 2722 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:12:11.054195 kubelet[2722]: E0813 01:12:11.054173 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:12:11.054195 kubelet[2722]: W0813 01:12:11.054210 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:12:11.054757 kubelet[2722]: E0813 01:12:11.054223 2722 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:12:11.054891 kubelet[2722]: E0813 01:12:11.054874 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:12:11.055527 kubelet[2722]: W0813 01:12:11.054888 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:12:11.055603 kubelet[2722]: E0813 01:12:11.055534 2722 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:12:11.056740 kubelet[2722]: E0813 01:12:11.056661 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:12:11.056740 kubelet[2722]: W0813 01:12:11.056673 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:12:11.056740 kubelet[2722]: E0813 01:12:11.056684 2722 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:12:11.057877 kubelet[2722]: E0813 01:12:11.057815 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:12:11.058042 kubelet[2722]: W0813 01:12:11.057939 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:12:11.058042 kubelet[2722]: E0813 01:12:11.057952 2722 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:12:11.058807 kubelet[2722]: E0813 01:12:11.058758 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:12:11.058807 kubelet[2722]: W0813 01:12:11.058768 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:12:11.059027 kubelet[2722]: E0813 01:12:11.059006 2722 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:12:11.059745 kubelet[2722]: E0813 01:12:11.059552 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:12:11.059745 kubelet[2722]: W0813 01:12:11.059565 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:12:11.059829 kubelet[2722]: E0813 01:12:11.059574 2722 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:12:11.060174 kubelet[2722]: E0813 01:12:11.060154 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:12:11.060276 kubelet[2722]: W0813 01:12:11.060264 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:12:11.060368 kubelet[2722]: E0813 01:12:11.060358 2722 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:12:11.060679 kubelet[2722]: E0813 01:12:11.060669 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:12:11.060759 kubelet[2722]: W0813 01:12:11.060748 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:12:11.060922 kubelet[2722]: E0813 01:12:11.060828 2722 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:12:11.061215 kubelet[2722]: E0813 01:12:11.061201 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:12:11.061366 kubelet[2722]: W0813 01:12:11.061277 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:12:11.061366 kubelet[2722]: E0813 01:12:11.061288 2722 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:12:11.061655 kubelet[2722]: E0813 01:12:11.061645 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:12:11.061792 kubelet[2722]: W0813 01:12:11.061713 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:12:11.061792 kubelet[2722]: E0813 01:12:11.061722 2722 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:12:11.062339 kubelet[2722]: E0813 01:12:11.062217 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:12:11.062339 kubelet[2722]: W0813 01:12:11.062265 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:12:11.062339 kubelet[2722]: E0813 01:12:11.062273 2722 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:12:11.062761 kubelet[2722]: E0813 01:12:11.062684 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:12:11.062761 kubelet[2722]: W0813 01:12:11.062694 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:12:11.062761 kubelet[2722]: E0813 01:12:11.062702 2722 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:12:11.063166 kubelet[2722]: E0813 01:12:11.063127 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:12:11.063219 kubelet[2722]: W0813 01:12:11.063209 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:12:11.063296 kubelet[2722]: E0813 01:12:11.063286 2722 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:12:11.063718 kubelet[2722]: E0813 01:12:11.063707 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:12:11.063869 kubelet[2722]: W0813 01:12:11.063777 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:12:11.063869 kubelet[2722]: E0813 01:12:11.063788 2722 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:12:11.063869 kubelet[2722]: I0813 01:12:11.063809 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s2jxg\" (UniqueName: \"kubernetes.io/projected/6b834979-32a4-464b-9898-ef87b1042a9e-kube-api-access-s2jxg\") pod \"csi-node-driver-l7lv4\" (UID: \"6b834979-32a4-464b-9898-ef87b1042a9e\") " pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:12:11.064181 kubelet[2722]: E0813 01:12:11.064132 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:12:11.064181 kubelet[2722]: W0813 01:12:11.064161 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:12:11.064181 kubelet[2722]: E0813 01:12:11.064170 2722 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:12:11.064339 kubelet[2722]: I0813 01:12:11.064282 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/6b834979-32a4-464b-9898-ef87b1042a9e-kubelet-dir\") pod \"csi-node-driver-l7lv4\" (UID: \"6b834979-32a4-464b-9898-ef87b1042a9e\") " pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:12:11.064636 kubelet[2722]: E0813 01:12:11.064593 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:12:11.064636 kubelet[2722]: W0813 01:12:11.064617 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:12:11.064636 kubelet[2722]: E0813 01:12:11.064625 2722 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:12:11.064815 kubelet[2722]: I0813 01:12:11.064751 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/6b834979-32a4-464b-9898-ef87b1042a9e-registration-dir\") pod \"csi-node-driver-l7lv4\" (UID: \"6b834979-32a4-464b-9898-ef87b1042a9e\") " pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:12:11.065127 kubelet[2722]: E0813 01:12:11.065074 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:12:11.065127 kubelet[2722]: W0813 01:12:11.065108 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:12:11.065127 kubelet[2722]: E0813 01:12:11.065117 2722 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Aug 13 01:12:11.065316 kubelet[2722]: I0813 01:12:11.065214 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/6b834979-32a4-464b-9898-ef87b1042a9e-varrun\") pod \"csi-node-driver-l7lv4\" (UID: \"6b834979-32a4-464b-9898-ef87b1042a9e\") " pod="calico-system/csi-node-driver-l7lv4"
Aug 13 01:12:11.065654 kubelet[2722]: E0813 01:12:11.065614 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:12:11.065654 kubelet[2722]: W0813 01:12:11.065633 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:12:11.065654 kubelet[2722]: E0813 01:12:11.065641 2722 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:12:11.066055 kubelet[2722]: I0813 01:12:11.066034 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/6b834979-32a4-464b-9898-ef87b1042a9e-socket-dir\") pod \"csi-node-driver-l7lv4\" (UID: \"6b834979-32a4-464b-9898-ef87b1042a9e\") " pod="calico-system/csi-node-driver-l7lv4"
Aug 13 01:12:11.080937 containerd[1550]: time="2025-08-13T01:12:11.080887191Z" level=info msg="connecting to shim 230d5e2260a9c5817bd0783127b59ee5e78b885b601c7a71a82f5c041382166d" address="unix:///run/containerd/s/67393f8f5ec0478f793c124a7121aab7064377f0424e98086f7c743d7914b797" namespace=k8s.io protocol=ttrpc version=3
Aug 13 01:12:11.107169 systemd[1]: Started cri-containerd-230d5e2260a9c5817bd0783127b59ee5e78b885b601c7a71a82f5c041382166d.scope - libcontainer container 230d5e2260a9c5817bd0783127b59ee5e78b885b601c7a71a82f5c041382166d.
Aug 13 01:12:11.164609 containerd[1550]: time="2025-08-13T01:12:11.164548681Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hq29b,Uid:3c0f3b86-7d63-44df-843e-763eb95a8b94,Namespace:calico-system,Attempt:0,} returns sandbox id \"230d5e2260a9c5817bd0783127b59ee5e78b885b601c7a71a82f5c041382166d\""
Aug 13 01:12:11.183734 kubelet[2722]: E0813 01:12:11.183695 2722 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:12:11.183734 kubelet[2722]: W0813 01:12:11.183705 2722 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:12:11.183734 kubelet[2722]: E0813 01:12:11.183714 2722 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:12:11.944103 update_engine[1524]: I20250813 01:12:11.943980 1524 update_attempter.cc:509] Updating boot flags...
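The kubelet errors repeated through this stretch are the same FlexVolume probe failing over and over: each rescan of the plugin directory nodeagent~uds tries to execute /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the single argument init, the binary does not exist yet, so the call returns empty output, and decoding an empty reply as JSON is exactly what produces "unexpected end of JSON input". The flexvol-driver container created further down (from the pod2daemon-flexvol image) is the piece that installs that binary. Below is a minimal Go sketch of the decode step; the reply shape shown is an assumption based on the usual FlexVolume status object, not something taken from this log:

package main

import (
	"encoding/json"
	"fmt"
)

// driverStatus mirrors the general shape of a FlexVolume driver reply
// (status plus optional capabilities); treat the field set as illustrative.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	// The driver binary was not found, so its "output" is an empty string.
	var got driverStatus
	err := json.Unmarshal([]byte(""), &got)
	fmt.Println(err) // prints: unexpected end of JSON input

	// What a working driver would typically print for "init".
	ok := driverStatus{Status: "Success", Capabilities: map[string]bool{"attach": false}}
	b, _ := json.Marshal(ok)
	fmt.Println(string(b)) // {"status":"Success","capabilities":{"attach":false}}
}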
Aug 13 01:12:12.390339 containerd[1550]: time="2025-08-13T01:12:12.389931061Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:12:12.391665 containerd[1550]: time="2025-08-13T01:12:12.391645362Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Aug 13 01:12:12.393851 containerd[1550]: time="2025-08-13T01:12:12.393831366Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:12:12.395652 containerd[1550]: time="2025-08-13T01:12:12.395608110Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:12:12.396004 containerd[1550]: time="2025-08-13T01:12:12.395947286Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 1.505803149s" Aug 13 01:12:12.396004 containerd[1550]: time="2025-08-13T01:12:12.395974408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Aug 13 01:12:12.400216 containerd[1550]: time="2025-08-13T01:12:12.400137235Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 13 01:12:12.419192 containerd[1550]: time="2025-08-13T01:12:12.419169408Z" level=info msg="CreateContainer within sandbox \"f47ba15e1c906c0e4a4e96b52a0a92b5bd9b6708de325076d6845d41ce57c618\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 13 01:12:12.427021 containerd[1550]: time="2025-08-13T01:12:12.426852613Z" level=info msg="Container 2de1488f2b9e06fbb97e8b977f3a3bafb0a5c60120679a25b0e2c96e545add39: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:12:12.435307 containerd[1550]: time="2025-08-13T01:12:12.435272942Z" level=info msg="CreateContainer within sandbox \"f47ba15e1c906c0e4a4e96b52a0a92b5bd9b6708de325076d6845d41ce57c618\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2de1488f2b9e06fbb97e8b977f3a3bafb0a5c60120679a25b0e2c96e545add39\"" Aug 13 01:12:12.435962 containerd[1550]: time="2025-08-13T01:12:12.435924792Z" level=info msg="StartContainer for \"2de1488f2b9e06fbb97e8b977f3a3bafb0a5c60120679a25b0e2c96e545add39\"" Aug 13 01:12:12.437087 containerd[1550]: time="2025-08-13T01:12:12.436865758Z" level=info msg="connecting to shim 2de1488f2b9e06fbb97e8b977f3a3bafb0a5c60120679a25b0e2c96e545add39" address="unix:///run/containerd/s/6fd0001e03eba1bae5630b19c1b1323f76fa75e1c9c13e378ffa0ccef3fcc55f" protocol=ttrpc version=3 Aug 13 01:12:12.460016 systemd[1]: Started cri-containerd-2de1488f2b9e06fbb97e8b977f3a3bafb0a5c60120679a25b0e2c96e545add39.scope - libcontainer container 2de1488f2b9e06fbb97e8b977f3a3bafb0a5c60120679a25b0e2c96e545add39. 
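The typha pull above logs both the byte count and the elapsed time ("bytes read=35233364", "in 1.505803149s"), which pins down the effective transfer rate at roughly 23 MB/s. A quick Go check using exactly those logged figures:

package main

import "fmt"

func main() {
	// Values copied from the containerd entries for ghcr.io/flatcar/calico/typha:v3.30.2.
	const bytesRead = 35233364      // "bytes read=35233364"
	const pullSeconds = 1.505803149 // "... in 1.505803149s"

	rate := float64(bytesRead) / pullSeconds
	fmt.Printf("effective pull rate: %.1f MB/s (%.1f MiB/s)\n", rate/1e6, rate/(1<<20))
}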
Aug 13 01:12:12.513484 containerd[1550]: time="2025-08-13T01:12:12.513374737Z" level=info msg="StartContainer for \"2de1488f2b9e06fbb97e8b977f3a3bafb0a5c60120679a25b0e2c96e545add39\" returns successfully" Aug 13 01:12:13.011929 containerd[1550]: time="2025-08-13T01:12:13.011857309Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:12:13.012473 containerd[1550]: time="2025-08-13T01:12:13.012448865Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Aug 13 01:12:13.013197 containerd[1550]: time="2025-08-13T01:12:13.013166117Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:12:13.014701 containerd[1550]: time="2025-08-13T01:12:13.014651812Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:12:13.015075 containerd[1550]: time="2025-08-13T01:12:13.015041721Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 614.733907ms" Aug 13 01:12:13.015110 containerd[1550]: time="2025-08-13T01:12:13.015073942Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Aug 13 01:12:13.018961 containerd[1550]: time="2025-08-13T01:12:13.018670101Z" level=info msg="CreateContainer within sandbox \"230d5e2260a9c5817bd0783127b59ee5e78b885b601c7a71a82f5c041382166d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 01:12:13.026748 containerd[1550]: time="2025-08-13T01:12:13.026051780Z" level=info msg="Container 49fe246f999d64e040b53746c2793e83baae491c225e0bc1e02c089474f32e8e: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:12:13.030110 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3189092923.mount: Deactivated successfully. 
Aug 13 01:12:13.036699 containerd[1550]: time="2025-08-13T01:12:13.036632341Z" level=info msg="CreateContainer within sandbox \"230d5e2260a9c5817bd0783127b59ee5e78b885b601c7a71a82f5c041382166d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"49fe246f999d64e040b53746c2793e83baae491c225e0bc1e02c089474f32e8e\"" Aug 13 01:12:13.039437 containerd[1550]: time="2025-08-13T01:12:13.038496544Z" level=info msg="StartContainer for \"49fe246f999d64e040b53746c2793e83baae491c225e0bc1e02c089474f32e8e\"" Aug 13 01:12:13.042168 containerd[1550]: time="2025-08-13T01:12:13.042103404Z" level=info msg="connecting to shim 49fe246f999d64e040b53746c2793e83baae491c225e0bc1e02c089474f32e8e" address="unix:///run/containerd/s/67393f8f5ec0478f793c124a7121aab7064377f0424e98086f7c743d7914b797" protocol=ttrpc version=3 Aug 13 01:12:13.057314 kubelet[2722]: E0813 01:12:13.057285 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l7lv4" podUID="6b834979-32a4-464b-9898-ef87b1042a9e" Aug 13 01:12:13.070012 systemd[1]: Started cri-containerd-49fe246f999d64e040b53746c2793e83baae491c225e0bc1e02c089474f32e8e.scope - libcontainer container 49fe246f999d64e040b53746c2793e83baae491c225e0bc1e02c089474f32e8e. Aug 13 01:12:13.113705 containerd[1550]: time="2025-08-13T01:12:13.113659699Z" level=info msg="StartContainer for \"49fe246f999d64e040b53746c2793e83baae491c225e0bc1e02c089474f32e8e\" returns successfully" Aug 13 01:12:13.127350 systemd[1]: cri-containerd-49fe246f999d64e040b53746c2793e83baae491c225e0bc1e02c089474f32e8e.scope: Deactivated successfully. Aug 13 01:12:13.131195 containerd[1550]: time="2025-08-13T01:12:13.131151607Z" level=info msg="received exit event container_id:\"49fe246f999d64e040b53746c2793e83baae491c225e0bc1e02c089474f32e8e\" id:\"49fe246f999d64e040b53746c2793e83baae491c225e0bc1e02c089474f32e8e\" pid:3392 exited_at:{seconds:1755047533 nanos:130502328}" Aug 13 01:12:13.131408 containerd[1550]: time="2025-08-13T01:12:13.131223001Z" level=info msg="TaskExit event in podsandbox handler container_id:\"49fe246f999d64e040b53746c2793e83baae491c225e0bc1e02c089474f32e8e\" id:\"49fe246f999d64e040b53746c2793e83baae491c225e0bc1e02c089474f32e8e\" pid:3392 exited_at:{seconds:1755047533 nanos:130502328}" Aug 13 01:12:13.138050 kubelet[2722]: E0813 01:12:13.137190 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:12:13.154038 kubelet[2722]: I0813 01:12:13.151662 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-55bf5cd98c-8lqpc" podStartSLOduration=1.641959696 podStartE2EDuration="3.151652239s" podCreationTimestamp="2025-08-13 01:12:10 +0000 UTC" firstStartedPulling="2025-08-13 01:12:10.888687209 +0000 UTC m=+16.926616528" lastFinishedPulling="2025-08-13 01:12:12.398379742 +0000 UTC m=+18.436309071" observedRunningTime="2025-08-13 01:12:13.149491233 +0000 UTC m=+19.187420532" watchObservedRunningTime="2025-08-13 01:12:13.151652239 +0000 UTC m=+19.189581558" Aug 13 01:12:13.169638 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-49fe246f999d64e040b53746c2793e83baae491c225e0bc1e02c089474f32e8e-rootfs.mount: Deactivated successfully. 
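Two of the numeric fields in the entries above can be cross-checked directly. The TaskExit event encodes its timestamp as exited_at seconds/nanos, which converts back to the same wall-clock instant the journal shows, and the pod_startup_latency_tracker line's podStartSLOduration is exactly podStartE2EDuration minus the image-pull window given by the monotonic m=+ offsets, i.e. the SLO figure excludes time spent pulling images. A short Go check using only the logged values:

package main

import (
	"fmt"
	"time"
)

func main() {
	// exited_at from the TaskExit event for container 49fe246f...
	exit := time.Unix(1755047533, 130502328).UTC()
	fmt.Println(exit.Format(time.RFC3339Nano)) // 2025-08-13T01:12:13.130502328Z

	// Monotonic offsets (m=+...) and durations from the
	// pod_startup_latency_tracker entry for calico-typha-55bf5cd98c-8lqpc.
	const (
		firstStartedPulling = 16.926616528
		lastFinishedPulling = 18.436309071
		podStartE2E         = 3.151652239
	)
	pullWindow := lastFinishedPulling - firstStartedPulling
	fmt.Printf("image pull window: %.9fs\n", pullWindow)             // 1.509692543s
	fmt.Printf("E2E minus pulling: %.9fs\n", podStartE2E-pullWindow) // 1.641959696s, the logged podStartSLOduration
}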
Aug 13 01:12:14.146006 kubelet[2722]: E0813 01:12:14.145482 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:12:14.147105 containerd[1550]: time="2025-08-13T01:12:14.146652845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 01:12:15.057644 kubelet[2722]: E0813 01:12:15.057535 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-l7lv4" podUID="6b834979-32a4-464b-9898-ef87b1042a9e" Aug 13 01:12:15.146843 kubelet[2722]: E0813 01:12:15.146807 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:12:15.960554 containerd[1550]: time="2025-08-13T01:12:15.959309262Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:12:15.960988 containerd[1550]: time="2025-08-13T01:12:15.960646634Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Aug 13 01:12:15.960988 containerd[1550]: time="2025-08-13T01:12:15.960711727Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:12:15.962065 containerd[1550]: time="2025-08-13T01:12:15.962033689Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:12:15.962720 containerd[1550]: time="2025-08-13T01:12:15.962690654Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 1.816010768s" Aug 13 01:12:15.962720 containerd[1550]: time="2025-08-13T01:12:15.962720405Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Aug 13 01:12:15.965769 containerd[1550]: time="2025-08-13T01:12:15.965737794Z" level=info msg="CreateContainer within sandbox \"230d5e2260a9c5817bd0783127b59ee5e78b885b601c7a71a82f5c041382166d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 01:12:15.974318 containerd[1550]: time="2025-08-13T01:12:15.974281128Z" level=info msg="Container 1ed66b1e64a3e12a6ca570671127cd4085864a1f9adce2a8a3563d52ce2ecb22: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:12:15.983344 containerd[1550]: time="2025-08-13T01:12:15.983319893Z" level=info msg="CreateContainer within sandbox \"230d5e2260a9c5817bd0783127b59ee5e78b885b601c7a71a82f5c041382166d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"1ed66b1e64a3e12a6ca570671127cd4085864a1f9adce2a8a3563d52ce2ecb22\"" Aug 13 01:12:15.984321 containerd[1550]: time="2025-08-13T01:12:15.984204857Z" level=info 
msg="StartContainer for \"1ed66b1e64a3e12a6ca570671127cd4085864a1f9adce2a8a3563d52ce2ecb22\"" Aug 13 01:12:15.985865 containerd[1550]: time="2025-08-13T01:12:15.985831231Z" level=info msg="connecting to shim 1ed66b1e64a3e12a6ca570671127cd4085864a1f9adce2a8a3563d52ce2ecb22" address="unix:///run/containerd/s/67393f8f5ec0478f793c124a7121aab7064377f0424e98086f7c743d7914b797" protocol=ttrpc version=3 Aug 13 01:12:16.019099 systemd[1]: Started cri-containerd-1ed66b1e64a3e12a6ca570671127cd4085864a1f9adce2a8a3563d52ce2ecb22.scope - libcontainer container 1ed66b1e64a3e12a6ca570671127cd4085864a1f9adce2a8a3563d52ce2ecb22. Aug 13 01:12:16.071165 containerd[1550]: time="2025-08-13T01:12:16.071075011Z" level=info msg="StartContainer for \"1ed66b1e64a3e12a6ca570671127cd4085864a1f9adce2a8a3563d52ce2ecb22\" returns successfully" Aug 13 01:12:16.554393 containerd[1550]: time="2025-08-13T01:12:16.554313982Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 01:12:16.556851 systemd[1]: cri-containerd-1ed66b1e64a3e12a6ca570671127cd4085864a1f9adce2a8a3563d52ce2ecb22.scope: Deactivated successfully. Aug 13 01:12:16.557349 systemd[1]: cri-containerd-1ed66b1e64a3e12a6ca570671127cd4085864a1f9adce2a8a3563d52ce2ecb22.scope: Consumed 503ms CPU time, 197.9M memory peak, 171.2M written to disk. Aug 13 01:12:16.557791 containerd[1550]: time="2025-08-13T01:12:16.557723717Z" level=info msg="TaskExit event in podsandbox handler container_id:\"1ed66b1e64a3e12a6ca570671127cd4085864a1f9adce2a8a3563d52ce2ecb22\" id:\"1ed66b1e64a3e12a6ca570671127cd4085864a1f9adce2a8a3563d52ce2ecb22\" pid:3452 exited_at:{seconds:1755047536 nanos:557257119}" Aug 13 01:12:16.558366 containerd[1550]: time="2025-08-13T01:12:16.558236976Z" level=info msg="received exit event container_id:\"1ed66b1e64a3e12a6ca570671127cd4085864a1f9adce2a8a3563d52ce2ecb22\" id:\"1ed66b1e64a3e12a6ca570671127cd4085864a1f9adce2a8a3563d52ce2ecb22\" pid:3452 exited_at:{seconds:1755047536 nanos:557257119}" Aug 13 01:12:16.581538 kubelet[2722]: I0813 01:12:16.581422 2722 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 13 01:12:16.583101 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1ed66b1e64a3e12a6ca570671127cd4085864a1f9adce2a8a3563d52ce2ecb22-rootfs.mount: Deactivated successfully. Aug 13 01:12:16.639115 systemd[1]: Created slice kubepods-burstable-poda5b0b8ae_a381_43cc_8adc_4e3ee01749bd.slice - libcontainer container kubepods-burstable-poda5b0b8ae_a381_43cc_8adc_4e3ee01749bd.slice. Aug 13 01:12:16.652599 systemd[1]: Created slice kubepods-burstable-pod27718112_1bb9_402a_89c8_f4890dedf664.slice - libcontainer container kubepods-burstable-pod27718112_1bb9_402a_89c8_f4890dedf664.slice. Aug 13 01:12:16.681103 systemd[1]: Created slice kubepods-besteffort-podd1a9abdb_5b65_4bf0_9967_437eeeb496b0.slice - libcontainer container kubepods-besteffort-podd1a9abdb_5b65_4bf0_9967_437eeeb496b0.slice. Aug 13 01:12:16.689028 systemd[1]: Created slice kubepods-besteffort-pod2dab385f_2367_4e01_8d78_2247bcba7bcc.slice - libcontainer container kubepods-besteffort-pod2dab385f_2367_4e01_8d78_2247bcba7bcc.slice. Aug 13 01:12:16.699187 systemd[1]: Created slice kubepods-besteffort-pod122cfdf6_eb56_47f9_83a7_2c5d1b8de75c.slice - libcontainer container kubepods-besteffort-pod122cfdf6_eb56_47f9_83a7_2c5d1b8de75c.slice. 
Aug 13 01:12:16.703932 kubelet[2722]: I0813 01:12:16.703559 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rcv8z\" (UniqueName: \"kubernetes.io/projected/d961b3b3-4a19-4ff4-8695-624820fd67cb-kube-api-access-rcv8z\") pod \"calico-apiserver-6b554cb7d7-k4vqf\" (UID: \"d961b3b3-4a19-4ff4-8695-624820fd67cb\") " pod="calico-apiserver/calico-apiserver-6b554cb7d7-k4vqf" Aug 13 01:12:16.703932 kubelet[2722]: I0813 01:12:16.703583 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/27718112-1bb9-402a-89c8-f4890dedf664-config-volume\") pod \"coredns-674b8bbfcf-fgsjn\" (UID: \"27718112-1bb9-402a-89c8-f4890dedf664\") " pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:12:16.703932 kubelet[2722]: I0813 01:12:16.703597 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0f09822c-cb58-44b3-9644-42c7c578731c-whisker-backend-key-pair\") pod \"whisker-6c66749769-wrs69\" (UID: \"0f09822c-cb58-44b3-9644-42c7c578731c\") " pod="calico-system/whisker-6c66749769-wrs69" Aug 13 01:12:16.703932 kubelet[2722]: I0813 01:12:16.703611 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2dab385f-2367-4e01-8d78-2247bcba7bcc-tigera-ca-bundle\") pod \"calico-kube-controllers-cddc95b58-6t6z7\" (UID: \"2dab385f-2367-4e01-8d78-2247bcba7bcc\") " pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:12:16.703932 kubelet[2722]: I0813 01:12:16.703623 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l85x2\" (UniqueName: \"kubernetes.io/projected/a5b0b8ae-a381-43cc-8adc-4e3ee01749bd-kube-api-access-l85x2\") pod \"coredns-674b8bbfcf-p259x\" (UID: \"a5b0b8ae-a381-43cc-8adc-4e3ee01749bd\") " pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:12:16.704161 kubelet[2722]: I0813 01:12:16.703634 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f09822c-cb58-44b3-9644-42c7c578731c-whisker-ca-bundle\") pod \"whisker-6c66749769-wrs69\" (UID: \"0f09822c-cb58-44b3-9644-42c7c578731c\") " pod="calico-system/whisker-6c66749769-wrs69" Aug 13 01:12:16.704161 kubelet[2722]: I0813 01:12:16.703646 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fvf7\" (UniqueName: \"kubernetes.io/projected/d1a9abdb-5b65-4bf0-9967-437eeeb496b0-kube-api-access-2fvf7\") pod \"calico-apiserver-6b554cb7d7-wlpwm\" (UID: \"d1a9abdb-5b65-4bf0-9967-437eeeb496b0\") " pod="calico-apiserver/calico-apiserver-6b554cb7d7-wlpwm" Aug 13 01:12:16.704161 kubelet[2722]: I0813 01:12:16.703658 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/122cfdf6-eb56-47f9-83a7-2c5d1b8de75c-config\") pod \"goldmane-768f4c5c69-8x94m\" (UID: \"122cfdf6-eb56-47f9-83a7-2c5d1b8de75c\") " pod="calico-system/goldmane-768f4c5c69-8x94m" Aug 13 01:12:16.704161 kubelet[2722]: I0813 01:12:16.703670 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/d961b3b3-4a19-4ff4-8695-624820fd67cb-calico-apiserver-certs\") pod \"calico-apiserver-6b554cb7d7-k4vqf\" (UID: \"d961b3b3-4a19-4ff4-8695-624820fd67cb\") " pod="calico-apiserver/calico-apiserver-6b554cb7d7-k4vqf" Aug 13 01:12:16.704161 kubelet[2722]: I0813 01:12:16.703692 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcb9j\" (UniqueName: \"kubernetes.io/projected/2dab385f-2367-4e01-8d78-2247bcba7bcc-kube-api-access-hcb9j\") pod \"calico-kube-controllers-cddc95b58-6t6z7\" (UID: \"2dab385f-2367-4e01-8d78-2247bcba7bcc\") " pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:12:16.704253 kubelet[2722]: I0813 01:12:16.703705 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqznc\" (UniqueName: \"kubernetes.io/projected/0f09822c-cb58-44b3-9644-42c7c578731c-kube-api-access-nqznc\") pod \"whisker-6c66749769-wrs69\" (UID: \"0f09822c-cb58-44b3-9644-42c7c578731c\") " pod="calico-system/whisker-6c66749769-wrs69" Aug 13 01:12:16.704253 kubelet[2722]: I0813 01:12:16.703718 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnhgk\" (UniqueName: \"kubernetes.io/projected/27718112-1bb9-402a-89c8-f4890dedf664-kube-api-access-bnhgk\") pod \"coredns-674b8bbfcf-fgsjn\" (UID: \"27718112-1bb9-402a-89c8-f4890dedf664\") " pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:12:16.704253 kubelet[2722]: I0813 01:12:16.703730 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/a5b0b8ae-a381-43cc-8adc-4e3ee01749bd-config-volume\") pod \"coredns-674b8bbfcf-p259x\" (UID: \"a5b0b8ae-a381-43cc-8adc-4e3ee01749bd\") " pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:12:16.704253 kubelet[2722]: I0813 01:12:16.703743 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d1a9abdb-5b65-4bf0-9967-437eeeb496b0-calico-apiserver-certs\") pod \"calico-apiserver-6b554cb7d7-wlpwm\" (UID: \"d1a9abdb-5b65-4bf0-9967-437eeeb496b0\") " pod="calico-apiserver/calico-apiserver-6b554cb7d7-wlpwm" Aug 13 01:12:16.704253 kubelet[2722]: I0813 01:12:16.703755 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/122cfdf6-eb56-47f9-83a7-2c5d1b8de75c-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-8x94m\" (UID: \"122cfdf6-eb56-47f9-83a7-2c5d1b8de75c\") " pod="calico-system/goldmane-768f4c5c69-8x94m" Aug 13 01:12:16.704352 kubelet[2722]: I0813 01:12:16.703771 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/122cfdf6-eb56-47f9-83a7-2c5d1b8de75c-goldmane-key-pair\") pod \"goldmane-768f4c5c69-8x94m\" (UID: \"122cfdf6-eb56-47f9-83a7-2c5d1b8de75c\") " pod="calico-system/goldmane-768f4c5c69-8x94m" Aug 13 01:12:16.704352 kubelet[2722]: I0813 01:12:16.703784 2722 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwdqz\" (UniqueName: \"kubernetes.io/projected/122cfdf6-eb56-47f9-83a7-2c5d1b8de75c-kube-api-access-xwdqz\") pod \"goldmane-768f4c5c69-8x94m\" (UID: \"122cfdf6-eb56-47f9-83a7-2c5d1b8de75c\") " 
pod="calico-system/goldmane-768f4c5c69-8x94m" Aug 13 01:12:16.708383 systemd[1]: Created slice kubepods-besteffort-podd961b3b3_4a19_4ff4_8695_624820fd67cb.slice - libcontainer container kubepods-besteffort-podd961b3b3_4a19_4ff4_8695_624820fd67cb.slice. Aug 13 01:12:16.714242 systemd[1]: Created slice kubepods-besteffort-pod0f09822c_cb58_44b3_9644_42c7c578731c.slice - libcontainer container kubepods-besteffort-pod0f09822c_cb58_44b3_9644_42c7c578731c.slice. Aug 13 01:12:16.947650 kubelet[2722]: E0813 01:12:16.947607 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:12:16.949290 containerd[1550]: time="2025-08-13T01:12:16.949244557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p259x,Uid:a5b0b8ae-a381-43cc-8adc-4e3ee01749bd,Namespace:kube-system,Attempt:0,}" Aug 13 01:12:16.977236 kubelet[2722]: E0813 01:12:16.977059 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:12:16.982319 containerd[1550]: time="2025-08-13T01:12:16.982072564Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fgsjn,Uid:27718112-1bb9-402a-89c8-f4890dedf664,Namespace:kube-system,Attempt:0,}" Aug 13 01:12:16.988058 containerd[1550]: time="2025-08-13T01:12:16.988024463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b554cb7d7-wlpwm,Uid:d1a9abdb-5b65-4bf0-9967-437eeeb496b0,Namespace:calico-apiserver,Attempt:0,}" Aug 13 01:12:17.004633 containerd[1550]: time="2025-08-13T01:12:17.004556202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cddc95b58-6t6z7,Uid:2dab385f-2367-4e01-8d78-2247bcba7bcc,Namespace:calico-system,Attempt:0,}" Aug 13 01:12:17.010721 containerd[1550]: time="2025-08-13T01:12:17.010671783Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-8x94m,Uid:122cfdf6-eb56-47f9-83a7-2c5d1b8de75c,Namespace:calico-system,Attempt:0,}" Aug 13 01:12:17.014410 containerd[1550]: time="2025-08-13T01:12:17.014281588Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b554cb7d7-k4vqf,Uid:d961b3b3-4a19-4ff4-8695-624820fd67cb,Namespace:calico-apiserver,Attempt:0,}" Aug 13 01:12:17.019126 containerd[1550]: time="2025-08-13T01:12:17.019106374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c66749769-wrs69,Uid:0f09822c-cb58-44b3-9644-42c7c578731c,Namespace:calico-system,Attempt:0,}" Aug 13 01:12:17.066258 systemd[1]: Created slice kubepods-besteffort-pod6b834979_32a4_464b_9898_ef87b1042a9e.slice - libcontainer container kubepods-besteffort-pod6b834979_32a4_464b_9898_ef87b1042a9e.slice. 
Aug 13 01:12:17.070756 containerd[1550]: time="2025-08-13T01:12:17.070504397Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7lv4,Uid:6b834979-32a4-464b-9898-ef87b1042a9e,Namespace:calico-system,Attempt:0,}" Aug 13 01:12:17.158838 containerd[1550]: time="2025-08-13T01:12:17.158633846Z" level=error msg="Failed to destroy network for sandbox \"6f0988afcf418ea24f6482fa3d7c65d9328eebc5255687a0fee39010f7b2396e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:17.162520 containerd[1550]: time="2025-08-13T01:12:17.162423557Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p259x,Uid:a5b0b8ae-a381-43cc-8adc-4e3ee01749bd,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f0988afcf418ea24f6482fa3d7c65d9328eebc5255687a0fee39010f7b2396e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:17.163341 kubelet[2722]: E0813 01:12:17.163266 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f0988afcf418ea24f6482fa3d7c65d9328eebc5255687a0fee39010f7b2396e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:17.163341 kubelet[2722]: E0813 01:12:17.163319 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f0988afcf418ea24f6482fa3d7c65d9328eebc5255687a0fee39010f7b2396e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:12:17.163469 kubelet[2722]: E0813 01:12:17.163338 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6f0988afcf418ea24f6482fa3d7c65d9328eebc5255687a0fee39010f7b2396e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:12:17.163745 kubelet[2722]: E0813 01:12:17.163509 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-p259x_kube-system(a5b0b8ae-a381-43cc-8adc-4e3ee01749bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-p259x_kube-system(a5b0b8ae-a381-43cc-8adc-4e3ee01749bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6f0988afcf418ea24f6482fa3d7c65d9328eebc5255687a0fee39010f7b2396e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-p259x" podUID="a5b0b8ae-a381-43cc-8adc-4e3ee01749bd" Aug 13 01:12:17.168666 containerd[1550]: time="2025-08-13T01:12:17.168634671Z" level=error msg="Failed to destroy network for 
sandbox \"d52fced34da63d47b87e5c82fde7d46b670a6f8011f7c23e048a0acd2fee848e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:17.168858 containerd[1550]: time="2025-08-13T01:12:17.168833258Z" level=error msg="Failed to destroy network for sandbox \"c9e62458f2011127c5ccdb4d873ed5cd922980eaa6b1c21407047111b1263c45\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:17.170996 containerd[1550]: time="2025-08-13T01:12:17.170913499Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6b554cb7d7-k4vqf,Uid:d961b3b3-4a19-4ff4-8695-624820fd67cb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d52fced34da63d47b87e5c82fde7d46b670a6f8011f7c23e048a0acd2fee848e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:17.171199 kubelet[2722]: E0813 01:12:17.171173 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d52fced34da63d47b87e5c82fde7d46b670a6f8011f7c23e048a0acd2fee848e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:17.171872 kubelet[2722]: E0813 01:12:17.171292 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d52fced34da63d47b87e5c82fde7d46b670a6f8011f7c23e048a0acd2fee848e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b554cb7d7-k4vqf" Aug 13 01:12:17.171872 kubelet[2722]: E0813 01:12:17.171430 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d52fced34da63d47b87e5c82fde7d46b670a6f8011f7c23e048a0acd2fee848e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b554cb7d7-k4vqf" Aug 13 01:12:17.172038 kubelet[2722]: E0813 01:12:17.172006 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b554cb7d7-k4vqf_calico-apiserver(d961b3b3-4a19-4ff4-8695-624820fd67cb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b554cb7d7-k4vqf_calico-apiserver(d961b3b3-4a19-4ff4-8695-624820fd67cb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d52fced34da63d47b87e5c82fde7d46b670a6f8011f7c23e048a0acd2fee848e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b554cb7d7-k4vqf" podUID="d961b3b3-4a19-4ff4-8695-624820fd67cb" Aug 13 
01:12:17.174703 containerd[1550]: time="2025-08-13T01:12:17.174635358Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fgsjn,Uid:27718112-1bb9-402a-89c8-f4890dedf664,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9e62458f2011127c5ccdb4d873ed5cd922980eaa6b1c21407047111b1263c45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:17.176813 kubelet[2722]: E0813 01:12:17.176781 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9e62458f2011127c5ccdb4d873ed5cd922980eaa6b1c21407047111b1263c45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:17.176866 kubelet[2722]: E0813 01:12:17.176813 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9e62458f2011127c5ccdb4d873ed5cd922980eaa6b1c21407047111b1263c45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:12:17.177583 kubelet[2722]: E0813 01:12:17.176831 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9e62458f2011127c5ccdb4d873ed5cd922980eaa6b1c21407047111b1263c45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:12:17.177992 kubelet[2722]: E0813 01:12:17.177711 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fgsjn_kube-system(27718112-1bb9-402a-89c8-f4890dedf664)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fgsjn_kube-system(27718112-1bb9-402a-89c8-f4890dedf664)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c9e62458f2011127c5ccdb4d873ed5cd922980eaa6b1c21407047111b1263c45\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fgsjn" podUID="27718112-1bb9-402a-89c8-f4890dedf664" Aug 13 01:12:17.178054 containerd[1550]: time="2025-08-13T01:12:17.177506046Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 01:12:17.191915 containerd[1550]: time="2025-08-13T01:12:17.191655684Z" level=error msg="Failed to destroy network for sandbox \"831e7734f528593b16a748b33c5a5ab53429c272f6695e1035437968f0fb567e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:17.193809 containerd[1550]: time="2025-08-13T01:12:17.193661434Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-6b554cb7d7-wlpwm,Uid:d1a9abdb-5b65-4bf0-9967-437eeeb496b0,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"831e7734f528593b16a748b33c5a5ab53429c272f6695e1035437968f0fb567e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:17.193979 kubelet[2722]: E0813 01:12:17.193909 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"831e7734f528593b16a748b33c5a5ab53429c272f6695e1035437968f0fb567e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:17.193979 kubelet[2722]: E0813 01:12:17.193943 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"831e7734f528593b16a748b33c5a5ab53429c272f6695e1035437968f0fb567e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b554cb7d7-wlpwm" Aug 13 01:12:17.193979 kubelet[2722]: E0813 01:12:17.193957 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"831e7734f528593b16a748b33c5a5ab53429c272f6695e1035437968f0fb567e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6b554cb7d7-wlpwm" Aug 13 01:12:17.195026 kubelet[2722]: E0813 01:12:17.193987 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6b554cb7d7-wlpwm_calico-apiserver(d1a9abdb-5b65-4bf0-9967-437eeeb496b0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6b554cb7d7-wlpwm_calico-apiserver(d1a9abdb-5b65-4bf0-9967-437eeeb496b0)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"831e7734f528593b16a748b33c5a5ab53429c272f6695e1035437968f0fb567e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6b554cb7d7-wlpwm" podUID="d1a9abdb-5b65-4bf0-9967-437eeeb496b0" Aug 13 01:12:17.235471 containerd[1550]: time="2025-08-13T01:12:17.234612416Z" level=error msg="Failed to destroy network for sandbox \"4a62bd2eaf575891759d1aa8adb6cb237c3d414a538b0b250d6686a40cd7e244\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:17.236073 containerd[1550]: time="2025-08-13T01:12:17.235969674Z" level=error msg="Failed to destroy network for sandbox \"5b24c547043de4ce63cb1c81979e6abbfa3fd06be77ec4355caf45010b013bc1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 
13 01:12:17.236336 containerd[1550]: time="2025-08-13T01:12:17.236299995Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c66749769-wrs69,Uid:0f09822c-cb58-44b3-9644-42c7c578731c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a62bd2eaf575891759d1aa8adb6cb237c3d414a538b0b250d6686a40cd7e244\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:17.236669 kubelet[2722]: E0813 01:12:17.236604 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a62bd2eaf575891759d1aa8adb6cb237c3d414a538b0b250d6686a40cd7e244\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:17.236713 kubelet[2722]: E0813 01:12:17.236664 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a62bd2eaf575891759d1aa8adb6cb237c3d414a538b0b250d6686a40cd7e244\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c66749769-wrs69" Aug 13 01:12:17.236713 kubelet[2722]: E0813 01:12:17.236691 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a62bd2eaf575891759d1aa8adb6cb237c3d414a538b0b250d6686a40cd7e244\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c66749769-wrs69" Aug 13 01:12:17.237422 kubelet[2722]: E0813 01:12:17.236733 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6c66749769-wrs69_calico-system(0f09822c-cb58-44b3-9644-42c7c578731c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6c66749769-wrs69_calico-system(0f09822c-cb58-44b3-9644-42c7c578731c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4a62bd2eaf575891759d1aa8adb6cb237c3d414a538b0b250d6686a40cd7e244\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6c66749769-wrs69" podUID="0f09822c-cb58-44b3-9644-42c7c578731c" Aug 13 01:12:17.237475 containerd[1550]: time="2025-08-13T01:12:17.237367081Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cddc95b58-6t6z7,Uid:2dab385f-2367-4e01-8d78-2247bcba7bcc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b24c547043de4ce63cb1c81979e6abbfa3fd06be77ec4355caf45010b013bc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:17.237517 kubelet[2722]: E0813 01:12:17.237480 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"5b24c547043de4ce63cb1c81979e6abbfa3fd06be77ec4355caf45010b013bc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:17.237517 kubelet[2722]: E0813 01:12:17.237505 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b24c547043de4ce63cb1c81979e6abbfa3fd06be77ec4355caf45010b013bc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:12:17.237562 kubelet[2722]: E0813 01:12:17.237518 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5b24c547043de4ce63cb1c81979e6abbfa3fd06be77ec4355caf45010b013bc1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:12:17.237562 kubelet[2722]: E0813 01:12:17.237543 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-cddc95b58-6t6z7_calico-system(2dab385f-2367-4e01-8d78-2247bcba7bcc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-cddc95b58-6t6z7_calico-system(2dab385f-2367-4e01-8d78-2247bcba7bcc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5b24c547043de4ce63cb1c81979e6abbfa3fd06be77ec4355caf45010b013bc1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" podUID="2dab385f-2367-4e01-8d78-2247bcba7bcc" Aug 13 01:12:17.245407 containerd[1550]: time="2025-08-13T01:12:17.245323515Z" level=error msg="Failed to destroy network for sandbox \"f3a656958b592dd2f4b26790e315a8bc6a074a2eb25b969fff939fc83fe3817e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:17.246685 containerd[1550]: time="2025-08-13T01:12:17.246653272Z" level=error msg="Failed to destroy network for sandbox \"732dba7cef5295c33780939be413ed999a96eb75e8c97eca2447a96f4a4b3607\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:17.247013 containerd[1550]: time="2025-08-13T01:12:17.246985773Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-8x94m,Uid:122cfdf6-eb56-47f9-83a7-2c5d1b8de75c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3a656958b592dd2f4b26790e315a8bc6a074a2eb25b969fff939fc83fe3817e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:17.247347 kubelet[2722]: 
E0813 01:12:17.247315 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3a656958b592dd2f4b26790e315a8bc6a074a2eb25b969fff939fc83fe3817e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:17.247463 kubelet[2722]: E0813 01:12:17.247425 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3a656958b592dd2f4b26790e315a8bc6a074a2eb25b969fff939fc83fe3817e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-8x94m" Aug 13 01:12:17.247556 kubelet[2722]: E0813 01:12:17.247525 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f3a656958b592dd2f4b26790e315a8bc6a074a2eb25b969fff939fc83fe3817e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-8x94m" Aug 13 01:12:17.247927 containerd[1550]: time="2025-08-13T01:12:17.247579073Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7lv4,Uid:6b834979-32a4-464b-9898-ef87b1042a9e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"732dba7cef5295c33780939be413ed999a96eb75e8c97eca2447a96f4a4b3607\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:17.247998 kubelet[2722]: E0813 01:12:17.247690 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-8x94m_calico-system(122cfdf6-eb56-47f9-83a7-2c5d1b8de75c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-8x94m_calico-system(122cfdf6-eb56-47f9-83a7-2c5d1b8de75c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f3a656958b592dd2f4b26790e315a8bc6a074a2eb25b969fff939fc83fe3817e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-8x94m" podUID="122cfdf6-eb56-47f9-83a7-2c5d1b8de75c" Aug 13 01:12:17.248128 kubelet[2722]: E0813 01:12:17.248109 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"732dba7cef5295c33780939be413ed999a96eb75e8c97eca2447a96f4a4b3607\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:17.248190 kubelet[2722]: E0813 01:12:17.248176 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"732dba7cef5295c33780939be413ed999a96eb75e8c97eca2447a96f4a4b3607\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:12:17.248327 kubelet[2722]: E0813 01:12:17.248239 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"732dba7cef5295c33780939be413ed999a96eb75e8c97eca2447a96f4a4b3607\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:12:17.248327 kubelet[2722]: E0813 01:12:17.248273 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l7lv4_calico-system(6b834979-32a4-464b-9898-ef87b1042a9e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l7lv4_calico-system(6b834979-32a4-464b-9898-ef87b1042a9e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"732dba7cef5295c33780939be413ed999a96eb75e8c97eca2447a96f4a4b3607\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l7lv4" podUID="6b834979-32a4-464b-9898-ef87b1042a9e" Aug 13 01:12:17.973486 systemd[1]: run-netns-cni\x2dbe886a95\x2dfb9f\x2d4ec2\x2d58a3\x2d54dca483eed9.mount: Deactivated successfully. Aug 13 01:12:17.973577 systemd[1]: run-netns-cni\x2d4887b9e8\x2d66ec\x2d92f6\x2daf44\x2deca8122d5ed9.mount: Deactivated successfully. Aug 13 01:12:17.973636 systemd[1]: run-netns-cni\x2dc153044a\x2d17c8\x2d8c03\x2d8919\x2d040582073038.mount: Deactivated successfully. Aug 13 01:12:17.973691 systemd[1]: run-netns-cni\x2d132c71e0\x2dcbce\x2de542\x2d2ea3\x2df4810ccb28f2.mount: Deactivated successfully. Aug 13 01:12:17.973741 systemd[1]: run-netns-cni\x2d773f2853\x2d9015\x2d73cd\x2d687f\x2d73fd0f02e201.mount: Deactivated successfully. Aug 13 01:12:18.896570 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount28195373.mount: Deactivated successfully. 
Aug 13 01:12:18.899512 containerd[1550]: time="2025-08-13T01:12:18.899403045Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount28195373: write /var/lib/containerd/tmpmounts/containerd-mount28195373/usr/bin/calico-node: no space left on device" Aug 13 01:12:18.899512 containerd[1550]: time="2025-08-13T01:12:18.899423186Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 01:12:18.900081 kubelet[2722]: E0813 01:12:18.900000 2722 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount28195373: write /var/lib/containerd/tmpmounts/containerd-mount28195373/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 01:12:18.900081 kubelet[2722]: E0813 01:12:18.900068 2722 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount28195373: write /var/lib/containerd/tmpmounts/containerd-mount28195373/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 01:12:18.901089 kubelet[2722]: E0813 01:12:18.901014 2722 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSGOLDMANESERVER,Value:goldmane.calico-system.svc:7443,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSFLUSHINTERVAL,Value:15,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:first-found,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.l
ock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mq7j6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},StopSignal:nil,},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-hq29b_calico-system(3c0f3b86-7d63-44df-843e-763eb95a8b94): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount28195373: write /var/lib/containerd/tmpmounts/containerd-mount28195373/usr/bin/calico-node: no space left on device" logger="UnhandledError" Aug 13 01:12:18.902405 kubelet[2722]: E0813 01:12:18.902329 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount28195373: write /var/lib/containerd/tmpmounts/containerd-mount28195373/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-hq29b" podUID="3c0f3b86-7d63-44df-843e-763eb95a8b94" Aug 13 01:12:19.175692 kubelet[2722]: E0813 01:12:19.175643 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount28195373: write /var/lib/containerd/tmpmounts/containerd-mount28195373/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-hq29b" podUID="3c0f3b86-7d63-44df-843e-763eb95a8b94" Aug 13 01:12:24.195963 kubelet[2722]: I0813 
01:12:24.195886 2722 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:12:24.195963 kubelet[2722]: I0813 01:12:24.195967 2722 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:12:24.200240 kubelet[2722]: I0813 01:12:24.200138 2722 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:12:24.220364 kubelet[2722]: I0813 01:12:24.220317 2722 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:12:24.220526 kubelet[2722]: I0813 01:12:24.220403 2722 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/goldmane-768f4c5c69-8x94m","calico-apiserver/calico-apiserver-6b554cb7d7-wlpwm","calico-system/whisker-6c66749769-wrs69","calico-apiserver/calico-apiserver-6b554cb7d7-k4vqf","kube-system/coredns-674b8bbfcf-p259x","kube-system/coredns-674b8bbfcf-fgsjn","calico-system/calico-kube-controllers-cddc95b58-6t6z7","calico-system/csi-node-driver-l7lv4","calico-system/calico-node-hq29b","tigera-operator/tigera-operator-747864d56d-hvgq2","calico-system/calico-typha-55bf5cd98c-8lqpc","kube-system/kube-controller-manager-172-233-214-103","kube-system/kube-proxy-tb5sq","kube-system/kube-apiserver-172-233-214-103","kube-system/kube-scheduler-172-233-214-103"] Aug 13 01:12:24.226935 kubelet[2722]: I0813 01:12:24.226713 2722 eviction_manager.go:629] "Eviction manager: pod is evicted successfully" pod="calico-system/goldmane-768f4c5c69-8x94m" Aug 13 01:12:24.226935 kubelet[2722]: I0813 01:12:24.226922 2722 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-system/goldmane-768f4c5c69-8x94m"] Aug 13 01:12:24.248930 kubelet[2722]: I0813 01:12:24.248390 2722 kubelet.go:2405] "Pod admission denied" podUID="ba017cbd-9c0a-48ee-995e-99f96ee0ecd5" pod="calico-system/goldmane-768f4c5c69-njxr7" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:12:24.255920 kubelet[2722]: I0813 01:12:24.255350 2722 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/122cfdf6-eb56-47f9-83a7-2c5d1b8de75c-goldmane-ca-bundle\") pod \"122cfdf6-eb56-47f9-83a7-2c5d1b8de75c\" (UID: \"122cfdf6-eb56-47f9-83a7-2c5d1b8de75c\") " Aug 13 01:12:24.255920 kubelet[2722]: I0813 01:12:24.255387 2722 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/122cfdf6-eb56-47f9-83a7-2c5d1b8de75c-goldmane-key-pair\") pod \"122cfdf6-eb56-47f9-83a7-2c5d1b8de75c\" (UID: \"122cfdf6-eb56-47f9-83a7-2c5d1b8de75c\") " Aug 13 01:12:24.255920 kubelet[2722]: I0813 01:12:24.255430 2722 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/122cfdf6-eb56-47f9-83a7-2c5d1b8de75c-config\") pod \"122cfdf6-eb56-47f9-83a7-2c5d1b8de75c\" (UID: \"122cfdf6-eb56-47f9-83a7-2c5d1b8de75c\") " Aug 13 01:12:24.255920 kubelet[2722]: I0813 01:12:24.255453 2722 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xwdqz\" (UniqueName: \"kubernetes.io/projected/122cfdf6-eb56-47f9-83a7-2c5d1b8de75c-kube-api-access-xwdqz\") pod \"122cfdf6-eb56-47f9-83a7-2c5d1b8de75c\" (UID: \"122cfdf6-eb56-47f9-83a7-2c5d1b8de75c\") " Aug 13 01:12:24.257163 kubelet[2722]: I0813 01:12:24.257118 2722 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/122cfdf6-eb56-47f9-83a7-2c5d1b8de75c-goldmane-ca-bundle" (OuterVolumeSpecName: "goldmane-ca-bundle") pod "122cfdf6-eb56-47f9-83a7-2c5d1b8de75c" (UID: "122cfdf6-eb56-47f9-83a7-2c5d1b8de75c"). InnerVolumeSpecName "goldmane-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 01:12:24.259491 kubelet[2722]: I0813 01:12:24.259327 2722 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/122cfdf6-eb56-47f9-83a7-2c5d1b8de75c-config" (OuterVolumeSpecName: "config") pod "122cfdf6-eb56-47f9-83a7-2c5d1b8de75c" (UID: "122cfdf6-eb56-47f9-83a7-2c5d1b8de75c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 01:12:24.267340 systemd[1]: var-lib-kubelet-pods-122cfdf6\x2deb56\x2d47f9\x2d83a7\x2d2c5d1b8de75c-volumes-kubernetes.io\x7esecret-goldmane\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 01:12:24.271495 kubelet[2722]: I0813 01:12:24.271466 2722 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/122cfdf6-eb56-47f9-83a7-2c5d1b8de75c-goldmane-key-pair" (OuterVolumeSpecName: "goldmane-key-pair") pod "122cfdf6-eb56-47f9-83a7-2c5d1b8de75c" (UID: "122cfdf6-eb56-47f9-83a7-2c5d1b8de75c"). InnerVolumeSpecName "goldmane-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 01:12:24.275621 kubelet[2722]: I0813 01:12:24.275574 2722 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/122cfdf6-eb56-47f9-83a7-2c5d1b8de75c-kube-api-access-xwdqz" (OuterVolumeSpecName: "kube-api-access-xwdqz") pod "122cfdf6-eb56-47f9-83a7-2c5d1b8de75c" (UID: "122cfdf6-eb56-47f9-83a7-2c5d1b8de75c"). InnerVolumeSpecName "kube-api-access-xwdqz". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:12:24.276190 systemd[1]: var-lib-kubelet-pods-122cfdf6\x2deb56\x2d47f9\x2d83a7\x2d2c5d1b8de75c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxwdqz.mount: Deactivated successfully. Aug 13 01:12:24.301271 kubelet[2722]: I0813 01:12:24.300241 2722 kubelet.go:2405] "Pod admission denied" podUID="a0ff6434-69e8-4aad-b8aa-278c43d96c06" pod="calico-system/goldmane-768f4c5c69-v84dp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:24.312589 kubelet[2722]: I0813 01:12:24.312539 2722 status_manager.go:895] "Failed to get status for pod" podUID="a0ff6434-69e8-4aad-b8aa-278c43d96c06" pod="calico-system/goldmane-768f4c5c69-v84dp" err="pods \"goldmane-768f4c5c69-v84dp\" is forbidden: User \"system:node:172-233-214-103\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node '172-233-214-103' and this object" Aug 13 01:12:24.356530 kubelet[2722]: I0813 01:12:24.356483 2722 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/122cfdf6-eb56-47f9-83a7-2c5d1b8de75c-config\") on node \"172-233-214-103\" DevicePath \"\"" Aug 13 01:12:24.356530 kubelet[2722]: I0813 01:12:24.356524 2722 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xwdqz\" (UniqueName: \"kubernetes.io/projected/122cfdf6-eb56-47f9-83a7-2c5d1b8de75c-kube-api-access-xwdqz\") on node \"172-233-214-103\" DevicePath \"\"" Aug 13 01:12:24.356530 kubelet[2722]: I0813 01:12:24.356535 2722 reconciler_common.go:299] "Volume detached for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/122cfdf6-eb56-47f9-83a7-2c5d1b8de75c-goldmane-ca-bundle\") on node \"172-233-214-103\" DevicePath \"\"" Aug 13 01:12:24.356530 kubelet[2722]: I0813 01:12:24.356547 2722 reconciler_common.go:299] "Volume detached for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/122cfdf6-eb56-47f9-83a7-2c5d1b8de75c-goldmane-key-pair\") on node \"172-233-214-103\" DevicePath \"\"" Aug 13 01:12:25.191296 systemd[1]: Removed slice kubepods-besteffort-pod122cfdf6_eb56_47f9_83a7_2c5d1b8de75c.slice - libcontainer container kubepods-besteffort-pod122cfdf6_eb56_47f9_83a7_2c5d1b8de75c.slice. 
Aug 13 01:12:25.228067 kubelet[2722]: I0813 01:12:25.228008 2722 eviction_manager.go:459] "Eviction manager: pods successfully cleaned up" pods=["calico-system/goldmane-768f4c5c69-8x94m"] Aug 13 01:12:25.244091 kubelet[2722]: I0813 01:12:25.244060 2722 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:12:25.244258 kubelet[2722]: I0813 01:12:25.244246 2722 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:12:25.247456 kubelet[2722]: I0813 01:12:25.247409 2722 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:12:25.261398 kubelet[2722]: I0813 01:12:25.261371 2722 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:12:25.261660 kubelet[2722]: I0813 01:12:25.261638 2722 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-apiserver/calico-apiserver-6b554cb7d7-wlpwm","calico-apiserver/calico-apiserver-6b554cb7d7-k4vqf","calico-system/whisker-6c66749769-wrs69","kube-system/coredns-674b8bbfcf-p259x","kube-system/coredns-674b8bbfcf-fgsjn","calico-system/calico-kube-controllers-cddc95b58-6t6z7","calico-system/calico-node-hq29b","calico-system/csi-node-driver-l7lv4","tigera-operator/tigera-operator-747864d56d-hvgq2","calico-system/calico-typha-55bf5cd98c-8lqpc","kube-system/kube-controller-manager-172-233-214-103","kube-system/kube-proxy-tb5sq","kube-system/kube-apiserver-172-233-214-103","kube-system/kube-scheduler-172-233-214-103"] Aug 13 01:12:25.267712 kubelet[2722]: I0813 01:12:25.267691 2722 eviction_manager.go:629] "Eviction manager: pod is evicted successfully" pod="calico-apiserver/calico-apiserver-6b554cb7d7-wlpwm" Aug 13 01:12:25.267920 kubelet[2722]: I0813 01:12:25.267862 2722 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-apiserver/calico-apiserver-6b554cb7d7-wlpwm"] Aug 13 01:12:25.363363 kubelet[2722]: I0813 01:12:25.363315 2722 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fvf7\" (UniqueName: \"kubernetes.io/projected/d1a9abdb-5b65-4bf0-9967-437eeeb496b0-kube-api-access-2fvf7\") pod \"d1a9abdb-5b65-4bf0-9967-437eeeb496b0\" (UID: \"d1a9abdb-5b65-4bf0-9967-437eeeb496b0\") " Aug 13 01:12:25.363363 kubelet[2722]: I0813 01:12:25.363362 2722 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d1a9abdb-5b65-4bf0-9967-437eeeb496b0-calico-apiserver-certs\") pod \"d1a9abdb-5b65-4bf0-9967-437eeeb496b0\" (UID: \"d1a9abdb-5b65-4bf0-9967-437eeeb496b0\") " Aug 13 01:12:25.369524 systemd[1]: var-lib-kubelet-pods-d1a9abdb\x2d5b65\x2d4bf0\x2d9967\x2d437eeeb496b0-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Aug 13 01:12:25.371249 kubelet[2722]: I0813 01:12:25.371167 2722 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1a9abdb-5b65-4bf0-9967-437eeeb496b0-kube-api-access-2fvf7" (OuterVolumeSpecName: "kube-api-access-2fvf7") pod "d1a9abdb-5b65-4bf0-9967-437eeeb496b0" (UID: "d1a9abdb-5b65-4bf0-9967-437eeeb496b0"). InnerVolumeSpecName "kube-api-access-2fvf7". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:12:25.371249 kubelet[2722]: I0813 01:12:25.369690 2722 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1a9abdb-5b65-4bf0-9967-437eeeb496b0-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "d1a9abdb-5b65-4bf0-9967-437eeeb496b0" (UID: "d1a9abdb-5b65-4bf0-9967-437eeeb496b0"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 01:12:25.373040 systemd[1]: var-lib-kubelet-pods-d1a9abdb\x2d5b65\x2d4bf0\x2d9967\x2d437eeeb496b0-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d2fvf7.mount: Deactivated successfully. Aug 13 01:12:25.464694 kubelet[2722]: I0813 01:12:25.464521 2722 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2fvf7\" (UniqueName: \"kubernetes.io/projected/d1a9abdb-5b65-4bf0-9967-437eeeb496b0-kube-api-access-2fvf7\") on node \"172-233-214-103\" DevicePath \"\"" Aug 13 01:12:25.464694 kubelet[2722]: I0813 01:12:25.464569 2722 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d1a9abdb-5b65-4bf0-9967-437eeeb496b0-calico-apiserver-certs\") on node \"172-233-214-103\" DevicePath \"\"" Aug 13 01:12:26.071437 systemd[1]: Removed slice kubepods-besteffort-podd1a9abdb_5b65_4bf0_9967_437eeeb496b0.slice - libcontainer container kubepods-besteffort-podd1a9abdb_5b65_4bf0_9967_437eeeb496b0.slice. Aug 13 01:12:26.269112 kubelet[2722]: I0813 01:12:26.269032 2722 eviction_manager.go:459] "Eviction manager: pods successfully cleaned up" pods=["calico-apiserver/calico-apiserver-6b554cb7d7-wlpwm"] Aug 13 01:12:26.280436 kubelet[2722]: I0813 01:12:26.280408 2722 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:12:26.280566 kubelet[2722]: I0813 01:12:26.280444 2722 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:12:26.283533 kubelet[2722]: I0813 01:12:26.283505 2722 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:12:26.295425 kubelet[2722]: I0813 01:12:26.295371 2722 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:12:26.296032 kubelet[2722]: I0813 01:12:26.295471 2722 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/whisker-6c66749769-wrs69","calico-apiserver/calico-apiserver-6b554cb7d7-k4vqf","calico-system/calico-kube-controllers-cddc95b58-6t6z7","kube-system/coredns-674b8bbfcf-p259x","kube-system/coredns-674b8bbfcf-fgsjn","calico-system/calico-node-hq29b","calico-system/csi-node-driver-l7lv4","tigera-operator/tigera-operator-747864d56d-hvgq2","calico-system/calico-typha-55bf5cd98c-8lqpc","kube-system/kube-controller-manager-172-233-214-103","kube-system/kube-proxy-tb5sq","kube-system/kube-apiserver-172-233-214-103","kube-system/kube-scheduler-172-233-214-103"] Aug 13 01:12:26.301718 kubelet[2722]: I0813 01:12:26.301674 2722 eviction_manager.go:629] "Eviction manager: pod is evicted successfully" pod="calico-system/whisker-6c66749769-wrs69" Aug 13 01:12:26.301718 kubelet[2722]: I0813 01:12:26.301699 2722 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-system/whisker-6c66749769-wrs69"] Aug 13 01:12:26.372505 kubelet[2722]: I0813 01:12:26.370856 2722 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume 
\"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f09822c-cb58-44b3-9644-42c7c578731c-whisker-ca-bundle\") pod \"0f09822c-cb58-44b3-9644-42c7c578731c\" (UID: \"0f09822c-cb58-44b3-9644-42c7c578731c\") " Aug 13 01:12:26.372505 kubelet[2722]: I0813 01:12:26.370946 2722 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0f09822c-cb58-44b3-9644-42c7c578731c-whisker-backend-key-pair\") pod \"0f09822c-cb58-44b3-9644-42c7c578731c\" (UID: \"0f09822c-cb58-44b3-9644-42c7c578731c\") " Aug 13 01:12:26.372505 kubelet[2722]: I0813 01:12:26.370981 2722 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqznc\" (UniqueName: \"kubernetes.io/projected/0f09822c-cb58-44b3-9644-42c7c578731c-kube-api-access-nqznc\") pod \"0f09822c-cb58-44b3-9644-42c7c578731c\" (UID: \"0f09822c-cb58-44b3-9644-42c7c578731c\") " Aug 13 01:12:26.372505 kubelet[2722]: I0813 01:12:26.371709 2722 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0f09822c-cb58-44b3-9644-42c7c578731c-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "0f09822c-cb58-44b3-9644-42c7c578731c" (UID: "0f09822c-cb58-44b3-9644-42c7c578731c"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 01:12:26.375815 kubelet[2722]: I0813 01:12:26.375771 2722 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0f09822c-cb58-44b3-9644-42c7c578731c-kube-api-access-nqznc" (OuterVolumeSpecName: "kube-api-access-nqznc") pod "0f09822c-cb58-44b3-9644-42c7c578731c" (UID: "0f09822c-cb58-44b3-9644-42c7c578731c"). InnerVolumeSpecName "kube-api-access-nqznc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:12:26.377680 systemd[1]: var-lib-kubelet-pods-0f09822c\x2dcb58\x2d44b3\x2d9644\x2d42c7c578731c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dnqznc.mount: Deactivated successfully. Aug 13 01:12:26.382755 systemd[1]: var-lib-kubelet-pods-0f09822c\x2dcb58\x2d44b3\x2d9644\x2d42c7c578731c-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 01:12:26.383605 kubelet[2722]: I0813 01:12:26.383563 2722 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0f09822c-cb58-44b3-9644-42c7c578731c-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "0f09822c-cb58-44b3-9644-42c7c578731c" (UID: "0f09822c-cb58-44b3-9644-42c7c578731c"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 01:12:26.472142 kubelet[2722]: I0813 01:12:26.472083 2722 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0f09822c-cb58-44b3-9644-42c7c578731c-whisker-ca-bundle\") on node \"172-233-214-103\" DevicePath \"\"" Aug 13 01:12:26.472142 kubelet[2722]: I0813 01:12:26.472120 2722 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0f09822c-cb58-44b3-9644-42c7c578731c-whisker-backend-key-pair\") on node \"172-233-214-103\" DevicePath \"\"" Aug 13 01:12:26.472142 kubelet[2722]: I0813 01:12:26.472135 2722 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nqznc\" (UniqueName: \"kubernetes.io/projected/0f09822c-cb58-44b3-9644-42c7c578731c-kube-api-access-nqznc\") on node \"172-233-214-103\" DevicePath \"\"" Aug 13 01:12:27.197605 systemd[1]: Removed slice kubepods-besteffort-pod0f09822c_cb58_44b3_9644_42c7c578731c.slice - libcontainer container kubepods-besteffort-pod0f09822c_cb58_44b3_9644_42c7c578731c.slice. Aug 13 01:12:27.302452 kubelet[2722]: I0813 01:12:27.302397 2722 eviction_manager.go:459] "Eviction manager: pods successfully cleaned up" pods=["calico-system/whisker-6c66749769-wrs69"] Aug 13 01:12:27.315563 kubelet[2722]: I0813 01:12:27.315522 2722 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:12:27.315666 kubelet[2722]: I0813 01:12:27.315577 2722 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:12:27.319341 kubelet[2722]: I0813 01:12:27.319302 2722 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:12:27.333811 kubelet[2722]: I0813 01:12:27.333767 2722 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:12:27.333947 kubelet[2722]: I0813 01:12:27.333853 2722 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-apiserver/calico-apiserver-6b554cb7d7-k4vqf","kube-system/coredns-674b8bbfcf-p259x","kube-system/coredns-674b8bbfcf-fgsjn","calico-system/calico-kube-controllers-cddc95b58-6t6z7","calico-system/csi-node-driver-l7lv4","calico-system/calico-node-hq29b","tigera-operator/tigera-operator-747864d56d-hvgq2","calico-system/calico-typha-55bf5cd98c-8lqpc","kube-system/kube-controller-manager-172-233-214-103","kube-system/kube-proxy-tb5sq","kube-system/kube-apiserver-172-233-214-103","kube-system/kube-scheduler-172-233-214-103"] Aug 13 01:12:27.341225 kubelet[2722]: I0813 01:12:27.341200 2722 eviction_manager.go:629] "Eviction manager: pod is evicted successfully" pod="calico-apiserver/calico-apiserver-6b554cb7d7-k4vqf" Aug 13 01:12:27.341449 kubelet[2722]: I0813 01:12:27.341401 2722 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-apiserver/calico-apiserver-6b554cb7d7-k4vqf"] Aug 13 01:12:27.379188 kubelet[2722]: I0813 01:12:27.379144 2722 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rcv8z\" (UniqueName: \"kubernetes.io/projected/d961b3b3-4a19-4ff4-8695-624820fd67cb-kube-api-access-rcv8z\") pod \"d961b3b3-4a19-4ff4-8695-624820fd67cb\" (UID: \"d961b3b3-4a19-4ff4-8695-624820fd67cb\") " Aug 13 01:12:27.379188 kubelet[2722]: I0813 01:12:27.379198 2722 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/d961b3b3-4a19-4ff4-8695-624820fd67cb-calico-apiserver-certs\") pod \"d961b3b3-4a19-4ff4-8695-624820fd67cb\" (UID: \"d961b3b3-4a19-4ff4-8695-624820fd67cb\") " Aug 13 01:12:27.385517 kubelet[2722]: I0813 01:12:27.385454 2722 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d961b3b3-4a19-4ff4-8695-624820fd67cb-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "d961b3b3-4a19-4ff4-8695-624820fd67cb" (UID: "d961b3b3-4a19-4ff4-8695-624820fd67cb"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 01:12:27.387353 systemd[1]: var-lib-kubelet-pods-d961b3b3\x2d4a19\x2d4ff4\x2d8695\x2d624820fd67cb-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Aug 13 01:12:27.388264 kubelet[2722]: I0813 01:12:27.388240 2722 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d961b3b3-4a19-4ff4-8695-624820fd67cb-kube-api-access-rcv8z" (OuterVolumeSpecName: "kube-api-access-rcv8z") pod "d961b3b3-4a19-4ff4-8695-624820fd67cb" (UID: "d961b3b3-4a19-4ff4-8695-624820fd67cb"). InnerVolumeSpecName "kube-api-access-rcv8z". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:12:27.390872 systemd[1]: var-lib-kubelet-pods-d961b3b3\x2d4a19\x2d4ff4\x2d8695\x2d624820fd67cb-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drcv8z.mount: Deactivated successfully. Aug 13 01:12:27.480631 kubelet[2722]: I0813 01:12:27.480482 2722 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rcv8z\" (UniqueName: \"kubernetes.io/projected/d961b3b3-4a19-4ff4-8695-624820fd67cb-kube-api-access-rcv8z\") on node \"172-233-214-103\" DevicePath \"\"" Aug 13 01:12:27.480631 kubelet[2722]: I0813 01:12:27.480528 2722 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d961b3b3-4a19-4ff4-8695-624820fd67cb-calico-apiserver-certs\") on node \"172-233-214-103\" DevicePath \"\"" Aug 13 01:12:28.060673 containerd[1550]: time="2025-08-13T01:12:28.060612029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7lv4,Uid:6b834979-32a4-464b-9898-ef87b1042a9e,Namespace:calico-system,Attempt:0,}" Aug 13 01:12:28.070099 systemd[1]: Removed slice kubepods-besteffort-podd961b3b3_4a19_4ff4_8695_624820fd67cb.slice - libcontainer container kubepods-besteffort-podd961b3b3_4a19_4ff4_8695_624820fd67cb.slice. Aug 13 01:12:28.116190 containerd[1550]: time="2025-08-13T01:12:28.116132445Z" level=error msg="Failed to destroy network for sandbox \"873cdd1f78e29022721d2c9508f8578b4ac8a8803df04d2269b01ab732faf1e3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:28.118524 systemd[1]: run-netns-cni\x2d8ed84da5\x2dfe83\x2d16dc\x2d73a0\x2d65fe29be14d7.mount: Deactivated successfully. 
Aug 13 01:12:28.119704 containerd[1550]: time="2025-08-13T01:12:28.119427141Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7lv4,Uid:6b834979-32a4-464b-9898-ef87b1042a9e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"873cdd1f78e29022721d2c9508f8578b4ac8a8803df04d2269b01ab732faf1e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:28.119775 kubelet[2722]: E0813 01:12:28.119699 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"873cdd1f78e29022721d2c9508f8578b4ac8a8803df04d2269b01ab732faf1e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:28.120384 kubelet[2722]: E0813 01:12:28.119828 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"873cdd1f78e29022721d2c9508f8578b4ac8a8803df04d2269b01ab732faf1e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:12:28.120384 kubelet[2722]: E0813 01:12:28.119859 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"873cdd1f78e29022721d2c9508f8578b4ac8a8803df04d2269b01ab732faf1e3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:12:28.120384 kubelet[2722]: E0813 01:12:28.119985 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l7lv4_calico-system(6b834979-32a4-464b-9898-ef87b1042a9e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l7lv4_calico-system(6b834979-32a4-464b-9898-ef87b1042a9e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"873cdd1f78e29022721d2c9508f8578b4ac8a8803df04d2269b01ab732faf1e3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l7lv4" podUID="6b834979-32a4-464b-9898-ef87b1042a9e" Aug 13 01:12:28.341841 kubelet[2722]: I0813 01:12:28.341755 2722 eviction_manager.go:459] "Eviction manager: pods successfully cleaned up" pods=["calico-apiserver/calico-apiserver-6b554cb7d7-k4vqf"] Aug 13 01:12:28.350265 kubelet[2722]: I0813 01:12:28.350250 2722 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:12:28.350331 kubelet[2722]: I0813 01:12:28.350277 2722 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:12:28.352554 kubelet[2722]: I0813 01:12:28.352539 2722 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:12:28.360755 kubelet[2722]: I0813 01:12:28.360742 2722 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" 
resourceName="ephemeral-storage" Aug 13 01:12:28.360812 kubelet[2722]: I0813 01:12:28.360783 2722 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-674b8bbfcf-p259x","kube-system/coredns-674b8bbfcf-fgsjn","calico-system/calico-kube-controllers-cddc95b58-6t6z7","calico-system/csi-node-driver-l7lv4","calico-system/calico-node-hq29b","tigera-operator/tigera-operator-747864d56d-hvgq2","calico-system/calico-typha-55bf5cd98c-8lqpc","kube-system/kube-controller-manager-172-233-214-103","kube-system/kube-proxy-tb5sq","kube-system/kube-apiserver-172-233-214-103","kube-system/kube-scheduler-172-233-214-103"] Aug 13 01:12:28.360812 kubelet[2722]: E0813 01:12:28.360802 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:12:28.360812 kubelet[2722]: E0813 01:12:28.360809 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:12:28.360984 kubelet[2722]: E0813 01:12:28.360815 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:12:28.360984 kubelet[2722]: E0813 01:12:28.360821 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:12:28.360984 kubelet[2722]: E0813 01:12:28.360826 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-hq29b" Aug 13 01:12:28.361525 containerd[1550]: time="2025-08-13T01:12:28.361246673Z" level=info msg="StopContainer for \"b83d995895675db204a196b526eb0e4b20615732507801c51c0045b4f35997dd\" with timeout 2 (s)" Aug 13 01:12:28.361768 containerd[1550]: time="2025-08-13T01:12:28.361748692Z" level=info msg="Stop container \"b83d995895675db204a196b526eb0e4b20615732507801c51c0045b4f35997dd\" with signal terminated" Aug 13 01:12:28.379613 systemd[1]: cri-containerd-b83d995895675db204a196b526eb0e4b20615732507801c51c0045b4f35997dd.scope: Deactivated successfully. Aug 13 01:12:28.380523 systemd[1]: cri-containerd-b83d995895675db204a196b526eb0e4b20615732507801c51c0045b4f35997dd.scope: Consumed 3.460s CPU time, 77.4M memory peak. Aug 13 01:12:28.381004 containerd[1550]: time="2025-08-13T01:12:28.380819780Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b83d995895675db204a196b526eb0e4b20615732507801c51c0045b4f35997dd\" id:\"b83d995895675db204a196b526eb0e4b20615732507801c51c0045b4f35997dd\" pid:3049 exited_at:{seconds:1755047548 nanos:380432274}" Aug 13 01:12:28.381004 containerd[1550]: time="2025-08-13T01:12:28.380911722Z" level=info msg="received exit event container_id:\"b83d995895675db204a196b526eb0e4b20615732507801c51c0045b4f35997dd\" id:\"b83d995895675db204a196b526eb0e4b20615732507801c51c0045b4f35997dd\" pid:3049 exited_at:{seconds:1755047548 nanos:380432274}" Aug 13 01:12:28.397887 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b83d995895675db204a196b526eb0e4b20615732507801c51c0045b4f35997dd-rootfs.mount: Deactivated successfully. 
Aug 13 01:12:28.407108 containerd[1550]: time="2025-08-13T01:12:28.407089683Z" level=info msg="StopContainer for \"b83d995895675db204a196b526eb0e4b20615732507801c51c0045b4f35997dd\" returns successfully" Aug 13 01:12:28.407695 containerd[1550]: time="2025-08-13T01:12:28.407673642Z" level=info msg="StopPodSandbox for \"42adf78598739d12d2f5ef7d6fd4bbb4703c1d7f7a7cc89b6747188980e70497\"" Aug 13 01:12:28.407741 containerd[1550]: time="2025-08-13T01:12:28.407719353Z" level=info msg="Container to stop \"b83d995895675db204a196b526eb0e4b20615732507801c51c0045b4f35997dd\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 01:12:28.413581 systemd[1]: cri-containerd-42adf78598739d12d2f5ef7d6fd4bbb4703c1d7f7a7cc89b6747188980e70497.scope: Deactivated successfully. Aug 13 01:12:28.415330 containerd[1550]: time="2025-08-13T01:12:28.415296144Z" level=info msg="TaskExit event in podsandbox handler container_id:\"42adf78598739d12d2f5ef7d6fd4bbb4703c1d7f7a7cc89b6747188980e70497\" id:\"42adf78598739d12d2f5ef7d6fd4bbb4703c1d7f7a7cc89b6747188980e70497\" pid:2806 exit_status:137 exited_at:{seconds:1755047548 nanos:415085680}" Aug 13 01:12:28.436050 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-42adf78598739d12d2f5ef7d6fd4bbb4703c1d7f7a7cc89b6747188980e70497-rootfs.mount: Deactivated successfully. Aug 13 01:12:28.439322 containerd[1550]: time="2025-08-13T01:12:28.439292267Z" level=info msg="shim disconnected" id=42adf78598739d12d2f5ef7d6fd4bbb4703c1d7f7a7cc89b6747188980e70497 namespace=k8s.io Aug 13 01:12:28.439582 containerd[1550]: time="2025-08-13T01:12:28.439451039Z" level=warning msg="cleaning up after shim disconnected" id=42adf78598739d12d2f5ef7d6fd4bbb4703c1d7f7a7cc89b6747188980e70497 namespace=k8s.io Aug 13 01:12:28.439582 containerd[1550]: time="2025-08-13T01:12:28.439464240Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 01:12:28.441922 containerd[1550]: time="2025-08-13T01:12:28.440437417Z" level=info msg="received exit event sandbox_id:\"42adf78598739d12d2f5ef7d6fd4bbb4703c1d7f7a7cc89b6747188980e70497\" exit_status:137 exited_at:{seconds:1755047548 nanos:415085680}" Aug 13 01:12:28.442327 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-42adf78598739d12d2f5ef7d6fd4bbb4703c1d7f7a7cc89b6747188980e70497-shm.mount: Deactivated successfully. Aug 13 01:12:28.445068 containerd[1550]: time="2025-08-13T01:12:28.445019585Z" level=info msg="TearDown network for sandbox \"42adf78598739d12d2f5ef7d6fd4bbb4703c1d7f7a7cc89b6747188980e70497\" successfully" Aug 13 01:12:28.445068 containerd[1550]: time="2025-08-13T01:12:28.445042486Z" level=info msg="StopPodSandbox for \"42adf78598739d12d2f5ef7d6fd4bbb4703c1d7f7a7cc89b6747188980e70497\" returns successfully" Aug 13 01:12:28.450251 kubelet[2722]: I0813 01:12:28.450232 2722 eviction_manager.go:629] "Eviction manager: pod is evicted successfully" pod="tigera-operator/tigera-operator-747864d56d-hvgq2" Aug 13 01:12:28.450251 kubelet[2722]: I0813 01:12:28.450252 2722 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["tigera-operator/tigera-operator-747864d56d-hvgq2"] Aug 13 01:12:28.468307 kubelet[2722]: I0813 01:12:28.468262 2722 kubelet.go:2405] "Pod admission denied" podUID="37df81c2-4511-42e3-83a3-deae90bd4d5c" pod="tigera-operator/tigera-operator-747864d56d-lxs7r" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:12:28.470537 kubelet[2722]: I0813 01:12:28.470486 2722 status_manager.go:895] "Failed to get status for pod" podUID="37df81c2-4511-42e3-83a3-deae90bd4d5c" pod="tigera-operator/tigera-operator-747864d56d-lxs7r" err="pods \"tigera-operator-747864d56d-lxs7r\" is forbidden: User \"system:node:172-233-214-103\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node '172-233-214-103' and this object" Aug 13 01:12:28.489885 kubelet[2722]: I0813 01:12:28.489868 2722 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8cfc1c94-5c1f-4ff5-8a0c-a47150661a4c-var-lib-calico\") pod \"8cfc1c94-5c1f-4ff5-8a0c-a47150661a4c\" (UID: \"8cfc1c94-5c1f-4ff5-8a0c-a47150661a4c\") " Aug 13 01:12:28.490019 kubelet[2722]: I0813 01:12:28.489907 2722 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zpp8r\" (UniqueName: \"kubernetes.io/projected/8cfc1c94-5c1f-4ff5-8a0c-a47150661a4c-kube-api-access-zpp8r\") pod \"8cfc1c94-5c1f-4ff5-8a0c-a47150661a4c\" (UID: \"8cfc1c94-5c1f-4ff5-8a0c-a47150661a4c\") " Aug 13 01:12:28.490019 kubelet[2722]: I0813 01:12:28.489945 2722 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8cfc1c94-5c1f-4ff5-8a0c-a47150661a4c-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "8cfc1c94-5c1f-4ff5-8a0c-a47150661a4c" (UID: "8cfc1c94-5c1f-4ff5-8a0c-a47150661a4c"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:12:28.493318 systemd[1]: var-lib-kubelet-pods-8cfc1c94\x2d5c1f\x2d4ff5\x2d8a0c\x2da47150661a4c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzpp8r.mount: Deactivated successfully. Aug 13 01:12:28.493776 kubelet[2722]: I0813 01:12:28.493755 2722 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cfc1c94-5c1f-4ff5-8a0c-a47150661a4c-kube-api-access-zpp8r" (OuterVolumeSpecName: "kube-api-access-zpp8r") pod "8cfc1c94-5c1f-4ff5-8a0c-a47150661a4c" (UID: "8cfc1c94-5c1f-4ff5-8a0c-a47150661a4c"). InnerVolumeSpecName "kube-api-access-zpp8r". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:12:28.590884 kubelet[2722]: I0813 01:12:28.590861 2722 reconciler_common.go:299] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/8cfc1c94-5c1f-4ff5-8a0c-a47150661a4c-var-lib-calico\") on node \"172-233-214-103\" DevicePath \"\"" Aug 13 01:12:28.590884 kubelet[2722]: I0813 01:12:28.590882 2722 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zpp8r\" (UniqueName: \"kubernetes.io/projected/8cfc1c94-5c1f-4ff5-8a0c-a47150661a4c-kube-api-access-zpp8r\") on node \"172-233-214-103\" DevicePath \"\"" Aug 13 01:12:29.057718 kubelet[2722]: E0813 01:12:29.057521 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:12:29.057969 containerd[1550]: time="2025-08-13T01:12:29.057940566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p259x,Uid:a5b0b8ae-a381-43cc-8adc-4e3ee01749bd,Namespace:kube-system,Attempt:0,}" Aug 13 01:12:29.058782 containerd[1550]: time="2025-08-13T01:12:29.058739009Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cddc95b58-6t6z7,Uid:2dab385f-2367-4e01-8d78-2247bcba7bcc,Namespace:calico-system,Attempt:0,}" Aug 13 01:12:29.113042 containerd[1550]: time="2025-08-13T01:12:29.113003786Z" level=error msg="Failed to destroy network for sandbox \"60447898ccc97b4f2f7873413de3e2cdc08f15bf3d81bf0c0519162f86354494\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:29.114889 containerd[1550]: time="2025-08-13T01:12:29.114847656Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p259x,Uid:a5b0b8ae-a381-43cc-8adc-4e3ee01749bd,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"60447898ccc97b4f2f7873413de3e2cdc08f15bf3d81bf0c0519162f86354494\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:29.115091 kubelet[2722]: E0813 01:12:29.115058 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60447898ccc97b4f2f7873413de3e2cdc08f15bf3d81bf0c0519162f86354494\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:29.115160 kubelet[2722]: E0813 01:12:29.115105 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60447898ccc97b4f2f7873413de3e2cdc08f15bf3d81bf0c0519162f86354494\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:12:29.115160 kubelet[2722]: E0813 01:12:29.115124 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60447898ccc97b4f2f7873413de3e2cdc08f15bf3d81bf0c0519162f86354494\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:12:29.115238 kubelet[2722]: E0813 01:12:29.115171 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-p259x_kube-system(a5b0b8ae-a381-43cc-8adc-4e3ee01749bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-p259x_kube-system(a5b0b8ae-a381-43cc-8adc-4e3ee01749bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"60447898ccc97b4f2f7873413de3e2cdc08f15bf3d81bf0c0519162f86354494\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-p259x" podUID="a5b0b8ae-a381-43cc-8adc-4e3ee01749bd" Aug 13 01:12:29.120957 containerd[1550]: time="2025-08-13T01:12:29.120916134Z" level=error msg="Failed to destroy network for sandbox \"a608be2366db20ca58becac233300eb72fa8185addca99cc10ffe6854c59a449\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:29.122184 containerd[1550]: time="2025-08-13T01:12:29.122148634Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cddc95b58-6t6z7,Uid:2dab385f-2367-4e01-8d78-2247bcba7bcc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a608be2366db20ca58becac233300eb72fa8185addca99cc10ffe6854c59a449\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:29.122485 kubelet[2722]: E0813 01:12:29.122451 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a608be2366db20ca58becac233300eb72fa8185addca99cc10ffe6854c59a449\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:29.122485 kubelet[2722]: E0813 01:12:29.122483 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a608be2366db20ca58becac233300eb72fa8185addca99cc10ffe6854c59a449\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:12:29.122565 kubelet[2722]: E0813 01:12:29.122499 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a608be2366db20ca58becac233300eb72fa8185addca99cc10ffe6854c59a449\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:12:29.122565 kubelet[2722]: E0813 01:12:29.122534 2722 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-cddc95b58-6t6z7_calico-system(2dab385f-2367-4e01-8d78-2247bcba7bcc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-cddc95b58-6t6z7_calico-system(2dab385f-2367-4e01-8d78-2247bcba7bcc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a608be2366db20ca58becac233300eb72fa8185addca99cc10ffe6854c59a449\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" podUID="2dab385f-2367-4e01-8d78-2247bcba7bcc" Aug 13 01:12:29.194516 kubelet[2722]: I0813 01:12:29.194456 2722 scope.go:117] "RemoveContainer" containerID="b83d995895675db204a196b526eb0e4b20615732507801c51c0045b4f35997dd" Aug 13 01:12:29.196887 containerd[1550]: time="2025-08-13T01:12:29.196854352Z" level=info msg="RemoveContainer for \"b83d995895675db204a196b526eb0e4b20615732507801c51c0045b4f35997dd\"" Aug 13 01:12:29.200012 systemd[1]: Removed slice kubepods-besteffort-pod8cfc1c94_5c1f_4ff5_8a0c_a47150661a4c.slice - libcontainer container kubepods-besteffort-pod8cfc1c94_5c1f_4ff5_8a0c_a47150661a4c.slice. Aug 13 01:12:29.200111 systemd[1]: kubepods-besteffort-pod8cfc1c94_5c1f_4ff5_8a0c_a47150661a4c.slice: Consumed 3.484s CPU time, 77.7M memory peak. Aug 13 01:12:29.201247 containerd[1550]: time="2025-08-13T01:12:29.201226473Z" level=info msg="RemoveContainer for \"b83d995895675db204a196b526eb0e4b20615732507801c51c0045b4f35997dd\" returns successfully" Aug 13 01:12:29.201496 kubelet[2722]: I0813 01:12:29.201466 2722 scope.go:117] "RemoveContainer" containerID="b83d995895675db204a196b526eb0e4b20615732507801c51c0045b4f35997dd" Aug 13 01:12:29.201603 containerd[1550]: time="2025-08-13T01:12:29.201578448Z" level=error msg="ContainerStatus for \"b83d995895675db204a196b526eb0e4b20615732507801c51c0045b4f35997dd\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"b83d995895675db204a196b526eb0e4b20615732507801c51c0045b4f35997dd\": not found" Aug 13 01:12:29.201678 kubelet[2722]: E0813 01:12:29.201660 2722 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"b83d995895675db204a196b526eb0e4b20615732507801c51c0045b4f35997dd\": not found" containerID="b83d995895675db204a196b526eb0e4b20615732507801c51c0045b4f35997dd" Aug 13 01:12:29.201730 kubelet[2722]: I0813 01:12:29.201680 2722 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"b83d995895675db204a196b526eb0e4b20615732507801c51c0045b4f35997dd"} err="failed to get container status \"b83d995895675db204a196b526eb0e4b20615732507801c51c0045b4f35997dd\": rpc error: code = NotFound desc = an error occurred when try to find container \"b83d995895675db204a196b526eb0e4b20615732507801c51c0045b4f35997dd\": not found" Aug 13 01:12:29.217187 kubelet[2722]: I0813 01:12:29.217128 2722 kubelet.go:2405] "Pod admission denied" podUID="93081878-b519-4b03-8643-a9d7e19a13c9" pod="tigera-operator/tigera-operator-747864d56d-j4djn" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:12:29.234454 kubelet[2722]: I0813 01:12:29.234420 2722 kubelet.go:2405] "Pod admission denied" podUID="d551686c-df0a-470e-9b35-960030840dd0" pod="tigera-operator/tigera-operator-747864d56d-chb4w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:29.249060 kubelet[2722]: I0813 01:12:29.249007 2722 kubelet.go:2405] "Pod admission denied" podUID="9d005834-bc3f-4a07-894a-85834dde9a73" pod="tigera-operator/tigera-operator-747864d56d-8rlx5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:29.264638 kubelet[2722]: I0813 01:12:29.264557 2722 kubelet.go:2405] "Pod admission denied" podUID="a6adfc50-2817-4b6b-abb1-77a4ff72fc6f" pod="tigera-operator/tigera-operator-747864d56d-97d52" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:29.285425 kubelet[2722]: I0813 01:12:29.285401 2722 kubelet.go:2405] "Pod admission denied" podUID="8fcb02db-2087-42c0-a6ae-27a82ce46499" pod="tigera-operator/tigera-operator-747864d56d-qh7ph" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:29.303199 kubelet[2722]: I0813 01:12:29.302869 2722 kubelet.go:2405] "Pod admission denied" podUID="6f7cac53-adba-41ae-ba06-41621a3bfab4" pod="tigera-operator/tigera-operator-747864d56d-xm8vl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:29.324605 kubelet[2722]: I0813 01:12:29.323713 2722 kubelet.go:2405] "Pod admission denied" podUID="10299fd1-cd82-451b-8612-10886700677e" pod="tigera-operator/tigera-operator-747864d56d-m7cjk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:29.339394 kubelet[2722]: I0813 01:12:29.339364 2722 kubelet.go:2405] "Pod admission denied" podUID="a5d34f16-387b-41f2-8bdb-d7638b97e605" pod="tigera-operator/tigera-operator-747864d56d-gl6f9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:29.365731 kubelet[2722]: I0813 01:12:29.365680 2722 kubelet.go:2405] "Pod admission denied" podUID="7fa9016a-1cce-42e0-bf34-b7d299ce4176" pod="tigera-operator/tigera-operator-747864d56d-tp66w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:29.401018 systemd[1]: run-netns-cni\x2d709b7374\x2d9445\x2d78c3\x2dbedd\x2d938ed7689830.mount: Deactivated successfully. 
Aug 13 01:12:29.450978 kubelet[2722]: I0813 01:12:29.450934 2722 eviction_manager.go:459] "Eviction manager: pods successfully cleaned up" pods=["tigera-operator/tigera-operator-747864d56d-hvgq2"] Aug 13 01:12:29.474386 kubelet[2722]: I0813 01:12:29.474354 2722 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:12:29.474474 kubelet[2722]: I0813 01:12:29.474392 2722 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:12:29.480636 containerd[1550]: time="2025-08-13T01:12:29.480563098Z" level=info msg="StopPodSandbox for \"42adf78598739d12d2f5ef7d6fd4bbb4703c1d7f7a7cc89b6747188980e70497\"" Aug 13 01:12:29.480742 containerd[1550]: time="2025-08-13T01:12:29.480716992Z" level=info msg="TearDown network for sandbox \"42adf78598739d12d2f5ef7d6fd4bbb4703c1d7f7a7cc89b6747188980e70497\" successfully" Aug 13 01:12:29.480771 containerd[1550]: time="2025-08-13T01:12:29.480748432Z" level=info msg="StopPodSandbox for \"42adf78598739d12d2f5ef7d6fd4bbb4703c1d7f7a7cc89b6747188980e70497\" returns successfully" Aug 13 01:12:29.485048 containerd[1550]: time="2025-08-13T01:12:29.485019291Z" level=info msg="RemovePodSandbox for \"42adf78598739d12d2f5ef7d6fd4bbb4703c1d7f7a7cc89b6747188980e70497\"" Aug 13 01:12:29.485048 containerd[1550]: time="2025-08-13T01:12:29.485044911Z" level=info msg="Forcibly stopping sandbox \"42adf78598739d12d2f5ef7d6fd4bbb4703c1d7f7a7cc89b6747188980e70497\"" Aug 13 01:12:29.485144 containerd[1550]: time="2025-08-13T01:12:29.485118343Z" level=info msg="TearDown network for sandbox \"42adf78598739d12d2f5ef7d6fd4bbb4703c1d7f7a7cc89b6747188980e70497\" successfully" Aug 13 01:12:29.486367 containerd[1550]: time="2025-08-13T01:12:29.486337482Z" level=info msg="Ensure that sandbox 42adf78598739d12d2f5ef7d6fd4bbb4703c1d7f7a7cc89b6747188980e70497 in task-service has been cleanup successfully" Aug 13 01:12:29.490162 containerd[1550]: time="2025-08-13T01:12:29.490097723Z" level=info msg="RemovePodSandbox \"42adf78598739d12d2f5ef7d6fd4bbb4703c1d7f7a7cc89b6747188980e70497\" returns successfully" Aug 13 01:12:29.492498 kubelet[2722]: I0813 01:12:29.492477 2722 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:12:29.506987 kubelet[2722]: I0813 01:12:29.506960 2722 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:12:29.507051 kubelet[2722]: I0813 01:12:29.507033 2722 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-674b8bbfcf-fgsjn","calico-system/calico-kube-controllers-cddc95b58-6t6z7","kube-system/coredns-674b8bbfcf-p259x","calico-system/csi-node-driver-l7lv4","calico-system/calico-node-hq29b","calico-system/calico-typha-55bf5cd98c-8lqpc","kube-system/kube-controller-manager-172-233-214-103","kube-system/kube-proxy-tb5sq","kube-system/kube-apiserver-172-233-214-103","kube-system/kube-scheduler-172-233-214-103"] Aug 13 01:12:29.507113 kubelet[2722]: E0813 01:12:29.507058 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:12:29.507113 kubelet[2722]: E0813 01:12:29.507066 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:12:29.507113 kubelet[2722]: E0813 01:12:29.507073 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:12:29.507113 
kubelet[2722]: E0813 01:12:29.507079 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:12:29.507113 kubelet[2722]: E0813 01:12:29.507084 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-hq29b" Aug 13 01:12:29.507113 kubelet[2722]: E0813 01:12:29.507092 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-55bf5cd98c-8lqpc" Aug 13 01:12:29.507113 kubelet[2722]: E0813 01:12:29.507099 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-214-103" Aug 13 01:12:29.507113 kubelet[2722]: E0813 01:12:29.507106 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-tb5sq" Aug 13 01:12:29.507113 kubelet[2722]: E0813 01:12:29.507112 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-214-103" Aug 13 01:12:29.507113 kubelet[2722]: E0813 01:12:29.507118 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-214-103" Aug 13 01:12:29.507280 kubelet[2722]: I0813 01:12:29.507127 2722 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:12:29.519027 kubelet[2722]: I0813 01:12:29.519004 2722 kubelet.go:2405] "Pod admission denied" podUID="ea17b948-7a88-434c-a2eb-a9c74b51357c" pod="tigera-operator/tigera-operator-747864d56d-q29fd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:29.769029 kubelet[2722]: I0813 01:12:29.768942 2722 kubelet.go:2405] "Pod admission denied" podUID="efc96df6-6338-4c40-ab87-b8be4df97884" pod="tigera-operator/tigera-operator-747864d56d-j9ckn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:29.922351 kubelet[2722]: I0813 01:12:29.922149 2722 kubelet.go:2405] "Pod admission denied" podUID="16659823-e119-4dd0-8248-498c0716264e" pod="tigera-operator/tigera-operator-747864d56d-ct94d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:30.058606 kubelet[2722]: E0813 01:12:30.058084 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:12:30.059772 containerd[1550]: time="2025-08-13T01:12:30.059689144Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fgsjn,Uid:27718112-1bb9-402a-89c8-f4890dedf664,Namespace:kube-system,Attempt:0,}" Aug 13 01:12:30.080935 kubelet[2722]: I0813 01:12:30.080470 2722 kubelet.go:2405] "Pod admission denied" podUID="3b24c537-a841-48d1-8fa1-06e3a1060562" pod="tigera-operator/tigera-operator-747864d56d-5gqlb" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:12:30.125909 containerd[1550]: time="2025-08-13T01:12:30.125825369Z" level=error msg="Failed to destroy network for sandbox \"dd0d12fa8a866f524b98ef78724af949129357baf07812fd9f4fed93fa31c1b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:30.128479 containerd[1550]: time="2025-08-13T01:12:30.128440458Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fgsjn,Uid:27718112-1bb9-402a-89c8-f4890dedf664,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd0d12fa8a866f524b98ef78724af949129357baf07812fd9f4fed93fa31c1b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:30.129101 kubelet[2722]: E0813 01:12:30.128937 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd0d12fa8a866f524b98ef78724af949129357baf07812fd9f4fed93fa31c1b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:30.129101 kubelet[2722]: E0813 01:12:30.129008 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd0d12fa8a866f524b98ef78724af949129357baf07812fd9f4fed93fa31c1b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:12:30.129101 kubelet[2722]: E0813 01:12:30.129035 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd0d12fa8a866f524b98ef78724af949129357baf07812fd9f4fed93fa31c1b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:12:30.129276 kubelet[2722]: E0813 01:12:30.129245 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fgsjn_kube-system(27718112-1bb9-402a-89c8-f4890dedf664)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fgsjn_kube-system(27718112-1bb9-402a-89c8-f4890dedf664)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd0d12fa8a866f524b98ef78724af949129357baf07812fd9f4fed93fa31c1b2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fgsjn" podUID="27718112-1bb9-402a-89c8-f4890dedf664" Aug 13 01:12:30.131662 systemd[1]: run-netns-cni\x2d67ee0d7c\x2d6ff2\x2d8d50\x2dca7e\x2dd787a902619f.mount: Deactivated successfully. 
Aug 13 01:12:30.219523 kubelet[2722]: I0813 01:12:30.219296 2722 kubelet.go:2405] "Pod admission denied" podUID="7a8a2016-641f-46ec-9d54-58e44483b5a4" pod="tigera-operator/tigera-operator-747864d56d-675n5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:30.369983 kubelet[2722]: I0813 01:12:30.369766 2722 kubelet.go:2405] "Pod admission denied" podUID="a7f4e28b-3542-4b08-ab37-27c1e2c141f6" pod="tigera-operator/tigera-operator-747864d56d-zfk9w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:30.518074 kubelet[2722]: I0813 01:12:30.518029 2722 kubelet.go:2405] "Pod admission denied" podUID="27c17791-b752-4c9e-90cc-444289d15d22" pod="tigera-operator/tigera-operator-747864d56d-b98sk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:30.615443 kubelet[2722]: I0813 01:12:30.615410 2722 kubelet.go:2405] "Pod admission denied" podUID="1b77448e-555b-4cc1-9eff-0e690088564f" pod="tigera-operator/tigera-operator-747864d56d-rs7k4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:30.770774 kubelet[2722]: I0813 01:12:30.770705 2722 kubelet.go:2405] "Pod admission denied" podUID="340b8ae4-4d43-4b59-8caf-80250b85af24" pod="tigera-operator/tigera-operator-747864d56d-qlgt6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:30.918069 kubelet[2722]: I0813 01:12:30.918022 2722 kubelet.go:2405] "Pod admission denied" podUID="bc3f73db-c057-4190-8d2b-159a7922a5ea" pod="tigera-operator/tigera-operator-747864d56d-jdxf4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:31.068561 kubelet[2722]: I0813 01:12:31.068446 2722 kubelet.go:2405] "Pod admission denied" podUID="d87460b5-081e-40ea-abef-dbdf5f8a6648" pod="tigera-operator/tigera-operator-747864d56d-tlwvj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:31.221052 kubelet[2722]: I0813 01:12:31.220999 2722 kubelet.go:2405] "Pod admission denied" podUID="1272d8df-0a0c-4673-814e-8ac5df24dbbc" pod="tigera-operator/tigera-operator-747864d56d-r55xf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:31.370485 kubelet[2722]: I0813 01:12:31.370299 2722 kubelet.go:2405] "Pod admission denied" podUID="4d99b22f-dbdc-4a44-8383-3d358c11eaf5" pod="tigera-operator/tigera-operator-747864d56d-q892t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:31.518824 kubelet[2722]: I0813 01:12:31.518733 2722 kubelet.go:2405] "Pod admission denied" podUID="db21799a-3cb0-41f0-9cf1-6698f3cb4aca" pod="tigera-operator/tigera-operator-747864d56d-7smbd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:31.669571 kubelet[2722]: I0813 01:12:31.668991 2722 kubelet.go:2405] "Pod admission denied" podUID="7dc3ba1f-177f-4500-b0f2-1caba1b20425" pod="tigera-operator/tigera-operator-747864d56d-b78xx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:31.768322 kubelet[2722]: I0813 01:12:31.768262 2722 kubelet.go:2405] "Pod admission denied" podUID="a85c1148-b745-4201-ab3c-2718adc71f63" pod="tigera-operator/tigera-operator-747864d56d-bzd2l" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:12:31.868756 kubelet[2722]: I0813 01:12:31.868697 2722 kubelet.go:2405] "Pod admission denied" podUID="7f4db3bb-cb22-4c3d-a105-8dd2a5f15778" pod="tigera-operator/tigera-operator-747864d56d-4c7kz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:31.969408 kubelet[2722]: I0813 01:12:31.969338 2722 kubelet.go:2405] "Pod admission denied" podUID="9ee64270-b4f8-4c7c-98cf-12816cf6ee3d" pod="tigera-operator/tigera-operator-747864d56d-fd7tm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:32.061649 containerd[1550]: time="2025-08-13T01:12:32.060751645Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 01:12:32.076172 kubelet[2722]: I0813 01:12:32.076129 2722 kubelet.go:2405] "Pod admission denied" podUID="00078d1c-2359-426e-93ab-760a6d8926a2" pod="tigera-operator/tigera-operator-747864d56d-nsrrh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:32.169531 kubelet[2722]: I0813 01:12:32.169479 2722 kubelet.go:2405] "Pod admission denied" podUID="c717ba1f-0a71-436a-af5b-76e9479a580c" pod="tigera-operator/tigera-operator-747864d56d-sw26g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:32.270292 kubelet[2722]: I0813 01:12:32.269656 2722 kubelet.go:2405] "Pod admission denied" podUID="53d9d8fc-641f-4c74-9562-42637be4e742" pod="tigera-operator/tigera-operator-747864d56d-jscdb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:32.372298 kubelet[2722]: I0813 01:12:32.372256 2722 kubelet.go:2405] "Pod admission denied" podUID="46ea7791-80f4-4d6b-af54-ac147489e0e1" pod="tigera-operator/tigera-operator-747864d56d-zsnql" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:32.467655 kubelet[2722]: I0813 01:12:32.467613 2722 kubelet.go:2405] "Pod admission denied" podUID="e0c83c3e-b6c6-471b-90d1-8dadeae15bb9" pod="tigera-operator/tigera-operator-747864d56d-d7jpc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:32.569567 kubelet[2722]: I0813 01:12:32.569439 2722 kubelet.go:2405] "Pod admission denied" podUID="a998aad9-faa2-4cc0-83ec-6177dfd72887" pod="tigera-operator/tigera-operator-747864d56d-5rtwd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:32.668117 kubelet[2722]: I0813 01:12:32.668074 2722 kubelet.go:2405] "Pod admission denied" podUID="ad94799c-4a4b-4777-996c-96772ef06845" pod="tigera-operator/tigera-operator-747864d56d-xn287" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:32.869505 kubelet[2722]: I0813 01:12:32.869241 2722 kubelet.go:2405] "Pod admission denied" podUID="3783e277-db3a-49f7-91c6-2ae6685393c5" pod="tigera-operator/tigera-operator-747864d56d-57vtr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:32.974194 kubelet[2722]: I0813 01:12:32.974160 2722 kubelet.go:2405] "Pod admission denied" podUID="0f99e4de-e5a6-4b6b-a089-320eff323379" pod="tigera-operator/tigera-operator-747864d56d-hmwlc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:33.070801 kubelet[2722]: I0813 01:12:33.070371 2722 kubelet.go:2405] "Pod admission denied" podUID="ba8bf377-cb4f-4b54-ae7c-59a8e55628b8" pod="tigera-operator/tigera-operator-747864d56d-ghk9j" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:12:33.170063 kubelet[2722]: I0813 01:12:33.168294 2722 kubelet.go:2405] "Pod admission denied" podUID="d5a4616d-b7db-4a28-af78-d353850c9dea" pod="tigera-operator/tigera-operator-747864d56d-bx8kt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:33.271566 kubelet[2722]: I0813 01:12:33.271324 2722 kubelet.go:2405] "Pod admission denied" podUID="cfcd1aa6-3d00-4550-ba38-889e7dff59ef" pod="tigera-operator/tigera-operator-747864d56d-9pfp5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:33.475451 kubelet[2722]: I0813 01:12:33.475403 2722 kubelet.go:2405] "Pod admission denied" podUID="853c1457-61f8-4f9c-8474-c979c5fd49df" pod="tigera-operator/tigera-operator-747864d56d-xlkdz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:33.572398 kubelet[2722]: I0813 01:12:33.572363 2722 kubelet.go:2405] "Pod admission denied" podUID="c5e608ee-e9c5-40fb-84f7-8a31d856011e" pod="tigera-operator/tigera-operator-747864d56d-gqqlh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:33.674172 kubelet[2722]: I0813 01:12:33.674138 2722 kubelet.go:2405] "Pod admission denied" podUID="c050cce4-99a2-41af-b900-722f86af0f4f" pod="tigera-operator/tigera-operator-747864d56d-zxnj8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:33.775688 kubelet[2722]: I0813 01:12:33.775236 2722 kubelet.go:2405] "Pod admission denied" podUID="0d714aff-9d52-4363-900a-0a896570b0b3" pod="tigera-operator/tigera-operator-747864d56d-6c2r8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:33.871791 kubelet[2722]: I0813 01:12:33.871753 2722 kubelet.go:2405] "Pod admission denied" podUID="768c65ea-b261-4e89-8dc0-c5b2d3a9f73d" pod="tigera-operator/tigera-operator-747864d56d-kgwjm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:33.972557 kubelet[2722]: I0813 01:12:33.972509 2722 kubelet.go:2405] "Pod admission denied" podUID="3393748a-5446-4bd2-9ede-1a4303a44e90" pod="tigera-operator/tigera-operator-747864d56d-875fj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:34.073591 kubelet[2722]: I0813 01:12:34.073297 2722 kubelet.go:2405] "Pod admission denied" podUID="a7e5eeb0-6990-4d6c-8642-1efcc6f83c84" pod="tigera-operator/tigera-operator-747864d56d-6tlz5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:34.273955 kubelet[2722]: I0813 01:12:34.273876 2722 kubelet.go:2405] "Pod admission denied" podUID="37a97359-33db-46fd-84f1-72dc85e8b566" pod="tigera-operator/tigera-operator-747864d56d-zbjfj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:34.365834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4112633976.mount: Deactivated successfully. 
Aug 13 01:12:34.372152 containerd[1550]: time="2025-08-13T01:12:34.371978439Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount4112633976: write /var/lib/containerd/tmpmounts/containerd-mount4112633976/usr/bin/calico-node: no space left on device" Aug 13 01:12:34.372152 containerd[1550]: time="2025-08-13T01:12:34.372102790Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 01:12:34.373663 kubelet[2722]: E0813 01:12:34.373000 2722 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount4112633976: write /var/lib/containerd/tmpmounts/containerd-mount4112633976/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 01:12:34.373663 kubelet[2722]: E0813 01:12:34.373041 2722 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount4112633976: write /var/lib/containerd/tmpmounts/containerd-mount4112633976/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 01:12:34.374004 kubelet[2722]: E0813 01:12:34.373192 2722 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSGOLDMANESERVER,Value:goldmane.calico-system.svc:7443,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSFLUSHINTERVAL,Value:15,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:first-found,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.l
ock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mq7j6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},StopSignal:nil,},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-hq29b_calico-system(3c0f3b86-7d63-44df-843e-763eb95a8b94): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount4112633976: write /var/lib/containerd/tmpmounts/containerd-mount4112633976/usr/bin/calico-node: no space left on device" logger="UnhandledError" Aug 13 01:12:34.374284 kubelet[2722]: I0813 01:12:34.374263 2722 kubelet.go:2405] "Pod admission denied" podUID="e32bfa18-67f1-4596-894c-2176512bff26" pod="tigera-operator/tigera-operator-747864d56d-bxvzc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:34.374474 kubelet[2722]: E0813 01:12:34.374345 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount4112633976: write /var/lib/containerd/tmpmounts/containerd-mount4112633976/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-hq29b" podUID="3c0f3b86-7d63-44df-843e-763eb95a8b94" Aug 13 01:12:34.467606 kubelet[2722]: I0813 01:12:34.467553 2722 kubelet.go:2405] "Pod admission denied" podUID="e0c5fe14-e231-4fc1-8e53-6266496f1282" pod="tigera-operator/tigera-operator-747864d56d-8r7z7" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:12:34.577879 kubelet[2722]: I0813 01:12:34.577737 2722 kubelet.go:2405] "Pod admission denied" podUID="9666b332-f5db-4843-b1f6-6429c99b56c9" pod="tigera-operator/tigera-operator-747864d56d-g4gsm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:34.669193 kubelet[2722]: I0813 01:12:34.668805 2722 kubelet.go:2405] "Pod admission denied" podUID="16174f89-ae91-495d-901c-6765a6fada65" pod="tigera-operator/tigera-operator-747864d56d-q7rjl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:34.871147 kubelet[2722]: I0813 01:12:34.870781 2722 kubelet.go:2405] "Pod admission denied" podUID="20537721-6906-4d61-9180-1f1fab318ad2" pod="tigera-operator/tigera-operator-747864d56d-qbqbc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:34.969596 kubelet[2722]: I0813 01:12:34.969558 2722 kubelet.go:2405] "Pod admission denied" podUID="32cf62d8-3eb9-49ab-8b29-58a7e689e0e9" pod="tigera-operator/tigera-operator-747864d56d-7d5vw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:35.070684 kubelet[2722]: I0813 01:12:35.070596 2722 kubelet.go:2405] "Pod admission denied" podUID="b7026110-6f41-428c-82f7-72eae4a317ba" pod="tigera-operator/tigera-operator-747864d56d-rhp2t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:35.167562 kubelet[2722]: I0813 01:12:35.167506 2722 kubelet.go:2405] "Pod admission denied" podUID="8e83c296-2122-4b71-99c6-709477105977" pod="tigera-operator/tigera-operator-747864d56d-j949s" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:35.218785 kubelet[2722]: I0813 01:12:35.218518 2722 kubelet.go:2405] "Pod admission denied" podUID="2a69bd61-c01b-4b0c-92fd-2970569b55fc" pod="tigera-operator/tigera-operator-747864d56d-8bzvl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:35.317072 kubelet[2722]: I0813 01:12:35.316963 2722 kubelet.go:2405] "Pod admission denied" podUID="157dc6e4-349f-40f6-8c59-6c4088f4e120" pod="tigera-operator/tigera-operator-747864d56d-x2xx7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:35.418840 kubelet[2722]: I0813 01:12:35.418660 2722 kubelet.go:2405] "Pod admission denied" podUID="6d0f262d-fbae-435f-a1b4-ed013959ed0b" pod="tigera-operator/tigera-operator-747864d56d-6jwh8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:35.471252 kubelet[2722]: I0813 01:12:35.471048 2722 kubelet.go:2405] "Pod admission denied" podUID="7c225d7d-fc13-4cf6-a886-64ad5dda64ba" pod="tigera-operator/tigera-operator-747864d56d-9xdzm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:35.572796 kubelet[2722]: I0813 01:12:35.572667 2722 kubelet.go:2405] "Pod admission denied" podUID="dec508bc-5d63-4693-9dcd-5e69935eab24" pod="tigera-operator/tigera-operator-747864d56d-j5nzd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:35.673413 kubelet[2722]: I0813 01:12:35.673146 2722 kubelet.go:2405] "Pod admission denied" podUID="26408e91-fa71-4cc0-9250-ec55e4b54c5d" pod="tigera-operator/tigera-operator-747864d56d-wrthz" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:12:35.723945 kubelet[2722]: I0813 01:12:35.723136 2722 kubelet.go:2405] "Pod admission denied" podUID="cc928850-f208-4ade-b7c2-dd209dd35032" pod="tigera-operator/tigera-operator-747864d56d-d58g4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:35.819962 kubelet[2722]: I0813 01:12:35.819122 2722 kubelet.go:2405] "Pod admission denied" podUID="39916c65-565d-4282-a396-e1b24a86c409" pod="tigera-operator/tigera-operator-747864d56d-8mrzr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:36.018538 kubelet[2722]: I0813 01:12:36.018498 2722 kubelet.go:2405] "Pod admission denied" podUID="2dbcdc06-9f13-41fb-9ade-d54461158257" pod="tigera-operator/tigera-operator-747864d56d-vt2tx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:36.117566 kubelet[2722]: I0813 01:12:36.117532 2722 kubelet.go:2405] "Pod admission denied" podUID="94021985-f725-462c-9da6-0c195d11dbbd" pod="tigera-operator/tigera-operator-747864d56d-9ns8j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:36.167544 kubelet[2722]: I0813 01:12:36.167505 2722 kubelet.go:2405] "Pod admission denied" podUID="13b175b7-0492-4972-a420-5b6ca97051ad" pod="tigera-operator/tigera-operator-747864d56d-s7nkk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:36.268322 kubelet[2722]: I0813 01:12:36.268277 2722 kubelet.go:2405] "Pod admission denied" podUID="f18fe371-1b04-4d70-9285-f44e2bd9f004" pod="tigera-operator/tigera-operator-747864d56d-mcs7w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:36.368914 kubelet[2722]: I0813 01:12:36.368744 2722 kubelet.go:2405] "Pod admission denied" podUID="7d6376dd-ef28-4b22-bd32-3b1fd11fa501" pod="tigera-operator/tigera-operator-747864d56d-5266z" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:36.468848 kubelet[2722]: I0813 01:12:36.468788 2722 kubelet.go:2405] "Pod admission denied" podUID="50105f37-85b5-4acb-b89b-c8365c78c4ae" pod="tigera-operator/tigera-operator-747864d56d-gblr6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:36.676596 kubelet[2722]: I0813 01:12:36.676524 2722 kubelet.go:2405] "Pod admission denied" podUID="2d9faa70-b05a-4510-9c26-3d1939fc297e" pod="tigera-operator/tigera-operator-747864d56d-gjfzq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:36.770969 kubelet[2722]: I0813 01:12:36.770371 2722 kubelet.go:2405] "Pod admission denied" podUID="7a258d7c-6021-4c50-8f5a-ef8f068c45e9" pod="tigera-operator/tigera-operator-747864d56d-gf5r4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:36.871330 kubelet[2722]: I0813 01:12:36.871286 2722 kubelet.go:2405] "Pod admission denied" podUID="2379d61d-63ae-4224-a40b-9c0da156747a" pod="tigera-operator/tigera-operator-747864d56d-wb5s2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:36.969531 kubelet[2722]: I0813 01:12:36.969222 2722 kubelet.go:2405] "Pod admission denied" podUID="d03861df-f8de-4d9b-acf8-a86010a07991" pod="tigera-operator/tigera-operator-747864d56d-pdpr4" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:12:37.019264 kubelet[2722]: I0813 01:12:37.019224 2722 kubelet.go:2405] "Pod admission denied" podUID="9f552e7d-0078-4054-a025-457d396bec4b" pod="tigera-operator/tigera-operator-747864d56d-j7np6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:37.119922 kubelet[2722]: I0813 01:12:37.119653 2722 kubelet.go:2405] "Pod admission denied" podUID="83d06ba6-f14e-4ce5-8ec8-5f6a98bfc228" pod="tigera-operator/tigera-operator-747864d56d-zt2jk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:37.319367 kubelet[2722]: I0813 01:12:37.319264 2722 kubelet.go:2405] "Pod admission denied" podUID="73d66988-ff49-43f9-9f9e-0602b665dfa0" pod="tigera-operator/tigera-operator-747864d56d-mbjxw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:37.419277 kubelet[2722]: I0813 01:12:37.419237 2722 kubelet.go:2405] "Pod admission denied" podUID="26d7591c-50c1-4437-85d6-c6e6b539bed9" pod="tigera-operator/tigera-operator-747864d56d-cm2kh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:37.519182 kubelet[2722]: I0813 01:12:37.519123 2722 kubelet.go:2405] "Pod admission denied" podUID="3bf22a69-3dee-43fe-9f36-eac0311e347e" pod="tigera-operator/tigera-operator-747864d56d-4dsds" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:37.622423 kubelet[2722]: I0813 01:12:37.621818 2722 kubelet.go:2405] "Pod admission denied" podUID="ed96abbd-4147-4a45-ad36-f9df416fe799" pod="tigera-operator/tigera-operator-747864d56d-q7xpf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:37.719802 kubelet[2722]: I0813 01:12:37.719734 2722 kubelet.go:2405] "Pod admission denied" podUID="bb0755aa-9f53-49b4-93b8-3068551e7fe8" pod="tigera-operator/tigera-operator-747864d56d-wvvxz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:37.822492 kubelet[2722]: I0813 01:12:37.822425 2722 kubelet.go:2405] "Pod admission denied" podUID="671d787e-12bc-4a1c-843d-80ca3771744f" pod="tigera-operator/tigera-operator-747864d56d-ndbgd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:37.921858 kubelet[2722]: I0813 01:12:37.921651 2722 kubelet.go:2405] "Pod admission denied" podUID="5e0b33c3-d0dd-4ccd-abd7-7572601d1eb9" pod="tigera-operator/tigera-operator-747864d56d-skhxm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:38.026449 kubelet[2722]: I0813 01:12:38.025569 2722 kubelet.go:2405] "Pod admission denied" podUID="d2bd0274-a1ed-4c37-9bdb-551213aad6fe" pod="tigera-operator/tigera-operator-747864d56d-t9fdj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:38.121172 kubelet[2722]: I0813 01:12:38.121066 2722 kubelet.go:2405] "Pod admission denied" podUID="25d872de-800f-48a8-a59b-b7abebd83545" pod="tigera-operator/tigera-operator-747864d56d-8cmg9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:38.223527 kubelet[2722]: I0813 01:12:38.223477 2722 kubelet.go:2405] "Pod admission denied" podUID="b8d2816c-0988-4796-b517-4b5c8d7d4217" pod="tigera-operator/tigera-operator-747864d56d-jlm44" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:12:38.320553 kubelet[2722]: I0813 01:12:38.320279 2722 kubelet.go:2405] "Pod admission denied" podUID="23180c1f-e924-4639-b7e8-ea794feac206" pod="tigera-operator/tigera-operator-747864d56d-xjvrt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:38.417476 kubelet[2722]: I0813 01:12:38.417446 2722 kubelet.go:2405] "Pod admission denied" podUID="473b9953-9cc6-4b18-bb3c-366ea677f0aa" pod="tigera-operator/tigera-operator-747864d56d-7lhzp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:38.520912 kubelet[2722]: I0813 01:12:38.520750 2722 kubelet.go:2405] "Pod admission denied" podUID="41b7b517-19a6-4a34-8da5-c4ba59c08515" pod="tigera-operator/tigera-operator-747864d56d-fvbt9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:38.720666 kubelet[2722]: I0813 01:12:38.720470 2722 kubelet.go:2405] "Pod admission denied" podUID="17a9b21e-1620-4000-b0a0-552da6e04747" pod="tigera-operator/tigera-operator-747864d56d-jh94h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:38.826690 kubelet[2722]: I0813 01:12:38.826474 2722 kubelet.go:2405] "Pod admission denied" podUID="381ba6ac-1eef-4ed4-a0b7-6e07dbee006d" pod="tigera-operator/tigera-operator-747864d56d-vq2sv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:38.926176 kubelet[2722]: I0813 01:12:38.926090 2722 kubelet.go:2405] "Pod admission denied" podUID="53fbb72a-ba1c-4abc-b6da-ecf5ac56f8b5" pod="tigera-operator/tigera-operator-747864d56d-rgt44" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:39.026432 kubelet[2722]: I0813 01:12:39.026360 2722 kubelet.go:2405] "Pod admission denied" podUID="09959fb3-6114-4d77-abcf-784ea0ea7670" pod="tigera-operator/tigera-operator-747864d56d-ln7vh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:39.124601 kubelet[2722]: I0813 01:12:39.124031 2722 kubelet.go:2405] "Pod admission denied" podUID="784bb80a-7bd0-4c4a-9bdc-1c14368dce29" pod="tigera-operator/tigera-operator-747864d56d-vvzhx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:39.222391 kubelet[2722]: I0813 01:12:39.222348 2722 kubelet.go:2405] "Pod admission denied" podUID="fdce34c5-86ab-4a92-bdbe-4550bdf19ba0" pod="tigera-operator/tigera-operator-747864d56d-6xvm6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:39.321936 kubelet[2722]: I0813 01:12:39.321869 2722 kubelet.go:2405] "Pod admission denied" podUID="171bfe08-4e50-4fd6-ac1f-33846ad33da2" pod="tigera-operator/tigera-operator-747864d56d-lr5t9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:39.533931 kubelet[2722]: I0813 01:12:39.533180 2722 kubelet.go:2405] "Pod admission denied" podUID="357d21b9-1d8b-4dfc-a910-961bf4f26fe5" pod="tigera-operator/tigera-operator-747864d56d-xsdbz" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:12:39.542921 kubelet[2722]: I0813 01:12:39.542752 2722 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:12:39.542921 kubelet[2722]: I0813 01:12:39.542778 2722 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:12:39.545926 kubelet[2722]: I0813 01:12:39.545719 2722 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:12:39.562919 kubelet[2722]: I0813 01:12:39.562873 2722 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:12:39.563887 kubelet[2722]: I0813 01:12:39.563861 2722 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-cddc95b58-6t6z7","kube-system/coredns-674b8bbfcf-p259x","kube-system/coredns-674b8bbfcf-fgsjn","calico-system/csi-node-driver-l7lv4","calico-system/calico-node-hq29b","calico-system/calico-typha-55bf5cd98c-8lqpc","kube-system/kube-controller-manager-172-233-214-103","kube-system/kube-proxy-tb5sq","kube-system/kube-apiserver-172-233-214-103","kube-system/kube-scheduler-172-233-214-103"] Aug 13 01:12:39.564188 kubelet[2722]: E0813 01:12:39.564168 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:12:39.564188 kubelet[2722]: E0813 01:12:39.564188 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:12:39.564253 kubelet[2722]: E0813 01:12:39.564196 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:12:39.564253 kubelet[2722]: E0813 01:12:39.564203 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:12:39.564253 kubelet[2722]: E0813 01:12:39.564220 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-hq29b" Aug 13 01:12:39.564253 kubelet[2722]: E0813 01:12:39.564247 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-55bf5cd98c-8lqpc" Aug 13 01:12:39.564335 kubelet[2722]: E0813 01:12:39.564256 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-214-103" Aug 13 01:12:39.564335 kubelet[2722]: E0813 01:12:39.564264 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-tb5sq" Aug 13 01:12:39.564335 kubelet[2722]: E0813 01:12:39.564283 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-214-103" Aug 13 01:12:39.564335 kubelet[2722]: E0813 01:12:39.564291 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-214-103" Aug 13 01:12:39.564335 kubelet[2722]: I0813 01:12:39.564300 2722 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:12:39.619005 kubelet[2722]: I0813 01:12:39.618963 2722 kubelet.go:2405] "Pod admission denied" podUID="4168a0cc-7c57-4d68-9da2-6083fdadf286" pod="tigera-operator/tigera-operator-747864d56d-lj8lk" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:12:39.723037 kubelet[2722]: I0813 01:12:39.722967 2722 kubelet.go:2405] "Pod admission denied" podUID="bee3fdcf-cf72-4b44-84bf-83556dd5b46c" pod="tigera-operator/tigera-operator-747864d56d-6n4bq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:39.920674 kubelet[2722]: I0813 01:12:39.920200 2722 kubelet.go:2405] "Pod admission denied" podUID="a6eb7c10-c674-453c-be7b-a9339f834764" pod="tigera-operator/tigera-operator-747864d56d-7wb9z" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:40.019111 kubelet[2722]: I0813 01:12:40.019054 2722 kubelet.go:2405] "Pod admission denied" podUID="b46ae66c-7354-46a2-9db3-ecd3defe1f18" pod="tigera-operator/tigera-operator-747864d56d-b49f7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:40.118996 kubelet[2722]: I0813 01:12:40.118944 2722 kubelet.go:2405] "Pod admission denied" podUID="c789926f-eead-46e5-981f-ec0ad2bd9493" pod="tigera-operator/tigera-operator-747864d56d-sc9sb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:40.326470 kubelet[2722]: I0813 01:12:40.325213 2722 kubelet.go:2405] "Pod admission denied" podUID="15f8a6fa-54a0-434f-a4a5-6a0f63d00a95" pod="tigera-operator/tigera-operator-747864d56d-8djxt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:40.417863 kubelet[2722]: I0813 01:12:40.417829 2722 kubelet.go:2405] "Pod admission denied" podUID="61ddae56-1e24-416f-b8a7-4b3d403f1d6f" pod="tigera-operator/tigera-operator-747864d56d-lpbt8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:40.521411 kubelet[2722]: I0813 01:12:40.521346 2722 kubelet.go:2405] "Pod admission denied" podUID="607880a1-a294-4f9b-8087-2eb601a6a2db" pod="tigera-operator/tigera-operator-747864d56d-4wv77" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:40.720396 kubelet[2722]: I0813 01:12:40.720356 2722 kubelet.go:2405] "Pod admission denied" podUID="92ca9865-2cf8-4786-bafc-44bd2b6d4e90" pod="tigera-operator/tigera-operator-747864d56d-lbrpn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:40.818911 kubelet[2722]: I0813 01:12:40.818868 2722 kubelet.go:2405] "Pod admission denied" podUID="74d581fd-b20d-4ea4-9d13-13c2a2d2e057" pod="tigera-operator/tigera-operator-747864d56d-jnp2v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:40.917499 kubelet[2722]: I0813 01:12:40.917470 2722 kubelet.go:2405] "Pod admission denied" podUID="0170ec7a-5421-409a-bc5e-228b27a3fb0c" pod="tigera-operator/tigera-operator-747864d56d-cbh7f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:41.023647 kubelet[2722]: I0813 01:12:41.023523 2722 kubelet.go:2405] "Pod admission denied" podUID="13a6946e-0d1f-4bb3-ade2-2632ccebbdfb" pod="tigera-operator/tigera-operator-747864d56d-vvgb2" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:12:41.057459 containerd[1550]: time="2025-08-13T01:12:41.057413422Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7lv4,Uid:6b834979-32a4-464b-9898-ef87b1042a9e,Namespace:calico-system,Attempt:0,}" Aug 13 01:12:41.104452 containerd[1550]: time="2025-08-13T01:12:41.104399161Z" level=error msg="Failed to destroy network for sandbox \"68d1ea67c7aef221718b17f2c3bf36195f3967b56ad8fbc45b3fcd3d778987bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:41.107942 containerd[1550]: time="2025-08-13T01:12:41.106871514Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7lv4,Uid:6b834979-32a4-464b-9898-ef87b1042a9e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"68d1ea67c7aef221718b17f2c3bf36195f3967b56ad8fbc45b3fcd3d778987bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:41.107450 systemd[1]: run-netns-cni\x2dbd5fd562\x2d9e81\x2d3c75\x2dd7e5\x2d7f9389e06c1f.mount: Deactivated successfully. Aug 13 01:12:41.109170 kubelet[2722]: E0813 01:12:41.107962 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68d1ea67c7aef221718b17f2c3bf36195f3967b56ad8fbc45b3fcd3d778987bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:41.109170 kubelet[2722]: E0813 01:12:41.108025 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68d1ea67c7aef221718b17f2c3bf36195f3967b56ad8fbc45b3fcd3d778987bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:12:41.109170 kubelet[2722]: E0813 01:12:41.108045 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"68d1ea67c7aef221718b17f2c3bf36195f3967b56ad8fbc45b3fcd3d778987bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:12:41.109170 kubelet[2722]: E0813 01:12:41.108099 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l7lv4_calico-system(6b834979-32a4-464b-9898-ef87b1042a9e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l7lv4_calico-system(6b834979-32a4-464b-9898-ef87b1042a9e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"68d1ea67c7aef221718b17f2c3bf36195f3967b56ad8fbc45b3fcd3d778987bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l7lv4" 
podUID="6b834979-32a4-464b-9898-ef87b1042a9e" Aug 13 01:12:41.122511 kubelet[2722]: I0813 01:12:41.122473 2722 kubelet.go:2405] "Pod admission denied" podUID="6b71e717-fa0f-45bd-95b3-ebf138c8ec17" pod="tigera-operator/tigera-operator-747864d56d-bt8nc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:41.322484 kubelet[2722]: I0813 01:12:41.322306 2722 kubelet.go:2405] "Pod admission denied" podUID="d5660a67-efc8-486c-a188-1bb8d35f81f2" pod="tigera-operator/tigera-operator-747864d56d-twqft" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:41.422088 kubelet[2722]: I0813 01:12:41.422011 2722 kubelet.go:2405] "Pod admission denied" podUID="86da9cb1-6baa-45aa-81fd-17ff1b6029de" pod="tigera-operator/tigera-operator-747864d56d-wt2kz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:41.522067 kubelet[2722]: I0813 01:12:41.522026 2722 kubelet.go:2405] "Pod admission denied" podUID="5aab8f9d-e4cb-4099-93f0-a62c0e58fdd2" pod="tigera-operator/tigera-operator-747864d56d-ntj44" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:41.719161 kubelet[2722]: I0813 01:12:41.719120 2722 kubelet.go:2405] "Pod admission denied" podUID="237a023a-c5d4-4a1f-a70b-538cfb69c230" pod="tigera-operator/tigera-operator-747864d56d-pvjxk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:41.821919 kubelet[2722]: I0813 01:12:41.821071 2722 kubelet.go:2405] "Pod admission denied" podUID="1f2576a1-614c-4925-a3f7-c2f55329342d" pod="tigera-operator/tigera-operator-747864d56d-nlvd7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:41.918690 kubelet[2722]: I0813 01:12:41.918467 2722 kubelet.go:2405] "Pod admission denied" podUID="66cf7d4d-ed2a-46c4-a676-f5a31f1c1d77" pod="tigera-operator/tigera-operator-747864d56d-wjbm5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:42.026641 kubelet[2722]: I0813 01:12:42.026519 2722 kubelet.go:2405] "Pod admission denied" podUID="ab17e890-f4a1-4c48-8c0d-805c0379c1b4" pod="tigera-operator/tigera-operator-747864d56d-hc6gd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:42.070258 kubelet[2722]: I0813 01:12:42.070225 2722 kubelet.go:2405] "Pod admission denied" podUID="e36712a9-bcbb-4c30-9d73-42bb5ec58e47" pod="tigera-operator/tigera-operator-747864d56d-6ql6x" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:42.169268 kubelet[2722]: I0813 01:12:42.169233 2722 kubelet.go:2405] "Pod admission denied" podUID="cda00a5d-8952-4f47-a015-781e7958181e" pod="tigera-operator/tigera-operator-747864d56d-xc5nw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:42.267487 kubelet[2722]: I0813 01:12:42.267441 2722 kubelet.go:2405] "Pod admission denied" podUID="ebc1e3c2-b0be-41a7-8863-42540338ebff" pod="tigera-operator/tigera-operator-747864d56d-kqj7t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:42.371630 kubelet[2722]: I0813 01:12:42.371379 2722 kubelet.go:2405] "Pod admission denied" podUID="621f2039-18f7-4fee-8d55-2807ff45f48a" pod="tigera-operator/tigera-operator-747864d56d-2vnlw" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:12:42.472421 kubelet[2722]: I0813 01:12:42.472382 2722 kubelet.go:2405] "Pod admission denied" podUID="9abb8f50-e104-4c99-a60c-f0c8b70dcc94" pod="tigera-operator/tigera-operator-747864d56d-drhnp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:42.571353 kubelet[2722]: I0813 01:12:42.571312 2722 kubelet.go:2405] "Pod admission denied" podUID="5f095f58-2bec-4318-98e8-066dfcda11c5" pod="tigera-operator/tigera-operator-747864d56d-wcl8v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:42.671120 kubelet[2722]: I0813 01:12:42.670940 2722 kubelet.go:2405] "Pod admission denied" podUID="dbbc5d86-c9cc-47da-9106-234fdf5a0c98" pod="tigera-operator/tigera-operator-747864d56d-j9zgk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:42.775347 kubelet[2722]: I0813 01:12:42.775281 2722 kubelet.go:2405] "Pod admission denied" podUID="e109c488-7f67-487e-9b26-43bb3083eac7" pod="tigera-operator/tigera-operator-747864d56d-rpljd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:42.874303 kubelet[2722]: I0813 01:12:42.873375 2722 kubelet.go:2405] "Pod admission denied" podUID="18fb517e-6b45-4c4d-929e-57d8e65a7d35" pod="tigera-operator/tigera-operator-747864d56d-wcjxx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:42.925681 kubelet[2722]: I0813 01:12:42.925490 2722 kubelet.go:2405] "Pod admission denied" podUID="ea5b298b-6510-4f9c-8a04-c81ed63da266" pod="tigera-operator/tigera-operator-747864d56d-q9dxh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:43.020234 kubelet[2722]: I0813 01:12:43.020187 2722 kubelet.go:2405] "Pod admission denied" podUID="1490ddf8-9e64-4214-a7e5-b67d1118a269" pod="tigera-operator/tigera-operator-747864d56d-7cxt4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:43.120318 kubelet[2722]: I0813 01:12:43.120266 2722 kubelet.go:2405] "Pod admission denied" podUID="ddc0c0bd-5269-4076-a70c-9d515b895cb2" pod="tigera-operator/tigera-operator-747864d56d-4vxrt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:43.222223 kubelet[2722]: I0813 01:12:43.221794 2722 kubelet.go:2405] "Pod admission denied" podUID="8ab301f9-4b35-4013-baac-054dcbd1528d" pod="tigera-operator/tigera-operator-747864d56d-6x7qg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:43.424462 kubelet[2722]: I0813 01:12:43.424411 2722 kubelet.go:2405] "Pod admission denied" podUID="14fdab50-13ce-4400-9446-dc253a7555a5" pod="tigera-operator/tigera-operator-747864d56d-kpmmb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:43.525150 kubelet[2722]: I0813 01:12:43.524077 2722 kubelet.go:2405] "Pod admission denied" podUID="f9750b27-b7d3-49ea-ac2d-e9dd84f75e51" pod="tigera-operator/tigera-operator-747864d56d-9nm99" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:43.621019 kubelet[2722]: I0813 01:12:43.620987 2722 kubelet.go:2405] "Pod admission denied" podUID="8665b90b-0c37-438a-95b6-0ba358d1709b" pod="tigera-operator/tigera-operator-747864d56d-jz8t6" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:12:43.723962 kubelet[2722]: I0813 01:12:43.723881 2722 kubelet.go:2405] "Pod admission denied" podUID="60962b12-3013-4871-8b95-b0635e95ec92" pod="tigera-operator/tigera-operator-747864d56d-8xv45" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:43.825763 kubelet[2722]: I0813 01:12:43.825390 2722 kubelet.go:2405] "Pod admission denied" podUID="04481e79-b51c-47cc-86ee-4f90ad77c405" pod="tigera-operator/tigera-operator-747864d56d-t7wwb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:43.923095 kubelet[2722]: I0813 01:12:43.922671 2722 kubelet.go:2405] "Pod admission denied" podUID="d966d2e3-5727-4210-9508-22db3dacbe30" pod="tigera-operator/tigera-operator-747864d56d-pxhq5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:43.980917 kubelet[2722]: I0813 01:12:43.980750 2722 kubelet.go:2405] "Pod admission denied" podUID="6d312053-f8ed-4ff6-80d1-4027a8b9a436" pod="tigera-operator/tigera-operator-747864d56d-8d7nm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:44.060537 kubelet[2722]: E0813 01:12:44.060341 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:12:44.061253 containerd[1550]: time="2025-08-13T01:12:44.060793450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cddc95b58-6t6z7,Uid:2dab385f-2367-4e01-8d78-2247bcba7bcc,Namespace:calico-system,Attempt:0,}" Aug 13 01:12:44.062713 containerd[1550]: time="2025-08-13T01:12:44.062384338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p259x,Uid:a5b0b8ae-a381-43cc-8adc-4e3ee01749bd,Namespace:kube-system,Attempt:0,}" Aug 13 01:12:44.091188 kubelet[2722]: I0813 01:12:44.090772 2722 kubelet.go:2405] "Pod admission denied" podUID="312212dd-bd0a-47dc-aca9-3f56157bcff1" pod="tigera-operator/tigera-operator-747864d56d-xf4vx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:44.155783 containerd[1550]: time="2025-08-13T01:12:44.155602494Z" level=error msg="Failed to destroy network for sandbox \"517a854cb96ac1c7606f5fe7c60ba2dfec6396c7c56013bcbe5f3312f9547794\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:44.157872 systemd[1]: run-netns-cni\x2d41c92cd3\x2d078b\x2de45c\x2dd83e\x2df4895959639a.mount: Deactivated successfully. 
Aug 13 01:12:44.164282 containerd[1550]: time="2025-08-13T01:12:44.163853710Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p259x,Uid:a5b0b8ae-a381-43cc-8adc-4e3ee01749bd,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"517a854cb96ac1c7606f5fe7c60ba2dfec6396c7c56013bcbe5f3312f9547794\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:44.165030 kubelet[2722]: E0813 01:12:44.164636 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"517a854cb96ac1c7606f5fe7c60ba2dfec6396c7c56013bcbe5f3312f9547794\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:44.165030 kubelet[2722]: E0813 01:12:44.164694 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"517a854cb96ac1c7606f5fe7c60ba2dfec6396c7c56013bcbe5f3312f9547794\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:12:44.165030 kubelet[2722]: E0813 01:12:44.164714 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"517a854cb96ac1c7606f5fe7c60ba2dfec6396c7c56013bcbe5f3312f9547794\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:12:44.165030 kubelet[2722]: E0813 01:12:44.164779 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-p259x_kube-system(a5b0b8ae-a381-43cc-8adc-4e3ee01749bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-p259x_kube-system(a5b0b8ae-a381-43cc-8adc-4e3ee01749bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"517a854cb96ac1c7606f5fe7c60ba2dfec6396c7c56013bcbe5f3312f9547794\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-p259x" podUID="a5b0b8ae-a381-43cc-8adc-4e3ee01749bd" Aug 13 01:12:44.168250 containerd[1550]: time="2025-08-13T01:12:44.168205549Z" level=error msg="Failed to destroy network for sandbox \"c4110ab1767c395f669a7b2bc5115b34d2e580613fe2222fdbfb75d78b69b913\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:44.170334 systemd[1]: run-netns-cni\x2d587819e2\x2d35ea\x2d2e28\x2d995c\x2d15b1a0639e3b.mount: Deactivated successfully. 
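[editor's note] Every sandbox failure in this stretch traces back to the same stat: the Calico CNI plugin expects /var/lib/calico/nodename to exist, and that file is normally written by the calico/node container, which is not running on this node. A minimal sketch that reproduces the same pre-flight check; the path comes straight from the error text above, everything else is illustrative.

#!/usr/bin/env python3
"""Minimal sketch: reproduce the stat the Calico CNI plugin performs
before setting up a pod sandbox. Path taken from the error text above."""

import os
import sys

NODENAME_FILE = "/var/lib/calico/nodename"

def calico_nodename(path=NODENAME_FILE):
    # The CNI ADD/DEL fails with exactly this condition when calico/node
    # has never started and populated the file.
    if not os.path.exists(path):
        return None
    with open(path) as fh:
        return fh.read().strip()

if __name__ == "__main__":
    name = calico_nodename()
    if name is None:
        print(f"{NODENAME_FILE} missing: check that the calico/node "
              "container is running and has mounted /var/lib/calico/")
        sys.exit(1)
    print("calico nodename:", name)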
Aug 13 01:12:44.171225 containerd[1550]: time="2025-08-13T01:12:44.171049572Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cddc95b58-6t6z7,Uid:2dab385f-2367-4e01-8d78-2247bcba7bcc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4110ab1767c395f669a7b2bc5115b34d2e580613fe2222fdbfb75d78b69b913\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:44.171371 kubelet[2722]: E0813 01:12:44.171287 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4110ab1767c395f669a7b2bc5115b34d2e580613fe2222fdbfb75d78b69b913\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:44.171371 kubelet[2722]: E0813 01:12:44.171325 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4110ab1767c395f669a7b2bc5115b34d2e580613fe2222fdbfb75d78b69b913\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:12:44.171371 kubelet[2722]: E0813 01:12:44.171343 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c4110ab1767c395f669a7b2bc5115b34d2e580613fe2222fdbfb75d78b69b913\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:12:44.171492 kubelet[2722]: E0813 01:12:44.171383 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-cddc95b58-6t6z7_calico-system(2dab385f-2367-4e01-8d78-2247bcba7bcc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-cddc95b58-6t6z7_calico-system(2dab385f-2367-4e01-8d78-2247bcba7bcc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c4110ab1767c395f669a7b2bc5115b34d2e580613fe2222fdbfb75d78b69b913\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" podUID="2dab385f-2367-4e01-8d78-2247bcba7bcc" Aug 13 01:12:44.272202 kubelet[2722]: I0813 01:12:44.272140 2722 kubelet.go:2405] "Pod admission denied" podUID="001e2264-7028-41cc-8078-dc12a1398281" pod="tigera-operator/tigera-operator-747864d56d-rlczj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:44.372129 kubelet[2722]: I0813 01:12:44.371999 2722 kubelet.go:2405] "Pod admission denied" podUID="5924065d-c116-4760-bffc-34abffc8d63b" pod="tigera-operator/tigera-operator-747864d56d-9pv62" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:12:44.470770 kubelet[2722]: I0813 01:12:44.470724 2722 kubelet.go:2405] "Pod admission denied" podUID="daef59c2-e8cc-4224-b047-87a0a1a3e90c" pod="tigera-operator/tigera-operator-747864d56d-x5cvv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:44.673431 kubelet[2722]: I0813 01:12:44.672845 2722 kubelet.go:2405] "Pod admission denied" podUID="8752252d-97b6-430c-8664-444c2cf6ff6d" pod="tigera-operator/tigera-operator-747864d56d-x9sff" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:44.775239 kubelet[2722]: I0813 01:12:44.775178 2722 kubelet.go:2405] "Pod admission denied" podUID="15d4b91e-cb2d-4fce-8ae9-7f98ea355ba1" pod="tigera-operator/tigera-operator-747864d56d-dnqn6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:44.872647 kubelet[2722]: I0813 01:12:44.872581 2722 kubelet.go:2405] "Pod admission denied" podUID="9606bf02-28cd-4c29-9c59-d64c33fba5e0" pod="tigera-operator/tigera-operator-747864d56d-4fbtz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:44.973764 kubelet[2722]: I0813 01:12:44.973378 2722 kubelet.go:2405] "Pod admission denied" podUID="e83b8bdd-01fa-4c09-b804-7b50a87511f6" pod="tigera-operator/tigera-operator-747864d56d-7jjsr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:45.057992 kubelet[2722]: E0813 01:12:45.057795 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:12:45.058602 containerd[1550]: time="2025-08-13T01:12:45.058318354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fgsjn,Uid:27718112-1bb9-402a-89c8-f4890dedf664,Namespace:kube-system,Attempt:0,}" Aug 13 01:12:45.080469 kubelet[2722]: I0813 01:12:45.080415 2722 kubelet.go:2405] "Pod admission denied" podUID="4c6598ec-1071-442d-9b85-aac1dae3b759" pod="tigera-operator/tigera-operator-747864d56d-nxgtl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:45.131818 containerd[1550]: time="2025-08-13T01:12:45.131760217Z" level=error msg="Failed to destroy network for sandbox \"227d78b17c9b51168a5dc678544e7cf3664a51bf9c0fd8b23a9eaa4acbf65fbe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:45.134562 systemd[1]: run-netns-cni\x2d61c15702\x2d58b9\x2d9980\x2dd674\x2da01b9d408d5a.mount: Deactivated successfully. 
Aug 13 01:12:45.135055 containerd[1550]: time="2025-08-13T01:12:45.134730039Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fgsjn,Uid:27718112-1bb9-402a-89c8-f4890dedf664,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"227d78b17c9b51168a5dc678544e7cf3664a51bf9c0fd8b23a9eaa4acbf65fbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:45.135346 kubelet[2722]: E0813 01:12:45.135284 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"227d78b17c9b51168a5dc678544e7cf3664a51bf9c0fd8b23a9eaa4acbf65fbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:45.135404 kubelet[2722]: E0813 01:12:45.135382 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"227d78b17c9b51168a5dc678544e7cf3664a51bf9c0fd8b23a9eaa4acbf65fbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:12:45.136353 kubelet[2722]: E0813 01:12:45.136312 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"227d78b17c9b51168a5dc678544e7cf3664a51bf9c0fd8b23a9eaa4acbf65fbe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:12:45.136468 kubelet[2722]: E0813 01:12:45.136421 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fgsjn_kube-system(27718112-1bb9-402a-89c8-f4890dedf664)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fgsjn_kube-system(27718112-1bb9-402a-89c8-f4890dedf664)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"227d78b17c9b51168a5dc678544e7cf3664a51bf9c0fd8b23a9eaa4acbf65fbe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fgsjn" podUID="27718112-1bb9-402a-89c8-f4890dedf664" Aug 13 01:12:45.173084 kubelet[2722]: I0813 01:12:45.173028 2722 kubelet.go:2405] "Pod admission denied" podUID="d6c75ff3-ba37-46b8-9381-385480537b8a" pod="tigera-operator/tigera-operator-747864d56d-2zwst" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:45.272808 kubelet[2722]: I0813 01:12:45.272666 2722 kubelet.go:2405] "Pod admission denied" podUID="39600541-cb6e-4fc9-b104-e580a217b01a" pod="tigera-operator/tigera-operator-747864d56d-xc26z" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:12:45.370617 kubelet[2722]: I0813 01:12:45.370569 2722 kubelet.go:2405] "Pod admission denied" podUID="d5536dff-8726-4d5c-848c-fbc17289782b" pod="tigera-operator/tigera-operator-747864d56d-zfhsw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:45.472921 kubelet[2722]: I0813 01:12:45.472868 2722 kubelet.go:2405] "Pod admission denied" podUID="a3492f71-ce4a-4611-8af8-78456cce9480" pod="tigera-operator/tigera-operator-747864d56d-dds7f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:45.570668 kubelet[2722]: I0813 01:12:45.570514 2722 kubelet.go:2405] "Pod admission denied" podUID="d2add2e5-b563-47fd-b2da-dea03a4ec729" pod="tigera-operator/tigera-operator-747864d56d-dnzs5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:45.674036 kubelet[2722]: I0813 01:12:45.673970 2722 kubelet.go:2405] "Pod admission denied" podUID="03475048-9f29-4f0f-aa33-d257e688da9c" pod="tigera-operator/tigera-operator-747864d56d-6krqk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:45.772945 kubelet[2722]: I0813 01:12:45.772884 2722 kubelet.go:2405] "Pod admission denied" podUID="e5c50c13-0f5f-4b00-9444-abc6e588d14e" pod="tigera-operator/tigera-operator-747864d56d-9k2fl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:45.870505 kubelet[2722]: I0813 01:12:45.870396 2722 kubelet.go:2405] "Pod admission denied" podUID="4c9b4d06-0cc3-4c02-aaf0-6ea9f1b175b6" pod="tigera-operator/tigera-operator-747864d56d-bxmjt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:46.077309 kubelet[2722]: I0813 01:12:46.077253 2722 kubelet.go:2405] "Pod admission denied" podUID="5eb578ac-1ed2-4501-a695-0b3b61ee21dc" pod="tigera-operator/tigera-operator-747864d56d-nl6bq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:46.172675 kubelet[2722]: I0813 01:12:46.172407 2722 kubelet.go:2405] "Pod admission denied" podUID="b1700b45-83eb-4265-b9e0-118ab8672860" pod="tigera-operator/tigera-operator-747864d56d-2sbsz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:46.221523 kubelet[2722]: I0813 01:12:46.221460 2722 kubelet.go:2405] "Pod admission denied" podUID="6e0f5185-c21f-4f3d-862d-33556c9b5a2d" pod="tigera-operator/tigera-operator-747864d56d-gtj2t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:46.322281 kubelet[2722]: I0813 01:12:46.322223 2722 kubelet.go:2405] "Pod admission denied" podUID="3d3a2895-d40d-4af2-a06b-69363790519d" pod="tigera-operator/tigera-operator-747864d56d-bmrqw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:46.424844 kubelet[2722]: I0813 01:12:46.424337 2722 kubelet.go:2405] "Pod admission denied" podUID="4df210cd-2584-46c7-b664-f26a475eeed5" pod="tigera-operator/tigera-operator-747864d56d-9gs4n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:46.474129 kubelet[2722]: I0813 01:12:46.474059 2722 kubelet.go:2405] "Pod admission denied" podUID="bba80915-c17e-4b20-a587-3f3a245e13d8" pod="tigera-operator/tigera-operator-747864d56d-ngdf2" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:12:46.577653 kubelet[2722]: I0813 01:12:46.576484 2722 kubelet.go:2405] "Pod admission denied" podUID="4499990e-a2e4-471d-ab49-5bf5dca9b30e" pod="tigera-operator/tigera-operator-747864d56d-tpkkk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:46.673307 kubelet[2722]: I0813 01:12:46.673237 2722 kubelet.go:2405] "Pod admission denied" podUID="5fc664de-985c-407b-99ef-1b56901eb6cf" pod="tigera-operator/tigera-operator-747864d56d-rk674" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:46.773823 kubelet[2722]: I0813 01:12:46.773754 2722 kubelet.go:2405] "Pod admission denied" podUID="31de347f-a79d-4b47-8d9b-0bb89bab6c6a" pod="tigera-operator/tigera-operator-747864d56d-svkg2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:46.875106 kubelet[2722]: I0813 01:12:46.875039 2722 kubelet.go:2405] "Pod admission denied" podUID="5673f9bb-0735-4e68-b7bf-c29357533549" pod="tigera-operator/tigera-operator-747864d56d-f54kd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:46.972444 kubelet[2722]: I0813 01:12:46.972383 2722 kubelet.go:2405] "Pod admission denied" podUID="daad80da-e368-4b45-a360-db08541ea8d7" pod="tigera-operator/tigera-operator-747864d56d-zj75r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:47.081035 kubelet[2722]: I0813 01:12:47.079564 2722 kubelet.go:2405] "Pod admission denied" podUID="496c6b8b-6eab-48b7-b4b8-b56792fc6c40" pod="tigera-operator/tigera-operator-747864d56d-4zxfv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:47.173962 kubelet[2722]: I0813 01:12:47.173871 2722 kubelet.go:2405] "Pod admission denied" podUID="6c608899-c895-4c45-9135-8c7affb2fb95" pod="tigera-operator/tigera-operator-747864d56d-trmtg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:47.273601 kubelet[2722]: I0813 01:12:47.273544 2722 kubelet.go:2405] "Pod admission denied" podUID="5b3b0754-89b2-42e3-9d3f-fe46c20ddc56" pod="tigera-operator/tigera-operator-747864d56d-jrgjf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:47.373313 kubelet[2722]: I0813 01:12:47.373195 2722 kubelet.go:2405] "Pod admission denied" podUID="536402a9-6b8f-467e-98a3-ae6c5bd1895c" pod="tigera-operator/tigera-operator-747864d56d-ms2qs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:47.473954 kubelet[2722]: I0813 01:12:47.473877 2722 kubelet.go:2405] "Pod admission denied" podUID="ce51441f-315d-425e-9247-e1af87d0a50b" pod="tigera-operator/tigera-operator-747864d56d-c5zbs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:47.577917 kubelet[2722]: I0813 01:12:47.577269 2722 kubelet.go:2405] "Pod admission denied" podUID="8a36f27c-ba93-45ba-8c92-8ab011fe8791" pod="tigera-operator/tigera-operator-747864d56d-j8hn7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:47.677911 kubelet[2722]: I0813 01:12:47.677537 2722 kubelet.go:2405] "Pod admission denied" podUID="a8cc2556-b246-4402-9a68-8123fa270862" pod="tigera-operator/tigera-operator-747864d56d-l4l2d" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:12:47.772794 kubelet[2722]: I0813 01:12:47.772750 2722 kubelet.go:2405] "Pod admission denied" podUID="726e0b8c-736d-44f7-a9d9-453798eb6739" pod="tigera-operator/tigera-operator-747864d56d-mdnb9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:47.870612 kubelet[2722]: I0813 01:12:47.870573 2722 kubelet.go:2405] "Pod admission denied" podUID="b6131896-3804-42be-882e-b3efc3008cbd" pod="tigera-operator/tigera-operator-747864d56d-kd9wx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:47.917643 kubelet[2722]: I0813 01:12:47.917613 2722 kubelet.go:2405] "Pod admission denied" podUID="fd23a70d-dbf8-4f76-8f89-737380872192" pod="tigera-operator/tigera-operator-747864d56d-jmw5s" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:48.021364 kubelet[2722]: I0813 01:12:48.021216 2722 kubelet.go:2405] "Pod admission denied" podUID="46907566-c167-483d-af64-59a13d6e1359" pod="tigera-operator/tigera-operator-747864d56d-vk642" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:48.118600 kubelet[2722]: I0813 01:12:48.118569 2722 kubelet.go:2405] "Pod admission denied" podUID="0352317d-0752-4b88-99ec-15b32645fe66" pod="tigera-operator/tigera-operator-747864d56d-6d9bx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:48.227008 kubelet[2722]: I0813 01:12:48.226937 2722 kubelet.go:2405] "Pod admission denied" podUID="4fe6559f-d578-4cb2-943f-e7863772129b" pod="tigera-operator/tigera-operator-747864d56d-q7n2k" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:48.425043 kubelet[2722]: I0813 01:12:48.424848 2722 kubelet.go:2405] "Pod admission denied" podUID="427ec801-9c9c-40f6-8634-6086234d2fe0" pod="tigera-operator/tigera-operator-747864d56d-x26bz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:48.527133 kubelet[2722]: I0813 01:12:48.527029 2722 kubelet.go:2405] "Pod admission denied" podUID="ee1bc8fc-8752-447a-866b-44e003748f04" pod="tigera-operator/tigera-operator-747864d56d-j9txz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:48.628136 kubelet[2722]: I0813 01:12:48.627730 2722 kubelet.go:2405] "Pod admission denied" podUID="7cc341bb-11a8-4ac3-8c01-31fe3e726480" pod="tigera-operator/tigera-operator-747864d56d-tx68h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:48.727505 kubelet[2722]: I0813 01:12:48.727121 2722 kubelet.go:2405] "Pod admission denied" podUID="1d8c69e5-e8c8-432e-99d2-3d38d60ca6f5" pod="tigera-operator/tigera-operator-747864d56d-r2cb5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:48.824412 kubelet[2722]: I0813 01:12:48.824343 2722 kubelet.go:2405] "Pod admission denied" podUID="439ae71c-e0e7-4aaa-9ca1-3f04382841ad" pod="tigera-operator/tigera-operator-747864d56d-86qlj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:48.924357 kubelet[2722]: I0813 01:12:48.924280 2722 kubelet.go:2405] "Pod admission denied" podUID="be57bd83-b455-4a72-9f75-1872db5253a6" pod="tigera-operator/tigera-operator-747864d56d-c2j67" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:12:49.024044 kubelet[2722]: I0813 01:12:49.023924 2722 kubelet.go:2405] "Pod admission denied" podUID="652df5a5-ccbd-4d14-aba4-06efa094328a" pod="tigera-operator/tigera-operator-747864d56d-tz6ml" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:49.058465 kubelet[2722]: E0813 01:12:49.058374 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount4112633976: write /var/lib/containerd/tmpmounts/containerd-mount4112633976/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-hq29b" podUID="3c0f3b86-7d63-44df-843e-763eb95a8b94" Aug 13 01:12:49.121700 kubelet[2722]: I0813 01:12:49.121635 2722 kubelet.go:2405] "Pod admission denied" podUID="edf1a8e4-81c4-4b74-94ae-5411f766f141" pod="tigera-operator/tigera-operator-747864d56d-hdlzq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:49.221934 kubelet[2722]: I0813 01:12:49.221875 2722 kubelet.go:2405] "Pod admission denied" podUID="ccda0606-ea63-42cd-bdb5-29058ca62097" pod="tigera-operator/tigera-operator-747864d56d-slckt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:49.319349 kubelet[2722]: I0813 01:12:49.319235 2722 kubelet.go:2405] "Pod admission denied" podUID="c5a05544-fe80-460e-8d11-0f5d2e1d6ad8" pod="tigera-operator/tigera-operator-747864d56d-nh8bl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:49.372504 kubelet[2722]: I0813 01:12:49.372190 2722 kubelet.go:2405] "Pod admission denied" podUID="4b90e343-40ec-42fb-9d96-b16146c084fe" pod="tigera-operator/tigera-operator-747864d56d-j4f2n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:49.470707 kubelet[2722]: I0813 01:12:49.470663 2722 kubelet.go:2405] "Pod admission denied" podUID="29b51217-0c1d-46ca-acf9-55318a027d8b" pod="tigera-operator/tigera-operator-747864d56d-cdjf2" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:12:49.576043 kubelet[2722]: I0813 01:12:49.575952 2722 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:12:49.576043 kubelet[2722]: I0813 01:12:49.575990 2722 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:12:49.577578 kubelet[2722]: I0813 01:12:49.577564 2722 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:12:49.585568 kubelet[2722]: I0813 01:12:49.585534 2722 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:12:49.585695 kubelet[2722]: I0813 01:12:49.585604 2722 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-cddc95b58-6t6z7","kube-system/coredns-674b8bbfcf-p259x","kube-system/coredns-674b8bbfcf-fgsjn","calico-system/calico-node-hq29b","calico-system/csi-node-driver-l7lv4","calico-system/calico-typha-55bf5cd98c-8lqpc","kube-system/kube-controller-manager-172-233-214-103","kube-system/kube-proxy-tb5sq","kube-system/kube-apiserver-172-233-214-103","kube-system/kube-scheduler-172-233-214-103"] Aug 13 01:12:49.585695 kubelet[2722]: E0813 01:12:49.585633 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:12:49.585695 kubelet[2722]: E0813 01:12:49.585650 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:12:49.585695 kubelet[2722]: E0813 01:12:49.585657 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:12:49.585695 kubelet[2722]: E0813 01:12:49.585664 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-hq29b" Aug 13 01:12:49.585695 kubelet[2722]: E0813 01:12:49.585670 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:12:49.585695 kubelet[2722]: E0813 01:12:49.585679 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-55bf5cd98c-8lqpc" Aug 13 01:12:49.585695 kubelet[2722]: E0813 01:12:49.585687 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-214-103" Aug 13 01:12:49.585695 kubelet[2722]: E0813 01:12:49.585695 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-tb5sq" Aug 13 01:12:49.585942 kubelet[2722]: E0813 01:12:49.585703 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-214-103" Aug 13 01:12:49.585942 kubelet[2722]: E0813 01:12:49.585710 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-214-103" Aug 13 01:12:49.585942 kubelet[2722]: I0813 01:12:49.585719 2722 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:12:49.670906 kubelet[2722]: I0813 01:12:49.670864 2722 kubelet.go:2405] "Pod admission denied" podUID="b5dd8815-bb06-45dd-a20e-92a3f0db12ae" pod="tigera-operator/tigera-operator-747864d56d-8mwsq" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:12:49.772612 kubelet[2722]: I0813 01:12:49.772544 2722 kubelet.go:2405] "Pod admission denied" podUID="a4c05dfd-399a-4ca4-9817-18fb20a639e3" pod="tigera-operator/tigera-operator-747864d56d-4pvlz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:49.829546 kubelet[2722]: I0813 01:12:49.828940 2722 kubelet.go:2405] "Pod admission denied" podUID="0f634d74-81a5-44a8-bbe8-4c72d65d822c" pod="tigera-operator/tigera-operator-747864d56d-pxhtn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:49.926123 kubelet[2722]: I0813 01:12:49.926080 2722 kubelet.go:2405] "Pod admission denied" podUID="6a52db1f-70d8-4bf2-86be-fb559f024d8e" pod="tigera-operator/tigera-operator-747864d56d-zwk2r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:50.124862 kubelet[2722]: I0813 01:12:50.124354 2722 kubelet.go:2405] "Pod admission denied" podUID="f5e3a461-e28a-46f6-a570-1e369b353318" pod="tigera-operator/tigera-operator-747864d56d-j5js7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:50.226023 kubelet[2722]: I0813 01:12:50.225950 2722 kubelet.go:2405] "Pod admission denied" podUID="4ee816a9-6c74-47a0-a262-0b776b262bb6" pod="tigera-operator/tigera-operator-747864d56d-n57vp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:50.323962 kubelet[2722]: I0813 01:12:50.323921 2722 kubelet.go:2405] "Pod admission denied" podUID="e236bd02-e180-41fe-a0b3-763de9346a10" pod="tigera-operator/tigera-operator-747864d56d-pncbt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:50.419885 kubelet[2722]: I0813 01:12:50.419782 2722 kubelet.go:2405] "Pod admission denied" podUID="adc50bda-e220-4a77-ad6b-8ed712037483" pod="tigera-operator/tigera-operator-747864d56d-z84hk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:50.519659 kubelet[2722]: I0813 01:12:50.519629 2722 kubelet.go:2405] "Pod admission denied" podUID="5117f57a-0c40-4a82-8839-512a6932dca0" pod="tigera-operator/tigera-operator-747864d56d-j988v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:50.619710 kubelet[2722]: I0813 01:12:50.619673 2722 kubelet.go:2405] "Pod admission denied" podUID="697a230c-23d3-4ca0-85a8-b7436571d863" pod="tigera-operator/tigera-operator-747864d56d-4l9gw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:50.718666 kubelet[2722]: I0813 01:12:50.718641 2722 kubelet.go:2405] "Pod admission denied" podUID="6a72e6a1-728f-481f-be5c-1fb1cb4af637" pod="tigera-operator/tigera-operator-747864d56d-7rnjb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:50.920173 kubelet[2722]: I0813 01:12:50.920136 2722 kubelet.go:2405] "Pod admission denied" podUID="bd9e50a2-3ffd-40c6-9c53-ecbeeb1c41fc" pod="tigera-operator/tigera-operator-747864d56d-w2487" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:51.020235 kubelet[2722]: I0813 01:12:51.020143 2722 kubelet.go:2405] "Pod admission denied" podUID="60740f61-540d-4410-a01e-2cdcffc50ec4" pod="tigera-operator/tigera-operator-747864d56d-zp62x" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:12:51.128048 kubelet[2722]: I0813 01:12:51.127988 2722 kubelet.go:2405] "Pod admission denied" podUID="76e17323-d87b-45a0-af33-0fa2c83cfb33" pod="tigera-operator/tigera-operator-747864d56d-xs2xr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:51.226375 kubelet[2722]: I0813 01:12:51.226289 2722 kubelet.go:2405] "Pod admission denied" podUID="768e5ed2-bee4-4e03-b853-9aeb63e932f0" pod="tigera-operator/tigera-operator-747864d56d-57t54" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:51.325775 kubelet[2722]: I0813 01:12:51.325215 2722 kubelet.go:2405] "Pod admission denied" podUID="e16788a4-2d51-47a3-ac8d-43a60700554b" pod="tigera-operator/tigera-operator-747864d56d-dhmhq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:51.421475 kubelet[2722]: I0813 01:12:51.421437 2722 kubelet.go:2405] "Pod admission denied" podUID="6327d587-7b33-4d47-a802-8a20bc25fb35" pod="tigera-operator/tigera-operator-747864d56d-nx95j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:51.521730 kubelet[2722]: I0813 01:12:51.521684 2722 kubelet.go:2405] "Pod admission denied" podUID="c087bc47-4d94-4052-a453-9c125ac22418" pod="tigera-operator/tigera-operator-747864d56d-qpzfj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:51.622349 kubelet[2722]: I0813 01:12:51.622226 2722 kubelet.go:2405] "Pod admission denied" podUID="936ed1f2-5221-4482-ade0-a79e759725d1" pod="tigera-operator/tigera-operator-747864d56d-xsmlb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:51.720795 kubelet[2722]: I0813 01:12:51.720753 2722 kubelet.go:2405] "Pod admission denied" podUID="06d30c07-7a1d-4d3d-89dd-8dfce49224e6" pod="tigera-operator/tigera-operator-747864d56d-6zxlb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:51.820261 kubelet[2722]: I0813 01:12:51.820223 2722 kubelet.go:2405] "Pod admission denied" podUID="3239b766-6336-4800-9d68-334a788b97e2" pod="tigera-operator/tigera-operator-747864d56d-42z8p" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:51.921347 kubelet[2722]: I0813 01:12:51.921143 2722 kubelet.go:2405] "Pod admission denied" podUID="848ebb90-0593-4339-beb2-48a8c9339700" pod="tigera-operator/tigera-operator-747864d56d-8pv7r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:52.058952 containerd[1550]: time="2025-08-13T01:12:52.058882642Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7lv4,Uid:6b834979-32a4-464b-9898-ef87b1042a9e,Namespace:calico-system,Attempt:0,}" Aug 13 01:12:52.102557 containerd[1550]: time="2025-08-13T01:12:52.102026388Z" level=error msg="Failed to destroy network for sandbox \"0b486e5f4dc88e03da1b3a8dd52f1b2bd81f2b67611d2ac4dd91162ad728a0f3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:52.104498 systemd[1]: run-netns-cni\x2d74702675\x2d1fbe\x2d9126\x2d23bb\x2df770781eda61.mount: Deactivated successfully. 
Aug 13 01:12:52.106014 containerd[1550]: time="2025-08-13T01:12:52.105935173Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7lv4,Uid:6b834979-32a4-464b-9898-ef87b1042a9e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b486e5f4dc88e03da1b3a8dd52f1b2bd81f2b67611d2ac4dd91162ad728a0f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:52.106434 kubelet[2722]: E0813 01:12:52.106398 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b486e5f4dc88e03da1b3a8dd52f1b2bd81f2b67611d2ac4dd91162ad728a0f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:52.106523 kubelet[2722]: E0813 01:12:52.106480 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b486e5f4dc88e03da1b3a8dd52f1b2bd81f2b67611d2ac4dd91162ad728a0f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:12:52.106523 kubelet[2722]: E0813 01:12:52.106504 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0b486e5f4dc88e03da1b3a8dd52f1b2bd81f2b67611d2ac4dd91162ad728a0f3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:12:52.106634 kubelet[2722]: E0813 01:12:52.106565 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l7lv4_calico-system(6b834979-32a4-464b-9898-ef87b1042a9e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l7lv4_calico-system(6b834979-32a4-464b-9898-ef87b1042a9e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0b486e5f4dc88e03da1b3a8dd52f1b2bd81f2b67611d2ac4dd91162ad728a0f3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l7lv4" podUID="6b834979-32a4-464b-9898-ef87b1042a9e" Aug 13 01:12:52.129216 kubelet[2722]: I0813 01:12:52.128401 2722 kubelet.go:2405] "Pod admission denied" podUID="c2c51f69-c678-45c5-86f9-0b9735fca9c9" pod="tigera-operator/tigera-operator-747864d56d-2klhf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:52.220299 kubelet[2722]: I0813 01:12:52.220257 2722 kubelet.go:2405] "Pod admission denied" podUID="3c5a4cc8-1c7b-4506-9e8c-92a1c25d6ecf" pod="tigera-operator/tigera-operator-747864d56d-h9p6j" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:12:52.321258 kubelet[2722]: I0813 01:12:52.321212 2722 kubelet.go:2405] "Pod admission denied" podUID="50ffb49a-e15e-49ea-a600-2a18394c720b" pod="tigera-operator/tigera-operator-747864d56d-b8kr8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:52.419781 kubelet[2722]: I0813 01:12:52.419746 2722 kubelet.go:2405] "Pod admission denied" podUID="c34f6091-321d-4d53-b15f-fc56dc195db3" pod="tigera-operator/tigera-operator-747864d56d-mn4p4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:52.530154 kubelet[2722]: I0813 01:12:52.528923 2722 kubelet.go:2405] "Pod admission denied" podUID="05d4296d-3755-4704-af4d-df2adb32f3c1" pod="tigera-operator/tigera-operator-747864d56d-2gsgt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:52.622006 kubelet[2722]: I0813 01:12:52.621954 2722 kubelet.go:2405] "Pod admission denied" podUID="2d4244ca-3108-46bb-9af9-ace5270209c8" pod="tigera-operator/tigera-operator-747864d56d-rnxtq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:52.672843 kubelet[2722]: I0813 01:12:52.672785 2722 kubelet.go:2405] "Pod admission denied" podUID="f9daa514-f5c8-4575-b344-587b63c1cddb" pod="tigera-operator/tigera-operator-747864d56d-f85hb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:52.772436 kubelet[2722]: I0813 01:12:52.772385 2722 kubelet.go:2405] "Pod admission denied" podUID="19d39abc-b54e-4ef5-b625-c6551d675262" pod="tigera-operator/tigera-operator-747864d56d-8lbx7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:52.879288 kubelet[2722]: I0813 01:12:52.879115 2722 kubelet.go:2405] "Pod admission denied" podUID="789dda4c-0f9e-4040-befa-29bfff93caff" pod="tigera-operator/tigera-operator-747864d56d-5hgvw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:52.970411 kubelet[2722]: I0813 01:12:52.970356 2722 kubelet.go:2405] "Pod admission denied" podUID="519a6690-018a-49d3-83c1-12e24ef40987" pod="tigera-operator/tigera-operator-747864d56d-pn95r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:53.175913 kubelet[2722]: I0813 01:12:53.175028 2722 kubelet.go:2405] "Pod admission denied" podUID="75f34ee9-33ff-418a-b511-a3d71e3648de" pod="tigera-operator/tigera-operator-747864d56d-79wtq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:53.274277 kubelet[2722]: I0813 01:12:53.274222 2722 kubelet.go:2405] "Pod admission denied" podUID="bf8608bb-1116-4f49-bdd2-45106c51763c" pod="tigera-operator/tigera-operator-747864d56d-c7f9x" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:53.383920 kubelet[2722]: I0813 01:12:53.383214 2722 kubelet.go:2405] "Pod admission denied" podUID="f22afb46-8d93-435d-9c0e-a9c05a46c3b9" pod="tigera-operator/tigera-operator-747864d56d-rmn49" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:53.473400 kubelet[2722]: I0813 01:12:53.473265 2722 kubelet.go:2405] "Pod admission denied" podUID="11f32501-9ccf-4838-aeb0-d40b3dab7ef8" pod="tigera-operator/tigera-operator-747864d56d-7dqt4" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:12:53.571663 kubelet[2722]: I0813 01:12:53.571621 2722 kubelet.go:2405] "Pod admission denied" podUID="23769184-e0aa-4df0-b625-ebc6bfdbb616" pod="tigera-operator/tigera-operator-747864d56d-dz4p5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:53.672534 kubelet[2722]: I0813 01:12:53.672498 2722 kubelet.go:2405] "Pod admission denied" podUID="39336d36-08b0-4f61-8631-9e94a999fd39" pod="tigera-operator/tigera-operator-747864d56d-7wjwq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:53.779372 kubelet[2722]: I0813 01:12:53.778065 2722 kubelet.go:2405] "Pod admission denied" podUID="f44a41e9-e1cf-4987-b07c-d94f5c2c63c7" pod="tigera-operator/tigera-operator-747864d56d-98n8r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:53.983177 kubelet[2722]: I0813 01:12:53.983091 2722 kubelet.go:2405] "Pod admission denied" podUID="95a5ad49-8357-46e2-b5e9-d3fe085cc1ce" pod="tigera-operator/tigera-operator-747864d56d-8c7st" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:54.075789 kubelet[2722]: I0813 01:12:54.075575 2722 kubelet.go:2405] "Pod admission denied" podUID="f87f18d6-74b3-4008-843c-461395c77d47" pod="tigera-operator/tigera-operator-747864d56d-jqzch" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:54.176553 kubelet[2722]: I0813 01:12:54.176326 2722 kubelet.go:2405] "Pod admission denied" podUID="00ba4f0f-9494-48e7-b78a-8ca987c9019d" pod="tigera-operator/tigera-operator-747864d56d-2tt4j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:54.271650 kubelet[2722]: I0813 01:12:54.271603 2722 kubelet.go:2405] "Pod admission denied" podUID="a1b28c6e-8b11-40d6-ab98-31bd9a3bf30a" pod="tigera-operator/tigera-operator-747864d56d-l6hjl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:54.371797 kubelet[2722]: I0813 01:12:54.371689 2722 kubelet.go:2405] "Pod admission denied" podUID="ce1aec53-5f4d-4214-8231-76afababc4b9" pod="tigera-operator/tigera-operator-747864d56d-wz7gj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:54.486843 kubelet[2722]: I0813 01:12:54.486333 2722 kubelet.go:2405] "Pod admission denied" podUID="8f91da6e-b148-4ac2-861f-2e0863177746" pod="tigera-operator/tigera-operator-747864d56d-mvg6t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:54.580241 kubelet[2722]: I0813 01:12:54.579885 2722 kubelet.go:2405] "Pod admission denied" podUID="6a8c0593-641e-4b0b-9c79-68fa3cab86f6" pod="tigera-operator/tigera-operator-747864d56d-78z2g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:54.674982 kubelet[2722]: I0813 01:12:54.674474 2722 kubelet.go:2405] "Pod admission denied" podUID="eacf6e2b-dfa8-4b3d-a72f-c235588b56f3" pod="tigera-operator/tigera-operator-747864d56d-6p2r9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:54.772963 kubelet[2722]: I0813 01:12:54.772917 2722 kubelet.go:2405] "Pod admission denied" podUID="6129893f-ea92-4edf-b3ec-f259b67002f8" pod="tigera-operator/tigera-operator-747864d56d-jhb7c" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:12:54.983299 kubelet[2722]: I0813 01:12:54.982887 2722 kubelet.go:2405] "Pod admission denied" podUID="9253615b-7aa9-4516-ab25-b51f23f1df8c" pod="tigera-operator/tigera-operator-747864d56d-rn66f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:55.080178 kubelet[2722]: I0813 01:12:55.080108 2722 kubelet.go:2405] "Pod admission denied" podUID="32376d90-5dcb-493b-a2a0-940e07bbe004" pod="tigera-operator/tigera-operator-747864d56d-thsf2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:55.124169 kubelet[2722]: I0813 01:12:55.123583 2722 kubelet.go:2405] "Pod admission denied" podUID="c1517220-b9ce-43ae-8917-82c850af1b7b" pod="tigera-operator/tigera-operator-747864d56d-kzmms" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:55.222974 kubelet[2722]: I0813 01:12:55.222925 2722 kubelet.go:2405] "Pod admission denied" podUID="0b1e8d24-de90-4e3f-a2ef-e050ee5164c4" pod="tigera-operator/tigera-operator-747864d56d-pk82r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:55.423679 kubelet[2722]: I0813 01:12:55.423403 2722 kubelet.go:2405] "Pod admission denied" podUID="10fe52fe-7e09-4473-aa0d-03189f8ebd1b" pod="tigera-operator/tigera-operator-747864d56d-4jkhp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:55.532925 kubelet[2722]: I0813 01:12:55.532650 2722 kubelet.go:2405] "Pod admission denied" podUID="f646595d-536d-466d-a41f-bea4e90fe883" pod="tigera-operator/tigera-operator-747864d56d-mxlps" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:55.623719 kubelet[2722]: I0813 01:12:55.623679 2722 kubelet.go:2405] "Pod admission denied" podUID="04df46a3-e028-4c19-ae2a-374891f5bb66" pod="tigera-operator/tigera-operator-747864d56d-c8bkf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:55.724222 kubelet[2722]: I0813 01:12:55.724176 2722 kubelet.go:2405] "Pod admission denied" podUID="807ae52a-bc0e-4162-a027-73d8c5a64814" pod="tigera-operator/tigera-operator-747864d56d-dnwk4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:55.820771 kubelet[2722]: I0813 01:12:55.820725 2722 kubelet.go:2405] "Pod admission denied" podUID="444479df-4444-493e-a9e1-6ea73a639dd6" pod="tigera-operator/tigera-operator-747864d56d-ln2mp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:55.934934 kubelet[2722]: I0813 01:12:55.934415 2722 kubelet.go:2405] "Pod admission denied" podUID="d05a753e-21eb-4da1-94f4-dcaa0372f22e" pod="tigera-operator/tigera-operator-747864d56d-tjqft" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:56.024501 kubelet[2722]: I0813 01:12:56.024361 2722 kubelet.go:2405] "Pod admission denied" podUID="f52ef9e5-6965-49b2-8774-43e203d22ae0" pod="tigera-operator/tigera-operator-747864d56d-h7z68" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:56.122607 kubelet[2722]: I0813 01:12:56.122554 2722 kubelet.go:2405] "Pod admission denied" podUID="dbfb83a7-389a-4a20-a8d3-10f4b55a9f7c" pod="tigera-operator/tigera-operator-747864d56d-tc848" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:12:56.224937 kubelet[2722]: I0813 01:12:56.224499 2722 kubelet.go:2405] "Pod admission denied" podUID="96f7f5f3-cbf2-43e4-8574-a5c69d5d55ee" pod="tigera-operator/tigera-operator-747864d56d-2hkr9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:56.431589 kubelet[2722]: I0813 01:12:56.431550 2722 kubelet.go:2405] "Pod admission denied" podUID="e7fb7f28-dbde-4b85-8eb8-2aa84fd65809" pod="tigera-operator/tigera-operator-747864d56d-4bxkt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:56.529701 kubelet[2722]: I0813 01:12:56.529644 2722 kubelet.go:2405] "Pod admission denied" podUID="35a425a6-3116-4d7f-b16d-de3e7cf13e70" pod="tigera-operator/tigera-operator-747864d56d-h9n7z" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:56.575149 kubelet[2722]: I0813 01:12:56.575074 2722 kubelet.go:2405] "Pod admission denied" podUID="be5f532b-aae2-4b73-849a-41d0fbb82b8c" pod="tigera-operator/tigera-operator-747864d56d-wlhqr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:56.677540 kubelet[2722]: I0813 01:12:56.677385 2722 kubelet.go:2405] "Pod admission denied" podUID="ae9093b9-5f6d-4c2c-b504-a542fad5b426" pod="tigera-operator/tigera-operator-747864d56d-z7brs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:56.773338 kubelet[2722]: I0813 01:12:56.773210 2722 kubelet.go:2405] "Pod admission denied" podUID="68635c60-8b45-4f40-8f69-c601c2b567a2" pod="tigera-operator/tigera-operator-747864d56d-pm4tq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:56.872355 kubelet[2722]: I0813 01:12:56.872313 2722 kubelet.go:2405] "Pod admission denied" podUID="28579af0-cb23-406f-a848-4fb89ae96de2" pod="tigera-operator/tigera-operator-747864d56d-6chdn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:56.972122 kubelet[2722]: I0813 01:12:56.972075 2722 kubelet.go:2405] "Pod admission denied" podUID="2c6a5aaf-f507-413a-aa57-993f51c63dae" pod="tigera-operator/tigera-operator-747864d56d-8jkqd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:57.057879 kubelet[2722]: E0813 01:12:57.057564 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:12:57.058820 containerd[1550]: time="2025-08-13T01:12:57.058791362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fgsjn,Uid:27718112-1bb9-402a-89c8-f4890dedf664,Namespace:kube-system,Attempt:0,}" Aug 13 01:12:57.081233 kubelet[2722]: I0813 01:12:57.081172 2722 kubelet.go:2405] "Pod admission denied" podUID="4f4042a7-4177-400c-b981-733395a76b22" pod="tigera-operator/tigera-operator-747864d56d-6z7z6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:57.114403 containerd[1550]: time="2025-08-13T01:12:57.114366873Z" level=error msg="Failed to destroy network for sandbox \"5335aa26eab5586063409de95bdcd9336276f73c8f9e2f758c68dafa63299a8a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:57.117526 systemd[1]: run-netns-cni\x2d05933b48\x2d8a39\x2d6b84\x2d7caf\x2dfd9142a9d1f1.mount: Deactivated successfully. 
Aug 13 01:12:57.119110 containerd[1550]: time="2025-08-13T01:12:57.119003178Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fgsjn,Uid:27718112-1bb9-402a-89c8-f4890dedf664,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5335aa26eab5586063409de95bdcd9336276f73c8f9e2f758c68dafa63299a8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:57.119426 kubelet[2722]: E0813 01:12:57.119385 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5335aa26eab5586063409de95bdcd9336276f73c8f9e2f758c68dafa63299a8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:57.119493 kubelet[2722]: E0813 01:12:57.119460 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5335aa26eab5586063409de95bdcd9336276f73c8f9e2f758c68dafa63299a8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:12:57.119493 kubelet[2722]: E0813 01:12:57.119483 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5335aa26eab5586063409de95bdcd9336276f73c8f9e2f758c68dafa63299a8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:12:57.119578 kubelet[2722]: E0813 01:12:57.119554 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fgsjn_kube-system(27718112-1bb9-402a-89c8-f4890dedf664)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fgsjn_kube-system(27718112-1bb9-402a-89c8-f4890dedf664)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5335aa26eab5586063409de95bdcd9336276f73c8f9e2f758c68dafa63299a8a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fgsjn" podUID="27718112-1bb9-402a-89c8-f4890dedf664" Aug 13 01:12:57.279308 kubelet[2722]: I0813 01:12:57.279234 2722 kubelet.go:2405] "Pod admission denied" podUID="c63156c9-5b79-4cbd-b0b5-68f0c8ce3c94" pod="tigera-operator/tigera-operator-747864d56d-l2tmr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:57.374334 kubelet[2722]: I0813 01:12:57.374212 2722 kubelet.go:2405] "Pod admission denied" podUID="bdaf0ba7-6984-4a6d-96e4-300e24f98ff5" pod="tigera-operator/tigera-operator-747864d56d-dvkw7" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:12:57.477434 kubelet[2722]: I0813 01:12:57.477385 2722 kubelet.go:2405] "Pod admission denied" podUID="30b464b8-2bcb-4c25-8d6d-9c74293f34f9" pod="tigera-operator/tigera-operator-747864d56d-5qbvd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:57.674274 kubelet[2722]: I0813 01:12:57.674092 2722 kubelet.go:2405] "Pod admission denied" podUID="646516f2-b280-4b51-8527-0452b243460b" pod="tigera-operator/tigera-operator-747864d56d-lsrpc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:57.774151 kubelet[2722]: I0813 01:12:57.774090 2722 kubelet.go:2405] "Pod admission denied" podUID="5e23eae1-1efb-41d2-87a2-2172eefc5185" pod="tigera-operator/tigera-operator-747864d56d-c2xn8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:57.874388 kubelet[2722]: I0813 01:12:57.874061 2722 kubelet.go:2405] "Pod admission denied" podUID="58a14777-3801-4bb9-8b59-47c591545d41" pod="tigera-operator/tigera-operator-747864d56d-kpv8g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:57.988807 kubelet[2722]: I0813 01:12:57.988153 2722 kubelet.go:2405] "Pod admission denied" podUID="2fd65ecd-09f8-4ec7-8963-fc7bf4fb08d6" pod="tigera-operator/tigera-operator-747864d56d-w6hs6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:58.074937 kubelet[2722]: I0813 01:12:58.074853 2722 kubelet.go:2405] "Pod admission denied" podUID="82bb8b15-cd99-45a9-8478-90efc998d489" pod="tigera-operator/tigera-operator-747864d56d-gh9nv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:58.172367 kubelet[2722]: I0813 01:12:58.172304 2722 kubelet.go:2405] "Pod admission denied" podUID="d13d1690-8344-4eb5-b75b-1458f10f43bc" pod="tigera-operator/tigera-operator-747864d56d-tdxhc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:58.275343 kubelet[2722]: I0813 01:12:58.275201 2722 kubelet.go:2405] "Pod admission denied" podUID="cc84f149-3dfd-4b2a-8783-36e3605c4978" pod="tigera-operator/tigera-operator-747864d56d-9jh9j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:58.374632 kubelet[2722]: I0813 01:12:58.374569 2722 kubelet.go:2405] "Pod admission denied" podUID="ba914bf8-d99e-43d6-a87d-e5d5cef62deb" pod="tigera-operator/tigera-operator-747864d56d-q8sj6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:58.480245 kubelet[2722]: I0813 01:12:58.480180 2722 kubelet.go:2405] "Pod admission denied" podUID="1597d999-df26-40bf-83ea-6333c244a0ba" pod="tigera-operator/tigera-operator-747864d56d-fxcc6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:58.576697 kubelet[2722]: I0813 01:12:58.576542 2722 kubelet.go:2405] "Pod admission denied" podUID="e9213369-fbee-435c-a84c-dd26bb861916" pod="tigera-operator/tigera-operator-747864d56d-zqtmk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:58.681916 kubelet[2722]: I0813 01:12:58.681489 2722 kubelet.go:2405] "Pod admission denied" podUID="2d12b9ab-6c35-4373-9537-967fbdfbb807" pod="tigera-operator/tigera-operator-747864d56d-9lgt8" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:12:58.777705 kubelet[2722]: I0813 01:12:58.777655 2722 kubelet.go:2405] "Pod admission denied" podUID="5a697f99-7cb2-48bd-982d-808a2a0a820d" pod="tigera-operator/tigera-operator-747864d56d-cpchs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:58.875298 kubelet[2722]: I0813 01:12:58.875161 2722 kubelet.go:2405] "Pod admission denied" podUID="2e92012a-53d1-4d95-a4ae-69e199b444d1" pod="tigera-operator/tigera-operator-747864d56d-pst4n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:58.975919 kubelet[2722]: I0813 01:12:58.975724 2722 kubelet.go:2405] "Pod admission denied" podUID="d0afe623-c47e-41ed-a44f-9e15dc733cb4" pod="tigera-operator/tigera-operator-747864d56d-jphdn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:59.042069 kubelet[2722]: I0813 01:12:59.042039 2722 kubelet.go:2405] "Pod admission denied" podUID="c230a7f1-858d-496e-8873-50c001861637" pod="tigera-operator/tigera-operator-747864d56d-4jrqs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:59.058009 kubelet[2722]: E0813 01:12:59.057784 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:12:59.058433 containerd[1550]: time="2025-08-13T01:12:59.058392887Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p259x,Uid:a5b0b8ae-a381-43cc-8adc-4e3ee01749bd,Namespace:kube-system,Attempt:0,}" Aug 13 01:12:59.136277 containerd[1550]: time="2025-08-13T01:12:59.133541883Z" level=error msg="Failed to destroy network for sandbox \"42972a64ba4aebdeafb279a77c4b31e6e8cef14ebd86f5d06b6f0da94aa16b8a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:59.137553 systemd[1]: run-netns-cni\x2dbea2b1cc\x2de2dc\x2dfb57\x2da6fd\x2d1b2867b24947.mount: Deactivated successfully. Aug 13 01:12:59.139845 kubelet[2722]: I0813 01:12:59.138615 2722 kubelet.go:2405] "Pod admission denied" podUID="e222a05a-6448-40d3-bf1c-c02f48d4fba1" pod="tigera-operator/tigera-operator-747864d56d-m54vg" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:12:59.140749 containerd[1550]: time="2025-08-13T01:12:59.140684844Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p259x,Uid:a5b0b8ae-a381-43cc-8adc-4e3ee01749bd,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"42972a64ba4aebdeafb279a77c4b31e6e8cef14ebd86f5d06b6f0da94aa16b8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:59.141615 kubelet[2722]: E0813 01:12:59.141583 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42972a64ba4aebdeafb279a77c4b31e6e8cef14ebd86f5d06b6f0da94aa16b8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:12:59.141762 kubelet[2722]: E0813 01:12:59.141712 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42972a64ba4aebdeafb279a77c4b31e6e8cef14ebd86f5d06b6f0da94aa16b8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:12:59.141762 kubelet[2722]: E0813 01:12:59.141749 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"42972a64ba4aebdeafb279a77c4b31e6e8cef14ebd86f5d06b6f0da94aa16b8a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:12:59.141934 kubelet[2722]: E0813 01:12:59.141816 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-p259x_kube-system(a5b0b8ae-a381-43cc-8adc-4e3ee01749bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-p259x_kube-system(a5b0b8ae-a381-43cc-8adc-4e3ee01749bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"42972a64ba4aebdeafb279a77c4b31e6e8cef14ebd86f5d06b6f0da94aa16b8a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-p259x" podUID="a5b0b8ae-a381-43cc-8adc-4e3ee01749bd" Aug 13 01:12:59.226305 kubelet[2722]: I0813 01:12:59.226252 2722 kubelet.go:2405] "Pod admission denied" podUID="ecdcdb74-df3e-4d57-9ef1-c7af48e21e53" pod="tigera-operator/tigera-operator-747864d56d-dl2mk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:59.274977 kubelet[2722]: I0813 01:12:59.274919 2722 kubelet.go:2405] "Pod admission denied" podUID="a8218095-4b4e-47e3-b73a-8c49457bff51" pod="tigera-operator/tigera-operator-747864d56d-pwvm7" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:12:59.387993 kubelet[2722]: I0813 01:12:59.387599 2722 kubelet.go:2405] "Pod admission denied" podUID="62b4c8c6-7ce6-430a-988e-1b6f5c4c5103" pod="tigera-operator/tigera-operator-747864d56d-2b9jl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:59.473433 kubelet[2722]: I0813 01:12:59.473376 2722 kubelet.go:2405] "Pod admission denied" podUID="74d88a96-1209-4498-922d-7bcaaf26d131" pod="tigera-operator/tigera-operator-747864d56d-ctlqq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:59.575064 kubelet[2722]: I0813 01:12:59.575004 2722 kubelet.go:2405] "Pod admission denied" podUID="034f261c-5062-4da7-a4d0-ff4edc2a3b6b" pod="tigera-operator/tigera-operator-747864d56d-ffvdh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:59.605385 kubelet[2722]: I0813 01:12:59.605350 2722 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:12:59.605620 kubelet[2722]: I0813 01:12:59.605514 2722 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:12:59.608237 kubelet[2722]: I0813 01:12:59.608211 2722 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:12:59.623992 kubelet[2722]: I0813 01:12:59.623965 2722 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:12:59.624090 kubelet[2722]: I0813 01:12:59.624061 2722 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-674b8bbfcf-p259x","kube-system/coredns-674b8bbfcf-fgsjn","calico-system/calico-kube-controllers-cddc95b58-6t6z7","calico-system/calico-node-hq29b","calico-system/csi-node-driver-l7lv4","calico-system/calico-typha-55bf5cd98c-8lqpc","kube-system/kube-controller-manager-172-233-214-103","kube-system/kube-proxy-tb5sq","kube-system/kube-apiserver-172-233-214-103","kube-system/kube-scheduler-172-233-214-103"] Aug 13 01:12:59.624167 kubelet[2722]: E0813 01:12:59.624107 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:12:59.624167 kubelet[2722]: E0813 01:12:59.624120 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:12:59.624167 kubelet[2722]: E0813 01:12:59.624127 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:12:59.624167 kubelet[2722]: E0813 01:12:59.624134 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-hq29b" Aug 13 01:12:59.624167 kubelet[2722]: E0813 01:12:59.624141 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:12:59.624167 kubelet[2722]: E0813 01:12:59.624153 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-55bf5cd98c-8lqpc" Aug 13 01:12:59.624167 kubelet[2722]: E0813 01:12:59.624163 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-214-103" Aug 13 01:12:59.624167 kubelet[2722]: E0813 01:12:59.624171 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-tb5sq" Aug 13 01:12:59.624332 kubelet[2722]: E0813 01:12:59.624182 2722 eviction_manager.go:610] 
"Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-214-103" Aug 13 01:12:59.624332 kubelet[2722]: E0813 01:12:59.624192 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-214-103" Aug 13 01:12:59.624332 kubelet[2722]: I0813 01:12:59.624204 2722 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:12:59.778798 kubelet[2722]: I0813 01:12:59.778271 2722 kubelet.go:2405] "Pod admission denied" podUID="f211aefe-671c-42f7-bed4-fbb422e0d204" pod="tigera-operator/tigera-operator-747864d56d-hhv8h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:59.874004 kubelet[2722]: I0813 01:12:59.873957 2722 kubelet.go:2405] "Pod admission denied" podUID="80d08770-f225-4fad-b831-8e64dabe67f6" pod="tigera-operator/tigera-operator-747864d56d-ql47j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:12:59.978381 kubelet[2722]: I0813 01:12:59.978304 2722 kubelet.go:2405] "Pod admission denied" podUID="4012fe52-ee15-45a3-ae8f-2a4f3351d12d" pod="tigera-operator/tigera-operator-747864d56d-msjb4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:00.060218 containerd[1550]: time="2025-08-13T01:13:00.059028899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cddc95b58-6t6z7,Uid:2dab385f-2367-4e01-8d78-2247bcba7bcc,Namespace:calico-system,Attempt:0,}" Aug 13 01:13:00.060218 containerd[1550]: time="2025-08-13T01:13:00.059826871Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 01:13:00.104921 kubelet[2722]: I0813 01:13:00.103008 2722 kubelet.go:2405] "Pod admission denied" podUID="0e8b24f8-6e0d-4b41-9658-a974498e6372" pod="tigera-operator/tigera-operator-747864d56d-6z5ls" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:00.146881 kubelet[2722]: I0813 01:13:00.146841 2722 kubelet.go:2405] "Pod admission denied" podUID="2dabe940-f721-478d-820a-db76c3111088" pod="tigera-operator/tigera-operator-747864d56d-fvwnx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:00.165122 containerd[1550]: time="2025-08-13T01:13:00.165043393Z" level=error msg="Failed to destroy network for sandbox \"9faafeee22b511b49fe613f973577fa377075e81ca721dde2a9ffe75c5b6bb45\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:00.168692 systemd[1]: run-netns-cni\x2d78b78180\x2d0d39\x2db85b\x2d0580\x2d9de8b7a2995f.mount: Deactivated successfully. 
Aug 13 01:13:00.169748 containerd[1550]: time="2025-08-13T01:13:00.169401936Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cddc95b58-6t6z7,Uid:2dab385f-2367-4e01-8d78-2247bcba7bcc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9faafeee22b511b49fe613f973577fa377075e81ca721dde2a9ffe75c5b6bb45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:00.171519 kubelet[2722]: E0813 01:13:00.171495 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9faafeee22b511b49fe613f973577fa377075e81ca721dde2a9ffe75c5b6bb45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:00.171761 kubelet[2722]: E0813 01:13:00.171744 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9faafeee22b511b49fe613f973577fa377075e81ca721dde2a9ffe75c5b6bb45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:13:00.171944 kubelet[2722]: E0813 01:13:00.171924 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9faafeee22b511b49fe613f973577fa377075e81ca721dde2a9ffe75c5b6bb45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:13:00.172231 kubelet[2722]: E0813 01:13:00.172127 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-cddc95b58-6t6z7_calico-system(2dab385f-2367-4e01-8d78-2247bcba7bcc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-cddc95b58-6t6z7_calico-system(2dab385f-2367-4e01-8d78-2247bcba7bcc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9faafeee22b511b49fe613f973577fa377075e81ca721dde2a9ffe75c5b6bb45\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" podUID="2dab385f-2367-4e01-8d78-2247bcba7bcc" Aug 13 01:13:00.226237 kubelet[2722]: I0813 01:13:00.226184 2722 kubelet.go:2405] "Pod admission denied" podUID="a072103a-a369-4b51-a0e9-d97e33d2c714" pod="tigera-operator/tigera-operator-747864d56d-6jgzg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:00.324671 kubelet[2722]: I0813 01:13:00.324519 2722 kubelet.go:2405] "Pod admission denied" podUID="0a587de3-3a3b-4dfc-8fd8-5d82d59c2540" pod="tigera-operator/tigera-operator-747864d56d-m4v9g" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:00.428645 kubelet[2722]: I0813 01:13:00.428586 2722 kubelet.go:2405] "Pod admission denied" podUID="f0fb935a-399b-4341-b086-54bfa567b301" pod="tigera-operator/tigera-operator-747864d56d-mf5mh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:00.641207 kubelet[2722]: I0813 01:13:00.641022 2722 kubelet.go:2405] "Pod admission denied" podUID="2c516201-4ae5-48a6-a009-4ffd04261580" pod="tigera-operator/tigera-operator-747864d56d-bgrwb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:00.742910 kubelet[2722]: I0813 01:13:00.742816 2722 kubelet.go:2405] "Pod admission denied" podUID="ac1ec1a4-9a6f-4ae5-86eb-fd63b309b9e2" pod="tigera-operator/tigera-operator-747864d56d-nrqcb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:00.839780 kubelet[2722]: I0813 01:13:00.839727 2722 kubelet.go:2405] "Pod admission denied" podUID="03c49459-d8b8-4c33-b75e-0efa54175016" pod="tigera-operator/tigera-operator-747864d56d-hgc7s" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:00.935590 kubelet[2722]: I0813 01:13:00.935545 2722 kubelet.go:2405] "Pod admission denied" podUID="d48969b2-5d3b-4eb2-83b1-0a1b80f5a7e0" pod="tigera-operator/tigera-operator-747864d56d-fw2zm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:00.992496 kubelet[2722]: I0813 01:13:00.992440 2722 kubelet.go:2405] "Pod admission denied" podUID="b095d500-1567-4026-9e86-bf65881550ae" pod="tigera-operator/tigera-operator-747864d56d-thcq2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:01.088653 kubelet[2722]: I0813 01:13:01.087310 2722 kubelet.go:2405] "Pod admission denied" podUID="4391e536-ecfd-4732-b8e4-d80b68a4ca51" pod="tigera-operator/tigera-operator-747864d56d-456n8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:01.286690 kubelet[2722]: I0813 01:13:01.286527 2722 kubelet.go:2405] "Pod admission denied" podUID="62bf6e80-c46e-494f-94be-77f414f8bceb" pod="tigera-operator/tigera-operator-747864d56d-wc29g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:01.385335 kubelet[2722]: I0813 01:13:01.385154 2722 kubelet.go:2405] "Pod admission denied" podUID="3a03220d-c77e-4a2a-a413-3d666bf5a99c" pod="tigera-operator/tigera-operator-747864d56d-qmcvw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:01.495213 kubelet[2722]: I0813 01:13:01.495160 2722 kubelet.go:2405] "Pod admission denied" podUID="a50a98c8-d30c-408c-9c95-36adcbc33bc0" pod="tigera-operator/tigera-operator-747864d56d-5dglg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:01.608470 kubelet[2722]: I0813 01:13:01.607292 2722 kubelet.go:2405] "Pod admission denied" podUID="959e754f-04ea-47ae-913d-7f027ea9143f" pod="tigera-operator/tigera-operator-747864d56d-rtxw2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:01.743819 kubelet[2722]: I0813 01:13:01.743762 2722 kubelet.go:2405] "Pod admission denied" podUID="f0f4f621-6555-4e37-b7d3-b18b302d823b" pod="tigera-operator/tigera-operator-747864d56d-2pnzq" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:01.840819 kubelet[2722]: I0813 01:13:01.840729 2722 kubelet.go:2405] "Pod admission denied" podUID="c69998d4-904d-4fd1-bdb9-4b0eabf67570" pod="tigera-operator/tigera-operator-747864d56d-pssb6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:01.939337 kubelet[2722]: I0813 01:13:01.939285 2722 kubelet.go:2405] "Pod admission denied" podUID="47e199bc-026c-445e-8b3b-c11f798f4099" pod="tigera-operator/tigera-operator-747864d56d-vqnb4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:01.978243 kubelet[2722]: I0813 01:13:01.978139 2722 kubelet.go:2405] "Pod admission denied" podUID="f9c21200-bc34-48ce-86df-f10f83ce00e3" pod="tigera-operator/tigera-operator-747864d56d-s8q86" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:02.084245 kubelet[2722]: I0813 01:13:02.084196 2722 kubelet.go:2405] "Pod admission denied" podUID="bc9fe2a0-34cd-4b4b-9ff0-24075a703f55" pod="tigera-operator/tigera-operator-747864d56d-4btjx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:02.180945 kubelet[2722]: I0813 01:13:02.180856 2722 kubelet.go:2405] "Pod admission denied" podUID="650c8d78-a69e-4dd8-a969-61b25ef29635" pod="tigera-operator/tigera-operator-747864d56d-vnz7t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:02.281042 kubelet[2722]: I0813 01:13:02.280858 2722 kubelet.go:2405] "Pod admission denied" podUID="c2f7fc02-a738-4dbc-b609-47b8d0a7718d" pod="tigera-operator/tigera-operator-747864d56d-zqdfn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:02.390616 kubelet[2722]: I0813 01:13:02.390478 2722 kubelet.go:2405] "Pod admission denied" podUID="7cbf1494-ac7c-42e2-9aab-ef33f20d75b4" pod="tigera-operator/tigera-operator-747864d56d-ktz8v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:02.503331 kubelet[2722]: I0813 01:13:02.502459 2722 kubelet.go:2405] "Pod admission denied" podUID="9d203d8d-6751-4a39-bdf2-5be0d6d2b326" pod="tigera-operator/tigera-operator-747864d56d-znp2s" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:02.521136 containerd[1550]: time="2025-08-13T01:13:02.520290286Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3991429278: write /var/lib/containerd/tmpmounts/containerd-mount3991429278/usr/bin/calico-node: no space left on device" Aug 13 01:13:02.521136 containerd[1550]: time="2025-08-13T01:13:02.520582627Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 01:13:02.520716 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3991429278.mount: Deactivated successfully. 
Aug 13 01:13:02.527233 kubelet[2722]: E0813 01:13:02.527079 2722 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3991429278: write /var/lib/containerd/tmpmounts/containerd-mount3991429278/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 01:13:02.528099 kubelet[2722]: E0813 01:13:02.527359 2722 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3991429278: write /var/lib/containerd/tmpmounts/containerd-mount3991429278/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 01:13:02.528169 kubelet[2722]: E0813 01:13:02.527636 2722 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSGOLDMANESERVER,Value:goldmane.calico-system.svc:7443,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSFLUSHINTERVAL,Value:15,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:first-found,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveRea
dOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mq7j6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},StopSignal:nil,},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-hq29b_calico-system(3c0f3b86-7d63-44df-843e-763eb95a8b94): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3991429278: write /var/lib/containerd/tmpmounts/containerd-mount3991429278/usr/bin/calico-node: no space left on device" logger="UnhandledError" Aug 13 01:13:02.528916 kubelet[2722]: E0813 01:13:02.528760 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to 
pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3991429278: write /var/lib/containerd/tmpmounts/containerd-mount3991429278/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-hq29b" podUID="3c0f3b86-7d63-44df-843e-763eb95a8b94" Aug 13 01:13:02.678106 kubelet[2722]: I0813 01:13:02.678012 2722 kubelet.go:2405] "Pod admission denied" podUID="600f5576-c5bf-4f5b-b24d-f4a16acecf45" pod="tigera-operator/tigera-operator-747864d56d-4h6mt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:02.778099 kubelet[2722]: I0813 01:13:02.778036 2722 kubelet.go:2405] "Pod admission denied" podUID="30b77eb2-5387-44bb-bb9a-a3b8d103dea6" pod="tigera-operator/tigera-operator-747864d56d-zzkvk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:02.825555 kubelet[2722]: I0813 01:13:02.825508 2722 kubelet.go:2405] "Pod admission denied" podUID="26f6accc-3fe1-4e71-bbbb-2ed5a3657d4b" pod="tigera-operator/tigera-operator-747864d56d-f6dnf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:02.929297 kubelet[2722]: I0813 01:13:02.929149 2722 kubelet.go:2405] "Pod admission denied" podUID="366f4b02-88b6-4e88-b352-45ebda1606f5" pod="tigera-operator/tigera-operator-747864d56d-8wpcx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:03.025158 kubelet[2722]: I0813 01:13:03.025092 2722 kubelet.go:2405] "Pod admission denied" podUID="f030dfaf-030f-4340-b581-9b96c6fb5e6e" pod="tigera-operator/tigera-operator-747864d56d-hvhf9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:03.071921 kubelet[2722]: I0813 01:13:03.071836 2722 kubelet.go:2405] "Pod admission denied" podUID="9515b1cf-7e07-46d1-9b1e-3ab91e6b8631" pod="tigera-operator/tigera-operator-747864d56d-v5zjj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:03.180211 kubelet[2722]: I0813 01:13:03.178775 2722 kubelet.go:2405] "Pod admission denied" podUID="7818fd10-b48b-4bbd-91a9-b828486a1932" pod="tigera-operator/tigera-operator-747864d56d-hhqqd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:03.272915 kubelet[2722]: I0813 01:13:03.272846 2722 kubelet.go:2405] "Pod admission denied" podUID="7edab9ec-4e37-4a4d-ac36-9d9c9315947f" pod="tigera-operator/tigera-operator-747864d56d-ntblk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:03.324829 kubelet[2722]: I0813 01:13:03.324773 2722 kubelet.go:2405] "Pod admission denied" podUID="f00eb931-bd81-41b6-8a23-0923e054909b" pod="tigera-operator/tigera-operator-747864d56d-46jcr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:03.429762 kubelet[2722]: I0813 01:13:03.429656 2722 kubelet.go:2405] "Pod admission denied" podUID="bc4923d8-4c05-4284-b626-1be4b22b36d7" pod="tigera-operator/tigera-operator-747864d56d-j66zq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:03.628528 kubelet[2722]: I0813 01:13:03.628394 2722 kubelet.go:2405] "Pod admission denied" podUID="d5a3f6a2-41f1-42ca-9fd5-08c0d754aa58" pod="tigera-operator/tigera-operator-747864d56d-gm8zn" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:03.727720 kubelet[2722]: I0813 01:13:03.727649 2722 kubelet.go:2405] "Pod admission denied" podUID="63f54f2c-fac3-45d9-9894-0d4c81c1160b" pod="tigera-operator/tigera-operator-747864d56d-jv5rz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:03.827230 kubelet[2722]: I0813 01:13:03.827176 2722 kubelet.go:2405] "Pod admission denied" podUID="7c24ab57-b936-40f6-8dd6-1e4856be7ee3" pod="tigera-operator/tigera-operator-747864d56d-z686g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:03.938716 kubelet[2722]: I0813 01:13:03.938674 2722 kubelet.go:2405] "Pod admission denied" podUID="4b7f1db3-1b94-49ef-956e-4dc554acfeac" pod="tigera-operator/tigera-operator-747864d56d-l244h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:04.025251 kubelet[2722]: I0813 01:13:04.025202 2722 kubelet.go:2405] "Pod admission denied" podUID="189fcd8e-915e-4443-aef2-5cf0723826ad" pod="tigera-operator/tigera-operator-747864d56d-ccd4z" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:04.131121 kubelet[2722]: I0813 01:13:04.131062 2722 kubelet.go:2405] "Pod admission denied" podUID="5baf233f-64ae-49e5-8e91-63c9d81f72c0" pod="tigera-operator/tigera-operator-747864d56d-997xx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:04.232609 kubelet[2722]: I0813 01:13:04.232480 2722 kubelet.go:2405] "Pod admission denied" podUID="dcb64b91-427d-4d4f-8b19-c3a748794921" pod="tigera-operator/tigera-operator-747864d56d-tn6lb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:04.325481 kubelet[2722]: I0813 01:13:04.325423 2722 kubelet.go:2405] "Pod admission denied" podUID="5486a024-f066-4b49-9905-60927680aaed" pod="tigera-operator/tigera-operator-747864d56d-n7r5v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:04.427847 kubelet[2722]: I0813 01:13:04.427790 2722 kubelet.go:2405] "Pod admission denied" podUID="586cef39-5bbb-481e-9784-4d10fd750f11" pod="tigera-operator/tigera-operator-747864d56d-h4drs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:04.533035 kubelet[2722]: I0813 01:13:04.532917 2722 kubelet.go:2405] "Pod admission denied" podUID="39cace45-7c4a-4c1c-8d71-8f448bcd4e92" pod="tigera-operator/tigera-operator-747864d56d-r66q4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:04.631219 kubelet[2722]: I0813 01:13:04.631156 2722 kubelet.go:2405] "Pod admission denied" podUID="b635bc5d-a2e5-4dc7-99c6-47a1f15916e2" pod="tigera-operator/tigera-operator-747864d56d-rhxdp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:04.729730 kubelet[2722]: I0813 01:13:04.729684 2722 kubelet.go:2405] "Pod admission denied" podUID="78050701-c3e5-43e1-923c-848aaedd7c9f" pod="tigera-operator/tigera-operator-747864d56d-mh79c" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:04.831354 kubelet[2722]: I0813 01:13:04.831235 2722 kubelet.go:2405] "Pod admission denied" podUID="f61fed46-8cb3-4240-bdf3-359d63fb6698" pod="tigera-operator/tigera-operator-747864d56d-5wt49" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:04.929426 kubelet[2722]: I0813 01:13:04.929365 2722 kubelet.go:2405] "Pod admission denied" podUID="74523a7c-4d10-4180-bf02-b49697c3d353" pod="tigera-operator/tigera-operator-747864d56d-7qcck" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:05.027804 kubelet[2722]: I0813 01:13:05.027726 2722 kubelet.go:2405] "Pod admission denied" podUID="a70aabf7-99a7-4686-af7f-c0b03830e76e" pod="tigera-operator/tigera-operator-747864d56d-nbr67" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:05.133932 kubelet[2722]: I0813 01:13:05.131782 2722 kubelet.go:2405] "Pod admission denied" podUID="71d1b8b2-2c90-4d89-96e7-6aa43df3f4d5" pod="tigera-operator/tigera-operator-747864d56d-lxms5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:05.225130 kubelet[2722]: I0813 01:13:05.225070 2722 kubelet.go:2405] "Pod admission denied" podUID="bd2d9651-cac3-49be-a8f3-8af023fdf943" pod="tigera-operator/tigera-operator-747864d56d-8k29z" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:05.425285 kubelet[2722]: I0813 01:13:05.424761 2722 kubelet.go:2405] "Pod admission denied" podUID="48582034-d0c9-4977-8d77-95126531ba02" pod="tigera-operator/tigera-operator-747864d56d-dnqxl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:05.526067 kubelet[2722]: I0813 01:13:05.526009 2722 kubelet.go:2405] "Pod admission denied" podUID="505f9f1d-6433-4117-837e-2825e219e0a1" pod="tigera-operator/tigera-operator-747864d56d-hdr9b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:05.578287 kubelet[2722]: I0813 01:13:05.578208 2722 kubelet.go:2405] "Pod admission denied" podUID="09103f59-2e3e-4a02-9121-00732412a652" pod="tigera-operator/tigera-operator-747864d56d-vk9f8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:05.675517 kubelet[2722]: I0813 01:13:05.675399 2722 kubelet.go:2405] "Pod admission denied" podUID="e4df6f00-17e1-48b7-ac3e-b5056ccb1f9a" pod="tigera-operator/tigera-operator-747864d56d-xjd5k" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:05.772002 kubelet[2722]: I0813 01:13:05.771946 2722 kubelet.go:2405] "Pod admission denied" podUID="ca67f71a-b8a9-44ca-9351-3cc759e5fdcb" pod="tigera-operator/tigera-operator-747864d56d-c68kb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:05.879865 kubelet[2722]: I0813 01:13:05.879804 2722 kubelet.go:2405] "Pod admission denied" podUID="8beda5a4-6672-4e0f-95c0-244cd536e474" pod="tigera-operator/tigera-operator-747864d56d-x9ktb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:05.980551 kubelet[2722]: I0813 01:13:05.980063 2722 kubelet.go:2405] "Pod admission denied" podUID="eb117592-c2b9-473a-8cd1-ab21876d051a" pod="tigera-operator/tigera-operator-747864d56d-fzl9f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:06.025332 kubelet[2722]: I0813 01:13:06.025270 2722 kubelet.go:2405] "Pod admission denied" podUID="c38c9538-781f-4bfd-9592-05263543bc82" pod="tigera-operator/tigera-operator-747864d56d-nlrsr" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:06.057925 containerd[1550]: time="2025-08-13T01:13:06.057621622Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7lv4,Uid:6b834979-32a4-464b-9898-ef87b1042a9e,Namespace:calico-system,Attempt:0,}" Aug 13 01:13:06.124566 containerd[1550]: time="2025-08-13T01:13:06.124504284Z" level=error msg="Failed to destroy network for sandbox \"d8a8775b086981ffedced4806b2c44e2fbd0ea592df46fdbfee365a526a4e88e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:06.127168 systemd[1]: run-netns-cni\x2dc24be73c\x2d6a8b\x2d1b9b\x2db3db\x2d66acbbec79e0.mount: Deactivated successfully. Aug 13 01:13:06.130218 containerd[1550]: time="2025-08-13T01:13:06.130052088Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7lv4,Uid:6b834979-32a4-464b-9898-ef87b1042a9e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8a8775b086981ffedced4806b2c44e2fbd0ea592df46fdbfee365a526a4e88e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:06.130945 kubelet[2722]: E0813 01:13:06.130859 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8a8775b086981ffedced4806b2c44e2fbd0ea592df46fdbfee365a526a4e88e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:06.131155 kubelet[2722]: E0813 01:13:06.131084 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8a8775b086981ffedced4806b2c44e2fbd0ea592df46fdbfee365a526a4e88e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:13:06.131241 kubelet[2722]: E0813 01:13:06.131224 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8a8775b086981ffedced4806b2c44e2fbd0ea592df46fdbfee365a526a4e88e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:13:06.131360 kubelet[2722]: E0813 01:13:06.131335 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l7lv4_calico-system(6b834979-32a4-464b-9898-ef87b1042a9e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l7lv4_calico-system(6b834979-32a4-464b-9898-ef87b1042a9e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d8a8775b086981ffedced4806b2c44e2fbd0ea592df46fdbfee365a526a4e88e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l7lv4" 
podUID="6b834979-32a4-464b-9898-ef87b1042a9e" Aug 13 01:13:06.134206 kubelet[2722]: I0813 01:13:06.134137 2722 kubelet.go:2405] "Pod admission denied" podUID="b572a5de-76f4-49da-83fc-ec8bede28fae" pod="tigera-operator/tigera-operator-747864d56d-lzldl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:06.331501 kubelet[2722]: I0813 01:13:06.331309 2722 kubelet.go:2405] "Pod admission denied" podUID="76c5d510-543b-4584-b90c-cddd8b4e1e45" pod="tigera-operator/tigera-operator-747864d56d-bkbwh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:06.430446 kubelet[2722]: I0813 01:13:06.430363 2722 kubelet.go:2405] "Pod admission denied" podUID="db245fa7-9791-43d5-9b41-709f8c030571" pod="tigera-operator/tigera-operator-747864d56d-qmfgs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:06.527958 kubelet[2722]: I0813 01:13:06.527876 2722 kubelet.go:2405] "Pod admission denied" podUID="cb51fef7-6a60-42de-b146-7e2b2d04247f" pod="tigera-operator/tigera-operator-747864d56d-bxxzb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:06.626220 kubelet[2722]: I0813 01:13:06.626057 2722 kubelet.go:2405] "Pod admission denied" podUID="08d52b0f-8c12-4e91-8dee-9cf55e5ba95f" pod="tigera-operator/tigera-operator-747864d56d-vv77j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:06.722976 kubelet[2722]: I0813 01:13:06.722916 2722 kubelet.go:2405] "Pod admission denied" podUID="006c68cc-56ed-4dfc-8436-db033ebb8077" pod="tigera-operator/tigera-operator-747864d56d-wt2l2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:06.824004 kubelet[2722]: I0813 01:13:06.823943 2722 kubelet.go:2405] "Pod admission denied" podUID="7b0ec038-fb54-4e77-9bc2-79a4c3b6a2ae" pod="tigera-operator/tigera-operator-747864d56d-tz92d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:06.927995 kubelet[2722]: I0813 01:13:06.927957 2722 kubelet.go:2405] "Pod admission denied" podUID="626df850-0644-4aaa-96dd-3d8e76fe82fb" pod="tigera-operator/tigera-operator-747864d56d-prgp8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:07.023600 kubelet[2722]: I0813 01:13:07.023545 2722 kubelet.go:2405] "Pod admission denied" podUID="acd012d4-4cd2-4851-b310-39f833ed39b2" pod="tigera-operator/tigera-operator-747864d56d-j2w9l" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:07.128432 kubelet[2722]: I0813 01:13:07.128370 2722 kubelet.go:2405] "Pod admission denied" podUID="5c5fb0d4-1641-4fdb-becd-0484b9d74abc" pod="tigera-operator/tigera-operator-747864d56d-z6kbb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:07.228303 kubelet[2722]: I0813 01:13:07.228143 2722 kubelet.go:2405] "Pod admission denied" podUID="3393a86f-82e9-4709-b514-d4fc05e04cd8" pod="tigera-operator/tigera-operator-747864d56d-qcmfc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:07.328472 kubelet[2722]: I0813 01:13:07.328404 2722 kubelet.go:2405] "Pod admission denied" podUID="a9b0d6c5-79bf-4d38-8582-ad5fbfeda260" pod="tigera-operator/tigera-operator-747864d56d-h2rff" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:07.426112 kubelet[2722]: I0813 01:13:07.426050 2722 kubelet.go:2405] "Pod admission denied" podUID="36eba0ba-067c-4196-b57c-83b39fab88be" pod="tigera-operator/tigera-operator-747864d56d-xnqc6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:07.530629 kubelet[2722]: I0813 01:13:07.530467 2722 kubelet.go:2405] "Pod admission denied" podUID="87e4c41f-77f1-4ff3-b382-7448de04d550" pod="tigera-operator/tigera-operator-747864d56d-t9d9j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:07.731780 kubelet[2722]: I0813 01:13:07.731716 2722 kubelet.go:2405] "Pod admission denied" podUID="57c7236d-532b-4d26-ae1f-c4a64757c409" pod="tigera-operator/tigera-operator-747864d56d-9kztm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:07.830668 kubelet[2722]: I0813 01:13:07.830124 2722 kubelet.go:2405] "Pod admission denied" podUID="07acc7a3-ac9f-4978-8d64-68b4d4a8658e" pod="tigera-operator/tigera-operator-747864d56d-dcmb4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:07.928698 kubelet[2722]: I0813 01:13:07.928632 2722 kubelet.go:2405] "Pod admission denied" podUID="51acfc72-408b-4381-bdd4-501cac7cd591" pod="tigera-operator/tigera-operator-747864d56d-84tm4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:08.130806 kubelet[2722]: I0813 01:13:08.130643 2722 kubelet.go:2405] "Pod admission denied" podUID="6d08d8ef-ad86-478b-a6b2-941a6743976a" pod="tigera-operator/tigera-operator-747864d56d-754lc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:08.228869 kubelet[2722]: I0813 01:13:08.228808 2722 kubelet.go:2405] "Pod admission denied" podUID="b545aae3-bb03-47dd-9453-0000e58e92f1" pod="tigera-operator/tigera-operator-747864d56d-wmk85" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:08.329602 kubelet[2722]: I0813 01:13:08.328274 2722 kubelet.go:2405] "Pod admission denied" podUID="921602c9-6d7b-4f73-a493-aa269b99cb5d" pod="tigera-operator/tigera-operator-747864d56d-d6jmg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:08.532771 kubelet[2722]: I0813 01:13:08.532703 2722 kubelet.go:2405] "Pod admission denied" podUID="efeaeed6-d110-4428-b8fa-ca6d1033fb6f" pod="tigera-operator/tigera-operator-747864d56d-nw2z7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:08.626620 kubelet[2722]: I0813 01:13:08.626558 2722 kubelet.go:2405] "Pod admission denied" podUID="6d38651c-0b08-4561-9f40-49c32ec133c0" pod="tigera-operator/tigera-operator-747864d56d-qg6jv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:08.678297 kubelet[2722]: I0813 01:13:08.678230 2722 kubelet.go:2405] "Pod admission denied" podUID="444eacfe-051a-4956-b494-af6884488cbc" pod="tigera-operator/tigera-operator-747864d56d-dbnv6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:08.778425 kubelet[2722]: I0813 01:13:08.778371 2722 kubelet.go:2405] "Pod admission denied" podUID="36105cff-6c7b-4bcd-ba27-31da45864154" pod="tigera-operator/tigera-operator-747864d56d-42rfh" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:08.878837 kubelet[2722]: I0813 01:13:08.878649 2722 kubelet.go:2405] "Pod admission denied" podUID="1086a211-3195-49b0-8b6d-5803a19d52bf" pod="tigera-operator/tigera-operator-747864d56d-zb6nh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:08.928087 kubelet[2722]: I0813 01:13:08.928016 2722 kubelet.go:2405] "Pod admission denied" podUID="44f81097-5a7a-4e56-a1f4-6e337b562b3f" pod="tigera-operator/tigera-operator-747864d56d-x59ng" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:09.026143 kubelet[2722]: I0813 01:13:09.026084 2722 kubelet.go:2405] "Pod admission denied" podUID="eba72e31-8920-4f4e-ad23-2df04089f1a8" pod="tigera-operator/tigera-operator-747864d56d-pm85f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:09.130218 kubelet[2722]: I0813 01:13:09.129584 2722 kubelet.go:2405] "Pod admission denied" podUID="d4529b9a-6a12-4939-bd3b-3a99eb80d3e6" pod="tigera-operator/tigera-operator-747864d56d-qvs7r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:09.228253 kubelet[2722]: I0813 01:13:09.228198 2722 kubelet.go:2405] "Pod admission denied" podUID="dfa87f4b-d987-4804-80eb-b811ad9ed307" pod="tigera-operator/tigera-operator-747864d56d-n9pwd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:09.337242 kubelet[2722]: I0813 01:13:09.337141 2722 kubelet.go:2405] "Pod admission denied" podUID="400a36fc-5679-49bc-a024-31d271d73231" pod="tigera-operator/tigera-operator-747864d56d-ffxkn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:09.429626 kubelet[2722]: I0813 01:13:09.429547 2722 kubelet.go:2405] "Pod admission denied" podUID="aac4fe4d-998c-4577-bd8c-dda53bdcf4a0" pod="tigera-operator/tigera-operator-747864d56d-jmgbk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:09.633207 kubelet[2722]: I0813 01:13:09.633158 2722 kubelet.go:2405] "Pod admission denied" podUID="4e0de4c8-58a5-4d4c-b926-5f19e2d5409a" pod="tigera-operator/tigera-operator-747864d56d-j4xj8" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:09.650058 kubelet[2722]: I0813 01:13:09.650016 2722 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:13:09.650058 kubelet[2722]: I0813 01:13:09.650055 2722 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:13:09.653166 kubelet[2722]: I0813 01:13:09.653146 2722 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:13:09.665987 kubelet[2722]: I0813 01:13:09.665954 2722 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:13:09.666072 kubelet[2722]: I0813 01:13:09.666046 2722 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-674b8bbfcf-p259x","kube-system/coredns-674b8bbfcf-fgsjn","calico-system/calico-kube-controllers-cddc95b58-6t6z7","calico-system/calico-node-hq29b","calico-system/csi-node-driver-l7lv4","calico-system/calico-typha-55bf5cd98c-8lqpc","kube-system/kube-controller-manager-172-233-214-103","kube-system/kube-proxy-tb5sq","kube-system/kube-apiserver-172-233-214-103","kube-system/kube-scheduler-172-233-214-103"] Aug 13 01:13:09.666157 kubelet[2722]: E0813 01:13:09.666077 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:13:09.666157 kubelet[2722]: E0813 01:13:09.666088 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:13:09.666157 kubelet[2722]: E0813 01:13:09.666095 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:13:09.666157 kubelet[2722]: E0813 01:13:09.666102 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-hq29b" Aug 13 01:13:09.666157 kubelet[2722]: E0813 01:13:09.666109 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:13:09.666157 kubelet[2722]: E0813 01:13:09.666120 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-55bf5cd98c-8lqpc" Aug 13 01:13:09.666157 kubelet[2722]: E0813 01:13:09.666131 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-214-103" Aug 13 01:13:09.666157 kubelet[2722]: E0813 01:13:09.666143 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-tb5sq" Aug 13 01:13:09.666157 kubelet[2722]: E0813 01:13:09.666152 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-214-103" Aug 13 01:13:09.666157 kubelet[2722]: E0813 01:13:09.666161 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-214-103" Aug 13 01:13:09.666157 kubelet[2722]: I0813 01:13:09.666172 2722 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:13:09.730336 kubelet[2722]: I0813 01:13:09.730187 2722 kubelet.go:2405] "Pod admission denied" podUID="70180882-be17-4df3-a816-4449cca584b7" pod="tigera-operator/tigera-operator-747864d56d-tzm5j" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:09.845583 kubelet[2722]: I0813 01:13:09.845510 2722 kubelet.go:2405] "Pod admission denied" podUID="ed5c7636-d0aa-40d2-831b-87e74ef96f24" pod="tigera-operator/tigera-operator-747864d56d-t2npg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:09.934180 kubelet[2722]: I0813 01:13:09.934102 2722 kubelet.go:2405] "Pod admission denied" podUID="53ca9058-a18d-4ef0-898f-33472a800f58" pod="tigera-operator/tigera-operator-747864d56d-sm4bg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:10.029944 kubelet[2722]: I0813 01:13:10.029740 2722 kubelet.go:2405] "Pod admission denied" podUID="04f43848-55e1-472e-9195-80609207e08f" pod="tigera-operator/tigera-operator-747864d56d-4t57p" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:10.244634 kubelet[2722]: I0813 01:13:10.244480 2722 kubelet.go:2405] "Pod admission denied" podUID="d407cede-3199-4c7c-9028-f8efc7dc694f" pod="tigera-operator/tigera-operator-747864d56d-n9lds" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:10.331383 kubelet[2722]: I0813 01:13:10.331196 2722 kubelet.go:2405] "Pod admission denied" podUID="cea6674f-40d3-4a95-8ea0-236c1dfc03ed" pod="tigera-operator/tigera-operator-747864d56d-ljjnr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:10.431342 kubelet[2722]: I0813 01:13:10.431282 2722 kubelet.go:2405] "Pod admission denied" podUID="68a1ad85-e0c2-4b5f-ba65-217041ed8021" pod="tigera-operator/tigera-operator-747864d56d-tjlbw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:10.529261 kubelet[2722]: I0813 01:13:10.529188 2722 kubelet.go:2405] "Pod admission denied" podUID="4229af85-bfcf-4727-bae6-37f72744c423" pod="tigera-operator/tigera-operator-747864d56d-bpks9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:10.580145 kubelet[2722]: I0813 01:13:10.580078 2722 kubelet.go:2405] "Pod admission denied" podUID="b2a5de0a-36f8-4f50-9aac-77a93d9d9e99" pod="tigera-operator/tigera-operator-747864d56d-6tnst" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:10.678334 kubelet[2722]: I0813 01:13:10.678280 2722 kubelet.go:2405] "Pod admission denied" podUID="f8401c17-edf6-49fa-989d-41840f6f489a" pod="tigera-operator/tigera-operator-747864d56d-zhfjd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:10.780272 kubelet[2722]: I0813 01:13:10.780194 2722 kubelet.go:2405] "Pod admission denied" podUID="f4c0a1d0-9eda-476e-b20b-2d863e012dea" pod="tigera-operator/tigera-operator-747864d56d-qc9vc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:10.832232 kubelet[2722]: I0813 01:13:10.832159 2722 kubelet.go:2405] "Pod admission denied" podUID="766bea87-27b8-470a-854c-648c340d1b9a" pod="tigera-operator/tigera-operator-747864d56d-wz79b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:10.929882 kubelet[2722]: I0813 01:13:10.929152 2722 kubelet.go:2405] "Pod admission denied" podUID="cef87185-13e4-4bc7-8611-25ca327ec983" pod="tigera-operator/tigera-operator-747864d56d-dmcrk" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:11.031567 kubelet[2722]: I0813 01:13:11.031504 2722 kubelet.go:2405] "Pod admission denied" podUID="ffc6dd7d-497f-45e2-a7bd-669d517293ab" pod="tigera-operator/tigera-operator-747864d56d-mjscg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:11.129781 kubelet[2722]: I0813 01:13:11.129715 2722 kubelet.go:2405] "Pod admission denied" podUID="3bb5fc26-2b18-4415-abe6-ea982e3189ef" pod="tigera-operator/tigera-operator-747864d56d-qgmtw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:11.331956 kubelet[2722]: I0813 01:13:11.331451 2722 kubelet.go:2405] "Pod admission denied" podUID="0e10ddff-437c-4181-a4e0-e786ed073f71" pod="tigera-operator/tigera-operator-747864d56d-cnz57" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:11.428018 kubelet[2722]: I0813 01:13:11.427955 2722 kubelet.go:2405] "Pod admission denied" podUID="318ba893-a1a3-49e8-afe2-d0a9b7eac85a" pod="tigera-operator/tigera-operator-747864d56d-z8969" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:11.484960 kubelet[2722]: I0813 01:13:11.484854 2722 kubelet.go:2405] "Pod admission denied" podUID="b5ad0091-e9af-4201-8ebe-3e8993e3dfdf" pod="tigera-operator/tigera-operator-747864d56d-gfhdf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:11.578771 kubelet[2722]: I0813 01:13:11.578696 2722 kubelet.go:2405] "Pod admission denied" podUID="78d17379-1b91-4a06-997c-0d156e230327" pod="tigera-operator/tigera-operator-747864d56d-dfsv5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:11.678300 kubelet[2722]: I0813 01:13:11.678245 2722 kubelet.go:2405] "Pod admission denied" podUID="c44bbb5c-6ae5-4484-9ee3-345cd51c02d3" pod="tigera-operator/tigera-operator-747864d56d-gw6xz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:11.780589 kubelet[2722]: I0813 01:13:11.779294 2722 kubelet.go:2405] "Pod admission denied" podUID="5056cede-1e81-4966-b1cb-28dadbe0f740" pod="tigera-operator/tigera-operator-747864d56d-vvktd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:11.876494 kubelet[2722]: I0813 01:13:11.876433 2722 kubelet.go:2405] "Pod admission denied" podUID="22469b88-adc7-4615-ac1b-7bf9e151164d" pod="tigera-operator/tigera-operator-747864d56d-6h84f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:11.977300 kubelet[2722]: I0813 01:13:11.976678 2722 kubelet.go:2405] "Pod admission denied" podUID="c4684e6f-a752-46e0-a6fa-80499de55cda" pod="tigera-operator/tigera-operator-747864d56d-48h4w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:12.057932 kubelet[2722]: E0813 01:13:12.057389 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:13:12.058856 containerd[1550]: time="2025-08-13T01:13:12.058383354Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fgsjn,Uid:27718112-1bb9-402a-89c8-f4890dedf664,Namespace:kube-system,Attempt:0,}" Aug 13 01:13:12.091961 kubelet[2722]: I0813 01:13:12.091715 2722 kubelet.go:2405] "Pod admission denied" podUID="887ff828-d23a-4890-9e50-f28b525e2ac7" pod="tigera-operator/tigera-operator-747864d56d-xc6cq" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:12.145086 containerd[1550]: time="2025-08-13T01:13:12.145024085Z" level=error msg="Failed to destroy network for sandbox \"0f53603055beecaedd38d48d48f714cd89f6f5bc8a214381599a3a563a500d97\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:12.147570 systemd[1]: run-netns-cni\x2d175715ac\x2d1859\x2d986f\x2d2026\x2d351aed4295bc.mount: Deactivated successfully. Aug 13 01:13:12.149499 containerd[1550]: time="2025-08-13T01:13:12.149340535Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fgsjn,Uid:27718112-1bb9-402a-89c8-f4890dedf664,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f53603055beecaedd38d48d48f714cd89f6f5bc8a214381599a3a563a500d97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:12.150051 kubelet[2722]: E0813 01:13:12.150002 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f53603055beecaedd38d48d48f714cd89f6f5bc8a214381599a3a563a500d97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:12.150274 kubelet[2722]: E0813 01:13:12.150059 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f53603055beecaedd38d48d48f714cd89f6f5bc8a214381599a3a563a500d97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:13:12.150274 kubelet[2722]: E0813 01:13:12.150086 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f53603055beecaedd38d48d48f714cd89f6f5bc8a214381599a3a563a500d97\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:13:12.150274 kubelet[2722]: E0813 01:13:12.150139 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fgsjn_kube-system(27718112-1bb9-402a-89c8-f4890dedf664)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fgsjn_kube-system(27718112-1bb9-402a-89c8-f4890dedf664)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0f53603055beecaedd38d48d48f714cd89f6f5bc8a214381599a3a563a500d97\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fgsjn" podUID="27718112-1bb9-402a-89c8-f4890dedf664" Aug 13 01:13:12.179615 kubelet[2722]: I0813 01:13:12.179558 2722 kubelet.go:2405] "Pod admission denied" podUID="4ed0831a-231c-410f-a032-04952b9127c1" pod="tigera-operator/tigera-operator-747864d56d-9qcgt" reason="Evicted" 
message="The node had condition: [DiskPressure]. " Aug 13 01:13:12.280189 kubelet[2722]: I0813 01:13:12.278702 2722 kubelet.go:2405] "Pod admission denied" podUID="026414ab-656e-4550-b8c4-1dfa393ff840" pod="tigera-operator/tigera-operator-747864d56d-m7fch" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:12.380011 kubelet[2722]: I0813 01:13:12.379948 2722 kubelet.go:2405] "Pod admission denied" podUID="4440a8d7-f80c-4ad1-ba76-1b3de0e9d134" pod="tigera-operator/tigera-operator-747864d56d-wqpn9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:12.478306 kubelet[2722]: I0813 01:13:12.478254 2722 kubelet.go:2405] "Pod admission denied" podUID="cfb31e06-1a56-4d0c-940b-da42a52354a8" pod="tigera-operator/tigera-operator-747864d56d-mswnf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:12.583085 kubelet[2722]: I0813 01:13:12.582932 2722 kubelet.go:2405] "Pod admission denied" podUID="fad92c9c-29c4-494c-a159-9f7926963e0b" pod="tigera-operator/tigera-operator-747864d56d-hr88z" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:12.685875 kubelet[2722]: I0813 01:13:12.685796 2722 kubelet.go:2405] "Pod admission denied" podUID="be52a282-7c24-476e-9b40-655d6440f439" pod="tigera-operator/tigera-operator-747864d56d-vb4nh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:12.778875 kubelet[2722]: I0813 01:13:12.778808 2722 kubelet.go:2405] "Pod admission denied" podUID="7b198687-a603-417e-8f1d-f9fdbdd42ce9" pod="tigera-operator/tigera-operator-747864d56d-ktps5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:12.878578 kubelet[2722]: I0813 01:13:12.878381 2722 kubelet.go:2405] "Pod admission denied" podUID="95f538df-e2d3-4ec3-9994-63f4a23b356d" pod="tigera-operator/tigera-operator-747864d56d-57lzv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:12.983158 kubelet[2722]: I0813 01:13:12.983084 2722 kubelet.go:2405] "Pod admission denied" podUID="c98332c1-e1d1-4a45-abda-75c3da532dcb" pod="tigera-operator/tigera-operator-747864d56d-fx5ms" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:13.081168 kubelet[2722]: I0813 01:13:13.081112 2722 kubelet.go:2405] "Pod admission denied" podUID="4ff1fbb1-2f4d-4005-aa48-32e61aabc8a0" pod="tigera-operator/tigera-operator-747864d56d-qngvh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:13.128345 kubelet[2722]: I0813 01:13:13.128277 2722 kubelet.go:2405] "Pod admission denied" podUID="bc6c32f9-51d5-4721-99e8-31ceceb68cf2" pod="tigera-operator/tigera-operator-747864d56d-vfdml" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:13.231688 kubelet[2722]: I0813 01:13:13.231617 2722 kubelet.go:2405] "Pod admission denied" podUID="687889c4-4320-4c03-bfd5-aff3cdf3d909" pod="tigera-operator/tigera-operator-747864d56d-zcbxt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:13.332671 kubelet[2722]: I0813 01:13:13.332610 2722 kubelet.go:2405] "Pod admission denied" podUID="5a154f48-940d-4a4f-a7e2-f25004ef41ee" pod="tigera-operator/tigera-operator-747864d56d-zwmnx" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:13.429389 kubelet[2722]: I0813 01:13:13.429326 2722 kubelet.go:2405] "Pod admission denied" podUID="54a6b0b7-69a5-44cf-b5f0-f97a517421eb" pod="tigera-operator/tigera-operator-747864d56d-z4vg6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:13.631525 kubelet[2722]: I0813 01:13:13.631344 2722 kubelet.go:2405] "Pod admission denied" podUID="7c90ee77-4546-4d60-9ba2-925631299e5a" pod="tigera-operator/tigera-operator-747864d56d-5jq97" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:13.733698 kubelet[2722]: I0813 01:13:13.733611 2722 kubelet.go:2405] "Pod admission denied" podUID="4ee28fc7-49a1-4e8f-97f0-634a0cb8e323" pod="tigera-operator/tigera-operator-747864d56d-vs2wj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:13.834307 kubelet[2722]: I0813 01:13:13.834236 2722 kubelet.go:2405] "Pod admission denied" podUID="96befe8a-8b31-4bca-9631-014442cea713" pod="tigera-operator/tigera-operator-747864d56d-z6kjv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:14.032495 kubelet[2722]: I0813 01:13:14.032428 2722 kubelet.go:2405] "Pod admission denied" podUID="cdff8c8f-fd09-4749-af7d-1bbdbb33d964" pod="tigera-operator/tigera-operator-747864d56d-mkjzl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:14.063922 containerd[1550]: time="2025-08-13T01:13:14.063568735Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cddc95b58-6t6z7,Uid:2dab385f-2367-4e01-8d78-2247bcba7bcc,Namespace:calico-system,Attempt:0,}" Aug 13 01:13:14.064670 kubelet[2722]: E0813 01:13:14.064584 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:13:14.065926 containerd[1550]: time="2025-08-13T01:13:14.065147329Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p259x,Uid:a5b0b8ae-a381-43cc-8adc-4e3ee01749bd,Namespace:kube-system,Attempt:0,}" Aug 13 01:13:14.142563 kubelet[2722]: I0813 01:13:14.142511 2722 kubelet.go:2405] "Pod admission denied" podUID="89db8d67-0526-47b2-8f5d-d1917aa85f5e" pod="tigera-operator/tigera-operator-747864d56d-x7nhz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:14.177589 containerd[1550]: time="2025-08-13T01:13:14.177241783Z" level=error msg="Failed to destroy network for sandbox \"ecf00696a0f84fad2ba1efc0c6599acc6a8a5d88c560d1fc789d5b83e6287700\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:14.180362 systemd[1]: run-netns-cni\x2d4106716a\x2dd528\x2de7fd\x2d0629\x2d51bb851e0b7b.mount: Deactivated successfully. 
Aug 13 01:13:14.183737 containerd[1550]: time="2025-08-13T01:13:14.183671738Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cddc95b58-6t6z7,Uid:2dab385f-2367-4e01-8d78-2247bcba7bcc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecf00696a0f84fad2ba1efc0c6599acc6a8a5d88c560d1fc789d5b83e6287700\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:14.187542 kubelet[2722]: E0813 01:13:14.186139 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecf00696a0f84fad2ba1efc0c6599acc6a8a5d88c560d1fc789d5b83e6287700\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:14.187542 kubelet[2722]: E0813 01:13:14.186240 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecf00696a0f84fad2ba1efc0c6599acc6a8a5d88c560d1fc789d5b83e6287700\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:13:14.187542 kubelet[2722]: E0813 01:13:14.186270 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ecf00696a0f84fad2ba1efc0c6599acc6a8a5d88c560d1fc789d5b83e6287700\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:13:14.187542 kubelet[2722]: E0813 01:13:14.186334 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-cddc95b58-6t6z7_calico-system(2dab385f-2367-4e01-8d78-2247bcba7bcc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-cddc95b58-6t6z7_calico-system(2dab385f-2367-4e01-8d78-2247bcba7bcc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ecf00696a0f84fad2ba1efc0c6599acc6a8a5d88c560d1fc789d5b83e6287700\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" podUID="2dab385f-2367-4e01-8d78-2247bcba7bcc" Aug 13 01:13:14.198511 kubelet[2722]: I0813 01:13:14.198483 2722 kubelet.go:2405] "Pod admission denied" podUID="1f0009c9-31bb-453c-a475-9a741db04b43" pod="tigera-operator/tigera-operator-747864d56d-zb7lp" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:14.211985 containerd[1550]: time="2025-08-13T01:13:14.210323051Z" level=error msg="Failed to destroy network for sandbox \"2677a7c0fc100adafa83feac3f9a2444ee240515da926932fe0bd066773b83b2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:14.219243 systemd[1]: run-netns-cni\x2dc74c6b2b\x2d6fc1\x2db605\x2d3a77\x2ddaa64395a01c.mount: Deactivated successfully. Aug 13 01:13:14.223138 containerd[1550]: time="2025-08-13T01:13:14.222480569Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p259x,Uid:a5b0b8ae-a381-43cc-8adc-4e3ee01749bd,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2677a7c0fc100adafa83feac3f9a2444ee240515da926932fe0bd066773b83b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:14.225518 kubelet[2722]: E0813 01:13:14.225068 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2677a7c0fc100adafa83feac3f9a2444ee240515da926932fe0bd066773b83b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:14.225518 kubelet[2722]: E0813 01:13:14.225143 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2677a7c0fc100adafa83feac3f9a2444ee240515da926932fe0bd066773b83b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:13:14.225518 kubelet[2722]: E0813 01:13:14.225165 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2677a7c0fc100adafa83feac3f9a2444ee240515da926932fe0bd066773b83b2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:13:14.225518 kubelet[2722]: E0813 01:13:14.225215 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-p259x_kube-system(a5b0b8ae-a381-43cc-8adc-4e3ee01749bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-p259x_kube-system(a5b0b8ae-a381-43cc-8adc-4e3ee01749bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2677a7c0fc100adafa83feac3f9a2444ee240515da926932fe0bd066773b83b2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-p259x" podUID="a5b0b8ae-a381-43cc-8adc-4e3ee01749bd" Aug 13 01:13:14.283529 kubelet[2722]: I0813 01:13:14.282791 2722 kubelet.go:2405] "Pod admission denied" podUID="9c8e7c4c-d856-4e1c-8736-826f2bfccdb0" pod="tigera-operator/tigera-operator-747864d56d-gt44n" reason="Evicted" 
message="The node had condition: [DiskPressure]. " Aug 13 01:13:14.381152 kubelet[2722]: I0813 01:13:14.381084 2722 kubelet.go:2405] "Pod admission denied" podUID="9fc1b5f4-6b33-41fe-96cc-9bfdde2b89d0" pod="tigera-operator/tigera-operator-747864d56d-4gpbq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:14.479328 kubelet[2722]: I0813 01:13:14.479250 2722 kubelet.go:2405] "Pod admission denied" podUID="94db1330-e478-4253-9bb7-d600a293a25d" pod="tigera-operator/tigera-operator-747864d56d-nq8zh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:14.679515 kubelet[2722]: I0813 01:13:14.679468 2722 kubelet.go:2405] "Pod admission denied" podUID="87403ba1-40de-461a-8d3b-9bb0c7393a52" pod="tigera-operator/tigera-operator-747864d56d-qvgdq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:14.787931 kubelet[2722]: I0813 01:13:14.787195 2722 kubelet.go:2405] "Pod admission denied" podUID="e48688f7-1383-4ae2-be00-aec22894f2cb" pod="tigera-operator/tigera-operator-747864d56d-zqq7w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:14.879492 kubelet[2722]: I0813 01:13:14.879433 2722 kubelet.go:2405] "Pod admission denied" podUID="7199ed4c-34f1-4f5b-af65-978670fd3fb3" pod="tigera-operator/tigera-operator-747864d56d-cnxq7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:15.080104 kubelet[2722]: I0813 01:13:15.079816 2722 kubelet.go:2405] "Pod admission denied" podUID="8ddc1f08-32f6-4fdf-add6-eb6b85c5c25a" pod="tigera-operator/tigera-operator-747864d56d-6plvv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:15.193301 kubelet[2722]: I0813 01:13:15.193262 2722 kubelet.go:2405] "Pod admission denied" podUID="af37a280-562d-4857-8928-524a77c40b60" pod="tigera-operator/tigera-operator-747864d56d-9xdxj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:15.284219 kubelet[2722]: I0813 01:13:15.284175 2722 kubelet.go:2405] "Pod admission denied" podUID="af5094ac-8856-48dd-9e83-615a922f0f88" pod="tigera-operator/tigera-operator-747864d56d-rwxz7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:15.377063 kubelet[2722]: I0813 01:13:15.376805 2722 kubelet.go:2405] "Pod admission denied" podUID="9ce5bebc-f697-4a4c-8ccc-bad2795217ae" pod="tigera-operator/tigera-operator-747864d56d-2ppxq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:15.485735 kubelet[2722]: I0813 01:13:15.485683 2722 kubelet.go:2405] "Pod admission denied" podUID="6559ee43-629e-41d2-ba43-4a3cfd398f80" pod="tigera-operator/tigera-operator-747864d56d-47ngw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:15.576259 kubelet[2722]: I0813 01:13:15.576206 2722 kubelet.go:2405] "Pod admission denied" podUID="3f20d7d9-67a3-47fe-bccc-a537228886df" pod="tigera-operator/tigera-operator-747864d56d-f4vgz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:15.623250 kubelet[2722]: I0813 01:13:15.623203 2722 kubelet.go:2405] "Pod admission denied" podUID="28141515-98f8-4976-b014-6b3520f5b8a3" pod="tigera-operator/tigera-operator-747864d56d-42k69" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:15.731730 kubelet[2722]: I0813 01:13:15.731041 2722 kubelet.go:2405] "Pod admission denied" podUID="d2a12a1d-1138-47bd-8813-b7b5f60e073a" pod="tigera-operator/tigera-operator-747864d56d-r98hm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:15.928215 kubelet[2722]: I0813 01:13:15.927948 2722 kubelet.go:2405] "Pod admission denied" podUID="76553bd0-09b3-4fbf-ad45-f68596e8a3c1" pod="tigera-operator/tigera-operator-747864d56d-789t7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:16.029753 kubelet[2722]: I0813 01:13:16.029107 2722 kubelet.go:2405] "Pod admission denied" podUID="5e4642b6-c7ca-4983-9512-a9484f84d3ea" pod="tigera-operator/tigera-operator-747864d56d-9fsr5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:16.084872 kubelet[2722]: I0813 01:13:16.084818 2722 kubelet.go:2405] "Pod admission denied" podUID="41c70fe9-e141-4d18-a44c-1387f5a4aed1" pod="tigera-operator/tigera-operator-747864d56d-hjcqk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:16.177724 kubelet[2722]: I0813 01:13:16.177678 2722 kubelet.go:2405] "Pod admission denied" podUID="d6f76253-4c18-492c-8239-9828c8ff9d33" pod="tigera-operator/tigera-operator-747864d56d-sbrkj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:16.382959 kubelet[2722]: I0813 01:13:16.382709 2722 kubelet.go:2405] "Pod admission denied" podUID="6e1a93ec-59fd-4afd-aa94-af48d51699cc" pod="tigera-operator/tigera-operator-747864d56d-4kqmw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:16.478308 kubelet[2722]: I0813 01:13:16.478256 2722 kubelet.go:2405] "Pod admission denied" podUID="93829a85-d0cd-45b4-b9f8-3bb7c08b1c91" pod="tigera-operator/tigera-operator-747864d56d-vfb7t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:16.522423 kubelet[2722]: I0813 01:13:16.522383 2722 kubelet.go:2405] "Pod admission denied" podUID="43471930-2939-42a7-abdb-820ef51fbf5a" pod="tigera-operator/tigera-operator-747864d56d-z9krm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:16.627776 kubelet[2722]: I0813 01:13:16.627744 2722 kubelet.go:2405] "Pod admission denied" podUID="f7b71f8d-a977-4b1e-8982-c86e3b7f29d2" pod="tigera-operator/tigera-operator-747864d56d-66ncj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:16.723987 kubelet[2722]: I0813 01:13:16.723946 2722 kubelet.go:2405] "Pod admission denied" podUID="f1e0f84c-1a44-4b41-a85b-c593e38844f5" pod="tigera-operator/tigera-operator-747864d56d-p8m2f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:16.832969 kubelet[2722]: I0813 01:13:16.832885 2722 kubelet.go:2405] "Pod admission denied" podUID="c3db7f7a-0bbf-4fcc-bc51-4fd39f6905ed" pod="tigera-operator/tigera-operator-747864d56d-4jk8l" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:17.030967 kubelet[2722]: I0813 01:13:17.030754 2722 kubelet.go:2405] "Pod admission denied" podUID="591fd0c6-b959-48a4-9eda-6b6817c18895" pod="tigera-operator/tigera-operator-747864d56d-cwfg7" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:17.133221 kubelet[2722]: I0813 01:13:17.133164 2722 kubelet.go:2405] "Pod admission denied" podUID="9220e699-645a-431d-9dd5-0de7640b0638" pod="tigera-operator/tigera-operator-747864d56d-7glc9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:17.181915 kubelet[2722]: I0813 01:13:17.181871 2722 kubelet.go:2405] "Pod admission denied" podUID="042c0e1d-5d68-4f21-b09c-7f563aff224c" pod="tigera-operator/tigera-operator-747864d56d-gff2f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:17.274668 kubelet[2722]: I0813 01:13:17.274618 2722 kubelet.go:2405] "Pod admission denied" podUID="bf14ed1b-0fa8-4cd8-89ad-cd8d0d8de5d3" pod="tigera-operator/tigera-operator-747864d56d-5p7tw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:17.377960 kubelet[2722]: I0813 01:13:17.377705 2722 kubelet.go:2405] "Pod admission denied" podUID="46a02008-dc1d-48e5-8097-aed270f73784" pod="tigera-operator/tigera-operator-747864d56d-lvdcm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:17.480754 kubelet[2722]: I0813 01:13:17.480701 2722 kubelet.go:2405] "Pod admission denied" podUID="e040cccd-bd21-4211-8f06-ce7f4a48c677" pod="tigera-operator/tigera-operator-747864d56d-4xmxv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:17.577854 kubelet[2722]: I0813 01:13:17.577812 2722 kubelet.go:2405] "Pod admission denied" podUID="75f31334-9b2e-414e-af5d-fee8df2ecc73" pod="tigera-operator/tigera-operator-747864d56d-k7bc8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:17.677968 kubelet[2722]: I0813 01:13:17.677938 2722 kubelet.go:2405] "Pod admission denied" podUID="c932982d-b988-4329-bc89-216c5e3d899e" pod="tigera-operator/tigera-operator-747864d56d-vvbrw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:17.777689 kubelet[2722]: I0813 01:13:17.777634 2722 kubelet.go:2405] "Pod admission denied" podUID="792a6cc4-f130-440b-8887-b051b47ae70c" pod="tigera-operator/tigera-operator-747864d56d-lnfk6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:17.881676 kubelet[2722]: I0813 01:13:17.881613 2722 kubelet.go:2405] "Pod admission denied" podUID="b0f9e433-73e2-4d6f-98a2-edeaa8e959c6" pod="tigera-operator/tigera-operator-747864d56d-9n8df" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:17.994228 kubelet[2722]: I0813 01:13:17.993254 2722 kubelet.go:2405] "Pod admission denied" podUID="f7f6d1c8-d182-4c75-bd8c-0757aff632fb" pod="tigera-operator/tigera-operator-747864d56d-9596z" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:18.059614 containerd[1550]: time="2025-08-13T01:13:18.058352494Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7lv4,Uid:6b834979-32a4-464b-9898-ef87b1042a9e,Namespace:calico-system,Attempt:0,}" Aug 13 01:13:18.061272 kubelet[2722]: E0813 01:13:18.061223 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3991429278: write /var/lib/containerd/tmpmounts/containerd-mount3991429278/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-hq29b" podUID="3c0f3b86-7d63-44df-843e-763eb95a8b94" Aug 13 01:13:18.103696 kubelet[2722]: I0813 01:13:18.103625 2722 kubelet.go:2405] "Pod admission denied" podUID="1ed7c17d-1fac-45aa-9321-ca608e3e35a2" pod="tigera-operator/tigera-operator-747864d56d-p8hr2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:18.160229 containerd[1550]: time="2025-08-13T01:13:18.160029158Z" level=error msg="Failed to destroy network for sandbox \"c26aa44bd94f5c3677c7904ec2f2698212b1eacbf367c39be1334064fc12640e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:18.165091 containerd[1550]: time="2025-08-13T01:13:18.163382836Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7lv4,Uid:6b834979-32a4-464b-9898-ef87b1042a9e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c26aa44bd94f5c3677c7904ec2f2698212b1eacbf367c39be1334064fc12640e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:18.165309 kubelet[2722]: E0813 01:13:18.165272 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c26aa44bd94f5c3677c7904ec2f2698212b1eacbf367c39be1334064fc12640e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:18.165582 kubelet[2722]: E0813 01:13:18.165330 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c26aa44bd94f5c3677c7904ec2f2698212b1eacbf367c39be1334064fc12640e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:13:18.165582 kubelet[2722]: E0813 01:13:18.165351 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c26aa44bd94f5c3677c7904ec2f2698212b1eacbf367c39be1334064fc12640e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:13:18.165582 kubelet[2722]: E0813 01:13:18.165398 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l7lv4_calico-system(6b834979-32a4-464b-9898-ef87b1042a9e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l7lv4_calico-system(6b834979-32a4-464b-9898-ef87b1042a9e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c26aa44bd94f5c3677c7904ec2f2698212b1eacbf367c39be1334064fc12640e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l7lv4" podUID="6b834979-32a4-464b-9898-ef87b1042a9e" Aug 13 01:13:18.165822 systemd[1]: run-netns-cni\x2ddcb1d762\x2d5bcc\x2d95d8\x2d53cf\x2de7b205a61733.mount: Deactivated successfully. Aug 13 01:13:18.202916 kubelet[2722]: I0813 01:13:18.201418 2722 kubelet.go:2405] "Pod admission denied" podUID="f6ae3e9b-0e18-4828-8710-60740a0f743b" pod="tigera-operator/tigera-operator-747864d56d-9lzzj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:18.285969 kubelet[2722]: I0813 01:13:18.284961 2722 kubelet.go:2405] "Pod admission denied" podUID="3c1ee21b-8934-4ff2-b5a9-cc7e2d285a73" pod="tigera-operator/tigera-operator-747864d56d-lmb5s" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:18.376111 kubelet[2722]: I0813 01:13:18.376065 2722 kubelet.go:2405] "Pod admission denied" podUID="5ff53e04-01f0-48e8-9efb-ec42793ca24c" pod="tigera-operator/tigera-operator-747864d56d-dlsc7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:18.424922 kubelet[2722]: I0813 01:13:18.424876 2722 kubelet.go:2405] "Pod admission denied" podUID="60383d53-beb5-4e83-9a23-21e47111ab82" pod="tigera-operator/tigera-operator-747864d56d-4rkvh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:18.526568 kubelet[2722]: I0813 01:13:18.526507 2722 kubelet.go:2405] "Pod admission denied" podUID="7a30a031-762d-4c77-a304-b876c6ea7dee" pod="tigera-operator/tigera-operator-747864d56d-659b5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:18.629554 kubelet[2722]: I0813 01:13:18.628606 2722 kubelet.go:2405] "Pod admission denied" podUID="31531b71-c121-4cb0-b7b4-ddbee9d33c47" pod="tigera-operator/tigera-operator-747864d56d-6cn8d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:18.726607 kubelet[2722]: I0813 01:13:18.726552 2722 kubelet.go:2405] "Pod admission denied" podUID="21fe6422-dca7-41f5-99ed-7cc789db7b57" pod="tigera-operator/tigera-operator-747864d56d-pgsnh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:18.823365 kubelet[2722]: I0813 01:13:18.823324 2722 kubelet.go:2405] "Pod admission denied" podUID="2260f232-1439-4261-8bf8-e66de6830622" pod="tigera-operator/tigera-operator-747864d56d-d75jw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:18.926036 kubelet[2722]: I0813 01:13:18.925999 2722 kubelet.go:2405] "Pod admission denied" podUID="3a64dae6-aa4e-4e10-8f36-5294d5c7044c" pod="tigera-operator/tigera-operator-747864d56d-2l6hp" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:19.025989 kubelet[2722]: I0813 01:13:19.025940 2722 kubelet.go:2405] "Pod admission denied" podUID="38cf6026-bef8-4a87-bf97-2ec8e8ed52e2" pod="tigera-operator/tigera-operator-747864d56d-vfkc6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:19.124222 kubelet[2722]: I0813 01:13:19.123963 2722 kubelet.go:2405] "Pod admission denied" podUID="1d26e864-8a4d-4af7-95e0-d9e2c627ca2f" pod="tigera-operator/tigera-operator-747864d56d-4n8fg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:19.327652 kubelet[2722]: I0813 01:13:19.327532 2722 kubelet.go:2405] "Pod admission denied" podUID="7037eb40-9c51-480d-b386-7db8d112418a" pod="tigera-operator/tigera-operator-747864d56d-9mv5b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:19.426007 kubelet[2722]: I0813 01:13:19.425955 2722 kubelet.go:2405] "Pod admission denied" podUID="ebfc8d00-42b5-4839-9b4c-c02a0895996d" pod="tigera-operator/tigera-operator-747864d56d-l7mks" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:19.473461 kubelet[2722]: I0813 01:13:19.473421 2722 kubelet.go:2405] "Pod admission denied" podUID="07cc41aa-7089-4d8b-9f05-0cfd44c97a31" pod="tigera-operator/tigera-operator-747864d56d-8pmdg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:19.578496 kubelet[2722]: I0813 01:13:19.577380 2722 kubelet.go:2405] "Pod admission denied" podUID="b7b56f61-eb63-4fc4-b58c-9aba38fb3dae" pod="tigera-operator/tigera-operator-747864d56d-5nzbc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:19.677373 kubelet[2722]: I0813 01:13:19.677332 2722 kubelet.go:2405] "Pod admission denied" podUID="a4093f59-ffd8-4211-aa62-b01b84f2594f" pod="tigera-operator/tigera-operator-747864d56d-75m7n" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:19.679399 kubelet[2722]: I0813 01:13:19.679383 2722 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:13:19.679491 kubelet[2722]: I0813 01:13:19.679482 2722 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:13:19.681207 kubelet[2722]: I0813 01:13:19.681195 2722 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:13:19.694010 kubelet[2722]: I0813 01:13:19.693992 2722 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:13:19.694129 kubelet[2722]: I0813 01:13:19.694116 2722 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-674b8bbfcf-p259x","kube-system/coredns-674b8bbfcf-fgsjn","calico-system/calico-kube-controllers-cddc95b58-6t6z7","calico-system/csi-node-driver-l7lv4","calico-system/calico-node-hq29b","calico-system/calico-typha-55bf5cd98c-8lqpc","kube-system/kube-controller-manager-172-233-214-103","kube-system/kube-proxy-tb5sq","kube-system/kube-apiserver-172-233-214-103","kube-system/kube-scheduler-172-233-214-103"] Aug 13 01:13:19.694280 kubelet[2722]: E0813 01:13:19.694269 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:13:19.694320 kubelet[2722]: E0813 01:13:19.694312 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:13:19.694359 kubelet[2722]: E0813 01:13:19.694352 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:13:19.694399 kubelet[2722]: E0813 01:13:19.694392 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:13:19.694431 kubelet[2722]: E0813 01:13:19.694424 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-hq29b" Aug 13 01:13:19.694466 kubelet[2722]: E0813 01:13:19.694459 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-55bf5cd98c-8lqpc" Aug 13 01:13:19.694506 kubelet[2722]: E0813 01:13:19.694498 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-214-103" Aug 13 01:13:19.694549 kubelet[2722]: E0813 01:13:19.694543 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-tb5sq" Aug 13 01:13:19.694586 kubelet[2722]: E0813 01:13:19.694580 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-214-103" Aug 13 01:13:19.694642 kubelet[2722]: E0813 01:13:19.694616 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-214-103" Aug 13 01:13:19.694642 kubelet[2722]: I0813 01:13:19.694627 2722 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:13:19.723261 kubelet[2722]: I0813 01:13:19.723226 2722 kubelet.go:2405] "Pod admission denied" podUID="200519f7-f47f-4a81-8c1b-d7974f751490" pod="tigera-operator/tigera-operator-747864d56d-bfsnx" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:19.826119 kubelet[2722]: I0813 01:13:19.826069 2722 kubelet.go:2405] "Pod admission denied" podUID="8b02d287-6771-4be6-af68-7c6fd720ec83" pod="tigera-operator/tigera-operator-747864d56d-xn7cv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:19.924926 kubelet[2722]: I0813 01:13:19.924157 2722 kubelet.go:2405] "Pod admission denied" podUID="837b27de-7db9-40a4-86fc-83f1ac4c1465" pod="tigera-operator/tigera-operator-747864d56d-znb7p" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:20.027728 kubelet[2722]: I0813 01:13:20.027677 2722 kubelet.go:2405] "Pod admission denied" podUID="38b01413-f06a-45c1-bade-ae6f6d126b63" pod="tigera-operator/tigera-operator-747864d56d-ttnwb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:20.228591 kubelet[2722]: I0813 01:13:20.228534 2722 kubelet.go:2405] "Pod admission denied" podUID="05bebe19-c6dd-4036-83d2-f37c2497c2fd" pod="tigera-operator/tigera-operator-747864d56d-gdsm5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:20.325434 kubelet[2722]: I0813 01:13:20.325367 2722 kubelet.go:2405] "Pod admission denied" podUID="8119d151-206d-4303-aa0b-b51490b0855e" pod="tigera-operator/tigera-operator-747864d56d-5h577" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:20.435583 kubelet[2722]: I0813 01:13:20.434651 2722 kubelet.go:2405] "Pod admission denied" podUID="13916d1b-f0be-4ab4-8387-30aaac09c2d4" pod="tigera-operator/tigera-operator-747864d56d-bwv4z" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:20.525680 kubelet[2722]: I0813 01:13:20.525549 2722 kubelet.go:2405] "Pod admission denied" podUID="dfe573a3-7a1d-4a33-8245-69038f7e1c57" pod="tigera-operator/tigera-operator-747864d56d-tjbnr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:20.627428 kubelet[2722]: I0813 01:13:20.627373 2722 kubelet.go:2405] "Pod admission denied" podUID="b92852b8-fee8-413c-a2bd-16836e1d443b" pod="tigera-operator/tigera-operator-747864d56d-l5srv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:20.730952 kubelet[2722]: I0813 01:13:20.730872 2722 kubelet.go:2405] "Pod admission denied" podUID="9b9457f3-ece4-47a7-bb35-6e9c94448b8f" pod="tigera-operator/tigera-operator-747864d56d-l4phv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:20.775326 kubelet[2722]: I0813 01:13:20.775285 2722 kubelet.go:2405] "Pod admission denied" podUID="091a0ffb-b61c-4513-aec2-d69c4907b987" pod="tigera-operator/tigera-operator-747864d56d-f9ljv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:20.883859 kubelet[2722]: I0813 01:13:20.882875 2722 kubelet.go:2405] "Pod admission denied" podUID="b92298ad-f8aa-4891-80d5-f3b6bd5aae02" pod="tigera-operator/tigera-operator-747864d56d-8b27k" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:20.977749 kubelet[2722]: I0813 01:13:20.977711 2722 kubelet.go:2405] "Pod admission denied" podUID="fadcd5f9-6adf-405a-addb-6b4cf081bdaa" pod="tigera-operator/tigera-operator-747864d56d-tvlxs" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:21.075578 kubelet[2722]: I0813 01:13:21.075511 2722 kubelet.go:2405] "Pod admission denied" podUID="cc247285-5fb7-4c7f-8a72-741fcc9ae6d2" pod="tigera-operator/tigera-operator-747864d56d-9fpwf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:21.276981 kubelet[2722]: I0813 01:13:21.276936 2722 kubelet.go:2405] "Pod admission denied" podUID="1c1ed83e-b743-4b12-be96-61f2457378d0" pod="tigera-operator/tigera-operator-747864d56d-gwvf4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:21.375338 kubelet[2722]: I0813 01:13:21.375285 2722 kubelet.go:2405] "Pod admission denied" podUID="44672221-78d0-48ab-9f6b-f2388246adc0" pod="tigera-operator/tigera-operator-747864d56d-nzwgv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:21.481915 kubelet[2722]: I0813 01:13:21.481537 2722 kubelet.go:2405] "Pod admission denied" podUID="499b54f0-d590-40b6-8c6e-cd3381182da5" pod="tigera-operator/tigera-operator-747864d56d-6gcvv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:21.573608 kubelet[2722]: I0813 01:13:21.573444 2722 kubelet.go:2405] "Pod admission denied" podUID="47bc4baf-6ce7-449a-923a-4bfae3c6461e" pod="tigera-operator/tigera-operator-747864d56d-vr9rw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:21.674968 kubelet[2722]: I0813 01:13:21.674870 2722 kubelet.go:2405] "Pod admission denied" podUID="bbcb7271-8946-4810-8a0f-e3ba489f6c59" pod="tigera-operator/tigera-operator-747864d56d-bwnrr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:21.776198 kubelet[2722]: I0813 01:13:21.776150 2722 kubelet.go:2405] "Pod admission denied" podUID="aace39d2-675d-43ab-9c18-a56a905c0221" pod="tigera-operator/tigera-operator-747864d56d-2zzgz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:21.877341 kubelet[2722]: I0813 01:13:21.876583 2722 kubelet.go:2405] "Pod admission denied" podUID="6aa190bd-0fd8-4f73-b255-9868a02bcae8" pod="tigera-operator/tigera-operator-747864d56d-8fm2b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:21.979432 kubelet[2722]: I0813 01:13:21.977626 2722 kubelet.go:2405] "Pod admission denied" podUID="219d0863-b54c-4149-9e55-8b7e5b90e1c3" pod="tigera-operator/tigera-operator-747864d56d-j2h52" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:22.057361 kubelet[2722]: E0813 01:13:22.057324 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:13:22.074377 kubelet[2722]: I0813 01:13:22.074344 2722 kubelet.go:2405] "Pod admission denied" podUID="f2e5993b-3d2b-4596-81de-ee12ddcca9c4" pod="tigera-operator/tigera-operator-747864d56d-zx5ws" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:22.175506 kubelet[2722]: I0813 01:13:22.175469 2722 kubelet.go:2405] "Pod admission denied" podUID="c96319b8-44d7-4427-8b98-98ae4efea6a4" pod="tigera-operator/tigera-operator-747864d56d-dkrnp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:22.225171 kubelet[2722]: I0813 01:13:22.225117 2722 kubelet.go:2405] "Pod admission denied" podUID="3048c915-c020-44f1-a429-fa38b37d33b2" pod="tigera-operator/tigera-operator-747864d56d-lj766" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:22.329977 kubelet[2722]: I0813 01:13:22.329932 2722 kubelet.go:2405] "Pod admission denied" podUID="6cd74530-c010-4df0-9aa4-ce091e7d4578" pod="tigera-operator/tigera-operator-747864d56d-gghcp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:22.427623 kubelet[2722]: I0813 01:13:22.426849 2722 kubelet.go:2405] "Pod admission denied" podUID="a253abb3-9253-4d7b-b5d0-c5da0f6805ca" pod="tigera-operator/tigera-operator-747864d56d-6p64m" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:22.530374 kubelet[2722]: I0813 01:13:22.530325 2722 kubelet.go:2405] "Pod admission denied" podUID="289be8ce-59d0-480a-89d9-742f9ad60d91" pod="tigera-operator/tigera-operator-747864d56d-j7j8x" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:22.627764 kubelet[2722]: I0813 01:13:22.627716 2722 kubelet.go:2405] "Pod admission denied" podUID="6db983ab-4e97-44f7-bef0-831ff60f79a0" pod="tigera-operator/tigera-operator-747864d56d-kfmk4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:22.730237 kubelet[2722]: I0813 01:13:22.730116 2722 kubelet.go:2405] "Pod admission denied" podUID="c9989494-0f2a-490b-a31e-0807343e90cd" pod="tigera-operator/tigera-operator-747864d56d-6f2ng" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:22.832051 kubelet[2722]: I0813 01:13:22.831998 2722 kubelet.go:2405] "Pod admission denied" podUID="34d94e5e-67a7-4f41-b481-109f5e8dd8ae" pod="tigera-operator/tigera-operator-747864d56d-djh42" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:22.876516 kubelet[2722]: I0813 01:13:22.876471 2722 kubelet.go:2405] "Pod admission denied" podUID="4bfd7e9f-3a7f-44b7-a190-c85245bbfa34" pod="tigera-operator/tigera-operator-747864d56d-dtz4l" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:22.977310 kubelet[2722]: I0813 01:13:22.977260 2722 kubelet.go:2405] "Pod admission denied" podUID="1ee021c9-cbd2-45d0-b403-a816babad612" pod="tigera-operator/tigera-operator-747864d56d-cggws" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:23.182010 kubelet[2722]: I0813 01:13:23.181949 2722 kubelet.go:2405] "Pod admission denied" podUID="0d570601-303c-43de-8599-b785637b3a5a" pod="tigera-operator/tigera-operator-747864d56d-zcwtw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:23.282764 kubelet[2722]: I0813 01:13:23.282720 2722 kubelet.go:2405] "Pod admission denied" podUID="806b0c7d-f052-4881-acc7-9baf0de1a842" pod="tigera-operator/tigera-operator-747864d56d-dvvnx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:23.376827 kubelet[2722]: I0813 01:13:23.376767 2722 kubelet.go:2405] "Pod admission denied" podUID="c6498745-875f-4707-a693-6f0b6418bb6e" pod="tigera-operator/tigera-operator-747864d56d-6b82r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:23.478982 kubelet[2722]: I0813 01:13:23.477991 2722 kubelet.go:2405] "Pod admission denied" podUID="6f5f2424-6a6e-4581-a98e-8662d9847ea4" pod="tigera-operator/tigera-operator-747864d56d-snmth" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:23.578317 kubelet[2722]: I0813 01:13:23.578269 2722 kubelet.go:2405] "Pod admission denied" podUID="20cfcd81-4953-42df-a628-8a9135d70b89" pod="tigera-operator/tigera-operator-747864d56d-stkfz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:23.675796 kubelet[2722]: I0813 01:13:23.675754 2722 kubelet.go:2405] "Pod admission denied" podUID="76c966c2-cd35-45c3-9ca1-223bb393e530" pod="tigera-operator/tigera-operator-747864d56d-gsjh9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:23.736769 kubelet[2722]: I0813 01:13:23.735077 2722 kubelet.go:2405] "Pod admission denied" podUID="204a2250-2d6f-41a7-9b48-c397c826386a" pod="tigera-operator/tigera-operator-747864d56d-jbqmv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:23.827871 kubelet[2722]: I0813 01:13:23.827829 2722 kubelet.go:2405] "Pod admission denied" podUID="291d919b-ee16-4baa-b4ef-e115bd7edae4" pod="tigera-operator/tigera-operator-747864d56d-9kwkg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:23.925918 kubelet[2722]: I0813 01:13:23.925842 2722 kubelet.go:2405] "Pod admission denied" podUID="2a418618-bdea-4ea0-9bf2-0754a7aa9a61" pod="tigera-operator/tigera-operator-747864d56d-ch5bs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:23.989733 kubelet[2722]: I0813 01:13:23.988162 2722 kubelet.go:2405] "Pod admission denied" podUID="e15428d5-35b2-4182-9319-af59d06955df" pod="tigera-operator/tigera-operator-747864d56d-pdnpd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:24.075626 kubelet[2722]: I0813 01:13:24.075581 2722 kubelet.go:2405] "Pod admission denied" podUID="e21c6fbe-17ba-4a8c-95c5-bebbd4e4e3c4" pod="tigera-operator/tigera-operator-747864d56d-8hhc5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:24.279508 kubelet[2722]: I0813 01:13:24.279333 2722 kubelet.go:2405] "Pod admission denied" podUID="0e4925bf-04f0-4fab-b4b8-737b9d1d672b" pod="tigera-operator/tigera-operator-747864d56d-9rglz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:24.381281 kubelet[2722]: I0813 01:13:24.381229 2722 kubelet.go:2405] "Pod admission denied" podUID="e2074950-f850-4faa-8d51-deb5f8d3c22f" pod="tigera-operator/tigera-operator-747864d56d-zvlxv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:24.478485 kubelet[2722]: I0813 01:13:24.478441 2722 kubelet.go:2405] "Pod admission denied" podUID="6abafbc7-3809-4010-8858-120255e6c88a" pod="tigera-operator/tigera-operator-747864d56d-gh98s" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:24.682936 kubelet[2722]: I0813 01:13:24.682880 2722 kubelet.go:2405] "Pod admission denied" podUID="0c0aa755-027e-480c-b979-08fd2aca6e94" pod="tigera-operator/tigera-operator-747864d56d-2h4zp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:24.776001 kubelet[2722]: I0813 01:13:24.775956 2722 kubelet.go:2405] "Pod admission denied" podUID="6af8b587-0e2a-49b4-a65e-1726d90f9976" pod="tigera-operator/tigera-operator-747864d56d-9ttbm" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:24.880980 kubelet[2722]: I0813 01:13:24.880943 2722 kubelet.go:2405] "Pod admission denied" podUID="630324b4-62a6-4227-b0bf-e43e25b4314d" pod="tigera-operator/tigera-operator-747864d56d-j9m6d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:24.977220 kubelet[2722]: I0813 01:13:24.976776 2722 kubelet.go:2405] "Pod admission denied" podUID="a29e7886-2a20-4f35-b142-9f199ea09280" pod="tigera-operator/tigera-operator-747864d56d-tzdxh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:25.024856 kubelet[2722]: I0813 01:13:25.024808 2722 kubelet.go:2405] "Pod admission denied" podUID="593a10a9-61fe-4dc5-8549-449ee2974603" pod="tigera-operator/tigera-operator-747864d56d-8klm7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:25.057834 kubelet[2722]: E0813 01:13:25.057814 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:13:25.058395 containerd[1550]: time="2025-08-13T01:13:25.058345249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fgsjn,Uid:27718112-1bb9-402a-89c8-f4890dedf664,Namespace:kube-system,Attempt:0,}" Aug 13 01:13:25.102324 containerd[1550]: time="2025-08-13T01:13:25.102275166Z" level=error msg="Failed to destroy network for sandbox \"04c64003fa0914727cdb60a579204fdf5485f9880ae2b0c96111486fe474c4f5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:25.104126 containerd[1550]: time="2025-08-13T01:13:25.104048520Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fgsjn,Uid:27718112-1bb9-402a-89c8-f4890dedf664,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"04c64003fa0914727cdb60a579204fdf5485f9880ae2b0c96111486fe474c4f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:25.105055 systemd[1]: run-netns-cni\x2dc42f51eb\x2d3076\x2de318\x2dc4e7\x2d0d5bbd82a709.mount: Deactivated successfully. 
Aug 13 01:13:25.106242 kubelet[2722]: E0813 01:13:25.106207 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04c64003fa0914727cdb60a579204fdf5485f9880ae2b0c96111486fe474c4f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:25.106390 kubelet[2722]: E0813 01:13:25.106351 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04c64003fa0914727cdb60a579204fdf5485f9880ae2b0c96111486fe474c4f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:13:25.106452 kubelet[2722]: E0813 01:13:25.106436 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"04c64003fa0914727cdb60a579204fdf5485f9880ae2b0c96111486fe474c4f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:13:25.106630 kubelet[2722]: E0813 01:13:25.106587 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fgsjn_kube-system(27718112-1bb9-402a-89c8-f4890dedf664)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fgsjn_kube-system(27718112-1bb9-402a-89c8-f4890dedf664)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"04c64003fa0914727cdb60a579204fdf5485f9880ae2b0c96111486fe474c4f5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fgsjn" podUID="27718112-1bb9-402a-89c8-f4890dedf664" Aug 13 01:13:25.131957 kubelet[2722]: I0813 01:13:25.131838 2722 kubelet.go:2405] "Pod admission denied" podUID="7cd79604-695e-494f-83ab-a754d07315dc" pod="tigera-operator/tigera-operator-747864d56d-g99np" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:25.225436 kubelet[2722]: I0813 01:13:25.225376 2722 kubelet.go:2405] "Pod admission denied" podUID="aabeb534-6a89-4a70-bcba-d5f4d541a7ff" pod="tigera-operator/tigera-operator-747864d56d-djm9q" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:25.328445 kubelet[2722]: I0813 01:13:25.328067 2722 kubelet.go:2405] "Pod admission denied" podUID="885c6c35-8f88-4865-9c26-b934a75d3469" pod="tigera-operator/tigera-operator-747864d56d-vmmcg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:25.434597 kubelet[2722]: I0813 01:13:25.434541 2722 kubelet.go:2405] "Pod admission denied" podUID="4ecc78e3-d3e0-48ec-a975-15a1693aaf14" pod="tigera-operator/tigera-operator-747864d56d-z5w6g" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:25.477241 kubelet[2722]: I0813 01:13:25.477208 2722 kubelet.go:2405] "Pod admission denied" podUID="b400adc5-a214-432f-96c7-bb911db20eed" pod="tigera-operator/tigera-operator-747864d56d-6v4xf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:25.587992 kubelet[2722]: I0813 01:13:25.587305 2722 kubelet.go:2405] "Pod admission denied" podUID="fb9cfb10-332a-4b04-809d-4af3af4718cc" pod="tigera-operator/tigera-operator-747864d56d-56v6p" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:25.678962 kubelet[2722]: I0813 01:13:25.678916 2722 kubelet.go:2405] "Pod admission denied" podUID="7c958c57-1a49-4c63-b75c-6880cd33a436" pod="tigera-operator/tigera-operator-747864d56d-8fkhr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:25.782085 kubelet[2722]: I0813 01:13:25.781839 2722 kubelet.go:2405] "Pod admission denied" podUID="288afc4c-44e2-4987-9164-5c261083045f" pod="tigera-operator/tigera-operator-747864d56d-b9n95" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:25.978620 kubelet[2722]: I0813 01:13:25.978581 2722 kubelet.go:2405] "Pod admission denied" podUID="84d11401-2194-44ad-a50e-f8a069b0b71d" pod="tigera-operator/tigera-operator-747864d56d-nfk76" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:26.076583 kubelet[2722]: I0813 01:13:26.076535 2722 kubelet.go:2405] "Pod admission denied" podUID="711347d0-e5f9-4551-8bdf-6be95bc95b24" pod="tigera-operator/tigera-operator-747864d56d-cbxwp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:26.185695 kubelet[2722]: I0813 01:13:26.184456 2722 kubelet.go:2405] "Pod admission denied" podUID="3eb6f394-6d1e-405c-9789-ca4e48f1779c" pod="tigera-operator/tigera-operator-747864d56d-nt85f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:26.291336 kubelet[2722]: I0813 01:13:26.291078 2722 kubelet.go:2405] "Pod admission denied" podUID="986e7d27-7671-4122-a385-10b994689571" pod="tigera-operator/tigera-operator-747864d56d-sk2bs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:26.376084 kubelet[2722]: I0813 01:13:26.376042 2722 kubelet.go:2405] "Pod admission denied" podUID="16d10d1f-a42d-45f9-b399-ccd060949bea" pod="tigera-operator/tigera-operator-747864d56d-6k9d2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:26.582649 kubelet[2722]: I0813 01:13:26.582226 2722 kubelet.go:2405] "Pod admission denied" podUID="9cdcdd58-ff4c-483e-a239-f10c39e2dea5" pod="tigera-operator/tigera-operator-747864d56d-wdpkq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:26.682252 kubelet[2722]: I0813 01:13:26.682195 2722 kubelet.go:2405] "Pod admission denied" podUID="7258156d-722d-4501-bc2e-60d2ad77d4c7" pod="tigera-operator/tigera-operator-747864d56d-p9q46" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:26.786923 kubelet[2722]: I0813 01:13:26.786841 2722 kubelet.go:2405] "Pod admission denied" podUID="88008029-e251-4261-b570-5f46f836890a" pod="tigera-operator/tigera-operator-747864d56d-fh72b" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:26.876021 kubelet[2722]: I0813 01:13:26.875633 2722 kubelet.go:2405] "Pod admission denied" podUID="ba81de2d-fd67-43bb-8d30-d2724b852f17" pod="tigera-operator/tigera-operator-747864d56d-bdwpj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:26.978328 kubelet[2722]: I0813 01:13:26.978255 2722 kubelet.go:2405] "Pod admission denied" podUID="19cb67e9-8c71-4c02-842d-098a0fd73e2a" pod="tigera-operator/tigera-operator-747864d56d-vqp7b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:27.057639 kubelet[2722]: E0813 01:13:27.057608 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:13:27.178563 kubelet[2722]: I0813 01:13:27.178516 2722 kubelet.go:2405] "Pod admission denied" podUID="07979a93-3562-44e3-85a7-db677d4dffea" pod="tigera-operator/tigera-operator-747864d56d-mcgst" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:27.281442 kubelet[2722]: I0813 01:13:27.281380 2722 kubelet.go:2405] "Pod admission denied" podUID="a572a608-786e-4d50-bdd7-e4c7a088a7f9" pod="tigera-operator/tigera-operator-747864d56d-7qcgb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:27.396721 kubelet[2722]: I0813 01:13:27.396650 2722 kubelet.go:2405] "Pod admission denied" podUID="4e4f4b15-b3a6-4f6a-b93d-7bc050b4b00b" pod="tigera-operator/tigera-operator-747864d56d-8kxrz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:27.583202 kubelet[2722]: I0813 01:13:27.582777 2722 kubelet.go:2405] "Pod admission denied" podUID="792ee439-a295-491c-a7f4-428146ca8a87" pod="tigera-operator/tigera-operator-747864d56d-cw4tt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:27.677986 kubelet[2722]: I0813 01:13:27.677920 2722 kubelet.go:2405] "Pod admission denied" podUID="bbcdb056-0f32-4a45-b2d1-efa1a7eaeb40" pod="tigera-operator/tigera-operator-747864d56d-rfdph" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:27.783474 kubelet[2722]: I0813 01:13:27.783398 2722 kubelet.go:2405] "Pod admission denied" podUID="cc182a8f-94ad-4a76-aa32-c8e72e1a78e8" pod="tigera-operator/tigera-operator-747864d56d-25xj9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:27.986482 kubelet[2722]: I0813 01:13:27.986397 2722 kubelet.go:2405] "Pod admission denied" podUID="56710b5c-b163-4573-82ae-3975d29f4b6d" pod="tigera-operator/tigera-operator-747864d56d-gtv89" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:28.058606 kubelet[2722]: E0813 01:13:28.058150 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:13:28.060560 containerd[1550]: time="2025-08-13T01:13:28.060498338Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cddc95b58-6t6z7,Uid:2dab385f-2367-4e01-8d78-2247bcba7bcc,Namespace:calico-system,Attempt:0,}" Aug 13 01:13:28.151130 kubelet[2722]: I0813 01:13:28.151036 2722 kubelet.go:2405] "Pod admission denied" podUID="6a424d31-488e-4f61-8508-431225436b9d" pod="tigera-operator/tigera-operator-747864d56d-wnts9" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:28.155354 containerd[1550]: time="2025-08-13T01:13:28.155314228Z" level=error msg="Failed to destroy network for sandbox \"a95b7de8d342986dcb0357d98d3526efe74c5c302d7af2fc7d84b9311c12a327\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:28.159599 systemd[1]: run-netns-cni\x2df20ffc05\x2d1bf9\x2dee7b\x2d04c3\x2d99e5c8b479f5.mount: Deactivated successfully. Aug 13 01:13:28.163148 containerd[1550]: time="2025-08-13T01:13:28.162870853Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cddc95b58-6t6z7,Uid:2dab385f-2367-4e01-8d78-2247bcba7bcc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a95b7de8d342986dcb0357d98d3526efe74c5c302d7af2fc7d84b9311c12a327\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:28.163617 kubelet[2722]: E0813 01:13:28.163554 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a95b7de8d342986dcb0357d98d3526efe74c5c302d7af2fc7d84b9311c12a327\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:28.163666 kubelet[2722]: E0813 01:13:28.163634 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a95b7de8d342986dcb0357d98d3526efe74c5c302d7af2fc7d84b9311c12a327\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:13:28.163666 kubelet[2722]: E0813 01:13:28.163662 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a95b7de8d342986dcb0357d98d3526efe74c5c302d7af2fc7d84b9311c12a327\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:13:28.167034 kubelet[2722]: E0813 01:13:28.166785 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-cddc95b58-6t6z7_calico-system(2dab385f-2367-4e01-8d78-2247bcba7bcc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-cddc95b58-6t6z7_calico-system(2dab385f-2367-4e01-8d78-2247bcba7bcc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a95b7de8d342986dcb0357d98d3526efe74c5c302d7af2fc7d84b9311c12a327\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" podUID="2dab385f-2367-4e01-8d78-2247bcba7bcc" Aug 13 01:13:28.237000 kubelet[2722]: I0813 01:13:28.236834 2722 kubelet.go:2405] "Pod admission denied" 
podUID="771b629c-b1e0-4644-a2b6-6190ccf552e7" pod="tigera-operator/tigera-operator-747864d56d-2nh86" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:28.336311 kubelet[2722]: I0813 01:13:28.336242 2722 kubelet.go:2405] "Pod admission denied" podUID="8d429c27-0c46-48f8-beee-6ba2adf22aa1" pod="tigera-operator/tigera-operator-747864d56d-ksqrc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:28.437686 kubelet[2722]: I0813 01:13:28.437615 2722 kubelet.go:2405] "Pod admission denied" podUID="85bf1ab6-c731-447d-bbfa-52d5cf8c0ff4" pod="tigera-operator/tigera-operator-747864d56d-qs72f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:28.537027 kubelet[2722]: I0813 01:13:28.536849 2722 kubelet.go:2405] "Pod admission denied" podUID="0d52d272-5bfb-4f48-b155-a219e4469e83" pod="tigera-operator/tigera-operator-747864d56d-c9htq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:28.648630 kubelet[2722]: I0813 01:13:28.648561 2722 kubelet.go:2405] "Pod admission denied" podUID="28b6d72e-6794-4168-9ca8-1472b21aab9c" pod="tigera-operator/tigera-operator-747864d56d-b6g45" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:28.738734 kubelet[2722]: I0813 01:13:28.738666 2722 kubelet.go:2405] "Pod admission denied" podUID="119f2e73-b995-4467-8910-9e87cb2b102f" pod="tigera-operator/tigera-operator-747864d56d-pdzx9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:28.935246 kubelet[2722]: I0813 01:13:28.935173 2722 kubelet.go:2405] "Pod admission denied" podUID="f29e364d-3f5f-4bee-984e-cd1106fa2c97" pod="tigera-operator/tigera-operator-747864d56d-5bbjl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:29.033446 kubelet[2722]: I0813 01:13:29.033370 2722 kubelet.go:2405] "Pod admission denied" podUID="28eed804-db40-4c2e-bd12-8de962c536e2" pod="tigera-operator/tigera-operator-747864d56d-c5nn9" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:29.057534 kubelet[2722]: E0813 01:13:29.057342 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:13:29.058198 containerd[1550]: time="2025-08-13T01:13:29.058169826Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p259x,Uid:a5b0b8ae-a381-43cc-8adc-4e3ee01749bd,Namespace:kube-system,Attempt:0,}" Aug 13 01:13:29.124798 containerd[1550]: time="2025-08-13T01:13:29.124649851Z" level=error msg="Failed to destroy network for sandbox \"0a7bb09c05807eb04de69dc50422215f58864d0f6703dd3dc7377da8cf55be52\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:29.128941 containerd[1550]: time="2025-08-13T01:13:29.128015116Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p259x,Uid:a5b0b8ae-a381-43cc-8adc-4e3ee01749bd,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a7bb09c05807eb04de69dc50422215f58864d0f6703dd3dc7377da8cf55be52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:29.129975 kubelet[2722]: E0813 01:13:29.128457 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a7bb09c05807eb04de69dc50422215f58864d0f6703dd3dc7377da8cf55be52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:29.130051 kubelet[2722]: E0813 01:13:29.129974 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a7bb09c05807eb04de69dc50422215f58864d0f6703dd3dc7377da8cf55be52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:13:29.130051 kubelet[2722]: E0813 01:13:29.130008 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a7bb09c05807eb04de69dc50422215f58864d0f6703dd3dc7377da8cf55be52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:13:29.130094 kubelet[2722]: E0813 01:13:29.130072 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-p259x_kube-system(a5b0b8ae-a381-43cc-8adc-4e3ee01749bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-p259x_kube-system(a5b0b8ae-a381-43cc-8adc-4e3ee01749bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a7bb09c05807eb04de69dc50422215f58864d0f6703dd3dc7377da8cf55be52\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-p259x" podUID="a5b0b8ae-a381-43cc-8adc-4e3ee01749bd" Aug 13 01:13:29.130157 systemd[1]: run-netns-cni\x2d7d4e4e2c\x2d8d10\x2d0fc3\x2d37a9\x2d72fa4096998d.mount: Deactivated successfully. Aug 13 01:13:29.141615 kubelet[2722]: I0813 01:13:29.141566 2722 kubelet.go:2405] "Pod admission denied" podUID="8d904d56-1b13-45c7-82c8-77034b883400" pod="tigera-operator/tigera-operator-747864d56d-bzllv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:29.351678 kubelet[2722]: I0813 01:13:29.350446 2722 kubelet.go:2405] "Pod admission denied" podUID="bd9ab75a-4401-41cd-9c9e-b8070b582707" pod="tigera-operator/tigera-operator-747864d56d-8p4ck" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:29.428179 kubelet[2722]: I0813 01:13:29.428146 2722 kubelet.go:2405] "Pod admission denied" podUID="3912b1b6-493a-4bc2-b194-01781d9e2575" pod="tigera-operator/tigera-operator-747864d56d-fs5lt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:29.484854 kubelet[2722]: I0813 01:13:29.484796 2722 kubelet.go:2405] "Pod admission denied" podUID="30056db7-2e89-4827-bec9-e4fda64e6619" pod="tigera-operator/tigera-operator-747864d56d-rhq7n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:29.587882 kubelet[2722]: I0813 01:13:29.587762 2722 kubelet.go:2405] "Pod admission denied" podUID="dd46ab45-ad7b-49ea-b32f-03210ba3c8dd" pod="tigera-operator/tigera-operator-747864d56d-7w6rm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:29.711910 kubelet[2722]: I0813 01:13:29.711857 2722 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:13:29.712321 kubelet[2722]: I0813 01:13:29.712040 2722 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:13:29.715308 kubelet[2722]: I0813 01:13:29.715290 2722 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:13:29.727516 kubelet[2722]: I0813 01:13:29.727463 2722 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:13:29.727516 kubelet[2722]: I0813 01:13:29.727531 2722 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-674b8bbfcf-fgsjn","calico-system/calico-kube-controllers-cddc95b58-6t6z7","kube-system/coredns-674b8bbfcf-p259x","calico-system/calico-node-hq29b","calico-system/csi-node-driver-l7lv4","calico-system/calico-typha-55bf5cd98c-8lqpc","kube-system/kube-controller-manager-172-233-214-103","kube-system/kube-proxy-tb5sq","kube-system/kube-apiserver-172-233-214-103","kube-system/kube-scheduler-172-233-214-103"] Aug 13 01:13:29.727775 kubelet[2722]: E0813 01:13:29.727564 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:13:29.727775 kubelet[2722]: E0813 01:13:29.727575 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:13:29.727775 kubelet[2722]: E0813 01:13:29.727583 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:13:29.727775 kubelet[2722]: E0813 01:13:29.727590 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-hq29b" Aug 13 01:13:29.727775 
kubelet[2722]: E0813 01:13:29.727596 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:13:29.727775 kubelet[2722]: E0813 01:13:29.727605 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-55bf5cd98c-8lqpc" Aug 13 01:13:29.727775 kubelet[2722]: E0813 01:13:29.727613 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-214-103" Aug 13 01:13:29.727775 kubelet[2722]: E0813 01:13:29.727623 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-tb5sq" Aug 13 01:13:29.727775 kubelet[2722]: E0813 01:13:29.727631 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-214-103" Aug 13 01:13:29.727775 kubelet[2722]: E0813 01:13:29.727639 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-214-103" Aug 13 01:13:29.727775 kubelet[2722]: I0813 01:13:29.727648 2722 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:13:29.786963 kubelet[2722]: I0813 01:13:29.786909 2722 kubelet.go:2405] "Pod admission denied" podUID="93de0332-79a3-456a-b9d5-91f66c8938cb" pod="tigera-operator/tigera-operator-747864d56d-758br" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:29.888527 kubelet[2722]: I0813 01:13:29.888453 2722 kubelet.go:2405] "Pod admission denied" podUID="278e1c9a-fd33-4002-a240-de655818107e" pod="tigera-operator/tigera-operator-747864d56d-r8t6l" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:29.985056 kubelet[2722]: I0813 01:13:29.984844 2722 kubelet.go:2405] "Pod admission denied" podUID="5598d371-43c2-45dc-b6ec-ff4026c4919d" pod="tigera-operator/tigera-operator-747864d56d-hflz6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:30.063958 kubelet[2722]: E0813 01:13:30.063864 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3991429278: write /var/lib/containerd/tmpmounts/containerd-mount3991429278/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-hq29b" podUID="3c0f3b86-7d63-44df-843e-763eb95a8b94" Aug 13 01:13:30.184716 kubelet[2722]: I0813 01:13:30.184650 2722 kubelet.go:2405] "Pod admission denied" podUID="b8d9970a-e82c-40c3-8ea9-6d44a589ae36" pod="tigera-operator/tigera-operator-747864d56d-jqp5z" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:30.283034 kubelet[2722]: I0813 01:13:30.282862 2722 kubelet.go:2405] "Pod admission denied" podUID="4274b2b9-c215-4f9d-835f-f91b7a743400" pod="tigera-operator/tigera-operator-747864d56d-587tl" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:30.403139 kubelet[2722]: I0813 01:13:30.402768 2722 kubelet.go:2405] "Pod admission denied" podUID="41e30e87-800f-429d-82f8-a9b34f6e249a" pod="tigera-operator/tigera-operator-747864d56d-twtk5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:30.487736 kubelet[2722]: I0813 01:13:30.487662 2722 kubelet.go:2405] "Pod admission denied" podUID="41eab71d-cf24-4321-9cbb-91e7850322c6" pod="tigera-operator/tigera-operator-747864d56d-wjxgp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:30.584608 kubelet[2722]: I0813 01:13:30.584435 2722 kubelet.go:2405] "Pod admission denied" podUID="8bf9a3bc-635e-4ea8-af93-21e241161515" pod="tigera-operator/tigera-operator-747864d56d-7lh8x" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:30.686792 kubelet[2722]: I0813 01:13:30.686726 2722 kubelet.go:2405] "Pod admission denied" podUID="adb906ad-aa7f-4d7b-a1b2-f357d55a1406" pod="tigera-operator/tigera-operator-747864d56d-96dhb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:30.784363 kubelet[2722]: I0813 01:13:30.784293 2722 kubelet.go:2405] "Pod admission denied" podUID="fd21835f-dbd9-43df-8195-58095096833a" pod="tigera-operator/tigera-operator-747864d56d-cl8cc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:30.902613 kubelet[2722]: I0813 01:13:30.901978 2722 kubelet.go:2405] "Pod admission denied" podUID="edd5e9e9-ab26-4385-842a-d774fe3bbece" pod="tigera-operator/tigera-operator-747864d56d-pw2gb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:30.985913 kubelet[2722]: I0813 01:13:30.985821 2722 kubelet.go:2405] "Pod admission denied" podUID="3bf2bf41-3e0f-49bd-88e7-6ff86f7a4c1a" pod="tigera-operator/tigera-operator-747864d56d-gj5j6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:31.187555 kubelet[2722]: I0813 01:13:31.187502 2722 kubelet.go:2405] "Pod admission denied" podUID="cf85de8e-fd0c-4f75-9109-36b333e3d136" pod="tigera-operator/tigera-operator-747864d56d-szfhc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:31.284697 kubelet[2722]: I0813 01:13:31.284612 2722 kubelet.go:2405] "Pod admission denied" podUID="9bbab536-c07c-4f99-91fe-307e748a504c" pod="tigera-operator/tigera-operator-747864d56d-jhhwq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:31.390270 kubelet[2722]: I0813 01:13:31.390180 2722 kubelet.go:2405] "Pod admission denied" podUID="30937c4f-a89e-4c40-8b04-a01967efb8bb" pod="tigera-operator/tigera-operator-747864d56d-vl5gk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:31.493224 kubelet[2722]: I0813 01:13:31.492851 2722 kubelet.go:2405] "Pod admission denied" podUID="d267dc62-03c3-4581-bfbf-b4ad1564f9de" pod="tigera-operator/tigera-operator-747864d56d-j5b4r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:31.583200 kubelet[2722]: I0813 01:13:31.583137 2722 kubelet.go:2405] "Pod admission denied" podUID="4db082e1-a2ba-4dc1-9c4b-263ae5ba5184" pod="tigera-operator/tigera-operator-747864d56d-hmzx9" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:31.784334 kubelet[2722]: I0813 01:13:31.783615 2722 kubelet.go:2405] "Pod admission denied" podUID="bc2476b9-48a1-4ae8-8965-0749b97b8cfe" pod="tigera-operator/tigera-operator-747864d56d-w4mnx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:31.882099 kubelet[2722]: I0813 01:13:31.882004 2722 kubelet.go:2405] "Pod admission denied" podUID="9f8aac95-7fa5-45ca-9d38-250a72a3d62c" pod="tigera-operator/tigera-operator-747864d56d-r8dh4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:31.933192 kubelet[2722]: I0813 01:13:31.933120 2722 kubelet.go:2405] "Pod admission denied" podUID="b7c172b9-ada1-4911-b9ad-6a0d8aa611df" pod="tigera-operator/tigera-operator-747864d56d-bppvp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:32.040579 kubelet[2722]: I0813 01:13:32.040397 2722 kubelet.go:2405] "Pod admission denied" podUID="f84427b5-1f57-4bb8-8dd0-321d92e1fa2e" pod="tigera-operator/tigera-operator-747864d56d-v9n8s" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:32.060813 kubelet[2722]: E0813 01:13:32.060770 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:13:32.063170 containerd[1550]: time="2025-08-13T01:13:32.062956173Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7lv4,Uid:6b834979-32a4-464b-9898-ef87b1042a9e,Namespace:calico-system,Attempt:0,}" Aug 13 01:13:32.130012 containerd[1550]: time="2025-08-13T01:13:32.129916434Z" level=error msg="Failed to destroy network for sandbox \"59b2f48a04e7a70ae6b4062011f1a50ae50e35a2e19b2f172fe3c4ac2b0dee45\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:32.132311 systemd[1]: run-netns-cni\x2d855115fa\x2d5a76\x2d7756\x2d8289\x2d372ea699336a.mount: Deactivated successfully. 
Aug 13 01:13:32.133085 containerd[1550]: time="2025-08-13T01:13:32.133032620Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7lv4,Uid:6b834979-32a4-464b-9898-ef87b1042a9e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"59b2f48a04e7a70ae6b4062011f1a50ae50e35a2e19b2f172fe3c4ac2b0dee45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:32.133620 kubelet[2722]: E0813 01:13:32.133422 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59b2f48a04e7a70ae6b4062011f1a50ae50e35a2e19b2f172fe3c4ac2b0dee45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:32.133620 kubelet[2722]: E0813 01:13:32.133514 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59b2f48a04e7a70ae6b4062011f1a50ae50e35a2e19b2f172fe3c4ac2b0dee45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:13:32.133620 kubelet[2722]: E0813 01:13:32.133545 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"59b2f48a04e7a70ae6b4062011f1a50ae50e35a2e19b2f172fe3c4ac2b0dee45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:13:32.134295 kubelet[2722]: E0813 01:13:32.133622 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l7lv4_calico-system(6b834979-32a4-464b-9898-ef87b1042a9e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l7lv4_calico-system(6b834979-32a4-464b-9898-ef87b1042a9e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"59b2f48a04e7a70ae6b4062011f1a50ae50e35a2e19b2f172fe3c4ac2b0dee45\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l7lv4" podUID="6b834979-32a4-464b-9898-ef87b1042a9e" Aug 13 01:13:32.235844 kubelet[2722]: I0813 01:13:32.235768 2722 kubelet.go:2405] "Pod admission denied" podUID="1e005a1f-85b9-4a45-9f71-dc83401ad8a4" pod="tigera-operator/tigera-operator-747864d56d-xsp6q" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:32.339197 kubelet[2722]: I0813 01:13:32.334821 2722 kubelet.go:2405] "Pod admission denied" podUID="5997b70f-ab18-4a15-be0d-58f401c3e1cd" pod="tigera-operator/tigera-operator-747864d56d-zw5fh" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:32.400994 kubelet[2722]: I0813 01:13:32.400931 2722 kubelet.go:2405] "Pod admission denied" podUID="1468bf55-80de-447d-882d-2538e222bf44" pod="tigera-operator/tigera-operator-747864d56d-nn4ls" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:32.486324 kubelet[2722]: I0813 01:13:32.486261 2722 kubelet.go:2405] "Pod admission denied" podUID="c73e0a35-67a8-4fac-b685-f28daf0db37f" pod="tigera-operator/tigera-operator-747864d56d-jl6p6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:32.579464 kubelet[2722]: I0813 01:13:32.579414 2722 kubelet.go:2405] "Pod admission denied" podUID="3dadc688-36db-426f-9ad0-9a88df188421" pod="tigera-operator/tigera-operator-747864d56d-dgqbm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:32.687863 kubelet[2722]: I0813 01:13:32.687811 2722 kubelet.go:2405] "Pod admission denied" podUID="68661e58-7714-4fa1-86c6-84dc2e78684f" pod="tigera-operator/tigera-operator-747864d56d-6nmvs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:32.887392 kubelet[2722]: I0813 01:13:32.887329 2722 kubelet.go:2405] "Pod admission denied" podUID="717ddb54-7568-4104-b10e-f0dfc9b53126" pod="tigera-operator/tigera-operator-747864d56d-nwbsc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:32.997133 kubelet[2722]: I0813 01:13:32.994460 2722 kubelet.go:2405] "Pod admission denied" podUID="cadaaef8-a862-40a2-aa3d-1014f8f1df46" pod="tigera-operator/tigera-operator-747864d56d-pvqzh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:33.083445 kubelet[2722]: I0813 01:13:33.083381 2722 kubelet.go:2405] "Pod admission denied" podUID="999b5d3a-23db-477f-99ea-fd2725e25c51" pod="tigera-operator/tigera-operator-747864d56d-glmkl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:33.184346 kubelet[2722]: I0813 01:13:33.184284 2722 kubelet.go:2405] "Pod admission denied" podUID="26fa593d-6de1-40fd-9a03-98e563dd2e4a" pod="tigera-operator/tigera-operator-747864d56d-577s5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:33.243474 kubelet[2722]: I0813 01:13:33.243387 2722 kubelet.go:2405] "Pod admission denied" podUID="3f70fc02-d254-4b67-9615-b076f69e186c" pod="tigera-operator/tigera-operator-747864d56d-2bhg9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:33.337198 kubelet[2722]: I0813 01:13:33.336602 2722 kubelet.go:2405] "Pod admission denied" podUID="e93e6c94-6b06-4c17-8ac2-48b8b6f05658" pod="tigera-operator/tigera-operator-747864d56d-mtmkm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:33.541099 kubelet[2722]: I0813 01:13:33.541019 2722 kubelet.go:2405] "Pod admission denied" podUID="bef89050-380d-46fc-92f4-fcaced562ca2" pod="tigera-operator/tigera-operator-747864d56d-bdcgk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:33.633600 kubelet[2722]: I0813 01:13:33.633125 2722 kubelet.go:2405] "Pod admission denied" podUID="ebe1fe40-368f-4f96-bd46-a65bed0385cd" pod="tigera-operator/tigera-operator-747864d56d-55mw4" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:33.732845 kubelet[2722]: I0813 01:13:33.732780 2722 kubelet.go:2405] "Pod admission denied" podUID="7d589140-a664-4346-9703-0e5e6b6f760c" pod="tigera-operator/tigera-operator-747864d56d-rdgkd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:33.831613 kubelet[2722]: I0813 01:13:33.831552 2722 kubelet.go:2405] "Pod admission denied" podUID="be3d4f5d-1f69-4958-bcc8-bc3dd923312a" pod="tigera-operator/tigera-operator-747864d56d-fcnz9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:33.892002 kubelet[2722]: I0813 01:13:33.891358 2722 kubelet.go:2405] "Pod admission denied" podUID="0e11f120-982b-41a7-bf56-ee65adec5ac7" pod="tigera-operator/tigera-operator-747864d56d-mnfqx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:33.997921 kubelet[2722]: I0813 01:13:33.997347 2722 kubelet.go:2405] "Pod admission denied" podUID="cf81df31-fab9-4b84-8e3e-e9383f6c4ee8" pod="tigera-operator/tigera-operator-747864d56d-8w65v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:34.089666 kubelet[2722]: I0813 01:13:34.089582 2722 kubelet.go:2405] "Pod admission denied" podUID="2d0d7387-a69a-4a77-91c3-7a895fdfddbb" pod="tigera-operator/tigera-operator-747864d56d-7c8h7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:34.186930 kubelet[2722]: I0813 01:13:34.186572 2722 kubelet.go:2405] "Pod admission denied" podUID="8ed22a0f-8b1f-411c-bf2d-7184adc7f38c" pod="tigera-operator/tigera-operator-747864d56d-shfbp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:34.390281 kubelet[2722]: I0813 01:13:34.389767 2722 kubelet.go:2405] "Pod admission denied" podUID="94a91b8a-0cca-48a2-8a91-686d6ca01063" pod="tigera-operator/tigera-operator-747864d56d-4r874" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:34.484674 kubelet[2722]: I0813 01:13:34.484496 2722 kubelet.go:2405] "Pod admission denied" podUID="c869c360-102c-47cd-9cb3-19ae46054fcf" pod="tigera-operator/tigera-operator-747864d56d-4hmkp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:34.595928 kubelet[2722]: I0813 01:13:34.594594 2722 kubelet.go:2405] "Pod admission denied" podUID="488d68e7-53a0-41f2-aa04-c2208c241308" pod="tigera-operator/tigera-operator-747864d56d-4fl65" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:34.684610 kubelet[2722]: I0813 01:13:34.684538 2722 kubelet.go:2405] "Pod admission denied" podUID="939d8203-dbce-4725-927f-05ea4828da9c" pod="tigera-operator/tigera-operator-747864d56d-54kn2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:34.794923 kubelet[2722]: I0813 01:13:34.793318 2722 kubelet.go:2405] "Pod admission denied" podUID="5e9a895c-e9d8-4edb-a932-78413f4198a8" pod="tigera-operator/tigera-operator-747864d56d-7vp4h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:35.038331 kubelet[2722]: I0813 01:13:35.038273 2722 kubelet.go:2405] "Pod admission denied" podUID="3185d812-3cc1-43c2-a767-cda839a0e088" pod="tigera-operator/tigera-operator-747864d56d-kxwzk" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:35.139043 kubelet[2722]: I0813 01:13:35.138876 2722 kubelet.go:2405] "Pod admission denied" podUID="ff993eae-34aa-4a7d-9376-c4efd8960b24" pod="tigera-operator/tigera-operator-747864d56d-km5p2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:35.246602 kubelet[2722]: I0813 01:13:35.246539 2722 kubelet.go:2405] "Pod admission denied" podUID="8119d5b8-4b9e-4841-9072-6bae456f7f3a" pod="tigera-operator/tigera-operator-747864d56d-swlk6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:35.288101 kubelet[2722]: I0813 01:13:35.288057 2722 kubelet.go:2405] "Pod admission denied" podUID="65b7a2d9-38f2-4ab3-ac8e-068f6e0e169e" pod="tigera-operator/tigera-operator-747864d56d-xq259" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:35.381959 kubelet[2722]: I0813 01:13:35.381881 2722 kubelet.go:2405] "Pod admission denied" podUID="192222fb-841d-4153-af0b-4dc9f4e1175f" pod="tigera-operator/tigera-operator-747864d56d-5cxcr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:35.586655 kubelet[2722]: I0813 01:13:35.586573 2722 kubelet.go:2405] "Pod admission denied" podUID="dac6dc0f-af15-4afd-b2a9-57c82e5827b1" pod="tigera-operator/tigera-operator-747864d56d-5wrwt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:35.689137 kubelet[2722]: I0813 01:13:35.689079 2722 kubelet.go:2405] "Pod admission denied" podUID="12217dae-96cd-4bf1-8918-683ab4ce6fab" pod="tigera-operator/tigera-operator-747864d56d-s8wms" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:35.795936 kubelet[2722]: I0813 01:13:35.795253 2722 kubelet.go:2405] "Pod admission denied" podUID="c44bc403-f4dd-450f-a437-581105f9ab0e" pod="tigera-operator/tigera-operator-747864d56d-sr9ht" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:35.883478 kubelet[2722]: I0813 01:13:35.883348 2722 kubelet.go:2405] "Pod admission denied" podUID="40277ba2-c934-49b4-a797-f9d7c004592a" pod="tigera-operator/tigera-operator-747864d56d-7qzmm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:35.982586 kubelet[2722]: I0813 01:13:35.982545 2722 kubelet.go:2405] "Pod admission denied" podUID="906acab2-5a79-4ffe-afa7-0df2cb8db27c" pod="tigera-operator/tigera-operator-747864d56d-rsv28" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:36.082767 kubelet[2722]: I0813 01:13:36.082719 2722 kubelet.go:2405] "Pod admission denied" podUID="0c693822-2768-41fc-8ca3-c173e078f2e8" pod="tigera-operator/tigera-operator-747864d56d-bn6rn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:36.182260 kubelet[2722]: I0813 01:13:36.182226 2722 kubelet.go:2405] "Pod admission denied" podUID="9f994d6d-e718-4e89-a679-8cd71b955058" pod="tigera-operator/tigera-operator-747864d56d-wq4j8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:36.288754 kubelet[2722]: I0813 01:13:36.288268 2722 kubelet.go:2405] "Pod admission denied" podUID="00275c9f-32a2-47d1-8070-69cc6e32bed6" pod="tigera-operator/tigera-operator-747864d56d-4f8fh" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:36.377657 kubelet[2722]: I0813 01:13:36.377605 2722 kubelet.go:2405] "Pod admission denied" podUID="350b2f2d-be79-4307-b211-5b552382fabb" pod="tigera-operator/tigera-operator-747864d56d-s77vd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:36.479650 kubelet[2722]: I0813 01:13:36.479466 2722 kubelet.go:2405] "Pod admission denied" podUID="6e1a9034-096d-4b5a-bdaa-0f6411a41f7c" pod="tigera-operator/tigera-operator-747864d56d-99xnc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:36.580628 kubelet[2722]: I0813 01:13:36.580571 2722 kubelet.go:2405] "Pod admission denied" podUID="642c6957-02ed-4fcc-9eb6-94e6fa5204c8" pod="tigera-operator/tigera-operator-747864d56d-c9x9x" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:36.787526 kubelet[2722]: I0813 01:13:36.786845 2722 kubelet.go:2405] "Pod admission denied" podUID="be6ab81c-5e17-4d15-a83b-ab585d28eb20" pod="tigera-operator/tigera-operator-747864d56d-c9lzz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:36.891863 kubelet[2722]: I0813 01:13:36.890015 2722 kubelet.go:2405] "Pod admission denied" podUID="5bad5340-8b24-4cd6-b9f9-be76067d44d9" pod="tigera-operator/tigera-operator-747864d56d-nxrzr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:36.979013 kubelet[2722]: I0813 01:13:36.978973 2722 kubelet.go:2405] "Pod admission denied" podUID="4115d7b9-3668-4ef2-a39b-28892ea7465d" pod="tigera-operator/tigera-operator-747864d56d-25klt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:37.083753 kubelet[2722]: I0813 01:13:37.083606 2722 kubelet.go:2405] "Pod admission denied" podUID="f98e8704-fc99-4a78-97f5-df5c7574636f" pod="tigera-operator/tigera-operator-747864d56d-q8qt6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:37.186891 kubelet[2722]: I0813 01:13:37.186828 2722 kubelet.go:2405] "Pod admission denied" podUID="de9dc35c-982c-43e7-aad1-0b28ff8b15ed" pod="tigera-operator/tigera-operator-747864d56d-c6md4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:37.384625 kubelet[2722]: I0813 01:13:37.384033 2722 kubelet.go:2405] "Pod admission denied" podUID="279f7984-5d7b-45c5-9987-238cc794346c" pod="tigera-operator/tigera-operator-747864d56d-qkx2n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:37.491982 kubelet[2722]: I0813 01:13:37.491934 2722 kubelet.go:2405] "Pod admission denied" podUID="e6aebd00-59f1-448b-8953-e5cd5238e47a" pod="tigera-operator/tigera-operator-747864d56d-vb9k4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:37.577015 kubelet[2722]: I0813 01:13:37.576974 2722 kubelet.go:2405] "Pod admission denied" podUID="c6321ddd-3bd2-43f5-a586-afcada768582" pod="tigera-operator/tigera-operator-747864d56d-5ckc8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:37.682969 kubelet[2722]: I0813 01:13:37.682884 2722 kubelet.go:2405] "Pod admission denied" podUID="acb0945b-e36c-4db9-8b7e-caaff4607d78" pod="tigera-operator/tigera-operator-747864d56d-lgl4v" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:37.781984 kubelet[2722]: I0813 01:13:37.781930 2722 kubelet.go:2405] "Pod admission denied" podUID="0a5ee681-4a27-44bf-a9eb-df12485c404b" pod="tigera-operator/tigera-operator-747864d56d-btkg5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:37.986921 kubelet[2722]: I0813 01:13:37.986771 2722 kubelet.go:2405] "Pod admission denied" podUID="6270b95a-413a-4171-a201-487e6dbbef5b" pod="tigera-operator/tigera-operator-747864d56d-x4w8j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:38.061624 kubelet[2722]: E0813 01:13:38.061212 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:13:38.061624 kubelet[2722]: E0813 01:13:38.061385 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:13:38.062623 containerd[1550]: time="2025-08-13T01:13:38.062597629Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fgsjn,Uid:27718112-1bb9-402a-89c8-f4890dedf664,Namespace:kube-system,Attempt:0,}" Aug 13 01:13:38.108560 kubelet[2722]: I0813 01:13:38.108527 2722 kubelet.go:2405] "Pod admission denied" podUID="eade0608-67f9-4783-be93-0bfa601ff058" pod="tigera-operator/tigera-operator-747864d56d-cvz85" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:38.151399 containerd[1550]: time="2025-08-13T01:13:38.151339439Z" level=error msg="Failed to destroy network for sandbox \"af3f6586cd1a11f5e2854ca2998f331b63e689a2e85095b88ceb8cc444537a2e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:38.154043 systemd[1]: run-netns-cni\x2d28ad5612\x2d5349\x2d1791\x2d5b0a\x2d6db7feec5609.mount: Deactivated successfully. 
Aug 13 01:13:38.154995 containerd[1550]: time="2025-08-13T01:13:38.154156544Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fgsjn,Uid:27718112-1bb9-402a-89c8-f4890dedf664,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"af3f6586cd1a11f5e2854ca2998f331b63e689a2e85095b88ceb8cc444537a2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:38.155112 kubelet[2722]: E0813 01:13:38.155047 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af3f6586cd1a11f5e2854ca2998f331b63e689a2e85095b88ceb8cc444537a2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:38.155112 kubelet[2722]: E0813 01:13:38.155090 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af3f6586cd1a11f5e2854ca2998f331b63e689a2e85095b88ceb8cc444537a2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:13:38.155112 kubelet[2722]: E0813 01:13:38.155110 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"af3f6586cd1a11f5e2854ca2998f331b63e689a2e85095b88ceb8cc444537a2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:13:38.155274 kubelet[2722]: E0813 01:13:38.155150 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fgsjn_kube-system(27718112-1bb9-402a-89c8-f4890dedf664)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fgsjn_kube-system(27718112-1bb9-402a-89c8-f4890dedf664)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"af3f6586cd1a11f5e2854ca2998f331b63e689a2e85095b88ceb8cc444537a2e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fgsjn" podUID="27718112-1bb9-402a-89c8-f4890dedf664" Aug 13 01:13:38.232566 kubelet[2722]: I0813 01:13:38.232520 2722 kubelet.go:2405] "Pod admission denied" podUID="6f57266e-fa70-4400-b7e9-a68d484b4a13" pod="tigera-operator/tigera-operator-747864d56d-m2xzd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:38.282126 kubelet[2722]: I0813 01:13:38.281637 2722 kubelet.go:2405] "Pod admission denied" podUID="9ebb5110-7610-4f39-acb2-12b7eb81e7e6" pod="tigera-operator/tigera-operator-747864d56d-7rxzx" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:38.377097 kubelet[2722]: I0813 01:13:38.377057 2722 kubelet.go:2405] "Pod admission denied" podUID="f4ef0b51-7114-41af-8433-f221a6977cd1" pod="tigera-operator/tigera-operator-747864d56d-s25fb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:38.580794 kubelet[2722]: I0813 01:13:38.580611 2722 kubelet.go:2405] "Pod admission denied" podUID="90ad82c2-dd8a-45e7-a78b-a19070deb7e1" pod="tigera-operator/tigera-operator-747864d56d-ml2pd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:38.692925 kubelet[2722]: I0813 01:13:38.692733 2722 kubelet.go:2405] "Pod admission denied" podUID="ffa340c9-b168-4769-a2d1-f59cb6993a31" pod="tigera-operator/tigera-operator-747864d56d-6rlgs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:38.783078 kubelet[2722]: I0813 01:13:38.783029 2722 kubelet.go:2405] "Pod admission denied" podUID="c34d587d-8b4c-4521-baa1-7248b7938c19" pod="tigera-operator/tigera-operator-747864d56d-ldmq7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:38.881936 kubelet[2722]: I0813 01:13:38.881010 2722 kubelet.go:2405] "Pod admission denied" podUID="8608c3f3-d17f-446d-b249-da0a8cdb5326" pod="tigera-operator/tigera-operator-747864d56d-fcmhg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:38.985710 kubelet[2722]: I0813 01:13:38.985656 2722 kubelet.go:2405] "Pod admission denied" podUID="56949a78-d84d-4127-80c5-40cf03c4604a" pod="tigera-operator/tigera-operator-747864d56d-t7cd8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:39.084052 kubelet[2722]: I0813 01:13:39.083998 2722 kubelet.go:2405] "Pod admission denied" podUID="db4292ab-b22d-4076-9cae-3c2080305936" pod="tigera-operator/tigera-operator-747864d56d-7ngnf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:39.192778 kubelet[2722]: I0813 01:13:39.192736 2722 kubelet.go:2405] "Pod admission denied" podUID="8b1bc591-017b-433c-9021-f579ca3a8b07" pod="tigera-operator/tigera-operator-747864d56d-r7c57" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:39.278530 kubelet[2722]: I0813 01:13:39.278482 2722 kubelet.go:2405] "Pod admission denied" podUID="3d520cea-a30e-4c8e-a00f-a5842171872a" pod="tigera-operator/tigera-operator-747864d56d-bbdzb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:39.382500 kubelet[2722]: I0813 01:13:39.382442 2722 kubelet.go:2405] "Pod admission denied" podUID="288242f1-2dfd-4869-ba42-4523ac7f14cb" pod="tigera-operator/tigera-operator-747864d56d-kkhxf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:39.481317 kubelet[2722]: I0813 01:13:39.480666 2722 kubelet.go:2405] "Pod admission denied" podUID="4632b6cb-b078-42ec-9e6e-33b94502e72c" pod="tigera-operator/tigera-operator-747864d56d-lwr6z" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:39.579909 kubelet[2722]: I0813 01:13:39.579866 2722 kubelet.go:2405] "Pod admission denied" podUID="1abc665f-44af-4356-8051-7dd472e6b151" pod="tigera-operator/tigera-operator-747864d56d-lvtdv" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:39.742046 kubelet[2722]: I0813 01:13:39.741944 2722 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:13:39.742046 kubelet[2722]: I0813 01:13:39.741985 2722 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:13:39.743944 kubelet[2722]: I0813 01:13:39.743915 2722 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:13:39.766815 kubelet[2722]: I0813 01:13:39.766598 2722 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:13:39.766815 kubelet[2722]: I0813 01:13:39.766697 2722 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-cddc95b58-6t6z7","kube-system/coredns-674b8bbfcf-p259x","kube-system/coredns-674b8bbfcf-fgsjn","calico-system/csi-node-driver-l7lv4","calico-system/calico-node-hq29b","calico-system/calico-typha-55bf5cd98c-8lqpc","kube-system/kube-controller-manager-172-233-214-103","kube-system/kube-proxy-tb5sq","kube-system/kube-apiserver-172-233-214-103","kube-system/kube-scheduler-172-233-214-103"] Aug 13 01:13:39.766815 kubelet[2722]: E0813 01:13:39.766732 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:13:39.766815 kubelet[2722]: E0813 01:13:39.766747 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:13:39.766815 kubelet[2722]: E0813 01:13:39.766755 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:13:39.766815 kubelet[2722]: E0813 01:13:39.766763 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:13:39.766815 kubelet[2722]: E0813 01:13:39.766770 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-hq29b" Aug 13 01:13:39.766815 kubelet[2722]: E0813 01:13:39.766786 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-55bf5cd98c-8lqpc" Aug 13 01:13:39.766815 kubelet[2722]: E0813 01:13:39.766797 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-214-103" Aug 13 01:13:39.766815 kubelet[2722]: E0813 01:13:39.766808 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-tb5sq" Aug 13 01:13:39.766815 kubelet[2722]: E0813 01:13:39.766820 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-214-103" Aug 13 01:13:39.766815 kubelet[2722]: E0813 01:13:39.766831 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-214-103" Aug 13 01:13:39.768224 kubelet[2722]: I0813 01:13:39.766842 2722 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:13:39.787408 kubelet[2722]: I0813 01:13:39.787368 2722 kubelet.go:2405] "Pod admission denied" podUID="6c3a4bbd-d41b-4ec9-a0d5-63ab979e426a" pod="tigera-operator/tigera-operator-747864d56d-czkpf" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:39.880629 kubelet[2722]: I0813 01:13:39.880580 2722 kubelet.go:2405] "Pod admission denied" podUID="5e4c65bb-f7d4-4195-9659-7f44d9528394" pod="tigera-operator/tigera-operator-747864d56d-5z5cd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:39.996326 kubelet[2722]: I0813 01:13:39.995453 2722 kubelet.go:2405] "Pod admission denied" podUID="f9f581e9-030b-4b4f-a0fd-48e27bf5aa96" pod="tigera-operator/tigera-operator-747864d56d-q2fgr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:40.081230 kubelet[2722]: I0813 01:13:40.081182 2722 kubelet.go:2405] "Pod admission denied" podUID="42ac1024-0256-4a6a-8a83-bc61a0744db8" pod="tigera-operator/tigera-operator-747864d56d-9qn85" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:40.182615 kubelet[2722]: I0813 01:13:40.182567 2722 kubelet.go:2405] "Pod admission denied" podUID="eca8f5d0-d627-4dcf-ac9c-8f8f09ad6736" pod="tigera-operator/tigera-operator-747864d56d-vc7sc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:40.281369 kubelet[2722]: I0813 01:13:40.280539 2722 kubelet.go:2405] "Pod admission denied" podUID="6a0e2fb8-497a-402a-86c8-849db72141e3" pod="tigera-operator/tigera-operator-747864d56d-2vp2b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:40.379180 kubelet[2722]: I0813 01:13:40.379136 2722 kubelet.go:2405] "Pod admission denied" podUID="c63d2369-4fce-4bc0-af4d-12bf998a5790" pod="tigera-operator/tigera-operator-747864d56d-m9hbr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:40.493012 kubelet[2722]: I0813 01:13:40.491932 2722 kubelet.go:2405] "Pod admission denied" podUID="e6f9b740-9d72-4420-8bec-5ff2d7de9dd1" pod="tigera-operator/tigera-operator-747864d56d-wwxk7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:40.577992 kubelet[2722]: I0813 01:13:40.577508 2722 kubelet.go:2405] "Pod admission denied" podUID="54eec09b-faf0-45b7-99f5-ff264c33e06b" pod="tigera-operator/tigera-operator-747864d56d-58ltx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:40.790071 kubelet[2722]: I0813 01:13:40.789990 2722 kubelet.go:2405] "Pod admission denied" podUID="c6a3b63b-fb82-4916-bf01-faefd8aceacf" pod="tigera-operator/tigera-operator-747864d56d-2r7hr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:40.887191 kubelet[2722]: I0813 01:13:40.887048 2722 kubelet.go:2405] "Pod admission denied" podUID="6217fcfe-6681-4e96-a492-d1a530aaac60" pod="tigera-operator/tigera-operator-747864d56d-gk8xg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:40.984554 kubelet[2722]: I0813 01:13:40.984502 2722 kubelet.go:2405] "Pod admission denied" podUID="8cf6b167-f877-48d7-bb53-f72e44950c6f" pod="tigera-operator/tigera-operator-747864d56d-g6lgq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:41.202918 kubelet[2722]: I0813 01:13:41.202838 2722 kubelet.go:2405] "Pod admission denied" podUID="a87521f4-91e7-40ed-8320-c29db3ab06b1" pod="tigera-operator/tigera-operator-747864d56d-k29wq" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:41.292632 kubelet[2722]: I0813 01:13:41.292554 2722 kubelet.go:2405] "Pod admission denied" podUID="752ba0ea-61bb-4523-84f5-e804b14f07bc" pod="tigera-operator/tigera-operator-747864d56d-lk2wk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:41.391646 kubelet[2722]: I0813 01:13:41.391574 2722 kubelet.go:2405] "Pod admission denied" podUID="5f43f3bc-2737-4a03-9366-47d3e5c5e0b6" pod="tigera-operator/tigera-operator-747864d56d-pxf5c" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:41.486377 kubelet[2722]: I0813 01:13:41.486177 2722 kubelet.go:2405] "Pod admission denied" podUID="72063c1d-cb5c-4aad-bc34-a0918fe5f729" pod="tigera-operator/tigera-operator-747864d56d-9xbdt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:41.579742 kubelet[2722]: I0813 01:13:41.579703 2722 kubelet.go:2405] "Pod admission denied" podUID="497ec12f-4a05-4313-ae0f-8da3e6996191" pod="tigera-operator/tigera-operator-747864d56d-wj4gh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:41.810531 kubelet[2722]: I0813 01:13:41.809711 2722 kubelet.go:2405] "Pod admission denied" podUID="c00e350e-8998-4980-b9f3-965aa94ccf13" pod="tigera-operator/tigera-operator-747864d56d-d2c9j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:41.888431 kubelet[2722]: I0813 01:13:41.888361 2722 kubelet.go:2405] "Pod admission denied" podUID="6b19eda0-dd3b-43bb-b54d-8dbaead398a7" pod="tigera-operator/tigera-operator-747864d56d-9bpf8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:41.990293 kubelet[2722]: I0813 01:13:41.990225 2722 kubelet.go:2405] "Pod admission denied" podUID="6fe9f158-29fe-497c-a663-625f1632d603" pod="tigera-operator/tigera-operator-747864d56d-9462m" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:42.184020 kubelet[2722]: I0813 01:13:42.183988 2722 kubelet.go:2405] "Pod admission denied" podUID="41b8a425-fa89-4c1d-aed6-973c4220f496" pod="tigera-operator/tigera-operator-747864d56d-rbjfq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:42.284099 kubelet[2722]: I0813 01:13:42.284025 2722 kubelet.go:2405] "Pod admission denied" podUID="47f2924c-e20d-4a89-b293-89e767d63d25" pod="tigera-operator/tigera-operator-747864d56d-dczxd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:42.398890 kubelet[2722]: I0813 01:13:42.398097 2722 kubelet.go:2405] "Pod admission denied" podUID="a9658aa1-3b89-4dac-819d-3347c4d07e28" pod="tigera-operator/tigera-operator-747864d56d-8cvht" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:42.481261 kubelet[2722]: I0813 01:13:42.480407 2722 kubelet.go:2405] "Pod admission denied" podUID="19f5c11d-a99c-4bb9-83ce-38e6bb2a4bc3" pod="tigera-operator/tigera-operator-747864d56d-tkbd9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:42.581312 kubelet[2722]: I0813 01:13:42.581266 2722 kubelet.go:2405] "Pod admission denied" podUID="d8ddccf4-5e95-4de3-bbe4-b920f799f325" pod="tigera-operator/tigera-operator-747864d56d-fgt2l" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:42.785240 kubelet[2722]: I0813 01:13:42.785073 2722 kubelet.go:2405] "Pod admission denied" podUID="47a7fbde-472a-4cba-8dee-16869c62b3f3" pod="tigera-operator/tigera-operator-747864d56d-82rzj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:42.881241 kubelet[2722]: I0813 01:13:42.881187 2722 kubelet.go:2405] "Pod admission denied" podUID="ee173850-0cbb-447c-875e-a4535674394b" pod="tigera-operator/tigera-operator-747864d56d-trtst" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:42.942922 kubelet[2722]: I0813 01:13:42.942626 2722 kubelet.go:2405] "Pod admission denied" podUID="20efb188-95f8-41d8-81dc-0062fef17ace" pod="tigera-operator/tigera-operator-747864d56d-pkbdw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:43.033325 kubelet[2722]: I0813 01:13:43.033275 2722 kubelet.go:2405] "Pod admission denied" podUID="bb550e2e-0883-4951-b889-ab92c2d11962" pod="tigera-operator/tigera-operator-747864d56d-2c9fl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:43.058139 containerd[1550]: time="2025-08-13T01:13:43.057840332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7lv4,Uid:6b834979-32a4-464b-9898-ef87b1042a9e,Namespace:calico-system,Attempt:0,}" Aug 13 01:13:43.058853 containerd[1550]: time="2025-08-13T01:13:43.058110002Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cddc95b58-6t6z7,Uid:2dab385f-2367-4e01-8d78-2247bcba7bcc,Namespace:calico-system,Attempt:0,}" Aug 13 01:13:43.140279 containerd[1550]: time="2025-08-13T01:13:43.139990313Z" level=error msg="Failed to destroy network for sandbox \"b840c5159242d45d148c37c2fcc3974689f1c73c0def76afaa9f61d92271bf20\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:43.142873 systemd[1]: run-netns-cni\x2db0caf132\x2d14a6\x2d7b8c\x2d9872\x2d9fdb5548efea.mount: Deactivated successfully. 
Aug 13 01:13:43.144622 containerd[1550]: time="2025-08-13T01:13:43.144451560Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7lv4,Uid:6b834979-32a4-464b-9898-ef87b1042a9e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b840c5159242d45d148c37c2fcc3974689f1c73c0def76afaa9f61d92271bf20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:43.145523 kubelet[2722]: E0813 01:13:43.145454 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b840c5159242d45d148c37c2fcc3974689f1c73c0def76afaa9f61d92271bf20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:43.145669 kubelet[2722]: E0813 01:13:43.145531 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b840c5159242d45d148c37c2fcc3974689f1c73c0def76afaa9f61d92271bf20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:13:43.145669 kubelet[2722]: E0813 01:13:43.145560 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b840c5159242d45d148c37c2fcc3974689f1c73c0def76afaa9f61d92271bf20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:13:43.145669 kubelet[2722]: E0813 01:13:43.145626 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l7lv4_calico-system(6b834979-32a4-464b-9898-ef87b1042a9e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l7lv4_calico-system(6b834979-32a4-464b-9898-ef87b1042a9e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b840c5159242d45d148c37c2fcc3974689f1c73c0def76afaa9f61d92271bf20\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l7lv4" podUID="6b834979-32a4-464b-9898-ef87b1042a9e" Aug 13 01:13:43.147015 containerd[1550]: time="2025-08-13T01:13:43.146946464Z" level=error msg="Failed to destroy network for sandbox \"ff879100e101e83d6f3d5c1279b066d10d67595de5708674f5c1a746e30bfba8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:43.148855 containerd[1550]: time="2025-08-13T01:13:43.148792047Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cddc95b58-6t6z7,Uid:2dab385f-2367-4e01-8d78-2247bcba7bcc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"ff879100e101e83d6f3d5c1279b066d10d67595de5708674f5c1a746e30bfba8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:43.149166 kubelet[2722]: E0813 01:13:43.149108 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff879100e101e83d6f3d5c1279b066d10d67595de5708674f5c1a746e30bfba8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:43.149313 kubelet[2722]: E0813 01:13:43.149268 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff879100e101e83d6f3d5c1279b066d10d67595de5708674f5c1a746e30bfba8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:13:43.149428 kubelet[2722]: E0813 01:13:43.149359 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ff879100e101e83d6f3d5c1279b066d10d67595de5708674f5c1a746e30bfba8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:13:43.149503 kubelet[2722]: E0813 01:13:43.149479 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-cddc95b58-6t6z7_calico-system(2dab385f-2367-4e01-8d78-2247bcba7bcc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-cddc95b58-6t6z7_calico-system(2dab385f-2367-4e01-8d78-2247bcba7bcc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ff879100e101e83d6f3d5c1279b066d10d67595de5708674f5c1a746e30bfba8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" podUID="2dab385f-2367-4e01-8d78-2247bcba7bcc" Aug 13 01:13:43.150356 systemd[1]: run-netns-cni\x2d6d099a7f\x2d8e59\x2d3b86\x2d9db2\x2da6f819fd2dcc.mount: Deactivated successfully. Aug 13 01:13:43.239730 kubelet[2722]: I0813 01:13:43.239611 2722 kubelet.go:2405] "Pod admission denied" podUID="79cc31fb-0270-4d4c-a738-1997974089f1" pod="tigera-operator/tigera-operator-747864d56d-74hpt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:43.340407 kubelet[2722]: I0813 01:13:43.339531 2722 kubelet.go:2405] "Pod admission denied" podUID="bf579995-03eb-45c5-8ace-516d1d9ef141" pod="tigera-operator/tigera-operator-747864d56d-btbjn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:43.436032 kubelet[2722]: I0813 01:13:43.435972 2722 kubelet.go:2405] "Pod admission denied" podUID="ef26caaa-d0a1-4c5d-9952-2adf6b000e93" pod="tigera-operator/tigera-operator-747864d56d-kcgl7" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:43.586966 kubelet[2722]: I0813 01:13:43.586375 2722 kubelet.go:2405] "Pod admission denied" podUID="9d7a0a1e-d813-431d-9a21-662b1969e6e6" pod="tigera-operator/tigera-operator-747864d56d-pvz88" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:43.694569 kubelet[2722]: I0813 01:13:43.694517 2722 kubelet.go:2405] "Pod admission denied" podUID="20d05797-0107-4c40-995a-1c4938911fe6" pod="tigera-operator/tigera-operator-747864d56d-4kkx4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:43.793791 kubelet[2722]: I0813 01:13:43.793709 2722 kubelet.go:2405] "Pod admission denied" podUID="295df64c-7f11-4bab-8795-e7c20b7447bd" pod="tigera-operator/tigera-operator-747864d56d-tgmhg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:43.885417 kubelet[2722]: I0813 01:13:43.885345 2722 kubelet.go:2405] "Pod admission denied" podUID="3f086557-0832-4bb7-9e5b-f5a8fb83a8c4" pod="tigera-operator/tigera-operator-747864d56d-sqdbn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:43.938559 kubelet[2722]: I0813 01:13:43.938478 2722 kubelet.go:2405] "Pod admission denied" podUID="00f16d2d-6da0-482e-bde1-45cdfc4766eb" pod="tigera-operator/tigera-operator-747864d56d-n8zhw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:44.052014 kubelet[2722]: I0813 01:13:44.051835 2722 kubelet.go:2405] "Pod admission denied" podUID="cfa0a1b4-f309-4b83-bce1-a3471ddc19ad" pod="tigera-operator/tigera-operator-747864d56d-l2bjf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:44.060498 kubelet[2722]: E0813 01:13:44.059745 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:13:44.065845 containerd[1550]: time="2025-08-13T01:13:44.062576266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p259x,Uid:a5b0b8ae-a381-43cc-8adc-4e3ee01749bd,Namespace:kube-system,Attempt:0,}" Aug 13 01:13:44.071099 containerd[1550]: time="2025-08-13T01:13:44.070161437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 01:13:44.152263 containerd[1550]: time="2025-08-13T01:13:44.152187897Z" level=error msg="Failed to destroy network for sandbox \"ba1a7d32d39729b991c72329db920838ef968be62a68cdb3c2e5b37d2116c707\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:44.155067 containerd[1550]: time="2025-08-13T01:13:44.154977382Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p259x,Uid:a5b0b8ae-a381-43cc-8adc-4e3ee01749bd,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba1a7d32d39729b991c72329db920838ef968be62a68cdb3c2e5b37d2116c707\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:44.155932 kubelet[2722]: E0813 01:13:44.155749 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba1a7d32d39729b991c72329db920838ef968be62a68cdb3c2e5b37d2116c707\": plugin type=\"calico\" failed (add): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:44.155932 kubelet[2722]: E0813 01:13:44.155811 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba1a7d32d39729b991c72329db920838ef968be62a68cdb3c2e5b37d2116c707\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:13:44.155932 kubelet[2722]: E0813 01:13:44.155836 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba1a7d32d39729b991c72329db920838ef968be62a68cdb3c2e5b37d2116c707\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:13:44.155794 systemd[1]: run-netns-cni\x2d6c61f459\x2d2c74\x2d7fdb\x2d50cf\x2d1298ec66696b.mount: Deactivated successfully. Aug 13 01:13:44.156638 kubelet[2722]: E0813 01:13:44.155891 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-p259x_kube-system(a5b0b8ae-a381-43cc-8adc-4e3ee01749bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-p259x_kube-system(a5b0b8ae-a381-43cc-8adc-4e3ee01749bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba1a7d32d39729b991c72329db920838ef968be62a68cdb3c2e5b37d2116c707\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-p259x" podUID="a5b0b8ae-a381-43cc-8adc-4e3ee01749bd" Aug 13 01:13:44.241128 kubelet[2722]: I0813 01:13:44.241050 2722 kubelet.go:2405] "Pod admission denied" podUID="fe777014-33f6-4229-ab91-2afe97a7a91f" pod="tigera-operator/tigera-operator-747864d56d-nhng6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:44.338341 kubelet[2722]: I0813 01:13:44.336888 2722 kubelet.go:2405] "Pod admission denied" podUID="c09760a7-a8f5-4bdc-ada4-514d9a8afe04" pod="tigera-operator/tigera-operator-747864d56d-qnzjc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:44.441317 kubelet[2722]: I0813 01:13:44.441253 2722 kubelet.go:2405] "Pod admission denied" podUID="14209c4c-10cb-46ca-b5c4-7d2b5b52eea7" pod="tigera-operator/tigera-operator-747864d56d-d4mpv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:44.536666 kubelet[2722]: I0813 01:13:44.536604 2722 kubelet.go:2405] "Pod admission denied" podUID="eb25f593-905d-43fc-9555-c2f915965603" pod="tigera-operator/tigera-operator-747864d56d-kbnlw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:44.651084 kubelet[2722]: I0813 01:13:44.650107 2722 kubelet.go:2405] "Pod admission denied" podUID="a67715da-0c04-48a6-a01d-9b0051e2be87" pod="tigera-operator/tigera-operator-747864d56d-2td9j" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:44.737798 kubelet[2722]: I0813 01:13:44.737746 2722 kubelet.go:2405] "Pod admission denied" podUID="bd92999c-e0c0-4f50-919b-14cfe504192c" pod="tigera-operator/tigera-operator-747864d56d-sk5sh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:44.836475 kubelet[2722]: I0813 01:13:44.836420 2722 kubelet.go:2405] "Pod admission denied" podUID="92d05c17-b843-4790-803b-4e5ab6938f4b" pod="tigera-operator/tigera-operator-747864d56d-2rjrk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:44.938861 kubelet[2722]: I0813 01:13:44.938745 2722 kubelet.go:2405] "Pod admission denied" podUID="711b2af0-67ea-415f-969a-dac137075613" pod="tigera-operator/tigera-operator-747864d56d-snp6v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:45.035772 kubelet[2722]: I0813 01:13:45.035460 2722 kubelet.go:2405] "Pod admission denied" podUID="c5bca639-f65f-4a0d-ad5f-198c449e2b70" pod="tigera-operator/tigera-operator-747864d56d-szgvq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:45.268203 kubelet[2722]: I0813 01:13:45.267577 2722 kubelet.go:2405] "Pod admission denied" podUID="dcb27524-c1b3-4eb7-9308-0ff27b142c2d" pod="tigera-operator/tigera-operator-747864d56d-8sffs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:45.345949 kubelet[2722]: I0813 01:13:45.345886 2722 kubelet.go:2405] "Pod admission denied" podUID="2320d479-9208-4205-abdf-a62abe0262ba" pod="tigera-operator/tigera-operator-747864d56d-fdgxs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:45.444153 kubelet[2722]: I0813 01:13:45.444119 2722 kubelet.go:2405] "Pod admission denied" podUID="16917c95-7609-4586-83ef-588039a3c7a3" pod="tigera-operator/tigera-operator-747864d56d-bktzr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:45.534305 kubelet[2722]: I0813 01:13:45.533985 2722 kubelet.go:2405] "Pod admission denied" podUID="fe44f7d0-1521-4f80-82df-9b6209f5b815" pod="tigera-operator/tigera-operator-747864d56d-hxfgw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:45.649007 kubelet[2722]: I0813 01:13:45.648964 2722 kubelet.go:2405] "Pod admission denied" podUID="80b52990-1d37-4004-8d35-6ca0ecd4137d" pod="tigera-operator/tigera-operator-747864d56d-ldz27" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:45.776007 kubelet[2722]: I0813 01:13:45.775923 2722 kubelet.go:2405] "Pod admission denied" podUID="2b2b1445-fa88-450b-a38a-6fbdad62456f" pod="tigera-operator/tigera-operator-747864d56d-p7xq4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:45.896673 kubelet[2722]: I0813 01:13:45.896488 2722 kubelet.go:2405] "Pod admission denied" podUID="6409efc6-d9a8-486c-a900-3c1c9af519df" pod="tigera-operator/tigera-operator-747864d56d-ck5d4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:45.998972 kubelet[2722]: I0813 01:13:45.998923 2722 kubelet.go:2405] "Pod admission denied" podUID="10a5e05e-6fae-4396-b802-e9addbd2360b" pod="tigera-operator/tigera-operator-747864d56d-6rr49" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:46.093957 kubelet[2722]: I0813 01:13:46.093914 2722 kubelet.go:2405] "Pod admission denied" podUID="778beeb8-3eec-4f9a-ba68-33c417afb4a6" pod="tigera-operator/tigera-operator-747864d56d-8k8r9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:46.200771 kubelet[2722]: I0813 01:13:46.200722 2722 kubelet.go:2405] "Pod admission denied" podUID="81bfb9a4-7898-4bb3-b80c-4545f6138ab7" pod="tigera-operator/tigera-operator-747864d56d-bjjm9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:46.423756 kubelet[2722]: I0813 01:13:46.423672 2722 kubelet.go:2405] "Pod admission denied" podUID="cb78d193-8e6b-44e2-b265-7ce979c035b3" pod="tigera-operator/tigera-operator-747864d56d-pclx8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:46.444801 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2063325269.mount: Deactivated successfully. Aug 13 01:13:46.449622 containerd[1550]: time="2025-08-13T01:13:46.449524854Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2063325269: write /var/lib/containerd/tmpmounts/containerd-mount2063325269/usr/bin/calico-node: no space left on device" Aug 13 01:13:46.449622 containerd[1550]: time="2025-08-13T01:13:46.449592954Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 01:13:46.451067 kubelet[2722]: E0813 01:13:46.450986 2722 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2063325269: write /var/lib/containerd/tmpmounts/containerd-mount2063325269/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 01:13:46.451067 kubelet[2722]: E0813 01:13:46.451029 2722 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2063325269: write /var/lib/containerd/tmpmounts/containerd-mount2063325269/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 01:13:46.451342 kubelet[2722]: E0813 01:13:46.451215 2722 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSGOLDMANESERVER,Value:goldmane.calico-system.svc:7443,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSFLUSHINTERVAL,Value:15,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:first-found,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.l
ock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mq7j6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},StopSignal:nil,},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-hq29b_calico-system(3c0f3b86-7d63-44df-843e-763eb95a8b94): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2063325269: write /var/lib/containerd/tmpmounts/containerd-mount2063325269/usr/bin/calico-node: no space left on device" logger="UnhandledError" Aug 13 01:13:46.452396 kubelet[2722]: E0813 01:13:46.452368 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2063325269: write /var/lib/containerd/tmpmounts/containerd-mount2063325269/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-hq29b" podUID="3c0f3b86-7d63-44df-843e-763eb95a8b94" Aug 13 01:13:46.479996 kubelet[2722]: I0813 01:13:46.479961 2722 kubelet.go:2405] "Pod admission denied" podUID="adcec0a2-d337-490b-8c9d-c3eda5df3e35" pod="tigera-operator/tigera-operator-747864d56d-jzfv2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:46.577890 kubelet[2722]: I0813 01:13:46.577782 2722 kubelet.go:2405] "Pod admission denied" podUID="2e3ddd63-cccc-4887-b42d-94f4a8c42cf1" pod="tigera-operator/tigera-operator-747864d56d-mphfn" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:46.680795 kubelet[2722]: I0813 01:13:46.680741 2722 kubelet.go:2405] "Pod admission denied" podUID="0b3afb04-b782-433f-8d2b-7991a8c3a6ef" pod="tigera-operator/tigera-operator-747864d56d-z8t4d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:46.777751 kubelet[2722]: I0813 01:13:46.777650 2722 kubelet.go:2405] "Pod admission denied" podUID="4b9ee2e3-412a-4b0b-b35b-25b5ec16d2c6" pod="tigera-operator/tigera-operator-747864d56d-gmndd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:46.980845 kubelet[2722]: I0813 01:13:46.980765 2722 kubelet.go:2405] "Pod admission denied" podUID="17d8dce6-53f8-4e99-9db4-b10e811c0d4c" pod="tigera-operator/tigera-operator-747864d56d-xvp4h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:47.085329 kubelet[2722]: I0813 01:13:47.085145 2722 kubelet.go:2405] "Pod admission denied" podUID="0215c671-6571-4745-818b-60de634ee714" pod="tigera-operator/tigera-operator-747864d56d-zhp2g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:47.149919 kubelet[2722]: I0813 01:13:47.149399 2722 kubelet.go:2405] "Pod admission denied" podUID="56b410bb-5162-4b9a-a08c-5c4be0cdc168" pod="tigera-operator/tigera-operator-747864d56d-44xdx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:47.234877 kubelet[2722]: I0813 01:13:47.234839 2722 kubelet.go:2405] "Pod admission denied" podUID="611d5f5f-f3f5-4115-a0cb-478bf640e7db" pod="tigera-operator/tigera-operator-747864d56d-52cjb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:47.333461 kubelet[2722]: I0813 01:13:47.333415 2722 kubelet.go:2405] "Pod admission denied" podUID="d88e1480-979c-4b9f-9053-c2daa6bb7938" pod="tigera-operator/tigera-operator-747864d56d-czklh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:47.432623 kubelet[2722]: I0813 01:13:47.432582 2722 kubelet.go:2405] "Pod admission denied" podUID="523622da-0d65-4b7c-8484-80b734115f15" pod="tigera-operator/tigera-operator-747864d56d-7f89s" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:47.636708 kubelet[2722]: I0813 01:13:47.636653 2722 kubelet.go:2405] "Pod admission denied" podUID="553d1987-0944-455f-ac32-cd19ef277d17" pod="tigera-operator/tigera-operator-747864d56d-62mcr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:47.732524 kubelet[2722]: I0813 01:13:47.732400 2722 kubelet.go:2405] "Pod admission denied" podUID="2bea75d0-c3b3-4347-933d-7d5116a59714" pod="tigera-operator/tigera-operator-747864d56d-dhhwm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:47.839688 kubelet[2722]: I0813 01:13:47.839634 2722 kubelet.go:2405] "Pod admission denied" podUID="47ab8eea-8df6-40e4-828f-ee9a83939e47" pod="tigera-operator/tigera-operator-747864d56d-j6zwg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:47.937007 kubelet[2722]: I0813 01:13:47.936933 2722 kubelet.go:2405] "Pod admission denied" podUID="8a398a9d-d5cd-4e98-9ac1-e5f2cb34907a" pod="tigera-operator/tigera-operator-747864d56d-dkplr" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:48.036285 kubelet[2722]: I0813 01:13:48.036037 2722 kubelet.go:2405] "Pod admission denied" podUID="52d12228-ddfa-4183-b0d1-ebf2499ba0fe" pod="tigera-operator/tigera-operator-747864d56d-54ztg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:48.135407 kubelet[2722]: I0813 01:13:48.135356 2722 kubelet.go:2405] "Pod admission denied" podUID="66f16e4b-4a3c-4928-b3df-5716f4c64c14" pod="tigera-operator/tigera-operator-747864d56d-29zb6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:48.238806 kubelet[2722]: I0813 01:13:48.238731 2722 kubelet.go:2405] "Pod admission denied" podUID="d7814b76-dcc7-4895-b97b-ed3df3d54f8d" pod="tigera-operator/tigera-operator-747864d56d-wsml7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:48.338596 kubelet[2722]: I0813 01:13:48.337702 2722 kubelet.go:2405] "Pod admission denied" podUID="66df3d20-dc60-4803-8782-d76fa4f873d7" pod="tigera-operator/tigera-operator-747864d56d-pvbvd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:48.435473 kubelet[2722]: I0813 01:13:48.435421 2722 kubelet.go:2405] "Pod admission denied" podUID="341ad481-5b00-433c-95ec-f93abedba9df" pod="tigera-operator/tigera-operator-747864d56d-n885s" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:48.558099 kubelet[2722]: I0813 01:13:48.558038 2722 kubelet.go:2405] "Pod admission denied" podUID="87b1ce0e-8b4c-40e6-ae36-f835e9cf5aad" pod="tigera-operator/tigera-operator-747864d56d-skfvm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:48.595942 kubelet[2722]: I0813 01:13:48.595304 2722 kubelet.go:2405] "Pod admission denied" podUID="36fdf664-387c-45b3-9a23-8d6ab50ea247" pod="tigera-operator/tigera-operator-747864d56d-xmcjc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:48.690623 kubelet[2722]: I0813 01:13:48.690559 2722 kubelet.go:2405] "Pod admission denied" podUID="487e7f1b-8188-410f-aa54-b96086fd10a3" pod="tigera-operator/tigera-operator-747864d56d-4dww6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:48.887492 kubelet[2722]: I0813 01:13:48.886745 2722 kubelet.go:2405] "Pod admission denied" podUID="a25248de-699c-432b-a506-9edf9f568301" pod="tigera-operator/tigera-operator-747864d56d-kdn6t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:48.980723 kubelet[2722]: I0813 01:13:48.980680 2722 kubelet.go:2405] "Pod admission denied" podUID="6b6d3b5c-0839-4061-a1b1-e93f4d71f09d" pod="tigera-operator/tigera-operator-747864d56d-pg2gw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:49.097268 kubelet[2722]: I0813 01:13:49.096940 2722 kubelet.go:2405] "Pod admission denied" podUID="63d63c59-bbe2-4911-9a57-a42ccdd7dff6" pod="tigera-operator/tigera-operator-747864d56d-4946p" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:49.182328 kubelet[2722]: I0813 01:13:49.182287 2722 kubelet.go:2405] "Pod admission denied" podUID="f5778a0f-6dbd-47aa-9289-1498fb97bd0e" pod="tigera-operator/tigera-operator-747864d56d-nb4tt" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:49.283025 kubelet[2722]: I0813 01:13:49.282968 2722 kubelet.go:2405] "Pod admission denied" podUID="f7926b74-5dbf-44f7-bfee-4c3e770b8d5e" pod="tigera-operator/tigera-operator-747864d56d-qgjks" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:49.382413 kubelet[2722]: I0813 01:13:49.382360 2722 kubelet.go:2405] "Pod admission denied" podUID="67771578-8052-4ff1-bf3e-389458909f7d" pod="tigera-operator/tigera-operator-747864d56d-pcjrk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:49.482462 kubelet[2722]: I0813 01:13:49.481789 2722 kubelet.go:2405] "Pod admission denied" podUID="93274050-fc2c-46e5-a31e-4906fa984417" pod="tigera-operator/tigera-operator-747864d56d-jjfbm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:49.593995 kubelet[2722]: I0813 01:13:49.593957 2722 kubelet.go:2405] "Pod admission denied" podUID="5296ad02-d0ac-4166-9dc8-c78800eff8c2" pod="tigera-operator/tigera-operator-747864d56d-rmkmh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:49.684681 kubelet[2722]: I0813 01:13:49.684586 2722 kubelet.go:2405] "Pod admission denied" podUID="8aea230f-f814-49f4-941d-3c19e080dde9" pod="tigera-operator/tigera-operator-747864d56d-lgmdp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:49.783359 kubelet[2722]: I0813 01:13:49.783234 2722 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:13:49.783359 kubelet[2722]: I0813 01:13:49.783280 2722 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:13:49.785002 kubelet[2722]: I0813 01:13:49.784978 2722 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:13:49.794298 kubelet[2722]: I0813 01:13:49.794258 2722 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:13:49.794343 kubelet[2722]: I0813 01:13:49.794320 2722 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-674b8bbfcf-fgsjn","calico-system/calico-kube-controllers-cddc95b58-6t6z7","kube-system/coredns-674b8bbfcf-p259x","calico-system/csi-node-driver-l7lv4","calico-system/calico-node-hq29b","calico-system/calico-typha-55bf5cd98c-8lqpc","kube-system/kube-controller-manager-172-233-214-103","kube-system/kube-proxy-tb5sq","kube-system/kube-apiserver-172-233-214-103","kube-system/kube-scheduler-172-233-214-103"] Aug 13 01:13:49.794412 kubelet[2722]: E0813 01:13:49.794347 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:13:49.794412 kubelet[2722]: E0813 01:13:49.794357 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:13:49.794412 kubelet[2722]: E0813 01:13:49.794364 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:13:49.794412 kubelet[2722]: E0813 01:13:49.794370 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:13:49.794412 kubelet[2722]: E0813 01:13:49.794376 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-hq29b" Aug 13 01:13:49.794412 kubelet[2722]: E0813 01:13:49.794385 2722 eviction_manager.go:610] "Eviction 
manager: cannot evict a critical pod" pod="calico-system/calico-typha-55bf5cd98c-8lqpc" Aug 13 01:13:49.794412 kubelet[2722]: E0813 01:13:49.794394 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-214-103" Aug 13 01:13:49.794412 kubelet[2722]: E0813 01:13:49.794401 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-tb5sq" Aug 13 01:13:49.794412 kubelet[2722]: E0813 01:13:49.794407 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-214-103" Aug 13 01:13:49.794412 kubelet[2722]: E0813 01:13:49.794414 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-214-103" Aug 13 01:13:49.794730 kubelet[2722]: I0813 01:13:49.794423 2722 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:13:49.886192 kubelet[2722]: I0813 01:13:49.886138 2722 kubelet.go:2405] "Pod admission denied" podUID="b1bc625e-bb3d-43b2-9300-851c2ba456f8" pod="tigera-operator/tigera-operator-747864d56d-wpz5x" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:49.983610 kubelet[2722]: I0813 01:13:49.983557 2722 kubelet.go:2405] "Pod admission denied" podUID="0629b602-4b02-4cd9-9dfc-54b4a1db6e48" pod="tigera-operator/tigera-operator-747864d56d-b8ncz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:50.087077 kubelet[2722]: I0813 01:13:50.086495 2722 kubelet.go:2405] "Pod admission denied" podUID="17a29f62-b806-49ff-89bf-2c8b8279c646" pod="tigera-operator/tigera-operator-747864d56d-xkl47" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:50.181164 kubelet[2722]: I0813 01:13:50.181110 2722 kubelet.go:2405] "Pod admission denied" podUID="07396e67-ecbf-4709-8cb4-e62a02ebc2a6" pod="tigera-operator/tigera-operator-747864d56d-gcdzc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:50.291089 kubelet[2722]: I0813 01:13:50.291030 2722 kubelet.go:2405] "Pod admission denied" podUID="367907a6-ddc2-48b4-8976-e8e4eaf7101b" pod="tigera-operator/tigera-operator-747864d56d-tbm74" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:50.502938 kubelet[2722]: I0813 01:13:50.502887 2722 kubelet.go:2405] "Pod admission denied" podUID="fecc6bfe-b8c7-4a00-9ecd-9d9e3d8af3cc" pod="tigera-operator/tigera-operator-747864d56d-92wkt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:50.580596 kubelet[2722]: I0813 01:13:50.580546 2722 kubelet.go:2405] "Pod admission denied" podUID="cd056db1-19b5-4264-930b-1f92295b0e65" pod="tigera-operator/tigera-operator-747864d56d-n9628" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:50.683659 kubelet[2722]: I0813 01:13:50.683611 2722 kubelet.go:2405] "Pod admission denied" podUID="471884c3-5e70-4417-b7c5-2be3a6ba1e0f" pod="tigera-operator/tigera-operator-747864d56d-zf5sv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:50.784298 kubelet[2722]: I0813 01:13:50.783556 2722 kubelet.go:2405] "Pod admission denied" podUID="9973e5eb-b01a-46f4-ade8-c0bb13b34d6f" pod="tigera-operator/tigera-operator-747864d56d-ktxhl" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:50.837365 kubelet[2722]: I0813 01:13:50.837310 2722 kubelet.go:2405] "Pod admission denied" podUID="daf3c242-9f0a-4e94-a11c-82b8f85f3497" pod="tigera-operator/tigera-operator-747864d56d-fmbfx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:50.945743 kubelet[2722]: I0813 01:13:50.945681 2722 kubelet.go:2405] "Pod admission denied" podUID="197f0373-6e48-4c1a-b38d-c69a41b645d1" pod="tigera-operator/tigera-operator-747864d56d-2zpk4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:51.034843 kubelet[2722]: I0813 01:13:51.034730 2722 kubelet.go:2405] "Pod admission denied" podUID="b0a30cca-2e94-45a0-a78a-ee5fcc98d5a0" pod="tigera-operator/tigera-operator-747864d56d-fnzmp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:51.135706 kubelet[2722]: I0813 01:13:51.135675 2722 kubelet.go:2405] "Pod admission denied" podUID="33242914-4063-4f2e-9fd2-7de2240764d9" pod="tigera-operator/tigera-operator-747864d56d-4lb96" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:51.233611 kubelet[2722]: I0813 01:13:51.233560 2722 kubelet.go:2405] "Pod admission denied" podUID="c624351a-94f0-44c7-89de-e168c5a94842" pod="tigera-operator/tigera-operator-747864d56d-tmkpd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:51.331772 kubelet[2722]: I0813 01:13:51.331270 2722 kubelet.go:2405] "Pod admission denied" podUID="3aaec04c-4850-49b3-bfdb-7883e4602fe3" pod="tigera-operator/tigera-operator-747864d56d-w95xp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:51.434684 kubelet[2722]: I0813 01:13:51.434625 2722 kubelet.go:2405] "Pod admission denied" podUID="9a76af2b-6e97-4fbc-909f-6ab2b6235205" pod="tigera-operator/tigera-operator-747864d56d-5dsnq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:51.532263 kubelet[2722]: I0813 01:13:51.532213 2722 kubelet.go:2405] "Pod admission denied" podUID="839a7364-67c0-449a-be06-28a83e774201" pod="tigera-operator/tigera-operator-747864d56d-p28f8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:51.633153 kubelet[2722]: I0813 01:13:51.632509 2722 kubelet.go:2405] "Pod admission denied" podUID="81e15c68-123a-4521-a5b3-73c96aefed14" pod="tigera-operator/tigera-operator-747864d56d-vxppw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:51.679879 kubelet[2722]: I0813 01:13:51.679835 2722 kubelet.go:2405] "Pod admission denied" podUID="cc8d1be9-96ec-4cba-9a31-cb2aba6e7c2f" pod="tigera-operator/tigera-operator-747864d56d-b9m7m" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:51.783456 kubelet[2722]: I0813 01:13:51.783404 2722 kubelet.go:2405] "Pod admission denied" podUID="35e5e49d-abcd-49bb-88bc-1fc214fef94c" pod="tigera-operator/tigera-operator-747864d56d-brq8c" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:51.882529 kubelet[2722]: I0813 01:13:51.882488 2722 kubelet.go:2405] "Pod admission denied" podUID="808522bf-dfd6-44d4-86fd-60fb6f8e9291" pod="tigera-operator/tigera-operator-747864d56d-b9dqg" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:52.000064 kubelet[2722]: I0813 01:13:52.000021 2722 kubelet.go:2405] "Pod admission denied" podUID="2e16c7f5-2a27-44df-a9c6-c6ed20e025a5" pod="tigera-operator/tigera-operator-747864d56d-wdv8t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:52.061450 kubelet[2722]: E0813 01:13:52.061416 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:13:52.063003 containerd[1550]: time="2025-08-13T01:13:52.062874439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fgsjn,Uid:27718112-1bb9-402a-89c8-f4890dedf664,Namespace:kube-system,Attempt:0,}" Aug 13 01:13:52.091681 kubelet[2722]: I0813 01:13:52.091630 2722 kubelet.go:2405] "Pod admission denied" podUID="b33efe1d-e0b8-4cfa-87cc-324ce403c0d2" pod="tigera-operator/tigera-operator-747864d56d-gx4vp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:52.157958 containerd[1550]: time="2025-08-13T01:13:52.157059427Z" level=error msg="Failed to destroy network for sandbox \"665551dbd789be2617d81654dac5b1d1f61976e911eb1b1ddb21c0376ab14a7d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:52.160322 systemd[1]: run-netns-cni\x2d38565663\x2d1611\x2de118\x2ddaea\x2ddd5a5e0d1aca.mount: Deactivated successfully. Aug 13 01:13:52.164138 containerd[1550]: time="2025-08-13T01:13:52.164102248Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fgsjn,Uid:27718112-1bb9-402a-89c8-f4890dedf664,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"665551dbd789be2617d81654dac5b1d1f61976e911eb1b1ddb21c0376ab14a7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:52.164594 kubelet[2722]: E0813 01:13:52.164435 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"665551dbd789be2617d81654dac5b1d1f61976e911eb1b1ddb21c0376ab14a7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:52.164594 kubelet[2722]: E0813 01:13:52.164535 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"665551dbd789be2617d81654dac5b1d1f61976e911eb1b1ddb21c0376ab14a7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:13:52.164594 kubelet[2722]: E0813 01:13:52.164579 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"665551dbd789be2617d81654dac5b1d1f61976e911eb1b1ddb21c0376ab14a7d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:13:52.164720 kubelet[2722]: E0813 01:13:52.164629 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fgsjn_kube-system(27718112-1bb9-402a-89c8-f4890dedf664)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fgsjn_kube-system(27718112-1bb9-402a-89c8-f4890dedf664)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"665551dbd789be2617d81654dac5b1d1f61976e911eb1b1ddb21c0376ab14a7d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fgsjn" podUID="27718112-1bb9-402a-89c8-f4890dedf664" Aug 13 01:13:52.190931 kubelet[2722]: I0813 01:13:52.190877 2722 kubelet.go:2405] "Pod admission denied" podUID="a1f87ae8-beaa-4402-b1b2-5ec02ab66a67" pod="tigera-operator/tigera-operator-747864d56d-z6gzq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:52.386085 kubelet[2722]: I0813 01:13:52.385882 2722 kubelet.go:2405] "Pod admission denied" podUID="559d62c6-3a07-4284-964d-8f5c7d6431f0" pod="tigera-operator/tigera-operator-747864d56d-dszwf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:52.514497 kubelet[2722]: I0813 01:13:52.514261 2722 kubelet.go:2405] "Pod admission denied" podUID="23cc7103-10a1-4fde-b8cf-c53749e24420" pod="tigera-operator/tigera-operator-747864d56d-bf2zp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:52.640780 kubelet[2722]: I0813 01:13:52.640264 2722 kubelet.go:2405] "Pod admission denied" podUID="d9850378-7d04-4047-b6ce-cf46963595d5" pod="tigera-operator/tigera-operator-747864d56d-mvlvh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:52.734798 kubelet[2722]: I0813 01:13:52.734753 2722 kubelet.go:2405] "Pod admission denied" podUID="5a135fdd-fe1c-409c-95fe-b63373345edc" pod="tigera-operator/tigera-operator-747864d56d-ngzgc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:52.834921 kubelet[2722]: I0813 01:13:52.834848 2722 kubelet.go:2405] "Pod admission denied" podUID="5ef0b435-fe6f-45e9-8747-ca0935d357b6" pod="tigera-operator/tigera-operator-747864d56d-nnvcl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:52.932515 kubelet[2722]: I0813 01:13:52.932476 2722 kubelet.go:2405] "Pod admission denied" podUID="5756c925-403c-40ba-b537-029bae8b9b21" pod="tigera-operator/tigera-operator-747864d56d-hkxgv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:53.033395 kubelet[2722]: I0813 01:13:53.033347 2722 kubelet.go:2405] "Pod admission denied" podUID="36958b20-5e8c-4c5d-94aa-3824b65c191a" pod="tigera-operator/tigera-operator-747864d56d-lql6c" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:53.135999 kubelet[2722]: I0813 01:13:53.135948 2722 kubelet.go:2405] "Pod admission denied" podUID="cfd6eac3-2f80-44ec-8fdd-16ca9d456d2d" pod="tigera-operator/tigera-operator-747864d56d-7dmbv" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:53.241974 kubelet[2722]: I0813 01:13:53.240742 2722 kubelet.go:2405] "Pod admission denied" podUID="df7e431e-af87-4956-85b7-c7882cabe635" pod="tigera-operator/tigera-operator-747864d56d-hr62b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:53.282610 kubelet[2722]: I0813 01:13:53.282377 2722 kubelet.go:2405] "Pod admission denied" podUID="0688df49-dee6-4e27-b30d-d6a322d5e6f1" pod="tigera-operator/tigera-operator-747864d56d-cndl9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:53.386920 kubelet[2722]: I0813 01:13:53.385957 2722 kubelet.go:2405] "Pod admission denied" podUID="fabccf65-31cc-4e50-a1af-bddda8903cb8" pod="tigera-operator/tigera-operator-747864d56d-vspt7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:53.586302 kubelet[2722]: I0813 01:13:53.586102 2722 kubelet.go:2405] "Pod admission denied" podUID="6b10801b-ced9-4a50-8f36-dc893ad11750" pod="tigera-operator/tigera-operator-747864d56d-4cfjt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:53.694892 kubelet[2722]: I0813 01:13:53.694830 2722 kubelet.go:2405] "Pod admission denied" podUID="581ca05c-fd9d-45d1-b679-9d7fb2271818" pod="tigera-operator/tigera-operator-747864d56d-qgd2j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:53.791649 kubelet[2722]: I0813 01:13:53.791570 2722 kubelet.go:2405] "Pod admission denied" podUID="132c91ec-a896-402a-949f-b43b680d84fe" pod="tigera-operator/tigera-operator-747864d56d-fwxlg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:53.892933 kubelet[2722]: I0813 01:13:53.891800 2722 kubelet.go:2405] "Pod admission denied" podUID="839ec673-68cf-462b-ab38-88078ecda2a9" pod="tigera-operator/tigera-operator-747864d56d-xzhkx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:53.982095 kubelet[2722]: I0813 01:13:53.982048 2722 kubelet.go:2405] "Pod admission denied" podUID="4f452f9e-71ef-4d41-87f9-da05f8a06d60" pod="tigera-operator/tigera-operator-747864d56d-ld7j4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:54.100251 kubelet[2722]: I0813 01:13:54.099746 2722 kubelet.go:2405] "Pod admission denied" podUID="61dab454-007b-4e0c-a3a3-778836027bf3" pod="tigera-operator/tigera-operator-747864d56d-6nq6b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:54.184944 kubelet[2722]: I0813 01:13:54.184883 2722 kubelet.go:2405] "Pod admission denied" podUID="252e0f33-da45-49a2-af00-0c587f6f79a7" pod="tigera-operator/tigera-operator-747864d56d-dkvff" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:54.283665 kubelet[2722]: I0813 01:13:54.283615 2722 kubelet.go:2405] "Pod admission denied" podUID="869f8a9e-5705-43ac-8b2e-5548023722c2" pod="tigera-operator/tigera-operator-747864d56d-pzwdw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:54.383578 kubelet[2722]: I0813 01:13:54.383536 2722 kubelet.go:2405] "Pod admission denied" podUID="52b476da-1015-4cfd-8c21-7e9161b9d190" pod="tigera-operator/tigera-operator-747864d56d-g6p5s" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:54.481485 kubelet[2722]: I0813 01:13:54.481374 2722 kubelet.go:2405] "Pod admission denied" podUID="5adcc4c5-4143-4b3a-ad84-44338538085a" pod="tigera-operator/tigera-operator-747864d56d-xjbx2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:54.599045 kubelet[2722]: I0813 01:13:54.598751 2722 kubelet.go:2405] "Pod admission denied" podUID="daefdf21-17d1-4e22-972c-cb9e794a8627" pod="tigera-operator/tigera-operator-747864d56d-bxq49" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:54.790707 kubelet[2722]: I0813 01:13:54.790575 2722 kubelet.go:2405] "Pod admission denied" podUID="3480d772-093f-49fc-89bb-f6fe16c58561" pod="tigera-operator/tigera-operator-747864d56d-ldgzm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:54.881852 kubelet[2722]: I0813 01:13:54.881808 2722 kubelet.go:2405] "Pod admission denied" podUID="597b5afa-98e4-4715-a155-ba90955c9957" pod="tigera-operator/tigera-operator-747864d56d-7xhdl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:54.981844 kubelet[2722]: I0813 01:13:54.981799 2722 kubelet.go:2405] "Pod admission denied" podUID="e68d5755-6de4-4a28-9a95-36c58cdf29ee" pod="tigera-operator/tigera-operator-747864d56d-x297h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:55.183564 kubelet[2722]: I0813 01:13:55.183508 2722 kubelet.go:2405] "Pod admission denied" podUID="8c00a383-d221-4673-8904-849060c6456f" pod="tigera-operator/tigera-operator-747864d56d-dzd2v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:55.300625 kubelet[2722]: I0813 01:13:55.299138 2722 kubelet.go:2405] "Pod admission denied" podUID="df920174-750c-45b1-914d-1d051c498eb0" pod="tigera-operator/tigera-operator-747864d56d-b2bm2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:55.388018 kubelet[2722]: I0813 01:13:55.387968 2722 kubelet.go:2405] "Pod admission denied" podUID="b4c39e19-66cd-4b42-9020-f0e82ff48a17" pod="tigera-operator/tigera-operator-747864d56d-cg7zz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:55.485920 kubelet[2722]: I0813 01:13:55.485760 2722 kubelet.go:2405] "Pod admission denied" podUID="80a54456-86a7-4fd4-90c7-47c933247422" pod="tigera-operator/tigera-operator-747864d56d-b4lm4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:55.589804 kubelet[2722]: I0813 01:13:55.589750 2722 kubelet.go:2405] "Pod admission denied" podUID="a1bf9bbf-6f6c-47bf-8623-f2c8f5885d64" pod="tigera-operator/tigera-operator-747864d56d-mjd2t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:55.683911 kubelet[2722]: I0813 01:13:55.683863 2722 kubelet.go:2405] "Pod admission denied" podUID="71a8ac6c-6d5b-4e01-9a53-7ad3c6cf7680" pod="tigera-operator/tigera-operator-747864d56d-gqwzs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:55.803215 kubelet[2722]: I0813 01:13:55.802587 2722 kubelet.go:2405] "Pod admission denied" podUID="1e718fab-372f-4745-b28c-f6520b0dddc9" pod="tigera-operator/tigera-operator-747864d56d-qr2tm" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:55.882760 kubelet[2722]: I0813 01:13:55.882710 2722 kubelet.go:2405] "Pod admission denied" podUID="a19260b0-9944-465a-9a09-1f55d452aa10" pod="tigera-operator/tigera-operator-747864d56d-5htt7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:55.985580 kubelet[2722]: I0813 01:13:55.985524 2722 kubelet.go:2405] "Pod admission denied" podUID="bc8863d6-3c3f-4089-81d7-8b164f0f234d" pod="tigera-operator/tigera-operator-747864d56d-tpbhf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:56.057983 kubelet[2722]: E0813 01:13:56.057321 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:13:56.058546 containerd[1550]: time="2025-08-13T01:13:56.058510003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p259x,Uid:a5b0b8ae-a381-43cc-8adc-4e3ee01749bd,Namespace:kube-system,Attempt:0,}" Aug 13 01:13:56.095198 kubelet[2722]: I0813 01:13:56.095170 2722 kubelet.go:2405] "Pod admission denied" podUID="798c3873-05a3-4147-abeb-4c5d87dd9b5d" pod="tigera-operator/tigera-operator-747864d56d-cl42t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:56.116391 containerd[1550]: time="2025-08-13T01:13:56.115474905Z" level=error msg="Failed to destroy network for sandbox \"be1aca0fd510280985ea55ed474556234f0539642b222d584889a1d9baaf8a9d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:56.120436 containerd[1550]: time="2025-08-13T01:13:56.118394539Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p259x,Uid:a5b0b8ae-a381-43cc-8adc-4e3ee01749bd,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"be1aca0fd510280985ea55ed474556234f0539642b222d584889a1d9baaf8a9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:56.119334 systemd[1]: run-netns-cni\x2d92713cf1\x2d829e\x2d45c7\x2d67e0\x2d0f7c3ff1a268.mount: Deactivated successfully. 
Aug 13 01:13:56.123926 kubelet[2722]: E0813 01:13:56.122496 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be1aca0fd510280985ea55ed474556234f0539642b222d584889a1d9baaf8a9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:56.123926 kubelet[2722]: E0813 01:13:56.122544 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be1aca0fd510280985ea55ed474556234f0539642b222d584889a1d9baaf8a9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:13:56.123926 kubelet[2722]: E0813 01:13:56.122565 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"be1aca0fd510280985ea55ed474556234f0539642b222d584889a1d9baaf8a9d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:13:56.123926 kubelet[2722]: E0813 01:13:56.122606 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-p259x_kube-system(a5b0b8ae-a381-43cc-8adc-4e3ee01749bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-p259x_kube-system(a5b0b8ae-a381-43cc-8adc-4e3ee01749bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"be1aca0fd510280985ea55ed474556234f0539642b222d584889a1d9baaf8a9d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-p259x" podUID="a5b0b8ae-a381-43cc-8adc-4e3ee01749bd" Aug 13 01:13:56.188879 kubelet[2722]: I0813 01:13:56.188825 2722 kubelet.go:2405] "Pod admission denied" podUID="dee0dc7b-951d-4218-a86e-1c91e54156d0" pod="tigera-operator/tigera-operator-747864d56d-82vdw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:56.415221 kubelet[2722]: I0813 01:13:56.414429 2722 kubelet.go:2405] "Pod admission denied" podUID="74e45fd6-8e09-427f-a81a-b1d6fb2a7a55" pod="tigera-operator/tigera-operator-747864d56d-245tf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:56.487565 kubelet[2722]: I0813 01:13:56.487516 2722 kubelet.go:2405] "Pod admission denied" podUID="25a08680-d053-4126-bcaa-5415aed1e075" pod="tigera-operator/tigera-operator-747864d56d-n7cdk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:56.589804 kubelet[2722]: I0813 01:13:56.589757 2722 kubelet.go:2405] "Pod admission denied" podUID="870918c8-9203-4896-9bf8-838e85ab96f4" pod="tigera-operator/tigera-operator-747864d56d-hzd27" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:56.692027 kubelet[2722]: I0813 01:13:56.691983 2722 kubelet.go:2405] "Pod admission denied" podUID="4655b135-6693-431a-a721-29ae3179d46c" pod="tigera-operator/tigera-operator-747864d56d-nfp7j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:56.791239 kubelet[2722]: I0813 01:13:56.791186 2722 kubelet.go:2405] "Pod admission denied" podUID="2e6e9cf3-6ddd-43e4-9652-65b78ac5438b" pod="tigera-operator/tigera-operator-747864d56d-95dqr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:56.904786 kubelet[2722]: I0813 01:13:56.904652 2722 kubelet.go:2405] "Pod admission denied" podUID="dceebbe9-0cbe-40ce-a9df-4fa73f1b951a" pod="tigera-operator/tigera-operator-747864d56d-6rk57" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:56.955822 kubelet[2722]: I0813 01:13:56.955088 2722 kubelet.go:2405] "Pod admission denied" podUID="635a29c4-2d96-4110-9732-c64ee8d3f0d6" pod="tigera-operator/tigera-operator-747864d56d-b8zbs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:57.032190 kubelet[2722]: I0813 01:13:57.032149 2722 kubelet.go:2405] "Pod admission denied" podUID="31990cab-7768-4c79-b0ae-cebc15b8652d" pod="tigera-operator/tigera-operator-747864d56d-h8b9b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:57.058543 containerd[1550]: time="2025-08-13T01:13:57.058171021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cddc95b58-6t6z7,Uid:2dab385f-2367-4e01-8d78-2247bcba7bcc,Namespace:calico-system,Attempt:0,}" Aug 13 01:13:57.142016 containerd[1550]: time="2025-08-13T01:13:57.141970580Z" level=error msg="Failed to destroy network for sandbox \"20b239ff16e622ab702ddd5bc2e62a8bfd7935d813416b72d7dfbf130dc5e904\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:57.144824 containerd[1550]: time="2025-08-13T01:13:57.143988633Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cddc95b58-6t6z7,Uid:2dab385f-2367-4e01-8d78-2247bcba7bcc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"20b239ff16e622ab702ddd5bc2e62a8bfd7935d813416b72d7dfbf130dc5e904\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:57.144967 kubelet[2722]: E0813 01:13:57.144403 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20b239ff16e622ab702ddd5bc2e62a8bfd7935d813416b72d7dfbf130dc5e904\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:57.144967 kubelet[2722]: E0813 01:13:57.144459 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20b239ff16e622ab702ddd5bc2e62a8bfd7935d813416b72d7dfbf130dc5e904\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:13:57.144967 kubelet[2722]: E0813 01:13:57.144483 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"20b239ff16e622ab702ddd5bc2e62a8bfd7935d813416b72d7dfbf130dc5e904\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:13:57.144967 kubelet[2722]: E0813 01:13:57.144533 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-cddc95b58-6t6z7_calico-system(2dab385f-2367-4e01-8d78-2247bcba7bcc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-cddc95b58-6t6z7_calico-system(2dab385f-2367-4e01-8d78-2247bcba7bcc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"20b239ff16e622ab702ddd5bc2e62a8bfd7935d813416b72d7dfbf130dc5e904\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" podUID="2dab385f-2367-4e01-8d78-2247bcba7bcc" Aug 13 01:13:57.145916 systemd[1]: run-netns-cni\x2ded47caed\x2d84f5\x2d90f6\x2d5d8e\x2db76c94dc114c.mount: Deactivated successfully. Aug 13 01:13:57.156134 kubelet[2722]: I0813 01:13:57.156100 2722 kubelet.go:2405] "Pod admission denied" podUID="21036088-33cf-48bd-a90b-52539a5a7bd1" pod="tigera-operator/tigera-operator-747864d56d-kx5ff" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:57.204292 kubelet[2722]: I0813 01:13:57.204235 2722 kubelet.go:2405] "Pod admission denied" podUID="1739a9ac-8313-45d7-9ce0-627feb964ccc" pod="tigera-operator/tigera-operator-747864d56d-rfsrm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:57.297041 kubelet[2722]: I0813 01:13:57.295821 2722 kubelet.go:2405] "Pod admission denied" podUID="b489c2f7-a927-400f-8b70-b803cfb21e43" pod="tigera-operator/tigera-operator-747864d56d-r4wtf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:57.385537 kubelet[2722]: I0813 01:13:57.385498 2722 kubelet.go:2405] "Pod admission denied" podUID="f9123c2c-7791-49e1-a804-1cd745258835" pod="tigera-operator/tigera-operator-747864d56d-hgdw2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:57.484212 kubelet[2722]: I0813 01:13:57.484162 2722 kubelet.go:2405] "Pod admission denied" podUID="6368461a-c2c2-4c47-b2d8-18d673dcc19c" pod="tigera-operator/tigera-operator-747864d56d-cqnsj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:57.583171 kubelet[2722]: I0813 01:13:57.583023 2722 kubelet.go:2405] "Pod admission denied" podUID="033a8ce1-477b-415c-97e3-2d97d7b1c208" pod="tigera-operator/tigera-operator-747864d56d-xzlb6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:57.682954 kubelet[2722]: I0813 01:13:57.682908 2722 kubelet.go:2405] "Pod admission denied" podUID="92adea23-5ab1-4745-bae1-02f384a8d29e" pod="tigera-operator/tigera-operator-747864d56d-6w6kt" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:57.798238 kubelet[2722]: I0813 01:13:57.797277 2722 kubelet.go:2405] "Pod admission denied" podUID="4b959acd-3196-497f-8895-6050046b85d6" pod="tigera-operator/tigera-operator-747864d56d-qprnl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:57.884151 kubelet[2722]: I0813 01:13:57.884038 2722 kubelet.go:2405] "Pod admission denied" podUID="9fcd0be2-d808-4e38-843c-71edc75904cb" pod="tigera-operator/tigera-operator-747864d56d-xmmln" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:57.982125 kubelet[2722]: I0813 01:13:57.982079 2722 kubelet.go:2405] "Pod admission denied" podUID="c60fe76b-c304-4575-bd22-945fec2976ff" pod="tigera-operator/tigera-operator-747864d56d-v6fwj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:58.041411 kubelet[2722]: I0813 01:13:58.041335 2722 kubelet.go:2405] "Pod admission denied" podUID="130b5302-d705-4a08-a4e0-2b45a99b3595" pod="tigera-operator/tigera-operator-747864d56d-9ggs4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:58.060792 containerd[1550]: time="2025-08-13T01:13:58.060722102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7lv4,Uid:6b834979-32a4-464b-9898-ef87b1042a9e,Namespace:calico-system,Attempt:0,}" Aug 13 01:13:58.128163 containerd[1550]: time="2025-08-13T01:13:58.123987261Z" level=error msg="Failed to destroy network for sandbox \"c6bac443aae99f20f423da0eda7c1d74748094adef118ca46d51b45654ec3578\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:58.128163 containerd[1550]: time="2025-08-13T01:13:58.126944295Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7lv4,Uid:6b834979-32a4-464b-9898-ef87b1042a9e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6bac443aae99f20f423da0eda7c1d74748094adef118ca46d51b45654ec3578\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:58.128317 kubelet[2722]: E0813 01:13:58.127468 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6bac443aae99f20f423da0eda7c1d74748094adef118ca46d51b45654ec3578\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:13:58.128317 kubelet[2722]: E0813 01:13:58.127525 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6bac443aae99f20f423da0eda7c1d74748094adef118ca46d51b45654ec3578\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:13:58.128317 kubelet[2722]: E0813 01:13:58.127548 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6bac443aae99f20f423da0eda7c1d74748094adef118ca46d51b45654ec3578\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:13:58.128317 kubelet[2722]: E0813 01:13:58.127601 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l7lv4_calico-system(6b834979-32a4-464b-9898-ef87b1042a9e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l7lv4_calico-system(6b834979-32a4-464b-9898-ef87b1042a9e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c6bac443aae99f20f423da0eda7c1d74748094adef118ca46d51b45654ec3578\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l7lv4" podUID="6b834979-32a4-464b-9898-ef87b1042a9e" Aug 13 01:13:58.126204 systemd[1]: run-netns-cni\x2da3342ce3\x2d4cd6\x2d34f5\x2dd860\x2d4723c0a450fd.mount: Deactivated successfully. Aug 13 01:13:58.146593 kubelet[2722]: I0813 01:13:58.146148 2722 kubelet.go:2405] "Pod admission denied" podUID="2d5ce3ae-bd25-4f67-9206-37513048d94c" pod="tigera-operator/tigera-operator-747864d56d-9cbh4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:58.233876 kubelet[2722]: I0813 01:13:58.233826 2722 kubelet.go:2405] "Pod admission denied" podUID="08ec9714-6daf-43d4-bb3c-cec4b1616f55" pod="tigera-operator/tigera-operator-747864d56d-lgq7t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:58.280777 kubelet[2722]: I0813 01:13:58.280732 2722 kubelet.go:2405] "Pod admission denied" podUID="741eefb4-751e-41c1-8752-2d355147779f" pod="tigera-operator/tigera-operator-747864d56d-xpdzg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:58.381482 kubelet[2722]: I0813 01:13:58.381435 2722 kubelet.go:2405] "Pod admission denied" podUID="814ca7a4-84e7-48a3-b045-58a67e083f6b" pod="tigera-operator/tigera-operator-747864d56d-rfmth" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:58.585927 kubelet[2722]: I0813 01:13:58.584852 2722 kubelet.go:2405] "Pod admission denied" podUID="d8ad58f1-5055-4991-ba9c-3c4fc7adc991" pod="tigera-operator/tigera-operator-747864d56d-cbqw9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:58.681661 kubelet[2722]: I0813 01:13:58.681601 2722 kubelet.go:2405] "Pod admission denied" podUID="457a1795-f843-4624-949a-403f9e692e8b" pod="tigera-operator/tigera-operator-747864d56d-7j7tb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:58.783570 kubelet[2722]: I0813 01:13:58.783512 2722 kubelet.go:2405] "Pod admission denied" podUID="740e6b19-1ff2-4852-a3d0-df82ed3b79b2" pod="tigera-operator/tigera-operator-747864d56d-p5grl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:58.985716 kubelet[2722]: I0813 01:13:58.985649 2722 kubelet.go:2405] "Pod admission denied" podUID="c71b7e98-d480-4622-be8a-58fb5884163a" pod="tigera-operator/tigera-operator-747864d56d-tffws" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:59.093781 kubelet[2722]: I0813 01:13:59.093732 2722 kubelet.go:2405] "Pod admission denied" podUID="a9544529-429d-40c6-96fd-e41bff5faed1" pod="tigera-operator/tigera-operator-747864d56d-rlvc2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:59.186021 kubelet[2722]: I0813 01:13:59.185967 2722 kubelet.go:2405] "Pod admission denied" podUID="46d3cb9e-227d-4415-8647-733107d5bc32" pod="tigera-operator/tigera-operator-747864d56d-rwfqv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:59.288741 kubelet[2722]: I0813 01:13:59.288594 2722 kubelet.go:2405] "Pod admission denied" podUID="21b66bb2-6775-4059-b53c-0839245b17fe" pod="tigera-operator/tigera-operator-747864d56d-mhb5c" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:59.398914 kubelet[2722]: I0813 01:13:59.398039 2722 kubelet.go:2405] "Pod admission denied" podUID="88e2a700-843c-4bf7-85fb-56d8ce39b8bc" pod="tigera-operator/tigera-operator-747864d56d-g8sks" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:59.490986 kubelet[2722]: I0813 01:13:59.490928 2722 kubelet.go:2405] "Pod admission denied" podUID="10e1d5a6-c4e3-4e00-8a80-170e5a028bdf" pod="tigera-operator/tigera-operator-747864d56d-9pnx6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:59.534769 kubelet[2722]: I0813 01:13:59.534713 2722 kubelet.go:2405] "Pod admission denied" podUID="2c000eb1-a7f7-4788-8008-d675ad7b2b8f" pod="tigera-operator/tigera-operator-747864d56d-cjlpg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:13:59.634343 kubelet[2722]: I0813 01:13:59.633664 2722 kubelet.go:2405] "Pod admission denied" podUID="c1fd2aa8-f1ad-4fc6-8f36-e22318cf5f9d" pod="tigera-operator/tigera-operator-747864d56d-fjzhw" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:13:59.807915 kubelet[2722]: I0813 01:13:59.807870 2722 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:13:59.808124 kubelet[2722]: I0813 01:13:59.807927 2722 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:13:59.809315 kubelet[2722]: I0813 01:13:59.809301 2722 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:13:59.811212 kubelet[2722]: I0813 01:13:59.811191 2722 image_gc_manager.go:514] "Removing image to free bytes" imageID="sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b" size=20939036 runtimeHandler="" Aug 13 01:13:59.811668 containerd[1550]: time="2025-08-13T01:13:59.811605150Z" level=info msg="RemoveImage \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Aug 13 01:13:59.813453 containerd[1550]: time="2025-08-13T01:13:59.813163462Z" level=info msg="ImageDelete event name:\"registry.k8s.io/coredns/coredns:v1.12.0\"" Aug 13 01:13:59.813916 containerd[1550]: time="2025-08-13T01:13:59.813805183Z" level=info msg="ImageDelete event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\"" Aug 13 01:13:59.814447 containerd[1550]: time="2025-08-13T01:13:59.814422774Z" level=info msg="RemoveImage \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" returns successfully" Aug 13 01:13:59.814638 containerd[1550]: time="2025-08-13T01:13:59.814606714Z" level=info msg="ImageDelete event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Aug 13 01:13:59.815736 kubelet[2722]: I0813 01:13:59.814888 2722 image_gc_manager.go:514] "Removing image to free bytes" imageID="sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1" size=58938593 runtimeHandler="" Aug 13 01:13:59.815863 containerd[1550]: time="2025-08-13T01:13:59.815842236Z" level=info msg="RemoveImage \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Aug 13 01:13:59.816656 containerd[1550]: time="2025-08-13T01:13:59.816570077Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd:3.5.21-0\"" Aug 13 01:13:59.817288 containerd[1550]: time="2025-08-13T01:13:59.817191447Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\"" Aug 13 01:13:59.817699 containerd[1550]: time="2025-08-13T01:13:59.817667349Z" level=info msg="RemoveImage \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" returns successfully" Aug 13 01:13:59.817848 containerd[1550]: time="2025-08-13T01:13:59.817748289Z" level=info msg="ImageDelete event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Aug 13 01:13:59.829529 kubelet[2722]: I0813 01:13:59.829500 2722 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:13:59.829591 kubelet[2722]: I0813 01:13:59.829574 2722 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" 
pods=["calico-system/calico-kube-controllers-cddc95b58-6t6z7","kube-system/coredns-674b8bbfcf-p259x","kube-system/coredns-674b8bbfcf-fgsjn","calico-system/csi-node-driver-l7lv4","calico-system/calico-node-hq29b","calico-system/calico-typha-55bf5cd98c-8lqpc","kube-system/kube-controller-manager-172-233-214-103","kube-system/kube-proxy-tb5sq","kube-system/kube-apiserver-172-233-214-103","kube-system/kube-scheduler-172-233-214-103"] Aug 13 01:13:59.829652 kubelet[2722]: E0813 01:13:59.829602 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:13:59.829652 kubelet[2722]: E0813 01:13:59.829612 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:13:59.829652 kubelet[2722]: E0813 01:13:59.829618 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:13:59.829652 kubelet[2722]: E0813 01:13:59.829624 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:13:59.829652 kubelet[2722]: E0813 01:13:59.829631 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-hq29b" Aug 13 01:13:59.829652 kubelet[2722]: E0813 01:13:59.829640 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-55bf5cd98c-8lqpc" Aug 13 01:13:59.829652 kubelet[2722]: E0813 01:13:59.829649 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-214-103" Aug 13 01:13:59.829786 kubelet[2722]: E0813 01:13:59.829656 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-tb5sq" Aug 13 01:13:59.829786 kubelet[2722]: E0813 01:13:59.829664 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-214-103" Aug 13 01:13:59.829786 kubelet[2722]: E0813 01:13:59.829671 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-214-103" Aug 13 01:13:59.829786 kubelet[2722]: I0813 01:13:59.829681 2722 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:13:59.846783 kubelet[2722]: I0813 01:13:59.846754 2722 kubelet.go:2405] "Pod admission denied" podUID="60978d2e-d4ce-4346-b266-21f7819fedfd" pod="tigera-operator/tigera-operator-747864d56d-vj8wn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:00.020812 kubelet[2722]: I0813 01:14:00.020754 2722 kubelet.go:2405] "Pod admission denied" podUID="79f96b24-bf2f-4788-9ff4-0d5e4291a228" pod="tigera-operator/tigera-operator-747864d56d-rlnl5" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:00.061922 kubelet[2722]: E0813 01:14:00.061352 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2063325269: write /var/lib/containerd/tmpmounts/containerd-mount2063325269/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-hq29b" podUID="3c0f3b86-7d63-44df-843e-763eb95a8b94" Aug 13 01:14:00.093392 kubelet[2722]: I0813 01:14:00.093342 2722 kubelet.go:2405] "Pod admission denied" podUID="3b7b7923-f70a-4e96-a456-407ec5ff4de1" pod="tigera-operator/tigera-operator-747864d56d-x4xz8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:00.108870 kubelet[2722]: I0813 01:14:00.108818 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-x4xz8" podStartSLOduration=0.108807605 podStartE2EDuration="108.807605ms" podCreationTimestamp="2025-08-13 01:14:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:14:00.099145991 +0000 UTC m=+126.137075300" watchObservedRunningTime="2025-08-13 01:14:00.108807605 +0000 UTC m=+126.146736914" Aug 13 01:14:00.200926 kubelet[2722]: I0813 01:14:00.200598 2722 kubelet.go:2405] "Pod admission denied" podUID="44be64ae-74e8-432e-a160-df4b604b4095" pod="tigera-operator/tigera-operator-747864d56d-bmhfp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:00.411807 kubelet[2722]: I0813 01:14:00.411139 2722 kubelet.go:2405] "Pod admission denied" podUID="745d63b3-731d-4d21-a7a8-aa15e1d2501a" pod="tigera-operator/tigera-operator-747864d56d-s2xnl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:00.528916 kubelet[2722]: I0813 01:14:00.528852 2722 kubelet.go:2405] "Pod admission denied" podUID="9d42bae5-8bc1-449d-ac05-85bd5da41a2a" pod="tigera-operator/tigera-operator-747864d56d-dxwhm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:00.637509 kubelet[2722]: I0813 01:14:00.637455 2722 kubelet.go:2405] "Pod admission denied" podUID="09e3e11f-d5ee-43bf-a4de-1ec7fa82544e" pod="tigera-operator/tigera-operator-747864d56d-hcrhh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:00.686050 kubelet[2722]: I0813 01:14:00.686020 2722 kubelet.go:2405] "Pod admission denied" podUID="0ba47178-13e5-4146-8749-e87d73ebf87c" pod="tigera-operator/tigera-operator-747864d56d-8nvcg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:00.785974 kubelet[2722]: I0813 01:14:00.785920 2722 kubelet.go:2405] "Pod admission denied" podUID="01295bfc-1313-4ac9-950f-3c6ffb9f1ad8" pod="tigera-operator/tigera-operator-747864d56d-npfnb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:00.886833 kubelet[2722]: I0813 01:14:00.886785 2722 kubelet.go:2405] "Pod admission denied" podUID="d194dea2-03df-4aca-996f-72f9c04aa35f" pod="tigera-operator/tigera-operator-747864d56d-vnlj8" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:00.984168 kubelet[2722]: I0813 01:14:00.984031 2722 kubelet.go:2405] "Pod admission denied" podUID="9f7e5990-04f5-4fe5-8963-0892ece8e4c7" pod="tigera-operator/tigera-operator-747864d56d-w74p4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:01.082913 kubelet[2722]: I0813 01:14:01.082860 2722 kubelet.go:2405] "Pod admission denied" podUID="fc6c982a-468f-4629-8c98-08120a2b51af" pod="tigera-operator/tigera-operator-747864d56d-vfc2k" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:01.204970 kubelet[2722]: I0813 01:14:01.204884 2722 kubelet.go:2405] "Pod admission denied" podUID="1ff0fc8f-fcea-4255-8097-645422c1fdb9" pod="tigera-operator/tigera-operator-747864d56d-bf4fb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:01.389606 kubelet[2722]: I0813 01:14:01.388658 2722 kubelet.go:2405] "Pod admission denied" podUID="492acb14-7ab3-4749-bd9d-22a648198f44" pod="tigera-operator/tigera-operator-747864d56d-kk9kd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:01.486527 kubelet[2722]: I0813 01:14:01.486484 2722 kubelet.go:2405] "Pod admission denied" podUID="6919dea4-3d73-4b46-8e46-2fa9388a0563" pod="tigera-operator/tigera-operator-747864d56d-m6cbm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:01.591109 kubelet[2722]: I0813 01:14:01.591060 2722 kubelet.go:2405] "Pod admission denied" podUID="9c5166c3-6e75-47ce-9335-a83c9f73a895" pod="tigera-operator/tigera-operator-747864d56d-krp76" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:01.796182 kubelet[2722]: I0813 01:14:01.796128 2722 kubelet.go:2405] "Pod admission denied" podUID="69dadf31-6b49-4ed5-9746-3bb9d1504327" pod="tigera-operator/tigera-operator-747864d56d-8hmvt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:01.886691 kubelet[2722]: I0813 01:14:01.886623 2722 kubelet.go:2405] "Pod admission denied" podUID="ffb7352a-0eae-4754-9257-321ca6643491" pod="tigera-operator/tigera-operator-747864d56d-bl4jr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:01.987013 kubelet[2722]: I0813 01:14:01.986947 2722 kubelet.go:2405] "Pod admission denied" podUID="e5885b0c-a26e-46e2-a4b0-136fc4436e6b" pod="tigera-operator/tigera-operator-747864d56d-z59vn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:02.087852 kubelet[2722]: I0813 01:14:02.086718 2722 kubelet.go:2405] "Pod admission denied" podUID="ae4c6e16-d93f-4aaa-8728-60ad88eaefd3" pod="tigera-operator/tigera-operator-747864d56d-49sh6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:02.189465 kubelet[2722]: I0813 01:14:02.189409 2722 kubelet.go:2405] "Pod admission denied" podUID="d9b32a72-c2da-40ae-a9a3-58c2fef7806f" pod="tigera-operator/tigera-operator-747864d56d-gjtvn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:02.290738 kubelet[2722]: I0813 01:14:02.290664 2722 kubelet.go:2405] "Pod admission denied" podUID="6891d605-c737-4422-bcf9-a3329292e596" pod="tigera-operator/tigera-operator-747864d56d-th244" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:02.330522 kubelet[2722]: I0813 01:14:02.330434 2722 kubelet.go:2405] "Pod admission denied" podUID="7b6add11-fded-4d6b-9b54-b4a8d895836c" pod="tigera-operator/tigera-operator-747864d56d-ncq9l" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:02.441882 kubelet[2722]: I0813 01:14:02.441836 2722 kubelet.go:2405] "Pod admission denied" podUID="a4b6a9e3-ac05-46f2-9d2e-d678c8c6d0d9" pod="tigera-operator/tigera-operator-747864d56d-8nr86" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:02.541154 kubelet[2722]: I0813 01:14:02.541095 2722 kubelet.go:2405] "Pod admission denied" podUID="783e6754-e1e1-4dd9-b8bd-75f22c33176f" pod="tigera-operator/tigera-operator-747864d56d-5j7t5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:02.620965 kubelet[2722]: I0813 01:14:02.620909 2722 kubelet.go:2405] "Pod admission denied" podUID="d0f93f0a-4fe2-4095-9a43-252749a9edea" pod="tigera-operator/tigera-operator-747864d56d-cltff" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:02.739216 kubelet[2722]: I0813 01:14:02.738341 2722 kubelet.go:2405] "Pod admission denied" podUID="3a451336-ac6d-4ce8-94bd-1b2cb14ca518" pod="tigera-operator/tigera-operator-747864d56d-nd5ts" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:02.843631 kubelet[2722]: I0813 01:14:02.843574 2722 kubelet.go:2405] "Pod admission denied" podUID="7d739d6c-d87b-4118-afa9-1a0f7627c466" pod="tigera-operator/tigera-operator-747864d56d-656t7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:02.941147 kubelet[2722]: I0813 01:14:02.941091 2722 kubelet.go:2405] "Pod admission denied" podUID="4d21e19f-7c58-40bf-904e-7e8380c924f3" pod="tigera-operator/tigera-operator-747864d56d-dhc9f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:03.042859 kubelet[2722]: I0813 01:14:03.042709 2722 kubelet.go:2405] "Pod admission denied" podUID="641688d6-4a48-4ddd-9da8-94222f2e0dac" pod="tigera-operator/tigera-operator-747864d56d-fx526" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:03.057573 kubelet[2722]: E0813 01:14:03.057546 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:14:03.058212 containerd[1550]: time="2025-08-13T01:14:03.058172122Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fgsjn,Uid:27718112-1bb9-402a-89c8-f4890dedf664,Namespace:kube-system,Attempt:0,}" Aug 13 01:14:03.141217 containerd[1550]: time="2025-08-13T01:14:03.141162935Z" level=error msg="Failed to destroy network for sandbox \"d01ec456419213386f8a703f1f969652c4b75dcf9656929e2595e02432723f70\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:03.143602 systemd[1]: run-netns-cni\x2d30036f71\x2d86c4\x2d72b8\x2d1966\x2d651cb5c9e7db.mount: Deactivated successfully. 
Aug 13 01:14:03.146796 containerd[1550]: time="2025-08-13T01:14:03.146657082Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fgsjn,Uid:27718112-1bb9-402a-89c8-f4890dedf664,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d01ec456419213386f8a703f1f969652c4b75dcf9656929e2595e02432723f70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:03.147859 kubelet[2722]: E0813 01:14:03.147030 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d01ec456419213386f8a703f1f969652c4b75dcf9656929e2595e02432723f70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:03.147859 kubelet[2722]: E0813 01:14:03.147094 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d01ec456419213386f8a703f1f969652c4b75dcf9656929e2595e02432723f70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:14:03.147859 kubelet[2722]: E0813 01:14:03.147118 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d01ec456419213386f8a703f1f969652c4b75dcf9656929e2595e02432723f70\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:14:03.147859 kubelet[2722]: E0813 01:14:03.147170 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fgsjn_kube-system(27718112-1bb9-402a-89c8-f4890dedf664)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fgsjn_kube-system(27718112-1bb9-402a-89c8-f4890dedf664)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d01ec456419213386f8a703f1f969652c4b75dcf9656929e2595e02432723f70\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fgsjn" podUID="27718112-1bb9-402a-89c8-f4890dedf664" Aug 13 01:14:03.154010 kubelet[2722]: I0813 01:14:03.153985 2722 kubelet.go:2405] "Pod admission denied" podUID="62a4f592-594d-4353-b991-05da656a8682" pod="tigera-operator/tigera-operator-747864d56d-l6btf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:03.241646 kubelet[2722]: I0813 01:14:03.241565 2722 kubelet.go:2405] "Pod admission denied" podUID="cd06ca93-d7ba-490b-816b-0ca2b0e4c26d" pod="tigera-operator/tigera-operator-747864d56d-jrc8s" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:03.351518 kubelet[2722]: I0813 01:14:03.350805 2722 kubelet.go:2405] "Pod admission denied" podUID="4c50aea8-64aa-45bb-a1ab-009b96179293" pod="tigera-operator/tigera-operator-747864d56d-lw8bs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:03.436520 kubelet[2722]: I0813 01:14:03.436466 2722 kubelet.go:2405] "Pod admission denied" podUID="9d27f362-42dc-4f5c-97d8-efb28f220876" pod="tigera-operator/tigera-operator-747864d56d-gbkxj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:03.539371 kubelet[2722]: I0813 01:14:03.539322 2722 kubelet.go:2405] "Pod admission denied" podUID="9a6e81c5-0425-456d-89f3-5ad991adae84" pod="tigera-operator/tigera-operator-747864d56d-p4dwv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:03.588922 kubelet[2722]: I0813 01:14:03.588840 2722 kubelet.go:2405] "Pod admission denied" podUID="29705a07-a281-475f-a50f-e0d5607cdf9e" pod="tigera-operator/tigera-operator-747864d56d-mrnrj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:03.686318 kubelet[2722]: I0813 01:14:03.686285 2722 kubelet.go:2405] "Pod admission denied" podUID="57f9f25d-a73b-47ea-87d3-e517a1fb158f" pod="tigera-operator/tigera-operator-747864d56d-kk7cf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:03.789791 kubelet[2722]: I0813 01:14:03.789730 2722 kubelet.go:2405] "Pod admission denied" podUID="cd2a9476-b9d7-42b7-b19e-1f23298a023d" pod="tigera-operator/tigera-operator-747864d56d-ggp5l" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:03.886476 kubelet[2722]: I0813 01:14:03.886421 2722 kubelet.go:2405] "Pod admission denied" podUID="d00b7172-c349-4d7f-956d-4f4f55bb0d99" pod="tigera-operator/tigera-operator-747864d56d-wtzqb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:03.988053 kubelet[2722]: I0813 01:14:03.987350 2722 kubelet.go:2405] "Pod admission denied" podUID="f9013587-7865-4e72-80b0-f0e60b6db578" pod="tigera-operator/tigera-operator-747864d56d-slntz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:04.084769 kubelet[2722]: I0813 01:14:04.084721 2722 kubelet.go:2405] "Pod admission denied" podUID="8b882bda-b3ae-44a5-96f4-91bda7d734d5" pod="tigera-operator/tigera-operator-747864d56d-tn4k2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:04.303555 kubelet[2722]: I0813 01:14:04.302657 2722 kubelet.go:2405] "Pod admission denied" podUID="cb1c634c-3c9a-4171-9d24-4eb46f2fcc7d" pod="tigera-operator/tigera-operator-747864d56d-6d874" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:04.414222 kubelet[2722]: I0813 01:14:04.414162 2722 kubelet.go:2405] "Pod admission denied" podUID="58ed7b75-656e-4389-9607-3b64fac0813f" pod="tigera-operator/tigera-operator-747864d56d-fc9m5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:04.480279 kubelet[2722]: I0813 01:14:04.480229 2722 kubelet.go:2405] "Pod admission denied" podUID="74733633-e908-4caa-b931-03e4b74fceda" pod="tigera-operator/tigera-operator-747864d56d-gv6pm" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:04.599046 kubelet[2722]: I0813 01:14:04.598485 2722 kubelet.go:2405] "Pod admission denied" podUID="7757af15-e2ea-490f-ad1d-906f8d74d24e" pod="tigera-operator/tigera-operator-747864d56d-9jhhj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:04.698201 kubelet[2722]: I0813 01:14:04.698145 2722 kubelet.go:2405] "Pod admission denied" podUID="419dc161-33d5-457c-a303-c958140edb9b" pod="tigera-operator/tigera-operator-747864d56d-snp8v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:04.898117 kubelet[2722]: I0813 01:14:04.897626 2722 kubelet.go:2405] "Pod admission denied" podUID="b2d4d1fa-7ade-4547-947a-6243b315abb3" pod="tigera-operator/tigera-operator-747864d56d-6p247" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:04.995020 kubelet[2722]: I0813 01:14:04.994971 2722 kubelet.go:2405] "Pod admission denied" podUID="6b500354-6bce-4bd2-b28e-df51623e7131" pod="tigera-operator/tigera-operator-747864d56d-vc8fd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:05.094646 kubelet[2722]: I0813 01:14:05.094606 2722 kubelet.go:2405] "Pod admission denied" podUID="6cf383fa-c2ab-4eb6-a2e9-8eb29357cd9e" pod="tigera-operator/tigera-operator-747864d56d-x6z8k" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:05.190230 kubelet[2722]: I0813 01:14:05.190181 2722 kubelet.go:2405] "Pod admission denied" podUID="4f1f7319-4d3d-43b5-b8ee-142441a2d021" pod="tigera-operator/tigera-operator-747864d56d-9267m" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:05.294638 kubelet[2722]: I0813 01:14:05.294572 2722 kubelet.go:2405] "Pod admission denied" podUID="0fd202ac-3f0c-4e7f-b428-1ea37d0d54d5" pod="tigera-operator/tigera-operator-747864d56d-xsghg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:05.495966 kubelet[2722]: I0813 01:14:05.495026 2722 kubelet.go:2405] "Pod admission denied" podUID="5a351a35-40aa-456d-be0a-3bbb2c525c3c" pod="tigera-operator/tigera-operator-747864d56d-rk4dk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:05.591530 kubelet[2722]: I0813 01:14:05.591481 2722 kubelet.go:2405] "Pod admission denied" podUID="6e1a0335-e6b0-4910-af23-e473a775e6cf" pod="tigera-operator/tigera-operator-747864d56d-xlccg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:05.688887 kubelet[2722]: I0813 01:14:05.688823 2722 kubelet.go:2405] "Pod admission denied" podUID="87d195f7-07c7-40e5-8240-bb7254667233" pod="tigera-operator/tigera-operator-747864d56d-f7jnc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:05.892182 kubelet[2722]: I0813 01:14:05.891259 2722 kubelet.go:2405] "Pod admission denied" podUID="fb4ee5ba-7199-48d2-8e88-107c572c1008" pod="tigera-operator/tigera-operator-747864d56d-cdt5b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:05.996730 kubelet[2722]: I0813 01:14:05.996671 2722 kubelet.go:2405] "Pod admission denied" podUID="991f973b-e71e-43f7-a90d-8af79b4ed49c" pod="tigera-operator/tigera-operator-747864d56d-6sv8s" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:06.055608 kubelet[2722]: I0813 01:14:06.055556 2722 kubelet.go:2405] "Pod admission denied" podUID="752099a6-e1d8-4726-94d1-3b995def856f" pod="tigera-operator/tigera-operator-747864d56d-6wfw2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:06.196243 kubelet[2722]: I0813 01:14:06.196191 2722 kubelet.go:2405] "Pod admission denied" podUID="4a8c8750-62a3-4040-b6a2-b7df9489df58" pod="tigera-operator/tigera-operator-747864d56d-5474l" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:06.294921 kubelet[2722]: I0813 01:14:06.294852 2722 kubelet.go:2405] "Pod admission denied" podUID="daf434a3-c995-4d31-990e-0c0b4dbafd8a" pod="tigera-operator/tigera-operator-747864d56d-k9ppc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:06.390956 kubelet[2722]: I0813 01:14:06.390888 2722 kubelet.go:2405] "Pod admission denied" podUID="bd59cd29-b9f2-412f-82c8-7b2f94710fff" pod="tigera-operator/tigera-operator-747864d56d-l2v2j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:06.491918 kubelet[2722]: I0813 01:14:06.491770 2722 kubelet.go:2405] "Pod admission denied" podUID="5b0abac9-82e7-4df9-b4cb-e2218618e5e3" pod="tigera-operator/tigera-operator-747864d56d-bsq65" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:06.586727 kubelet[2722]: I0813 01:14:06.586673 2722 kubelet.go:2405] "Pod admission denied" podUID="c8ee0432-3698-459c-8dd2-2c49951b4492" pod="tigera-operator/tigera-operator-747864d56d-rgxcj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:06.691175 kubelet[2722]: I0813 01:14:06.691120 2722 kubelet.go:2405] "Pod admission denied" podUID="f26eaf11-dc8d-4622-8f2b-cf3ee1e64cdb" pod="tigera-operator/tigera-operator-747864d56d-6t5l7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:06.794243 kubelet[2722]: I0813 01:14:06.794112 2722 kubelet.go:2405] "Pod admission denied" podUID="133a1416-4e2f-412f-ac52-94291ecc7a7e" pod="tigera-operator/tigera-operator-747864d56d-wvb78" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:06.841238 kubelet[2722]: I0813 01:14:06.841203 2722 kubelet.go:2405] "Pod admission denied" podUID="82fe1b04-b84e-4596-a743-3244445882eb" pod="tigera-operator/tigera-operator-747864d56d-tkzxn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:06.954718 kubelet[2722]: I0813 01:14:06.954670 2722 kubelet.go:2405] "Pod admission denied" podUID="9490f5d6-eac3-4ae0-b6d1-5b6e3cf24fdb" pod="tigera-operator/tigera-operator-747864d56d-8pc4v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:07.037171 kubelet[2722]: I0813 01:14:07.037116 2722 kubelet.go:2405] "Pod admission denied" podUID="f3afa007-7a14-443d-bcc0-530e33fd1dcf" pod="tigera-operator/tigera-operator-747864d56d-h4wg4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:07.133199 kubelet[2722]: I0813 01:14:07.133097 2722 kubelet.go:2405] "Pod admission denied" podUID="c5f48f6c-af21-45e4-93d4-e689b56e1c4f" pod="tigera-operator/tigera-operator-747864d56d-mwt5z" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:07.240537 kubelet[2722]: I0813 01:14:07.240494 2722 kubelet.go:2405] "Pod admission denied" podUID="ea332dd0-a23d-4e74-ac8b-3454ca3fef0b" pod="tigera-operator/tigera-operator-747864d56d-kcz5w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:07.336288 kubelet[2722]: I0813 01:14:07.336235 2722 kubelet.go:2405] "Pod admission denied" podUID="a3f2d92e-fd04-4161-aecd-fd6c68874ed1" pod="tigera-operator/tigera-operator-747864d56d-ch55m" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:07.538769 kubelet[2722]: I0813 01:14:07.538719 2722 kubelet.go:2405] "Pod admission denied" podUID="4b1fb9fc-7b47-42ce-9e81-f65adcc5d0f1" pod="tigera-operator/tigera-operator-747864d56d-g9v4t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:07.640851 kubelet[2722]: I0813 01:14:07.640790 2722 kubelet.go:2405] "Pod admission denied" podUID="6b2f0e05-fdae-4002-a4d0-08905692f472" pod="tigera-operator/tigera-operator-747864d56d-djxk5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:07.707561 kubelet[2722]: I0813 01:14:07.707499 2722 kubelet.go:2405] "Pod admission denied" podUID="d2173a72-f3ae-495b-9c09-9d7e347bf75e" pod="tigera-operator/tigera-operator-747864d56d-l7t75" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:07.838230 kubelet[2722]: I0813 01:14:07.838079 2722 kubelet.go:2405] "Pod admission denied" podUID="6d2264f9-32d0-4027-a536-5ffdfb9598c8" pod="tigera-operator/tigera-operator-747864d56d-lstx5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:07.947403 kubelet[2722]: I0813 01:14:07.947355 2722 kubelet.go:2405] "Pod admission denied" podUID="db3e48cd-ba1b-4ed0-a495-d09e8ee3e2f7" pod="tigera-operator/tigera-operator-747864d56d-r2t8s" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:08.062164 kubelet[2722]: E0813 01:14:08.062126 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:14:08.062398 containerd[1550]: time="2025-08-13T01:14:08.062375716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p259x,Uid:a5b0b8ae-a381-43cc-8adc-4e3ee01749bd,Namespace:kube-system,Attempt:0,}" Aug 13 01:14:08.108802 containerd[1550]: time="2025-08-13T01:14:08.108688657Z" level=error msg="Failed to destroy network for sandbox \"83cefa42161b6c025db7174fd53d08b74541428eaeee54c171b2fc44674f7981\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:08.112954 containerd[1550]: time="2025-08-13T01:14:08.112154132Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p259x,Uid:a5b0b8ae-a381-43cc-8adc-4e3ee01749bd,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"83cefa42161b6c025db7174fd53d08b74541428eaeee54c171b2fc44674f7981\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:08.113068 kubelet[2722]: E0813 01:14:08.112411 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83cefa42161b6c025db7174fd53d08b74541428eaeee54c171b2fc44674f7981\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:08.113068 kubelet[2722]: E0813 01:14:08.112482 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83cefa42161b6c025db7174fd53d08b74541428eaeee54c171b2fc44674f7981\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:14:08.113068 kubelet[2722]: E0813 01:14:08.112503 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"83cefa42161b6c025db7174fd53d08b74541428eaeee54c171b2fc44674f7981\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:14:08.113068 kubelet[2722]: E0813 01:14:08.112585 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-p259x_kube-system(a5b0b8ae-a381-43cc-8adc-4e3ee01749bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-p259x_kube-system(a5b0b8ae-a381-43cc-8adc-4e3ee01749bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"83cefa42161b6c025db7174fd53d08b74541428eaeee54c171b2fc44674f7981\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-p259x" podUID="a5b0b8ae-a381-43cc-8adc-4e3ee01749bd" Aug 13 01:14:08.113542 systemd[1]: run-netns-cni\x2d6df3685f\x2deb80\x2d6a1a\x2d9bf1\x2d95324a543574.mount: Deactivated successfully. Aug 13 01:14:08.145685 kubelet[2722]: I0813 01:14:08.145625 2722 kubelet.go:2405] "Pod admission denied" podUID="5a9b7914-0186-436e-8dac-2ceca3376593" pod="tigera-operator/tigera-operator-747864d56d-fcxkr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:08.247798 kubelet[2722]: I0813 01:14:08.247748 2722 kubelet.go:2405] "Pod admission denied" podUID="679cb010-9334-4e69-98e0-ac12895a3856" pod="tigera-operator/tigera-operator-747864d56d-ffjsx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:08.333883 kubelet[2722]: I0813 01:14:08.333819 2722 kubelet.go:2405] "Pod admission denied" podUID="3bf735cb-b9e5-48ac-bd0b-7b96fd570b57" pod="tigera-operator/tigera-operator-747864d56d-2x4f9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:08.437912 kubelet[2722]: I0813 01:14:08.437604 2722 kubelet.go:2405] "Pod admission denied" podUID="c4bbd0f7-b0f9-436a-883c-c533f9b233b4" pod="tigera-operator/tigera-operator-747864d56d-tggd5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:08.538176 kubelet[2722]: I0813 01:14:08.538132 2722 kubelet.go:2405] "Pod admission denied" podUID="22e4dbae-58be-4f05-be19-cb84d76b7bea" pod="tigera-operator/tigera-operator-747864d56d-bnkxl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:08.739786 kubelet[2722]: I0813 01:14:08.739653 2722 kubelet.go:2405] "Pod admission denied" podUID="7d58fb94-d3f8-46bc-991f-71faa58201da" pod="tigera-operator/tigera-operator-747864d56d-gxr5h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:08.839646 kubelet[2722]: I0813 01:14:08.839595 2722 kubelet.go:2405] "Pod admission denied" podUID="790a97a8-d64c-4b59-8622-13f63283af55" pod="tigera-operator/tigera-operator-747864d56d-qkzj2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:08.937768 kubelet[2722]: I0813 01:14:08.937706 2722 kubelet.go:2405] "Pod admission denied" podUID="0056d47b-915d-4c5c-8d0b-2ccab3e56854" pod="tigera-operator/tigera-operator-747864d56d-7m624" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:09.038055 kubelet[2722]: I0813 01:14:09.037918 2722 kubelet.go:2405] "Pod admission denied" podUID="9e82019b-a7f7-40e4-bfec-08650a0bdde4" pod="tigera-operator/tigera-operator-747864d56d-s64fx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:09.058756 containerd[1550]: time="2025-08-13T01:14:09.058476924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cddc95b58-6t6z7,Uid:2dab385f-2367-4e01-8d78-2247bcba7bcc,Namespace:calico-system,Attempt:0,}" Aug 13 01:14:09.102755 kubelet[2722]: I0813 01:14:09.102695 2722 kubelet.go:2405] "Pod admission denied" podUID="5bae0002-d827-4346-bbec-5622297fd8ae" pod="tigera-operator/tigera-operator-747864d56d-f4w9l" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:09.129931 containerd[1550]: time="2025-08-13T01:14:09.128933847Z" level=error msg="Failed to destroy network for sandbox \"5eba16553e086e707e2b7de7af776a1941f9f0315326cd44c982cdabc77c26b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:09.131624 systemd[1]: run-netns-cni\x2d89b563e5\x2dc14d\x2d3e88\x2d5c1f\x2d410ea4074e00.mount: Deactivated successfully. Aug 13 01:14:09.133419 containerd[1550]: time="2025-08-13T01:14:09.132969242Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cddc95b58-6t6z7,Uid:2dab385f-2367-4e01-8d78-2247bcba7bcc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5eba16553e086e707e2b7de7af776a1941f9f0315326cd44c982cdabc77c26b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:09.133748 kubelet[2722]: E0813 01:14:09.133712 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5eba16553e086e707e2b7de7af776a1941f9f0315326cd44c982cdabc77c26b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:09.133810 kubelet[2722]: E0813 01:14:09.133781 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5eba16553e086e707e2b7de7af776a1941f9f0315326cd44c982cdabc77c26b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:14:09.133848 kubelet[2722]: E0813 01:14:09.133820 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5eba16553e086e707e2b7de7af776a1941f9f0315326cd44c982cdabc77c26b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:14:09.133894 kubelet[2722]: E0813 01:14:09.133869 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-cddc95b58-6t6z7_calico-system(2dab385f-2367-4e01-8d78-2247bcba7bcc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-cddc95b58-6t6z7_calico-system(2dab385f-2367-4e01-8d78-2247bcba7bcc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5eba16553e086e707e2b7de7af776a1941f9f0315326cd44c982cdabc77c26b4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" podUID="2dab385f-2367-4e01-8d78-2247bcba7bcc" Aug 13 01:14:09.194683 kubelet[2722]: I0813 01:14:09.194632 2722 kubelet.go:2405] "Pod admission denied" 
podUID="22953713-eae3-4313-9037-fc457532c319" pod="tigera-operator/tigera-operator-747864d56d-g6rph" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:09.285028 kubelet[2722]: I0813 01:14:09.284968 2722 kubelet.go:2405] "Pod admission denied" podUID="890e811c-5aee-4da2-8a0e-346c1225fd5a" pod="tigera-operator/tigera-operator-747864d56d-468bx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:09.390497 kubelet[2722]: I0813 01:14:09.390356 2722 kubelet.go:2405] "Pod admission denied" podUID="0d0dc663-80ed-4e5c-bceb-9f88843c478f" pod="tigera-operator/tigera-operator-747864d56d-xdzvn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:09.484701 kubelet[2722]: I0813 01:14:09.484645 2722 kubelet.go:2405] "Pod admission denied" podUID="b8d6f73a-342b-4c3e-8e62-ea148b7a62b4" pod="tigera-operator/tigera-operator-747864d56d-q9mbq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:09.592929 kubelet[2722]: I0813 01:14:09.592864 2722 kubelet.go:2405] "Pod admission denied" podUID="502b440e-4964-444e-aba8-0f4ade161be2" pod="tigera-operator/tigera-operator-747864d56d-gc7xl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:09.688256 kubelet[2722]: I0813 01:14:09.688206 2722 kubelet.go:2405] "Pod admission denied" podUID="9a4f128c-0b9e-425d-8c77-6bcb49aa2397" pod="tigera-operator/tigera-operator-747864d56d-2cj2p" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:09.813940 kubelet[2722]: I0813 01:14:09.812967 2722 kubelet.go:2405] "Pod admission denied" podUID="d634d35b-2630-410b-a570-83c9d0f3ad65" pod="tigera-operator/tigera-operator-747864d56d-vxq6c" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:09.915088 kubelet[2722]: I0813 01:14:09.915037 2722 kubelet.go:2405] "Pod admission denied" podUID="8c500397-fcc6-4164-8cb5-3e3de4e62b8b" pod="tigera-operator/tigera-operator-747864d56d-f2fdw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:10.045154 kubelet[2722]: I0813 01:14:10.044738 2722 kubelet.go:2405] "Pod admission denied" podUID="f570a96e-b5d2-48af-8983-82bff0c7c5db" pod="tigera-operator/tigera-operator-747864d56d-2fg26" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:10.059418 containerd[1550]: time="2025-08-13T01:14:10.059254301Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7lv4,Uid:6b834979-32a4-464b-9898-ef87b1042a9e,Namespace:calico-system,Attempt:0,}" Aug 13 01:14:10.121114 containerd[1550]: time="2025-08-13T01:14:10.121018972Z" level=error msg="Failed to destroy network for sandbox \"0dcc5d0567faab7609d69b8249e829dfce1ca1f82ee0cb9c0c9796fcafea5f25\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:10.124605 containerd[1550]: time="2025-08-13T01:14:10.124287796Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7lv4,Uid:6b834979-32a4-464b-9898-ef87b1042a9e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0dcc5d0567faab7609d69b8249e829dfce1ca1f82ee0cb9c0c9796fcafea5f25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:10.125777 kubelet[2722]: E0813 01:14:10.125471 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0dcc5d0567faab7609d69b8249e829dfce1ca1f82ee0cb9c0c9796fcafea5f25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:10.125944 kubelet[2722]: E0813 01:14:10.125809 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0dcc5d0567faab7609d69b8249e829dfce1ca1f82ee0cb9c0c9796fcafea5f25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:14:10.125944 kubelet[2722]: E0813 01:14:10.125833 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0dcc5d0567faab7609d69b8249e829dfce1ca1f82ee0cb9c0c9796fcafea5f25\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:14:10.125944 kubelet[2722]: E0813 01:14:10.125880 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l7lv4_calico-system(6b834979-32a4-464b-9898-ef87b1042a9e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l7lv4_calico-system(6b834979-32a4-464b-9898-ef87b1042a9e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0dcc5d0567faab7609d69b8249e829dfce1ca1f82ee0cb9c0c9796fcafea5f25\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l7lv4" podUID="6b834979-32a4-464b-9898-ef87b1042a9e" Aug 13 01:14:10.126367 systemd[1]: run-netns-cni\x2d566d0c7b\x2db6fa\x2d3d51\x2d07f2\x2dfd14897567f0.mount: Deactivated 
successfully. Aug 13 01:14:10.130938 kubelet[2722]: I0813 01:14:10.130889 2722 kubelet.go:2405] "Pod admission denied" podUID="32838404-8c91-4129-995c-0625149e8ec0" pod="tigera-operator/tigera-operator-747864d56d-w87qf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:10.238525 kubelet[2722]: I0813 01:14:10.238464 2722 kubelet.go:2405] "Pod admission denied" podUID="a6a1d392-f123-4ba9-ac34-2bc25535d6b7" pod="tigera-operator/tigera-operator-747864d56d-jgqh4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:10.287441 kubelet[2722]: I0813 01:14:10.287174 2722 kubelet.go:2405] "Pod admission denied" podUID="826b5804-52b3-4f6e-a3b0-cacc68614205" pod="tigera-operator/tigera-operator-747864d56d-nt6d4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:10.391050 kubelet[2722]: I0813 01:14:10.389449 2722 kubelet.go:2405] "Pod admission denied" podUID="fd0d651a-b046-4ecf-9341-28d00ba2bbdc" pod="tigera-operator/tigera-operator-747864d56d-zzxqw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:10.488037 kubelet[2722]: I0813 01:14:10.487978 2722 kubelet.go:2405] "Pod admission denied" podUID="9bccf4f3-83d5-4d65-a9ad-36893dddcb98" pod="tigera-operator/tigera-operator-747864d56d-gwhtg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:10.583291 kubelet[2722]: I0813 01:14:10.583241 2722 kubelet.go:2405] "Pod admission denied" podUID="5d29d062-cec3-41f0-ac4e-4c0f0dd8ad4b" pod="tigera-operator/tigera-operator-747864d56d-nsw2p" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:10.682909 kubelet[2722]: I0813 01:14:10.682854 2722 kubelet.go:2405] "Pod admission denied" podUID="7bf9ca66-bf82-4a9d-9ef7-e184705c0e32" pod="tigera-operator/tigera-operator-747864d56d-tcmfv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:10.753324 kubelet[2722]: I0813 01:14:10.753275 2722 kubelet.go:2405] "Pod admission denied" podUID="2ca1cd7b-6403-4667-a55c-41f2fcd60bb6" pod="tigera-operator/tigera-operator-747864d56d-4lvp5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:10.833878 kubelet[2722]: I0813 01:14:10.833824 2722 kubelet.go:2405] "Pod admission denied" podUID="061704ce-0f70-4bc7-9f1b-5d5d27da0592" pod="tigera-operator/tigera-operator-747864d56d-gjfkl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:11.058617 kubelet[2722]: I0813 01:14:11.058434 2722 kubelet.go:2405] "Pod admission denied" podUID="9ed321cf-a29d-4236-a7e1-a2557001a9e2" pod="tigera-operator/tigera-operator-747864d56d-k52d4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:11.299014 kubelet[2722]: I0813 01:14:11.298935 2722 kubelet.go:2405] "Pod admission denied" podUID="46d0e19a-23d0-4373-85b0-8bc289043815" pod="tigera-operator/tigera-operator-747864d56d-tc6rz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:11.395205 kubelet[2722]: I0813 01:14:11.394702 2722 kubelet.go:2405] "Pod admission denied" podUID="db327cb1-a408-4067-9f83-86a955b1805e" pod="tigera-operator/tigera-operator-747864d56d-8crkm" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:11.493778 kubelet[2722]: I0813 01:14:11.493697 2722 kubelet.go:2405] "Pod admission denied" podUID="45fa9393-6861-4efc-ab11-ba51e715589e" pod="tigera-operator/tigera-operator-747864d56d-mwgdl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:11.609107 kubelet[2722]: I0813 01:14:11.609043 2722 kubelet.go:2405] "Pod admission denied" podUID="cf5377fb-8588-474a-b0f4-d7859ee14831" pod="tigera-operator/tigera-operator-747864d56d-7gj7n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:11.846092 kubelet[2722]: I0813 01:14:11.846004 2722 kubelet.go:2405] "Pod admission denied" podUID="9e26fdf7-27c9-46c9-80dd-97170cdf7f7d" pod="tigera-operator/tigera-operator-747864d56d-6brlw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:11.951931 kubelet[2722]: I0813 01:14:11.951842 2722 kubelet.go:2405] "Pod admission denied" podUID="c790a7d9-2f52-4105-b655-312d91d2c415" pod="tigera-operator/tigera-operator-747864d56d-bxzkc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:12.197784 kubelet[2722]: I0813 01:14:12.197720 2722 kubelet.go:2405] "Pod admission denied" podUID="d8790000-b2ce-4de5-aed4-fddaa710d1d7" pod="tigera-operator/tigera-operator-747864d56d-qwxbn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:12.304943 kubelet[2722]: I0813 01:14:12.304197 2722 kubelet.go:2405] "Pod admission denied" podUID="c7ebb3f5-4de3-4b08-9e51-2a9674ca691c" pod="tigera-operator/tigera-operator-747864d56d-7pwqb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:12.394137 kubelet[2722]: I0813 01:14:12.394067 2722 kubelet.go:2405] "Pod admission denied" podUID="59002448-4866-4c41-8e07-3309a96512bf" pod="tigera-operator/tigera-operator-747864d56d-hj6lb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:12.498402 kubelet[2722]: I0813 01:14:12.496910 2722 kubelet.go:2405] "Pod admission denied" podUID="315dd438-e35f-41cc-8c18-a77849d929a6" pod="tigera-operator/tigera-operator-747864d56d-pp9jm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:12.602077 kubelet[2722]: I0813 01:14:12.602004 2722 kubelet.go:2405] "Pod admission denied" podUID="dfd0f497-681a-4f6d-976e-0a865e64ff1f" pod="tigera-operator/tigera-operator-747864d56d-wzmdl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:12.699630 kubelet[2722]: I0813 01:14:12.699547 2722 kubelet.go:2405] "Pod admission denied" podUID="9878de4d-a08a-4f7f-a566-f02246690d46" pod="tigera-operator/tigera-operator-747864d56d-7v7p6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:12.801139 kubelet[2722]: I0813 01:14:12.800979 2722 kubelet.go:2405] "Pod admission denied" podUID="b45b39ec-c542-48dc-a3d0-fcadf484a1ba" pod="tigera-operator/tigera-operator-747864d56d-t2klc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:12.865119 kubelet[2722]: I0813 01:14:12.865052 2722 kubelet.go:2405] "Pod admission denied" podUID="c6593f3b-5f56-4a76-9431-fddfedd8f3cd" pod="tigera-operator/tigera-operator-747864d56d-l4jpj" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:12.998922 kubelet[2722]: I0813 01:14:12.997913 2722 kubelet.go:2405] "Pod admission denied" podUID="fa2dcb3a-9170-477c-b847-3da3b0ff95dc" pod="tigera-operator/tigera-operator-747864d56d-l7w7g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:13.096779 kubelet[2722]: I0813 01:14:13.096538 2722 kubelet.go:2405] "Pod admission denied" podUID="2f88e92e-f7bb-4703-a600-466b40d1f312" pod="tigera-operator/tigera-operator-747864d56d-jchst" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:13.191998 kubelet[2722]: I0813 01:14:13.191941 2722 kubelet.go:2405] "Pod admission denied" podUID="d22da1c5-a610-4354-b756-b6a62cb87b29" pod="tigera-operator/tigera-operator-747864d56d-5wtkv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:13.298339 kubelet[2722]: I0813 01:14:13.298274 2722 kubelet.go:2405] "Pod admission denied" podUID="42657eba-94d1-4f02-a09f-5ef0d7f8f825" pod="tigera-operator/tigera-operator-747864d56d-mm5qm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:13.492711 kubelet[2722]: I0813 01:14:13.492646 2722 kubelet.go:2405] "Pod admission denied" podUID="35366bc9-da3d-4ff5-aebc-bc1f7953be1f" pod="tigera-operator/tigera-operator-747864d56d-9f58t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:13.595404 kubelet[2722]: I0813 01:14:13.595330 2722 kubelet.go:2405] "Pod admission denied" podUID="e8d2f6fa-e7cf-42a0-9ecb-586a4fab4b6d" pod="tigera-operator/tigera-operator-747864d56d-n258d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:13.678409 kubelet[2722]: I0813 01:14:13.677986 2722 kubelet.go:2405] "Pod admission denied" podUID="66f401a9-d0db-4526-a420-1007407f199f" pod="tigera-operator/tigera-operator-747864d56d-d5cgg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:13.793692 kubelet[2722]: I0813 01:14:13.793509 2722 kubelet.go:2405] "Pod admission denied" podUID="9a400d62-202f-4d28-852f-649c9a036c9f" pod="tigera-operator/tigera-operator-747864d56d-4wpwf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:13.854157 kubelet[2722]: I0813 01:14:13.854116 2722 kubelet.go:2405] "Pod admission denied" podUID="6d237067-1439-4411-9699-8995e531b070" pod="tigera-operator/tigera-operator-747864d56d-wv9jf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:13.995501 kubelet[2722]: I0813 01:14:13.995430 2722 kubelet.go:2405] "Pod admission denied" podUID="936009a3-59fe-4ea4-90b5-0832a3fd2c69" pod="tigera-operator/tigera-operator-747864d56d-2wqml" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:14.116291 kubelet[2722]: I0813 01:14:14.114932 2722 kubelet.go:2405] "Pod admission denied" podUID="e1966ad1-c2be-4712-a403-d02de41b59f1" pod="tigera-operator/tigera-operator-747864d56d-jl4tx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:14.198147 kubelet[2722]: I0813 01:14:14.198088 2722 kubelet.go:2405] "Pod admission denied" podUID="a3dbe965-0006-46ac-9230-1782401f0240" pod="tigera-operator/tigera-operator-747864d56d-vfsf4" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:14.292483 kubelet[2722]: I0813 01:14:14.292423 2722 kubelet.go:2405] "Pod admission denied" podUID="b874138d-7e41-467b-abd3-df72217d4057" pod="tigera-operator/tigera-operator-747864d56d-t8bqb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:14.494037 kubelet[2722]: I0813 01:14:14.493972 2722 kubelet.go:2405] "Pod admission denied" podUID="976d6998-fa1f-4a36-834b-e2759c0476f4" pod="tigera-operator/tigera-operator-747864d56d-h2tq2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:14.593327 kubelet[2722]: I0813 01:14:14.593263 2722 kubelet.go:2405] "Pod admission denied" podUID="b47bf18d-1b30-442c-8d46-d801a10a0c3a" pod="tigera-operator/tigera-operator-747864d56d-t2nzn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:14.660022 kubelet[2722]: I0813 01:14:14.659970 2722 kubelet.go:2405] "Pod admission denied" podUID="ffeac62b-72f4-4258-a6bc-080c222951e0" pod="tigera-operator/tigera-operator-747864d56d-gbxgn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:14.790183 kubelet[2722]: I0813 01:14:14.790028 2722 kubelet.go:2405] "Pod admission denied" podUID="6dd726d3-c2e5-4025-b297-3733b209b02b" pod="tigera-operator/tigera-operator-747864d56d-qpzj2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:14.898921 kubelet[2722]: I0813 01:14:14.898568 2722 kubelet.go:2405] "Pod admission denied" podUID="b96de2c2-69ba-4f77-8500-38f72cf14c27" pod="tigera-operator/tigera-operator-747864d56d-6pj4d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:15.058091 kubelet[2722]: E0813 01:14:15.057961 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:14:15.059592 containerd[1550]: time="2025-08-13T01:14:15.059534261Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fgsjn,Uid:27718112-1bb9-402a-89c8-f4890dedf664,Namespace:kube-system,Attempt:0,}" Aug 13 01:14:15.060937 kubelet[2722]: E0813 01:14:15.060051 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2063325269: write /var/lib/containerd/tmpmounts/containerd-mount2063325269/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-hq29b" podUID="3c0f3b86-7d63-44df-843e-763eb95a8b94" Aug 13 01:14:15.123210 containerd[1550]: time="2025-08-13T01:14:15.120152450Z" level=error msg="Failed to destroy network for sandbox \"1e68b217e97a1a2b259b9eb306e3669242081c0cfcacac087c51362efa32dc31\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:15.123210 containerd[1550]: time="2025-08-13T01:14:15.122062882Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fgsjn,Uid:27718112-1bb9-402a-89c8-f4890dedf664,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = 
failed to setup network for sandbox \"1e68b217e97a1a2b259b9eb306e3669242081c0cfcacac087c51362efa32dc31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:15.123320 systemd[1]: run-netns-cni\x2d5c64570d\x2d24fa\x2dbcc3\x2d6232\x2d8f227c292b55.mount: Deactivated successfully. Aug 13 01:14:15.124106 kubelet[2722]: E0813 01:14:15.124077 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e68b217e97a1a2b259b9eb306e3669242081c0cfcacac087c51362efa32dc31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:15.125212 kubelet[2722]: E0813 01:14:15.124282 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e68b217e97a1a2b259b9eb306e3669242081c0cfcacac087c51362efa32dc31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:14:15.125212 kubelet[2722]: E0813 01:14:15.124309 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e68b217e97a1a2b259b9eb306e3669242081c0cfcacac087c51362efa32dc31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:14:15.125954 kubelet[2722]: E0813 01:14:15.125425 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fgsjn_kube-system(27718112-1bb9-402a-89c8-f4890dedf664)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fgsjn_kube-system(27718112-1bb9-402a-89c8-f4890dedf664)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1e68b217e97a1a2b259b9eb306e3669242081c0cfcacac087c51362efa32dc31\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fgsjn" podUID="27718112-1bb9-402a-89c8-f4890dedf664" Aug 13 01:14:15.131549 kubelet[2722]: I0813 01:14:15.131518 2722 kubelet.go:2405] "Pod admission denied" podUID="b62fd7c1-d212-4c00-9ea2-e6d2b5950e0d" pod="tigera-operator/tigera-operator-747864d56d-c5z7j" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:15.157062 kubelet[2722]: I0813 01:14:15.157002 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-c5z7j" podStartSLOduration=0.156984066 podStartE2EDuration="156.984066ms" podCreationTimestamp="2025-08-13 01:14:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:14:15.138731233 +0000 UTC m=+141.176660542" watchObservedRunningTime="2025-08-13 01:14:15.156984066 +0000 UTC m=+141.194913375" Aug 13 01:14:15.213522 kubelet[2722]: I0813 01:14:15.213403 2722 kubelet.go:2405] "Pod admission denied" podUID="7a3706c3-b106-4b26-a655-aa6c36d7b744" pod="tigera-operator/tigera-operator-747864d56d-f2vrn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:15.341880 kubelet[2722]: I0813 01:14:15.341746 2722 kubelet.go:2405] "Pod admission denied" podUID="e4e58663-d870-4abe-99ca-833e283954c9" pod="tigera-operator/tigera-operator-747864d56d-fvzvm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:15.440161 kubelet[2722]: I0813 01:14:15.440115 2722 kubelet.go:2405] "Pod admission denied" podUID="aaccfb4d-1b02-4c8c-8fc0-91c680702b44" pod="tigera-operator/tigera-operator-747864d56d-gkj5d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:15.540946 kubelet[2722]: I0813 01:14:15.540884 2722 kubelet.go:2405] "Pod admission denied" podUID="7796c050-67c1-4e51-8674-2cb3e65a4b2c" pod="tigera-operator/tigera-operator-747864d56d-sn95v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:15.639444 kubelet[2722]: I0813 01:14:15.638625 2722 kubelet.go:2405] "Pod admission denied" podUID="578365a0-d580-4fe0-a8a9-4f7c2802252b" pod="tigera-operator/tigera-operator-747864d56d-6vswk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:15.840962 kubelet[2722]: I0813 01:14:15.840883 2722 kubelet.go:2405] "Pod admission denied" podUID="1494b73a-4805-4b8a-a8d1-f715cb10ad0f" pod="tigera-operator/tigera-operator-747864d56d-pccb8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:15.939232 kubelet[2722]: I0813 01:14:15.939192 2722 kubelet.go:2405] "Pod admission denied" podUID="6e24e06b-7e8a-44e3-a86c-680ab250672b" pod="tigera-operator/tigera-operator-747864d56d-mv9wz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:16.039440 kubelet[2722]: I0813 01:14:16.039379 2722 kubelet.go:2405] "Pod admission denied" podUID="268303c0-b51f-43b0-a2d1-a6043ab74f72" pod="tigera-operator/tigera-operator-747864d56d-g29sk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:16.138286 kubelet[2722]: I0813 01:14:16.138234 2722 kubelet.go:2405] "Pod admission denied" podUID="6c873e9f-5df1-4781-8623-69de21f2f0c7" pod="tigera-operator/tigera-operator-747864d56d-mq4lw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:16.241070 kubelet[2722]: I0813 01:14:16.240735 2722 kubelet.go:2405] "Pod admission denied" podUID="3fa653e9-78c0-4b02-8dfe-e2b41d61262a" pod="tigera-operator/tigera-operator-747864d56d-fr9kw" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:16.339182 kubelet[2722]: I0813 01:14:16.339123 2722 kubelet.go:2405] "Pod admission denied" podUID="a25289dd-c243-4ce0-99ba-fc16dfdcb0bb" pod="tigera-operator/tigera-operator-747864d56d-wf9bt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:16.438142 kubelet[2722]: I0813 01:14:16.438074 2722 kubelet.go:2405] "Pod admission denied" podUID="8bb94154-deb5-408e-ba22-84c8afa5ed77" pod="tigera-operator/tigera-operator-747864d56d-nxqdl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:16.538977 kubelet[2722]: I0813 01:14:16.538171 2722 kubelet.go:2405] "Pod admission denied" podUID="86c77199-d160-4b56-b5f1-232787e295e3" pod="tigera-operator/tigera-operator-747864d56d-lbztl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:16.638013 kubelet[2722]: I0813 01:14:16.637955 2722 kubelet.go:2405] "Pod admission denied" podUID="fbc96772-bd99-4b08-af85-c2d3ad9bc708" pod="tigera-operator/tigera-operator-747864d56d-2c4k5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:16.735796 kubelet[2722]: I0813 01:14:16.735737 2722 kubelet.go:2405] "Pod admission denied" podUID="632eb18e-88fc-4098-b035-393da4234fb3" pod="tigera-operator/tigera-operator-747864d56d-2c7lj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:16.860999 kubelet[2722]: I0813 01:14:16.860653 2722 kubelet.go:2405] "Pod admission denied" podUID="6ab3056c-7c15-435b-97d1-4d3b7f67fd4a" pod="tigera-operator/tigera-operator-747864d56d-ck6cs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:17.036014 kubelet[2722]: I0813 01:14:17.035965 2722 kubelet.go:2405] "Pod admission denied" podUID="13dcac1e-9cd9-4cb3-938e-c2570c3c15ed" pod="tigera-operator/tigera-operator-747864d56d-hr8fk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:17.150247 kubelet[2722]: I0813 01:14:17.149354 2722 kubelet.go:2405] "Pod admission denied" podUID="38a66b0d-3a5b-48da-9202-6b48c45b969a" pod="tigera-operator/tigera-operator-747864d56d-h27xp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:17.207261 kubelet[2722]: I0813 01:14:17.207180 2722 kubelet.go:2405] "Pod admission denied" podUID="c27b522a-581a-4f92-8ebf-c797925bddb1" pod="tigera-operator/tigera-operator-747864d56d-grw6x" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:17.347923 kubelet[2722]: I0813 01:14:17.347270 2722 kubelet.go:2405] "Pod admission denied" podUID="6ddd533b-3329-412a-a5e8-44c5b18461bc" pod="tigera-operator/tigera-operator-747864d56d-mlw52" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:17.439666 kubelet[2722]: I0813 01:14:17.439625 2722 kubelet.go:2405] "Pod admission denied" podUID="3822c5c1-ec3f-4a0f-bb9a-d40cca73fd6b" pod="tigera-operator/tigera-operator-747864d56d-tfx84" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:17.544528 kubelet[2722]: I0813 01:14:17.544472 2722 kubelet.go:2405] "Pod admission denied" podUID="57313ac6-c207-4b74-b12f-c5b2ed4c9c5f" pod="tigera-operator/tigera-operator-747864d56d-dw2qq" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:17.640225 kubelet[2722]: I0813 01:14:17.640181 2722 kubelet.go:2405] "Pod admission denied" podUID="ee71afbe-3b0d-4727-a0a0-5dc69f97c548" pod="tigera-operator/tigera-operator-747864d56d-v9nnq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:17.738399 kubelet[2722]: I0813 01:14:17.737869 2722 kubelet.go:2405] "Pod admission denied" podUID="f46e3e05-2a38-4075-804b-dd1f4fa237ae" pod="tigera-operator/tigera-operator-747864d56d-vpnr2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:17.804919 kubelet[2722]: I0813 01:14:17.804560 2722 kubelet.go:2405] "Pod admission denied" podUID="23a7cf6a-076e-42e5-aac2-f4522693ba51" pod="tigera-operator/tigera-operator-747864d56d-5d6t8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:17.887826 kubelet[2722]: I0813 01:14:17.887772 2722 kubelet.go:2405] "Pod admission denied" podUID="eb1921d7-93ab-4cab-9206-a1d542f35768" pod="tigera-operator/tigera-operator-747864d56d-pcpcg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:18.000014 kubelet[2722]: I0813 01:14:17.997615 2722 kubelet.go:2405] "Pod admission denied" podUID="0b0697f8-df9e-473b-932d-85e2fbc82ecb" pod="tigera-operator/tigera-operator-747864d56d-wzr9b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:18.086968 kubelet[2722]: I0813 01:14:18.086918 2722 kubelet.go:2405] "Pod admission denied" podUID="a228758f-48f2-4927-a152-96f203a7e369" pod="tigera-operator/tigera-operator-747864d56d-xnrcj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:18.298060 kubelet[2722]: I0813 01:14:18.297339 2722 kubelet.go:2405] "Pod admission denied" podUID="64347a08-546f-498a-8a0e-c9b9d64fb690" pod="tigera-operator/tigera-operator-747864d56d-ctwpm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:18.385277 kubelet[2722]: I0813 01:14:18.385238 2722 kubelet.go:2405] "Pod admission denied" podUID="1aec0424-2d27-4f74-97b0-579c7b9f1f4b" pod="tigera-operator/tigera-operator-747864d56d-27cst" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:18.440915 kubelet[2722]: I0813 01:14:18.440635 2722 kubelet.go:2405] "Pod admission denied" podUID="9f34ae32-2124-4191-89e6-a736ccfae25f" pod="tigera-operator/tigera-operator-747864d56d-n4pn2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:18.535909 kubelet[2722]: I0813 01:14:18.535858 2722 kubelet.go:2405] "Pod admission denied" podUID="106ae53e-5516-42b9-b3d7-3e9d2af6b2f4" pod="tigera-operator/tigera-operator-747864d56d-jkbkn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:18.642923 kubelet[2722]: I0813 01:14:18.641823 2722 kubelet.go:2405] "Pod admission denied" podUID="77055675-b63f-40bd-82a5-46429c64c19d" pod="tigera-operator/tigera-operator-747864d56d-49tsw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:18.688583 kubelet[2722]: I0813 01:14:18.688533 2722 kubelet.go:2405] "Pod admission denied" podUID="d8b9e537-bad7-4188-a05e-bc93ff74d632" pod="tigera-operator/tigera-operator-747864d56d-9fmrz" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:18.796641 kubelet[2722]: I0813 01:14:18.796590 2722 kubelet.go:2405] "Pod admission denied" podUID="7c8c60a9-8e5f-41aa-b1be-e3978c52773a" pod="tigera-operator/tigera-operator-747864d56d-ng5pz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:18.887950 kubelet[2722]: I0813 01:14:18.887885 2722 kubelet.go:2405] "Pod admission denied" podUID="807c9f72-8886-40f1-86fd-0f6841fa2eb9" pod="tigera-operator/tigera-operator-747864d56d-z2fjx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:18.938647 kubelet[2722]: I0813 01:14:18.938608 2722 kubelet.go:2405] "Pod admission denied" podUID="d14735e5-dfa8-47bd-82ab-810722a8db29" pod="tigera-operator/tigera-operator-747864d56d-vf5nr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:19.038071 kubelet[2722]: I0813 01:14:19.038020 2722 kubelet.go:2405] "Pod admission denied" podUID="e7c25a39-ab26-48c1-b210-dc6c35f40f16" pod="tigera-operator/tigera-operator-747864d56d-zl6vn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:19.136866 kubelet[2722]: I0813 01:14:19.136817 2722 kubelet.go:2405] "Pod admission denied" podUID="cd3a633d-7cfe-4ce8-a5cc-9546889138e5" pod="tigera-operator/tigera-operator-747864d56d-9lwbf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:19.238994 kubelet[2722]: I0813 01:14:19.238829 2722 kubelet.go:2405] "Pod admission denied" podUID="216483df-84a1-48ee-9b20-5428d92e7016" pod="tigera-operator/tigera-operator-747864d56d-hwmrh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:19.339910 kubelet[2722]: I0813 01:14:19.339860 2722 kubelet.go:2405] "Pod admission denied" podUID="6213b2db-362b-48e6-8de7-3f4c3a56d35a" pod="tigera-operator/tigera-operator-747864d56d-j29tz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:19.482935 kubelet[2722]: I0813 01:14:19.480607 2722 kubelet.go:2405] "Pod admission denied" podUID="61f426ef-c042-4080-946f-d85165f0b9c0" pod="tigera-operator/tigera-operator-747864d56d-95pnk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:19.554519 kubelet[2722]: I0813 01:14:19.554418 2722 kubelet.go:2405] "Pod admission denied" podUID="75fb97e9-b360-46b0-99ea-0aebc0c28022" pod="tigera-operator/tigera-operator-747864d56d-skbz6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:19.638695 kubelet[2722]: I0813 01:14:19.638648 2722 kubelet.go:2405] "Pod admission denied" podUID="19aefe1c-a179-4910-9342-1eae7830f09c" pod="tigera-operator/tigera-operator-747864d56d-fqcb2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:19.836969 kubelet[2722]: I0813 01:14:19.835355 2722 kubelet.go:2405] "Pod admission denied" podUID="0f9dcd31-8ed7-4138-adc9-2f65eff823f0" pod="tigera-operator/tigera-operator-747864d56d-wbjnt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:19.948109 kubelet[2722]: I0813 01:14:19.948010 2722 kubelet.go:2405] "Pod admission denied" podUID="bc1b8738-e096-46e3-a2dd-42e213cf8f76" pod="tigera-operator/tigera-operator-747864d56d-hxrg5" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:20.037168 kubelet[2722]: I0813 01:14:20.036083 2722 kubelet.go:2405] "Pod admission denied" podUID="bf823534-6966-4236-9990-a96b80b1ef95" pod="tigera-operator/tigera-operator-747864d56d-hrkrd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:20.136914 kubelet[2722]: I0813 01:14:20.136181 2722 kubelet.go:2405] "Pod admission denied" podUID="68846c87-f0d1-4a48-ae40-73ee337bc134" pod="tigera-operator/tigera-operator-747864d56d-d42w7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:20.238117 kubelet[2722]: I0813 01:14:20.238073 2722 kubelet.go:2405] "Pod admission denied" podUID="8ee72dbc-11eb-479d-a9c0-33551406bea8" pod="tigera-operator/tigera-operator-747864d56d-ksksb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:20.436774 kubelet[2722]: I0813 01:14:20.436731 2722 kubelet.go:2405] "Pod admission denied" podUID="a89f7f55-32ca-4d25-a829-49992016adae" pod="tigera-operator/tigera-operator-747864d56d-lg9w8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:20.535264 kubelet[2722]: I0813 01:14:20.535227 2722 kubelet.go:2405] "Pod admission denied" podUID="443f5ec8-7ab1-43d0-8d95-41a2ccca81a1" pod="tigera-operator/tigera-operator-747864d56d-62xld" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:20.636029 kubelet[2722]: I0813 01:14:20.635770 2722 kubelet.go:2405] "Pod admission denied" podUID="0a1c7289-6add-4ac3-a1af-4788d668c277" pod="tigera-operator/tigera-operator-747864d56d-zrptc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:20.735096 kubelet[2722]: I0813 01:14:20.734969 2722 kubelet.go:2405] "Pod admission denied" podUID="12e29506-58b1-42a6-86a3-72449e8d443b" pod="tigera-operator/tigera-operator-747864d56d-kl8ph" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:20.849261 kubelet[2722]: I0813 01:14:20.849214 2722 kubelet.go:2405] "Pod admission denied" podUID="532db1e6-b83c-4648-acd1-afc8e202d226" pod="tigera-operator/tigera-operator-747864d56d-vl6zd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:20.935801 kubelet[2722]: I0813 01:14:20.935748 2722 kubelet.go:2405] "Pod admission denied" podUID="093a94ec-6cfc-496d-8151-4099e3c8a4b2" pod="tigera-operator/tigera-operator-747864d56d-r8blv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:21.048840 kubelet[2722]: I0813 01:14:21.046678 2722 kubelet.go:2405] "Pod admission denied" podUID="b740cd51-7cef-4c31-abcc-a7d215d9dd56" pod="tigera-operator/tigera-operator-747864d56d-5tjhx" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:21.057694 kubelet[2722]: E0813 01:14:21.057652 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:14:21.058646 containerd[1550]: time="2025-08-13T01:14:21.058277167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p259x,Uid:a5b0b8ae-a381-43cc-8adc-4e3ee01749bd,Namespace:kube-system,Attempt:0,}" Aug 13 01:14:21.113166 containerd[1550]: time="2025-08-13T01:14:21.111510802Z" level=error msg="Failed to destroy network for sandbox \"db2dd3e65d2f1f5684ad39531468d093cda1ec3bdd94be1bfd42c92776881dba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:21.115446 containerd[1550]: time="2025-08-13T01:14:21.114482286Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p259x,Uid:a5b0b8ae-a381-43cc-8adc-4e3ee01749bd,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"db2dd3e65d2f1f5684ad39531468d093cda1ec3bdd94be1bfd42c92776881dba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:21.115019 systemd[1]: run-netns-cni\x2dc2d60564\x2ddfed\x2db552\x2d9fa8\x2d0949a5fd3c77.mount: Deactivated successfully. Aug 13 01:14:21.115785 kubelet[2722]: E0813 01:14:21.114727 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db2dd3e65d2f1f5684ad39531468d093cda1ec3bdd94be1bfd42c92776881dba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:21.115785 kubelet[2722]: E0813 01:14:21.114796 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db2dd3e65d2f1f5684ad39531468d093cda1ec3bdd94be1bfd42c92776881dba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:14:21.115785 kubelet[2722]: E0813 01:14:21.114819 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db2dd3e65d2f1f5684ad39531468d093cda1ec3bdd94be1bfd42c92776881dba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:14:21.115785 kubelet[2722]: E0813 01:14:21.114880 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-p259x_kube-system(a5b0b8ae-a381-43cc-8adc-4e3ee01749bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-p259x_kube-system(a5b0b8ae-a381-43cc-8adc-4e3ee01749bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"db2dd3e65d2f1f5684ad39531468d093cda1ec3bdd94be1bfd42c92776881dba\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-p259x" podUID="a5b0b8ae-a381-43cc-8adc-4e3ee01749bd" Aug 13 01:14:21.237046 kubelet[2722]: I0813 01:14:21.236997 2722 kubelet.go:2405] "Pod admission denied" podUID="8cddd4e2-6e2b-4037-86ed-27dff779f8e7" pod="tigera-operator/tigera-operator-747864d56d-c994c" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:21.345879 kubelet[2722]: I0813 01:14:21.344740 2722 kubelet.go:2405] "Pod admission denied" podUID="4067c973-2722-43ca-b17c-8418650dd3df" pod="tigera-operator/tigera-operator-747864d56d-2fjr7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:21.439185 kubelet[2722]: I0813 01:14:21.439133 2722 kubelet.go:2405] "Pod admission denied" podUID="8acb3853-9a15-4685-92a8-06a4b30abf31" pod="tigera-operator/tigera-operator-747864d56d-fbsjk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:21.661931 kubelet[2722]: I0813 01:14:21.660141 2722 kubelet.go:2405] "Pod admission denied" podUID="69625f12-591a-4817-9baa-5a896f958b7a" pod="tigera-operator/tigera-operator-747864d56d-vpwtl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:21.745271 kubelet[2722]: I0813 01:14:21.745222 2722 kubelet.go:2405] "Pod admission denied" podUID="f2378c85-40f4-4b30-95ae-3d1cbbccbfd5" pod="tigera-operator/tigera-operator-747864d56d-mkgjv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:21.843733 kubelet[2722]: I0813 01:14:21.843670 2722 kubelet.go:2405] "Pod admission denied" podUID="39aa2059-f637-4c7b-8a3f-532ee24bd44a" pod="tigera-operator/tigera-operator-747864d56d-8btsk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:21.945206 kubelet[2722]: I0813 01:14:21.945141 2722 kubelet.go:2405] "Pod admission denied" podUID="dd282e37-f7c8-4df2-abc5-de83f2891506" pod="tigera-operator/tigera-operator-747864d56d-t42cp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:22.047712 kubelet[2722]: I0813 01:14:22.047659 2722 kubelet.go:2405] "Pod admission denied" podUID="00b848ff-37fe-4a46-b4c0-b73ff65ff994" pod="tigera-operator/tigera-operator-747864d56d-57g8g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:22.061607 containerd[1550]: time="2025-08-13T01:14:22.061354856Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7lv4,Uid:6b834979-32a4-464b-9898-ef87b1042a9e,Namespace:calico-system,Attempt:0,}" Aug 13 01:14:22.062308 containerd[1550]: time="2025-08-13T01:14:22.062162107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cddc95b58-6t6z7,Uid:2dab385f-2367-4e01-8d78-2247bcba7bcc,Namespace:calico-system,Attempt:0,}" Aug 13 01:14:22.195801 containerd[1550]: time="2025-08-13T01:14:22.195447083Z" level=error msg="Failed to destroy network for sandbox \"2f13335a8db973b73c3b425a8b56acd7b755afd743dcd7345b9ff133a5958a45\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:22.197671 systemd[1]: run-netns-cni\x2d26dd064a\x2d3984\x2d530c\x2d521c\x2dc4ccca08e0a5.mount: Deactivated successfully. 
Aug 13 01:14:22.199167 containerd[1550]: time="2025-08-13T01:14:22.199040518Z" level=error msg="Failed to destroy network for sandbox \"5a9fc00ef4217636a86349b1a7379107aab0a86f7460bb23c6725e61db55904c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:22.204516 containerd[1550]: time="2025-08-13T01:14:22.203599323Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cddc95b58-6t6z7,Uid:2dab385f-2367-4e01-8d78-2247bcba7bcc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a9fc00ef4217636a86349b1a7379107aab0a86f7460bb23c6725e61db55904c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:22.205428 kubelet[2722]: E0813 01:14:22.205370 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a9fc00ef4217636a86349b1a7379107aab0a86f7460bb23c6725e61db55904c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:22.206200 kubelet[2722]: E0813 01:14:22.205538 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a9fc00ef4217636a86349b1a7379107aab0a86f7460bb23c6725e61db55904c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:14:22.206200 kubelet[2722]: E0813 01:14:22.205563 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a9fc00ef4217636a86349b1a7379107aab0a86f7460bb23c6725e61db55904c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:14:22.206200 kubelet[2722]: E0813 01:14:22.205612 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-cddc95b58-6t6z7_calico-system(2dab385f-2367-4e01-8d78-2247bcba7bcc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-cddc95b58-6t6z7_calico-system(2dab385f-2367-4e01-8d78-2247bcba7bcc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5a9fc00ef4217636a86349b1a7379107aab0a86f7460bb23c6725e61db55904c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" podUID="2dab385f-2367-4e01-8d78-2247bcba7bcc" Aug 13 01:14:22.205776 systemd[1]: run-netns-cni\x2d561538c7\x2d77a5\x2d3f18\x2dd966\x2dc5db0cd22e23.mount: Deactivated successfully. 
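[Editor's note] Every CNI failure in this stretch of the log reports the same root cause in its own message: the plugin cannot stat /var/lib/calico/nodename and asks the operator to "check that the calico/node container is running and has mounted /var/lib/calico/". A minimal sketch of that check, run directly on the affected node, is shown below; the file path is the one named in the error, while the script itself is illustrative and not part of Calico.

```python
#!/usr/bin/env python3
"""Sketch of the check suggested by the CNI error above: verify that
calico/node has written /var/lib/calico/nodename on this node."""
import os
import sys

NODENAME_FILE = "/var/lib/calico/nodename"  # path named in the plugin error

def check_calico_nodename(path: str = NODENAME_FILE) -> int:
    if not os.path.isfile(path):
        # Matches: stat /var/lib/calico/nodename: no such file or directory
        print(f"{path} is missing: calico/node has not started, "
              "or /var/lib/calico is not mounted into it")
        return 1
    nodename = open(path).read().strip()
    if not nodename:
        print(f"{path} exists but is empty")
        return 1
    print(f"calico/node has registered this node as {nodename!r}")
    return 0

if __name__ == "__main__":
    sys.exit(check_calico_nodename())
```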
Aug 13 01:14:22.207196 containerd[1550]: time="2025-08-13T01:14:22.207071368Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7lv4,Uid:6b834979-32a4-464b-9898-ef87b1042a9e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f13335a8db973b73c3b425a8b56acd7b755afd743dcd7345b9ff133a5958a45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:22.207832 kubelet[2722]: E0813 01:14:22.207555 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f13335a8db973b73c3b425a8b56acd7b755afd743dcd7345b9ff133a5958a45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:22.208639 kubelet[2722]: E0813 01:14:22.208198 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f13335a8db973b73c3b425a8b56acd7b755afd743dcd7345b9ff133a5958a45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:14:22.208639 kubelet[2722]: E0813 01:14:22.208340 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f13335a8db973b73c3b425a8b56acd7b755afd743dcd7345b9ff133a5958a45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:14:22.208639 kubelet[2722]: E0813 01:14:22.208373 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l7lv4_calico-system(6b834979-32a4-464b-9898-ef87b1042a9e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l7lv4_calico-system(6b834979-32a4-464b-9898-ef87b1042a9e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2f13335a8db973b73c3b425a8b56acd7b755afd743dcd7345b9ff133a5958a45\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l7lv4" podUID="6b834979-32a4-464b-9898-ef87b1042a9e" Aug 13 01:14:22.209151 kubelet[2722]: I0813 01:14:22.208695 2722 kubelet.go:2405] "Pod admission denied" podUID="f2ca0254-5152-4ed6-a716-6659eebc3a29" pod="tigera-operator/tigera-operator-747864d56d-frktd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:22.294911 kubelet[2722]: I0813 01:14:22.294837 2722 kubelet.go:2405] "Pod admission denied" podUID="d659877c-877c-46fe-b03b-2c6257f62119" pod="tigera-operator/tigera-operator-747864d56d-zfhq2" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:22.396749 kubelet[2722]: I0813 01:14:22.396698 2722 kubelet.go:2405] "Pod admission denied" podUID="6532edff-af77-4b14-bfb8-07169ed2ac5b" pod="tigera-operator/tigera-operator-747864d56d-fflmj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:22.491291 kubelet[2722]: I0813 01:14:22.490417 2722 kubelet.go:2405] "Pod admission denied" podUID="4d2f75d7-2257-4246-85cb-48beee3c4232" pod="tigera-operator/tigera-operator-747864d56d-79wqp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:22.614333 kubelet[2722]: I0813 01:14:22.614215 2722 kubelet.go:2405] "Pod admission denied" podUID="7b4c2cf5-2234-448d-8ccc-713318dc1617" pod="tigera-operator/tigera-operator-747864d56d-gkgst" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:22.798853 kubelet[2722]: I0813 01:14:22.798239 2722 kubelet.go:2405] "Pod admission denied" podUID="d51ebab3-e60b-453a-bc9c-7874a753d203" pod="tigera-operator/tigera-operator-747864d56d-6j2h4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:22.899961 kubelet[2722]: I0813 01:14:22.899916 2722 kubelet.go:2405] "Pod admission denied" podUID="bd160d28-9363-4126-a0ba-b245f3e2d85a" pod="tigera-operator/tigera-operator-747864d56d-97w4r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:22.998597 kubelet[2722]: I0813 01:14:22.998540 2722 kubelet.go:2405] "Pod admission denied" podUID="5d90cab1-7451-43a3-a8d3-896dc02c2c51" pod="tigera-operator/tigera-operator-747864d56d-fgxgd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:23.103177 kubelet[2722]: I0813 01:14:23.103053 2722 kubelet.go:2405] "Pod admission denied" podUID="8b311218-4af5-4e1e-b4d0-301603ebe49f" pod="tigera-operator/tigera-operator-747864d56d-mtfcf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:23.242593 kubelet[2722]: I0813 01:14:23.242540 2722 kubelet.go:2405] "Pod admission denied" podUID="21484f19-a829-44c4-9583-057608fa6e36" pod="tigera-operator/tigera-operator-747864d56d-vl8cp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:23.343029 kubelet[2722]: I0813 01:14:23.342692 2722 kubelet.go:2405] "Pod admission denied" podUID="ac0cf1d0-670f-45b4-93c9-74ba9da67028" pod="tigera-operator/tigera-operator-747864d56d-gsdlk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:23.438295 kubelet[2722]: I0813 01:14:23.438246 2722 kubelet.go:2405] "Pod admission denied" podUID="3177d0c7-71a4-4926-abc6-7301be034367" pod="tigera-operator/tigera-operator-747864d56d-ktf4g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:23.542913 kubelet[2722]: I0813 01:14:23.542130 2722 kubelet.go:2405] "Pod admission denied" podUID="bd5d9398-e3ed-442b-9a53-dcb2db7110b7" pod="tigera-operator/tigera-operator-747864d56d-6r8q2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:23.637940 kubelet[2722]: I0813 01:14:23.637905 2722 kubelet.go:2405] "Pod admission denied" podUID="44f55db3-a858-4a5f-8f17-905118d44fe3" pod="tigera-operator/tigera-operator-747864d56d-gx52l" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:23.748276 kubelet[2722]: I0813 01:14:23.747981 2722 kubelet.go:2405] "Pod admission denied" podUID="4258de18-79b4-4706-bd01-e8bb4d8f3dfa" pod="tigera-operator/tigera-operator-747864d56d-h9wpw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:23.836175 kubelet[2722]: I0813 01:14:23.836124 2722 kubelet.go:2405] "Pod admission denied" podUID="8024b952-100b-4a55-a67b-00e15dce6299" pod="tigera-operator/tigera-operator-747864d56d-j4hhp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:23.936018 kubelet[2722]: I0813 01:14:23.935963 2722 kubelet.go:2405] "Pod admission denied" podUID="6bb6d49c-6f8e-4c71-a489-8bd8bf3814d9" pod="tigera-operator/tigera-operator-747864d56d-8jl7r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:24.039335 kubelet[2722]: I0813 01:14:24.038340 2722 kubelet.go:2405] "Pod admission denied" podUID="d72c8e4f-8ea1-4e7b-bf9d-7f2af6feac02" pod="tigera-operator/tigera-operator-747864d56d-gbm4p" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:24.136284 kubelet[2722]: I0813 01:14:24.136240 2722 kubelet.go:2405] "Pod admission denied" podUID="b6fe557e-5f04-419b-b001-b7f26014e2cb" pod="tigera-operator/tigera-operator-747864d56d-tgp4z" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:24.237272 kubelet[2722]: I0813 01:14:24.237227 2722 kubelet.go:2405] "Pod admission denied" podUID="69c989a8-2c40-4e70-af09-53ebe9969ebf" pod="tigera-operator/tigera-operator-747864d56d-g5fvw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:24.335520 kubelet[2722]: I0813 01:14:24.334793 2722 kubelet.go:2405] "Pod admission denied" podUID="84272d35-9cf1-4c65-8108-5d889c0d338a" pod="tigera-operator/tigera-operator-747864d56d-tblkr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:24.538866 kubelet[2722]: I0813 01:14:24.538819 2722 kubelet.go:2405] "Pod admission denied" podUID="94801f6f-7fb5-4a46-b101-636f8f3de196" pod="tigera-operator/tigera-operator-747864d56d-4gvnq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:24.638144 kubelet[2722]: I0813 01:14:24.637388 2722 kubelet.go:2405] "Pod admission denied" podUID="2869d0ed-96d1-462a-9228-34f1a9eb51b2" pod="tigera-operator/tigera-operator-747864d56d-zv98b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:24.739810 kubelet[2722]: I0813 01:14:24.739571 2722 kubelet.go:2405] "Pod admission denied" podUID="f9badd01-df11-4b7b-b256-6cf9e8e6da7a" pod="tigera-operator/tigera-operator-747864d56d-np97z" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:24.838628 kubelet[2722]: I0813 01:14:24.838586 2722 kubelet.go:2405] "Pod admission denied" podUID="3f6563d3-e065-429d-ad2a-27bf13ec694e" pod="tigera-operator/tigera-operator-747864d56d-v2tb5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:24.935496 kubelet[2722]: I0813 01:14:24.935455 2722 kubelet.go:2405] "Pod admission denied" podUID="3cb4e232-f7cd-4c24-bc82-2fd0cc329e76" pod="tigera-operator/tigera-operator-747864d56d-275q2" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:25.136303 kubelet[2722]: I0813 01:14:25.136206 2722 kubelet.go:2405] "Pod admission denied" podUID="c5d835a5-b70c-4583-86ae-bf2fb091f65d" pod="tigera-operator/tigera-operator-747864d56d-58nwz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:25.237981 kubelet[2722]: I0813 01:14:25.237277 2722 kubelet.go:2405] "Pod admission denied" podUID="73413fec-9dbd-491f-8f01-5e8ff7996f97" pod="tigera-operator/tigera-operator-747864d56d-mrp7r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:25.334496 kubelet[2722]: I0813 01:14:25.334449 2722 kubelet.go:2405] "Pod admission denied" podUID="4edd8b13-d09c-4856-89eb-5d7d5889f3e5" pod="tigera-operator/tigera-operator-747864d56d-sx8ws" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:25.435315 kubelet[2722]: I0813 01:14:25.435270 2722 kubelet.go:2405] "Pod admission denied" podUID="c72b7682-00a5-46a5-b228-a3d190d69143" pod="tigera-operator/tigera-operator-747864d56d-mmffc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:25.537675 kubelet[2722]: I0813 01:14:25.536863 2722 kubelet.go:2405] "Pod admission denied" podUID="31d22508-0539-4452-b6cc-b58419856567" pod="tigera-operator/tigera-operator-747864d56d-g7pd6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:25.643844 kubelet[2722]: I0813 01:14:25.643803 2722 kubelet.go:2405] "Pod admission denied" podUID="0978abd8-068a-4c34-9538-16349a8524a9" pod="tigera-operator/tigera-operator-747864d56d-kvqdf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:25.735268 kubelet[2722]: I0813 01:14:25.735223 2722 kubelet.go:2405] "Pod admission denied" podUID="42a0f589-c902-4553-bc06-c36646acf3a2" pod="tigera-operator/tigera-operator-747864d56d-fvsl2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:25.936773 kubelet[2722]: I0813 01:14:25.936726 2722 kubelet.go:2405] "Pod admission denied" podUID="dfdc59e1-62f4-40d5-ab4f-4268f4578c4f" pod="tigera-operator/tigera-operator-747864d56d-tj7wf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:26.035438 kubelet[2722]: I0813 01:14:26.035390 2722 kubelet.go:2405] "Pod admission denied" podUID="7459832b-03d2-430b-b375-da2554b0fbe7" pod="tigera-operator/tigera-operator-747864d56d-8rpr5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:26.078270 kubelet[2722]: E0813 01:14:26.078187 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:14:26.079015 containerd[1550]: time="2025-08-13T01:14:26.078939877Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fgsjn,Uid:27718112-1bb9-402a-89c8-f4890dedf664,Namespace:kube-system,Attempt:0,}" Aug 13 01:14:26.134860 containerd[1550]: time="2025-08-13T01:14:26.132760173Z" level=error msg="Failed to destroy network for sandbox \"c9c9421d531943ecfcfd0cf5639bd0e5170c5b17b343e4de14e2ba6e29c89f29\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:26.134654 systemd[1]: run-netns-cni\x2d03b1e5e5\x2da385\x2db87f\x2d16e9\x2da19ac4df7a39.mount: Deactivated successfully. 
Aug 13 01:14:26.135227 containerd[1550]: time="2025-08-13T01:14:26.134979716Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fgsjn,Uid:27718112-1bb9-402a-89c8-f4890dedf664,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9c9421d531943ecfcfd0cf5639bd0e5170c5b17b343e4de14e2ba6e29c89f29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:26.136208 kubelet[2722]: E0813 01:14:26.135388 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9c9421d531943ecfcfd0cf5639bd0e5170c5b17b343e4de14e2ba6e29c89f29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:26.136208 kubelet[2722]: E0813 01:14:26.135438 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9c9421d531943ecfcfd0cf5639bd0e5170c5b17b343e4de14e2ba6e29c89f29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:14:26.136208 kubelet[2722]: E0813 01:14:26.135458 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c9c9421d531943ecfcfd0cf5639bd0e5170c5b17b343e4de14e2ba6e29c89f29\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:14:26.136208 kubelet[2722]: E0813 01:14:26.135504 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fgsjn_kube-system(27718112-1bb9-402a-89c8-f4890dedf664)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fgsjn_kube-system(27718112-1bb9-402a-89c8-f4890dedf664)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c9c9421d531943ecfcfd0cf5639bd0e5170c5b17b343e4de14e2ba6e29c89f29\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fgsjn" podUID="27718112-1bb9-402a-89c8-f4890dedf664" Aug 13 01:14:26.150914 kubelet[2722]: I0813 01:14:26.150858 2722 kubelet.go:2405] "Pod admission denied" podUID="d793d13c-bb46-4b3f-b86c-4702a7d1e106" pod="tigera-operator/tigera-operator-747864d56d-5lsqs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:26.236769 kubelet[2722]: I0813 01:14:26.236139 2722 kubelet.go:2405] "Pod admission denied" podUID="93273a1c-9807-4198-a34c-a99571036c4c" pod="tigera-operator/tigera-operator-747864d56d-4tssn" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:26.337406 kubelet[2722]: I0813 01:14:26.337344 2722 kubelet.go:2405] "Pod admission denied" podUID="2d443fde-d2fb-42ae-990b-0d28c8f6a54e" pod="tigera-operator/tigera-operator-747864d56d-bmdgf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:26.536934 kubelet[2722]: I0813 01:14:26.536119 2722 kubelet.go:2405] "Pod admission denied" podUID="e0906a82-f4ac-4b8d-8a8a-99a564cb9e2c" pod="tigera-operator/tigera-operator-747864d56d-8lbs4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:26.636282 kubelet[2722]: I0813 01:14:26.636233 2722 kubelet.go:2405] "Pod admission denied" podUID="53a20290-4157-4c90-adc3-a71d2b3f820e" pod="tigera-operator/tigera-operator-747864d56d-gnpcp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:26.736754 kubelet[2722]: I0813 01:14:26.736710 2722 kubelet.go:2405] "Pod admission denied" podUID="e45c2a37-4f17-45d3-a4f8-37c8b368211e" pod="tigera-operator/tigera-operator-747864d56d-km7vt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:26.837388 kubelet[2722]: I0813 01:14:26.837263 2722 kubelet.go:2405] "Pod admission denied" podUID="42533086-dbc8-4273-b0d5-a1701cf2afe9" pod="tigera-operator/tigera-operator-747864d56d-hmlm4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:26.935247 kubelet[2722]: I0813 01:14:26.935199 2722 kubelet.go:2405] "Pod admission denied" podUID="62d3004b-6139-4520-b86a-5c9130785948" pod="tigera-operator/tigera-operator-747864d56d-dnzcn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:27.036319 kubelet[2722]: I0813 01:14:27.036274 2722 kubelet.go:2405] "Pod admission denied" podUID="d9e05738-a013-4917-b2f0-5e84cdc36a41" pod="tigera-operator/tigera-operator-747864d56d-4hp5w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:27.058808 kubelet[2722]: E0813 01:14:27.058283 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2063325269: write /var/lib/containerd/tmpmounts/containerd-mount2063325269/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-hq29b" podUID="3c0f3b86-7d63-44df-843e-763eb95a8b94" Aug 13 01:14:27.135936 kubelet[2722]: I0813 01:14:27.134827 2722 kubelet.go:2405] "Pod admission denied" podUID="fc776651-772c-4617-b162-b06200ed9f7e" pod="tigera-operator/tigera-operator-747864d56d-9p2kc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:27.336276 kubelet[2722]: I0813 01:14:27.336221 2722 kubelet.go:2405] "Pod admission denied" podUID="2b575a1c-0da0-4017-999e-3cb0db34dec8" pod="tigera-operator/tigera-operator-747864d56d-m2czr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:27.434530 kubelet[2722]: I0813 01:14:27.434493 2722 kubelet.go:2405] "Pod admission denied" podUID="0385ee4d-6103-4bb5-8d73-0ba94dd6e32a" pod="tigera-operator/tigera-operator-747864d56d-l7kgp" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:27.561948 kubelet[2722]: I0813 01:14:27.561024 2722 kubelet.go:2405] "Pod admission denied" podUID="1be0373a-afe6-45b3-ab26-abf481e07533" pod="tigera-operator/tigera-operator-747864d56d-drtx5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:27.742637 kubelet[2722]: I0813 01:14:27.741774 2722 kubelet.go:2405] "Pod admission denied" podUID="d861f948-3aa3-44b8-a96d-548bd8b95f8b" pod="tigera-operator/tigera-operator-747864d56d-6tp6v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:27.868237 kubelet[2722]: I0813 01:14:27.867122 2722 kubelet.go:2405] "Pod admission denied" podUID="54016e8d-db7c-4a5a-8d2d-e59d64d0c1fc" pod="tigera-operator/tigera-operator-747864d56d-89rhg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:27.998941 kubelet[2722]: I0813 01:14:27.997734 2722 kubelet.go:2405] "Pod admission denied" podUID="19aa55bd-0d3b-4e11-a41f-52ffad5c196b" pod="tigera-operator/tigera-operator-747864d56d-wxszd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:28.111423 kubelet[2722]: I0813 01:14:28.111355 2722 kubelet.go:2405] "Pod admission denied" podUID="027cf931-6278-4d94-bba0-88dfb2597742" pod="tigera-operator/tigera-operator-747864d56d-nq288" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:28.203773 kubelet[2722]: I0813 01:14:28.203704 2722 kubelet.go:2405] "Pod admission denied" podUID="14469516-b92f-4415-9796-47b4c5e346a6" pod="tigera-operator/tigera-operator-747864d56d-s7b5b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:28.314359 kubelet[2722]: I0813 01:14:28.311103 2722 kubelet.go:2405] "Pod admission denied" podUID="9c1faa36-2b9e-4a73-a45c-9d147f66b97e" pod="tigera-operator/tigera-operator-747864d56d-dtz6c" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:28.554357 kubelet[2722]: I0813 01:14:28.554305 2722 kubelet.go:2405] "Pod admission denied" podUID="71d25319-a4ab-4bff-a5a3-571e3ad53442" pod="tigera-operator/tigera-operator-747864d56d-w7jqw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:28.675178 kubelet[2722]: I0813 01:14:28.672132 2722 kubelet.go:2405] "Pod admission denied" podUID="da588616-e2e4-4234-bf6b-ccf4e26fa2d5" pod="tigera-operator/tigera-operator-747864d56d-8v82m" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:28.852472 kubelet[2722]: I0813 01:14:28.852396 2722 kubelet.go:2405] "Pod admission denied" podUID="7c0cd0d1-ef7b-4851-930d-2ecc515f1990" pod="tigera-operator/tigera-operator-747864d56d-pjv66" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:29.102918 kubelet[2722]: I0813 01:14:29.102346 2722 kubelet.go:2405] "Pod admission denied" podUID="0cbe5db9-c166-4321-93f9-870e402eb3f7" pod="tigera-operator/tigera-operator-747864d56d-dlzb5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:29.207089 kubelet[2722]: I0813 01:14:29.207045 2722 kubelet.go:2405] "Pod admission denied" podUID="41870c50-65c3-44ac-be89-9b2dc6239d06" pod="tigera-operator/tigera-operator-747864d56d-jthfr" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:29.401589 kubelet[2722]: I0813 01:14:29.400417 2722 kubelet.go:2405] "Pod admission denied" podUID="e4ea0a7d-fc65-4b88-8dba-85efe123e130" pod="tigera-operator/tigera-operator-747864d56d-wc65l" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:29.500955 kubelet[2722]: I0813 01:14:29.500235 2722 kubelet.go:2405] "Pod admission denied" podUID="5cfb9830-1e4e-42ca-a9e6-18592c9f20f7" pod="tigera-operator/tigera-operator-747864d56d-gfqcz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:29.599314 kubelet[2722]: I0813 01:14:29.599287 2722 kubelet.go:2405] "Pod admission denied" podUID="ef727749-5d3b-43e7-8933-338513b93cb1" pod="tigera-operator/tigera-operator-747864d56d-r6k5h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:29.723060 kubelet[2722]: I0813 01:14:29.721641 2722 kubelet.go:2405] "Pod admission denied" podUID="ebef5cab-1f5d-4d90-918a-e1d8e0e7718d" pod="tigera-operator/tigera-operator-747864d56d-nqs8x" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:29.857666 kubelet[2722]: I0813 01:14:29.857613 2722 kubelet.go:2405] "Pod admission denied" podUID="36c01c39-2908-40c6-abbf-6646555d8314" pod="tigera-operator/tigera-operator-747864d56d-crbq6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:29.994198 kubelet[2722]: I0813 01:14:29.994077 2722 kubelet.go:2405] "Pod admission denied" podUID="7872bd12-4486-4728-9481-439cd46d6f1f" pod="tigera-operator/tigera-operator-747864d56d-2z95l" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:30.095889 kubelet[2722]: I0813 01:14:30.095827 2722 kubelet.go:2405] "Pod admission denied" podUID="009bc621-986a-451b-8a10-2dd4285110f3" pod="tigera-operator/tigera-operator-747864d56d-6jmhb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:30.191717 kubelet[2722]: I0813 01:14:30.191676 2722 kubelet.go:2405] "Pod admission denied" podUID="9568d79f-3a46-40a9-a444-4bf7abeb367a" pod="tigera-operator/tigera-operator-747864d56d-lxqfl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:30.285982 kubelet[2722]: I0813 01:14:30.285859 2722 kubelet.go:2405] "Pod admission denied" podUID="ee001505-009e-4c20-bfab-1a7852b53b6c" pod="tigera-operator/tigera-operator-747864d56d-zv828" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:30.388916 kubelet[2722]: I0813 01:14:30.388281 2722 kubelet.go:2405] "Pod admission denied" podUID="428f9ab6-d582-401c-aa93-345d98cf8379" pod="tigera-operator/tigera-operator-747864d56d-9kdbp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:30.490858 kubelet[2722]: I0813 01:14:30.490810 2722 kubelet.go:2405] "Pod admission denied" podUID="dde9d5cb-f25c-4b1a-afd1-086f9b2d2af0" pod="tigera-operator/tigera-operator-747864d56d-h5kvr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:30.593288 kubelet[2722]: I0813 01:14:30.593179 2722 kubelet.go:2405] "Pod admission denied" podUID="d2f63750-eb8e-4846-8ccf-83e77ab4b0f1" pod="tigera-operator/tigera-operator-747864d56d-pxl76" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:30.695068 kubelet[2722]: I0813 01:14:30.695019 2722 kubelet.go:2405] "Pod admission denied" podUID="1a701334-ce37-4626-83c1-a318f93d86fb" pod="tigera-operator/tigera-operator-747864d56d-64fpd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:30.794196 kubelet[2722]: I0813 01:14:30.794140 2722 kubelet.go:2405] "Pod admission denied" podUID="5eba1f8a-836b-49e2-a813-13e0287445fb" pod="tigera-operator/tigera-operator-747864d56d-8t6jm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:30.892614 kubelet[2722]: I0813 01:14:30.891763 2722 kubelet.go:2405] "Pod admission denied" podUID="6bdb56e8-e85c-4a06-9a34-c7cf36178b34" pod="tigera-operator/tigera-operator-747864d56d-84cks" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:30.988317 kubelet[2722]: I0813 01:14:30.988269 2722 kubelet.go:2405] "Pod admission denied" podUID="060eaa7a-345f-4286-9155-46d952514f3e" pod="tigera-operator/tigera-operator-747864d56d-2vxdh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:31.056012 kubelet[2722]: I0813 01:14:31.055960 2722 kubelet.go:2405] "Pod admission denied" podUID="6c8e8f87-01f9-4b2c-bf56-d8eaccb59b13" pod="tigera-operator/tigera-operator-747864d56d-5fzb2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:31.147609 kubelet[2722]: I0813 01:14:31.146597 2722 kubelet.go:2405] "Pod admission denied" podUID="009f1867-d2cb-4b87-833a-46202986db28" pod="tigera-operator/tigera-operator-747864d56d-mvfm4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:31.245593 kubelet[2722]: I0813 01:14:31.245533 2722 kubelet.go:2405] "Pod admission denied" podUID="bf94aa31-0920-4bcb-b6b9-370ec778617b" pod="tigera-operator/tigera-operator-747864d56d-xjmzd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:31.343377 kubelet[2722]: I0813 01:14:31.343318 2722 kubelet.go:2405] "Pod admission denied" podUID="b91473ba-da30-4731-996d-f14da8428ade" pod="tigera-operator/tigera-operator-747864d56d-2kbcj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:31.444284 kubelet[2722]: I0813 01:14:31.444228 2722 kubelet.go:2405] "Pod admission denied" podUID="2efbcce8-9f53-4096-a05c-93ec53494707" pod="tigera-operator/tigera-operator-747864d56d-spg6b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:31.542623 kubelet[2722]: I0813 01:14:31.542569 2722 kubelet.go:2405] "Pod admission denied" podUID="17c32f4e-611b-49a8-b390-b5c7417785e4" pod="tigera-operator/tigera-operator-747864d56d-ndtwv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:31.641182 kubelet[2722]: I0813 01:14:31.641118 2722 kubelet.go:2405] "Pod admission denied" podUID="6d75ec99-e230-4f9b-953b-e6aae97463d0" pod="tigera-operator/tigera-operator-747864d56d-v6tmb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:31.737066 kubelet[2722]: I0813 01:14:31.736939 2722 kubelet.go:2405] "Pod admission denied" podUID="d4a18e5d-52fc-40f8-8e24-d94912cb4d0a" pod="tigera-operator/tigera-operator-747864d56d-tj8gw" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:31.839366 kubelet[2722]: I0813 01:14:31.839312 2722 kubelet.go:2405] "Pod admission denied" podUID="658a01d0-76bf-4523-878b-0e15fdd35cab" pod="tigera-operator/tigera-operator-747864d56d-2xflb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:31.943619 kubelet[2722]: I0813 01:14:31.943554 2722 kubelet.go:2405] "Pod admission denied" podUID="b94c86e0-dfd5-4042-90dc-7a19e981801a" pod="tigera-operator/tigera-operator-747864d56d-ds9xb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:32.142926 kubelet[2722]: I0813 01:14:32.142120 2722 kubelet.go:2405] "Pod admission denied" podUID="5f7e0345-2ec9-432f-9d12-58600018b5e3" pod="tigera-operator/tigera-operator-747864d56d-7z8sg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:32.246521 kubelet[2722]: I0813 01:14:32.246468 2722 kubelet.go:2405] "Pod admission denied" podUID="1ebd1c41-513d-4ebc-a9f0-c30f23447ff5" pod="tigera-operator/tigera-operator-747864d56d-2527l" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:32.339168 kubelet[2722]: I0813 01:14:32.339102 2722 kubelet.go:2405] "Pod admission denied" podUID="1f036da2-9d07-4bfc-8b53-ed3ed65258ba" pod="tigera-operator/tigera-operator-747864d56d-spvsr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:32.443645 kubelet[2722]: I0813 01:14:32.443600 2722 kubelet.go:2405] "Pod admission denied" podUID="f643f115-367e-4e81-be3c-a05a2b8f786a" pod="tigera-operator/tigera-operator-747864d56d-w49qb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:32.536972 kubelet[2722]: I0813 01:14:32.536919 2722 kubelet.go:2405] "Pod admission denied" podUID="33e86f24-0f2a-463a-bad8-5aef9846104f" pod="tigera-operator/tigera-operator-747864d56d-kq56r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:32.638680 kubelet[2722]: I0813 01:14:32.638628 2722 kubelet.go:2405] "Pod admission denied" podUID="4531e77c-963f-4bc2-8d3e-119472224f3e" pod="tigera-operator/tigera-operator-747864d56d-47fsm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:32.693965 kubelet[2722]: I0813 01:14:32.693840 2722 kubelet.go:2405] "Pod admission denied" podUID="3870511b-5118-4bae-b58b-bc46d7c4e85d" pod="tigera-operator/tigera-operator-747864d56d-hcnkw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:32.789649 kubelet[2722]: I0813 01:14:32.789595 2722 kubelet.go:2405] "Pod admission denied" podUID="08e5e991-426c-4619-920a-e8d7f1523961" pod="tigera-operator/tigera-operator-747864d56d-qk9vw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:32.888616 kubelet[2722]: I0813 01:14:32.888563 2722 kubelet.go:2405] "Pod admission denied" podUID="acf4adfb-f902-4105-95b8-ad1c2ff56343" pod="tigera-operator/tigera-operator-747864d56d-rw558" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:32.991563 kubelet[2722]: I0813 01:14:32.991012 2722 kubelet.go:2405] "Pod admission denied" podUID="9078374d-a784-4a9a-a147-c9dcc37cb816" pod="tigera-operator/tigera-operator-747864d56d-r7469" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:33.058370 containerd[1550]: time="2025-08-13T01:14:33.058333063Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cddc95b58-6t6z7,Uid:2dab385f-2367-4e01-8d78-2247bcba7bcc,Namespace:calico-system,Attempt:0,}" Aug 13 01:14:33.112151 containerd[1550]: time="2025-08-13T01:14:33.112100417Z" level=error msg="Failed to destroy network for sandbox \"d775b493014c82078974b18ae3c6b0e93674e934dc48e6e5dd302cc043df93bd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:33.114031 systemd[1]: run-netns-cni\x2d45eef23c\x2d965b\x2d571a\x2d46a1\x2d9f51870a08a3.mount: Deactivated successfully. Aug 13 01:14:33.116045 containerd[1550]: time="2025-08-13T01:14:33.116012471Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cddc95b58-6t6z7,Uid:2dab385f-2367-4e01-8d78-2247bcba7bcc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d775b493014c82078974b18ae3c6b0e93674e934dc48e6e5dd302cc043df93bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:33.116776 kubelet[2722]: E0813 01:14:33.116213 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d775b493014c82078974b18ae3c6b0e93674e934dc48e6e5dd302cc043df93bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:33.116776 kubelet[2722]: E0813 01:14:33.116269 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d775b493014c82078974b18ae3c6b0e93674e934dc48e6e5dd302cc043df93bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:14:33.116776 kubelet[2722]: E0813 01:14:33.116290 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d775b493014c82078974b18ae3c6b0e93674e934dc48e6e5dd302cc043df93bd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:14:33.116776 kubelet[2722]: E0813 01:14:33.116336 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-cddc95b58-6t6z7_calico-system(2dab385f-2367-4e01-8d78-2247bcba7bcc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-cddc95b58-6t6z7_calico-system(2dab385f-2367-4e01-8d78-2247bcba7bcc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d775b493014c82078974b18ae3c6b0e93674e934dc48e6e5dd302cc043df93bd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" podUID="2dab385f-2367-4e01-8d78-2247bcba7bcc" Aug 13 01:14:33.193587 kubelet[2722]: I0813 01:14:33.193543 2722 kubelet.go:2405] "Pod admission denied" podUID="bf4d16fe-1e27-41d7-b4b4-13e2f4c59dcd" pod="tigera-operator/tigera-operator-747864d56d-nsfh9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:33.290003 kubelet[2722]: I0813 01:14:33.289815 2722 kubelet.go:2405] "Pod admission denied" podUID="ee5f63eb-1f9b-41cb-b74c-6e23b0b62b93" pod="tigera-operator/tigera-operator-747864d56d-pg7fj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:33.388016 kubelet[2722]: I0813 01:14:33.387961 2722 kubelet.go:2405] "Pod admission denied" podUID="a2448976-c9b8-4f42-96b5-46673100b2d4" pod="tigera-operator/tigera-operator-747864d56d-wsqsx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:33.590225 kubelet[2722]: I0813 01:14:33.590117 2722 kubelet.go:2405] "Pod admission denied" podUID="9b4238fe-fe23-4fd2-8431-52d416e480f9" pod="tigera-operator/tigera-operator-747864d56d-dpghb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:33.690815 kubelet[2722]: I0813 01:14:33.690768 2722 kubelet.go:2405] "Pod admission denied" podUID="053ca712-8ffa-4dc3-a2a9-1ad80b5443d5" pod="tigera-operator/tigera-operator-747864d56d-t8998" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:33.756844 kubelet[2722]: I0813 01:14:33.756793 2722 kubelet.go:2405] "Pod admission denied" podUID="b4a77dec-2035-4e2f-b94f-0988d89f0c36" pod="tigera-operator/tigera-operator-747864d56d-8srkn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:33.889421 kubelet[2722]: I0813 01:14:33.889295 2722 kubelet.go:2405] "Pod admission denied" podUID="bd11c688-94c4-4fbb-a3b9-5c0af1b672b1" pod="tigera-operator/tigera-operator-747864d56d-rvt95" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:33.955993 kubelet[2722]: I0813 01:14:33.955945 2722 kubelet.go:2405] "Pod admission denied" podUID="51c5e51d-514c-4f39-9899-13b6e6125e95" pod="tigera-operator/tigera-operator-747864d56d-kjsrl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:34.036145 kubelet[2722]: I0813 01:14:34.036087 2722 kubelet.go:2405] "Pod admission denied" podUID="f049f5a1-1010-4c1c-938e-082fd7e7156e" pod="tigera-operator/tigera-operator-747864d56d-7g42r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:34.260920 kubelet[2722]: I0813 01:14:34.260844 2722 kubelet.go:2405] "Pod admission denied" podUID="bd29971b-c5bb-4c99-95c6-eb6c819e0336" pod="tigera-operator/tigera-operator-747864d56d-7wvs8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:34.341164 kubelet[2722]: I0813 01:14:34.341108 2722 kubelet.go:2405] "Pod admission denied" podUID="e5bdb175-6535-46bf-b440-85783f579997" pod="tigera-operator/tigera-operator-747864d56d-hs6nr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:34.475051 kubelet[2722]: I0813 01:14:34.474994 2722 kubelet.go:2405] "Pod admission denied" podUID="58805b71-e323-4325-9db5-0b231bc31a5e" pod="tigera-operator/tigera-operator-747864d56d-2rxt5" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:34.551081 kubelet[2722]: I0813 01:14:34.550885 2722 kubelet.go:2405] "Pod admission denied" podUID="4c8b83a7-d7c5-4aa9-88b8-30bebf662895" pod="tigera-operator/tigera-operator-747864d56d-k8rp7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:34.663944 kubelet[2722]: I0813 01:14:34.663864 2722 kubelet.go:2405] "Pod admission denied" podUID="b16120fe-3737-4583-a08a-5619045de956" pod="tigera-operator/tigera-operator-747864d56d-26fk9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:34.756919 kubelet[2722]: I0813 01:14:34.756183 2722 kubelet.go:2405] "Pod admission denied" podUID="dae4b6b6-6787-40a7-83d1-0656db90d0dc" pod="tigera-operator/tigera-operator-747864d56d-rngxw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:34.866601 kubelet[2722]: I0813 01:14:34.866211 2722 kubelet.go:2405] "Pod admission denied" podUID="d76eb282-0bc1-45ab-9e8c-92219f1e8fb5" pod="tigera-operator/tigera-operator-747864d56d-2nd7w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:35.051245 kubelet[2722]: I0813 01:14:35.051180 2722 kubelet.go:2405] "Pod admission denied" podUID="620bd98f-045e-495b-a5e0-dc2674e0b88e" pod="tigera-operator/tigera-operator-747864d56d-f2l99" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:35.060959 kubelet[2722]: E0813 01:14:35.060237 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:14:35.061629 containerd[1550]: time="2025-08-13T01:14:35.061585521Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p259x,Uid:a5b0b8ae-a381-43cc-8adc-4e3ee01749bd,Namespace:kube-system,Attempt:0,}" Aug 13 01:14:35.142567 containerd[1550]: time="2025-08-13T01:14:35.142466888Z" level=error msg="Failed to destroy network for sandbox \"bbc83b0e486b1e4988ca244227fdb811f2147ce52255d403a1d4bcc053064555\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:35.144358 systemd[1]: run-netns-cni\x2de34c3ab3\x2dd110\x2d942a\x2d740e\x2de9fe8fe1d13d.mount: Deactivated successfully. 
Aug 13 01:14:35.147059 containerd[1550]: time="2025-08-13T01:14:35.146607763Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p259x,Uid:a5b0b8ae-a381-43cc-8adc-4e3ee01749bd,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbc83b0e486b1e4988ca244227fdb811f2147ce52255d403a1d4bcc053064555\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:35.147312 kubelet[2722]: E0813 01:14:35.147263 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbc83b0e486b1e4988ca244227fdb811f2147ce52255d403a1d4bcc053064555\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:35.147423 kubelet[2722]: E0813 01:14:35.147317 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbc83b0e486b1e4988ca244227fdb811f2147ce52255d403a1d4bcc053064555\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:14:35.147423 kubelet[2722]: E0813 01:14:35.147341 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbc83b0e486b1e4988ca244227fdb811f2147ce52255d403a1d4bcc053064555\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:14:35.147423 kubelet[2722]: E0813 01:14:35.147385 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-p259x_kube-system(a5b0b8ae-a381-43cc-8adc-4e3ee01749bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-p259x_kube-system(a5b0b8ae-a381-43cc-8adc-4e3ee01749bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bbc83b0e486b1e4988ca244227fdb811f2147ce52255d403a1d4bcc053064555\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-p259x" podUID="a5b0b8ae-a381-43cc-8adc-4e3ee01749bd" Aug 13 01:14:35.156990 kubelet[2722]: I0813 01:14:35.156954 2722 kubelet.go:2405] "Pod admission denied" podUID="10409078-1be0-4a91-adb2-4682231a81be" pod="tigera-operator/tigera-operator-747864d56d-t7xht" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:35.253613 kubelet[2722]: I0813 01:14:35.253554 2722 kubelet.go:2405] "Pod admission denied" podUID="db7f0440-fbba-42ec-8ece-19c0862fad77" pod="tigera-operator/tigera-operator-747864d56d-hrzg5" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:35.438954 kubelet[2722]: I0813 01:14:35.438913 2722 kubelet.go:2405] "Pod admission denied" podUID="bd8fa673-a919-4011-902e-428f5dfa848d" pod="tigera-operator/tigera-operator-747864d56d-vpqld" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:35.544408 kubelet[2722]: I0813 01:14:35.544355 2722 kubelet.go:2405] "Pod admission denied" podUID="52e860a8-8649-4500-aa4b-cc18940fa5c6" pod="tigera-operator/tigera-operator-747864d56d-sslfb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:35.600117 kubelet[2722]: I0813 01:14:35.600068 2722 kubelet.go:2405] "Pod admission denied" podUID="cf76714d-e152-4ec3-a76f-32c50d1a2395" pod="tigera-operator/tigera-operator-747864d56d-8mpxg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:35.699968 kubelet[2722]: I0813 01:14:35.699633 2722 kubelet.go:2405] "Pod admission denied" podUID="4ffb0e70-2696-4218-a7b3-e27feb5f605c" pod="tigera-operator/tigera-operator-747864d56d-6fldf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:35.790963 kubelet[2722]: I0813 01:14:35.790916 2722 kubelet.go:2405] "Pod admission denied" podUID="1138a6ae-4ca8-4567-9a4f-1334953057ac" pod="tigera-operator/tigera-operator-747864d56d-lvt7b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:35.896946 kubelet[2722]: I0813 01:14:35.896879 2722 kubelet.go:2405] "Pod admission denied" podUID="edce2165-c002-4096-83eb-05f275dd6f88" pod="tigera-operator/tigera-operator-747864d56d-nfdsk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:36.000459 kubelet[2722]: I0813 01:14:35.999742 2722 kubelet.go:2405] "Pod admission denied" podUID="3e5a9b63-6514-480b-a3c2-16e5c29c4c02" pod="tigera-operator/tigera-operator-747864d56d-bhjck" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:36.097012 kubelet[2722]: I0813 01:14:36.096963 2722 kubelet.go:2405] "Pod admission denied" podUID="8e16ae23-b428-428c-810f-72df3a59634d" pod="tigera-operator/tigera-operator-747864d56d-cvmz4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:36.293928 kubelet[2722]: I0813 01:14:36.293250 2722 kubelet.go:2405] "Pod admission denied" podUID="ea62088d-1859-4dab-b06a-015317754e3d" pod="tigera-operator/tigera-operator-747864d56d-l6chs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:36.394226 kubelet[2722]: I0813 01:14:36.394166 2722 kubelet.go:2405] "Pod admission denied" podUID="33ec4b0f-6d0b-4312-a2ec-7cdf391d2c90" pod="tigera-operator/tigera-operator-747864d56d-gzhlp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:36.491768 kubelet[2722]: I0813 01:14:36.491709 2722 kubelet.go:2405] "Pod admission denied" podUID="cfa14a73-b208-454f-a17d-2cdfa3a88030" pod="tigera-operator/tigera-operator-747864d56d-qbx5d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:36.695243 kubelet[2722]: I0813 01:14:36.695166 2722 kubelet.go:2405] "Pod admission denied" podUID="a95e2ec8-b56a-4c29-9e50-7a5cc22695de" pod="tigera-operator/tigera-operator-747864d56d-l5ngd" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:36.789433 kubelet[2722]: I0813 01:14:36.789381 2722 kubelet.go:2405] "Pod admission denied" podUID="9a5f34a7-c886-4319-8a1b-b9d34c2c2855" pod="tigera-operator/tigera-operator-747864d56d-xt8tl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:36.892076 kubelet[2722]: I0813 01:14:36.892022 2722 kubelet.go:2405] "Pod admission denied" podUID="1e2c6110-90c4-485d-bf46-92a9020be531" pod="tigera-operator/tigera-operator-747864d56d-gbv8g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:37.058018 containerd[1550]: time="2025-08-13T01:14:37.057843599Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7lv4,Uid:6b834979-32a4-464b-9898-ef87b1042a9e,Namespace:calico-system,Attempt:0,}" Aug 13 01:14:37.120472 kubelet[2722]: I0813 01:14:37.120405 2722 kubelet.go:2405] "Pod admission denied" podUID="67d7ba54-c21d-49f0-a829-987562360cbf" pod="tigera-operator/tigera-operator-747864d56d-8n56k" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:37.140819 containerd[1550]: time="2025-08-13T01:14:37.140745008Z" level=error msg="Failed to destroy network for sandbox \"5a08bcaf0a92fe6a34de4d0a785f3146f370d219db30c132c528276bb3c6e5c2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:37.144322 systemd[1]: run-netns-cni\x2d7c03c7fc\x2dc944\x2db762\x2dde62\x2d153cba3e7c03.mount: Deactivated successfully. Aug 13 01:14:37.146551 containerd[1550]: time="2025-08-13T01:14:37.146436404Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7lv4,Uid:6b834979-32a4-464b-9898-ef87b1042a9e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a08bcaf0a92fe6a34de4d0a785f3146f370d219db30c132c528276bb3c6e5c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:37.149929 kubelet[2722]: E0813 01:14:37.148288 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a08bcaf0a92fe6a34de4d0a785f3146f370d219db30c132c528276bb3c6e5c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:37.149929 kubelet[2722]: E0813 01:14:37.148372 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a08bcaf0a92fe6a34de4d0a785f3146f370d219db30c132c528276bb3c6e5c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:14:37.149929 kubelet[2722]: E0813 01:14:37.148415 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a08bcaf0a92fe6a34de4d0a785f3146f370d219db30c132c528276bb3c6e5c2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:14:37.149929 kubelet[2722]: E0813 01:14:37.148480 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l7lv4_calico-system(6b834979-32a4-464b-9898-ef87b1042a9e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l7lv4_calico-system(6b834979-32a4-464b-9898-ef87b1042a9e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5a08bcaf0a92fe6a34de4d0a785f3146f370d219db30c132c528276bb3c6e5c2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l7lv4" podUID="6b834979-32a4-464b-9898-ef87b1042a9e" Aug 13 01:14:37.204257 kubelet[2722]: I0813 01:14:37.204220 2722 kubelet.go:2405] "Pod admission denied" podUID="b20450ac-d014-4804-a82d-a8ea6e25baec" pod="tigera-operator/tigera-operator-747864d56d-bpcwd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:37.289984 kubelet[2722]: I0813 01:14:37.289935 2722 kubelet.go:2405] "Pod admission denied" podUID="c8dfc3fa-3784-4039-ba69-9e8265f1d971" pod="tigera-operator/tigera-operator-747864d56d-cwwjd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:37.410555 kubelet[2722]: I0813 01:14:37.410439 2722 kubelet.go:2405] "Pod admission denied" podUID="30236cb7-44c3-4825-b56b-9b49a8e5f724" pod="tigera-operator/tigera-operator-747864d56d-85sjr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:37.469780 kubelet[2722]: I0813 01:14:37.469745 2722 kubelet.go:2405] "Pod admission denied" podUID="68f4ccdf-ab4e-467d-9251-f58726049f1f" pod="tigera-operator/tigera-operator-747864d56d-6hhqh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:37.601891 kubelet[2722]: I0813 01:14:37.601761 2722 kubelet.go:2405] "Pod admission denied" podUID="1d964bc5-24a1-416c-b979-dfefbf53c1db" pod="tigera-operator/tigera-operator-747864d56d-57bxc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:37.688647 kubelet[2722]: I0813 01:14:37.688572 2722 kubelet.go:2405] "Pod admission denied" podUID="9cee5b3a-1dee-4e98-8d4d-6c7fd8886bd6" pod="tigera-operator/tigera-operator-747864d56d-z2vdg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:37.907121 kubelet[2722]: I0813 01:14:37.906290 2722 kubelet.go:2405] "Pod admission denied" podUID="aa5a067f-0973-4e30-899c-9a4549d2fd58" pod="tigera-operator/tigera-operator-747864d56d-zbb6n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:37.990247 kubelet[2722]: I0813 01:14:37.989686 2722 kubelet.go:2405] "Pod admission denied" podUID="672f3e1c-870b-46e7-944a-e7abc15a2fcd" pod="tigera-operator/tigera-operator-747864d56d-8dhq9" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:38.061077 kubelet[2722]: E0813 01:14:38.060353 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:14:38.063249 kubelet[2722]: E0813 01:14:38.061331 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:14:38.063293 containerd[1550]: time="2025-08-13T01:14:38.061410740Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fgsjn,Uid:27718112-1bb9-402a-89c8-f4890dedf664,Namespace:kube-system,Attempt:0,}" Aug 13 01:14:38.075102 kubelet[2722]: I0813 01:14:38.074794 2722 kubelet.go:2405] "Pod admission denied" podUID="26cbfed8-380b-429b-9ddf-cad55fc6fd5c" pod="tigera-operator/tigera-operator-747864d56d-26kk9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:38.141153 containerd[1550]: time="2025-08-13T01:14:38.141111985Z" level=error msg="Failed to destroy network for sandbox \"241a0627f867a582445785298ed4effd5b3461e115a8a0fc3cfdf29b9c02af4a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:38.145047 containerd[1550]: time="2025-08-13T01:14:38.142419097Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fgsjn,Uid:27718112-1bb9-402a-89c8-f4890dedf664,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"241a0627f867a582445785298ed4effd5b3461e115a8a0fc3cfdf29b9c02af4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:38.144617 systemd[1]: run-netns-cni\x2d8641d3c1\x2d04d7\x2ddd5a\x2dff0e\x2d6b64a28b85dd.mount: Deactivated successfully. 
Aug 13 01:14:38.145514 kubelet[2722]: E0813 01:14:38.142614 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"241a0627f867a582445785298ed4effd5b3461e115a8a0fc3cfdf29b9c02af4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:38.145514 kubelet[2722]: E0813 01:14:38.142661 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"241a0627f867a582445785298ed4effd5b3461e115a8a0fc3cfdf29b9c02af4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:14:38.145514 kubelet[2722]: E0813 01:14:38.142679 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"241a0627f867a582445785298ed4effd5b3461e115a8a0fc3cfdf29b9c02af4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:14:38.145514 kubelet[2722]: E0813 01:14:38.142727 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fgsjn_kube-system(27718112-1bb9-402a-89c8-f4890dedf664)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fgsjn_kube-system(27718112-1bb9-402a-89c8-f4890dedf664)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"241a0627f867a582445785298ed4effd5b3461e115a8a0fc3cfdf29b9c02af4a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fgsjn" podUID="27718112-1bb9-402a-89c8-f4890dedf664" Aug 13 01:14:38.191640 kubelet[2722]: I0813 01:14:38.191605 2722 kubelet.go:2405] "Pod admission denied" podUID="d276943e-46cf-4688-b135-ec9ee327d491" pod="tigera-operator/tigera-operator-747864d56d-45lf5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:38.304985 kubelet[2722]: I0813 01:14:38.304817 2722 kubelet.go:2405] "Pod admission denied" podUID="f71584c5-1b7f-4e81-9cbd-9228958f4ddf" pod="tigera-operator/tigera-operator-747864d56d-zxrxr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:38.390493 kubelet[2722]: I0813 01:14:38.390442 2722 kubelet.go:2405] "Pod admission denied" podUID="a8c8580b-3c2f-4972-8cd1-20aca4d68985" pod="tigera-operator/tigera-operator-747864d56d-ssqj6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:38.501580 kubelet[2722]: I0813 01:14:38.501542 2722 kubelet.go:2405] "Pod admission denied" podUID="c2a7beeb-59a3-4e04-b8a1-3d9878aec06a" pod="tigera-operator/tigera-operator-747864d56d-lkgzb" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:38.589320 kubelet[2722]: I0813 01:14:38.588621 2722 kubelet.go:2405] "Pod admission denied" podUID="9e03bfa8-4f52-4156-894c-2ded39cd24d9" pod="tigera-operator/tigera-operator-747864d56d-lts2q" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:38.698495 kubelet[2722]: I0813 01:14:38.698460 2722 kubelet.go:2405] "Pod admission denied" podUID="645212c2-7ed1-436e-ba0f-6c8d18d9c5ac" pod="tigera-operator/tigera-operator-747864d56d-5vfkw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:38.795118 kubelet[2722]: I0813 01:14:38.795069 2722 kubelet.go:2405] "Pod admission denied" podUID="7978ed73-4b69-4a02-a8eb-4b429c6efe4d" pod="tigera-operator/tigera-operator-747864d56d-vnsrc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:38.901130 kubelet[2722]: I0813 01:14:38.900266 2722 kubelet.go:2405] "Pod admission denied" podUID="4cacc404-2ab1-4f7e-956e-d661a04e8185" pod="tigera-operator/tigera-operator-747864d56d-45nbh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:39.057969 kubelet[2722]: E0813 01:14:39.057939 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:14:39.088772 kubelet[2722]: I0813 01:14:39.088741 2722 kubelet.go:2405] "Pod admission denied" podUID="31a4fa39-ff78-47f4-b1af-132a2a235da9" pod="tigera-operator/tigera-operator-747864d56d-khqjc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:39.200032 kubelet[2722]: I0813 01:14:39.199974 2722 kubelet.go:2405] "Pod admission denied" podUID="1c82bcda-63d2-4737-aff4-a27c97d9fc08" pod="tigera-operator/tigera-operator-747864d56d-4fj6p" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:39.288645 kubelet[2722]: I0813 01:14:39.288590 2722 kubelet.go:2405] "Pod admission denied" podUID="5866364c-f7f1-46ff-a4a1-5f642add607d" pod="tigera-operator/tigera-operator-747864d56d-wdv8g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:39.402021 kubelet[2722]: I0813 01:14:39.401976 2722 kubelet.go:2405] "Pod admission denied" podUID="73dcdf0c-8225-41ad-9aa4-6b3444b199a2" pod="tigera-operator/tigera-operator-747864d56d-9rt8p" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:39.494997 kubelet[2722]: I0813 01:14:39.494602 2722 kubelet.go:2405] "Pod admission denied" podUID="e623e099-8e7b-409b-9ab3-457363ed0cc9" pod="tigera-operator/tigera-operator-747864d56d-7tlls" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:39.623918 kubelet[2722]: I0813 01:14:39.622860 2722 kubelet.go:2405] "Pod admission denied" podUID="9d83d841-7434-493c-949c-c1f1fa3aa4c9" pod="tigera-operator/tigera-operator-747864d56d-8k72q" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:39.747809 kubelet[2722]: I0813 01:14:39.747060 2722 kubelet.go:2405] "Pod admission denied" podUID="2ac7cb51-77f5-4963-b754-a6a47138dd50" pod="tigera-operator/tigera-operator-747864d56d-sxndc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:39.852474 kubelet[2722]: I0813 01:14:39.852433 2722 kubelet.go:2405] "Pod admission denied" podUID="7b982329-acab-4f64-a329-cb09afcc2ac2" pod="tigera-operator/tigera-operator-747864d56d-khxgh" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:40.040876 kubelet[2722]: I0813 01:14:40.040087 2722 kubelet.go:2405] "Pod admission denied" podUID="50c11088-b4b0-4643-98f6-b6a3cd62707f" pod="tigera-operator/tigera-operator-747864d56d-jfllp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:40.063096 kubelet[2722]: E0813 01:14:40.063052 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2063325269: write /var/lib/containerd/tmpmounts/containerd-mount2063325269/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-hq29b" podUID="3c0f3b86-7d63-44df-843e-763eb95a8b94" Aug 13 01:14:40.151788 kubelet[2722]: I0813 01:14:40.151749 2722 kubelet.go:2405] "Pod admission denied" podUID="ebf24a95-04b3-4382-bb0e-8296e3786d08" pod="tigera-operator/tigera-operator-747864d56d-j86v5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:40.240730 kubelet[2722]: I0813 01:14:40.240683 2722 kubelet.go:2405] "Pod admission denied" podUID="3c61d637-d9d1-44c6-afa9-c1c248a93fce" pod="tigera-operator/tigera-operator-747864d56d-zzz2t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:40.358476 kubelet[2722]: I0813 01:14:40.356219 2722 kubelet.go:2405] "Pod admission denied" podUID="4aa359ba-530c-4de9-a7b6-04588730c646" pod="tigera-operator/tigera-operator-747864d56d-cg5hh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:40.491848 kubelet[2722]: I0813 01:14:40.491789 2722 kubelet.go:2405] "Pod admission denied" podUID="ffecbcc8-9ae7-4afb-90b1-d11ccee8f6e7" pod="tigera-operator/tigera-operator-747864d56d-2c7mh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:40.599542 kubelet[2722]: I0813 01:14:40.599512 2722 kubelet.go:2405] "Pod admission denied" podUID="502c1651-b760-4fac-bb9e-482f3ca2ce6e" pod="tigera-operator/tigera-operator-747864d56d-96c82" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:40.790412 kubelet[2722]: I0813 01:14:40.790359 2722 kubelet.go:2405] "Pod admission denied" podUID="0854df04-ba1f-47a6-91da-1bb2682ebaa0" pod="tigera-operator/tigera-operator-747864d56d-6v4c7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:40.906112 kubelet[2722]: I0813 01:14:40.906075 2722 kubelet.go:2405] "Pod admission denied" podUID="b2c6bf23-dc4b-4520-ae35-0f6d04a3ef58" pod="tigera-operator/tigera-operator-747864d56d-wjp52" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:40.967175 kubelet[2722]: I0813 01:14:40.967121 2722 kubelet.go:2405] "Pod admission denied" podUID="82f0d70b-b24a-44c5-9709-5fca1e342ff1" pod="tigera-operator/tigera-operator-747864d56d-dtp2j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:41.103981 kubelet[2722]: I0813 01:14:41.103750 2722 kubelet.go:2405] "Pod admission denied" podUID="ee9a2f29-d1bf-4d19-a8c2-72e576753203" pod="tigera-operator/tigera-operator-747864d56d-55g6k" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:41.191965 kubelet[2722]: I0813 01:14:41.191724 2722 kubelet.go:2405] "Pod admission denied" podUID="e99d70a1-0ba4-4e8c-a035-d7fc55a037be" pod="tigera-operator/tigera-operator-747864d56d-xh24l" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:41.302335 kubelet[2722]: I0813 01:14:41.302112 2722 kubelet.go:2405] "Pod admission denied" podUID="71b689a2-d076-4bb1-8742-5c599ea9e922" pod="tigera-operator/tigera-operator-747864d56d-vxxq8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:41.365576 kubelet[2722]: I0813 01:14:41.365295 2722 kubelet.go:2405] "Pod admission denied" podUID="1656f1aa-5110-493d-81bc-b25b508a850f" pod="tigera-operator/tigera-operator-747864d56d-kcsj9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:41.502783 kubelet[2722]: I0813 01:14:41.502746 2722 kubelet.go:2405] "Pod admission denied" podUID="6230025b-2ac6-4bff-8438-fd1eae121e27" pod="tigera-operator/tigera-operator-747864d56d-6k5nw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:41.639124 kubelet[2722]: I0813 01:14:41.639000 2722 kubelet.go:2405] "Pod admission denied" podUID="67d81fbe-4393-465b-bc9d-c6eebbaf6fda" pod="tigera-operator/tigera-operator-747864d56d-lkrqv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:41.749958 kubelet[2722]: I0813 01:14:41.749620 2722 kubelet.go:2405] "Pod admission denied" podUID="0c655fdb-4d60-4309-9ddd-404f7134f451" pod="tigera-operator/tigera-operator-747864d56d-rq6j5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:41.838228 kubelet[2722]: I0813 01:14:41.838168 2722 kubelet.go:2405] "Pod admission denied" podUID="f2e0d62e-2166-4c89-9227-59888f0226ed" pod="tigera-operator/tigera-operator-747864d56d-k4t2l" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:41.951098 kubelet[2722]: I0813 01:14:41.950463 2722 kubelet.go:2405] "Pod admission denied" podUID="340c2dbe-8044-4608-86a5-3a3e46980b1a" pod="tigera-operator/tigera-operator-747864d56d-tqlhd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:42.144141 kubelet[2722]: I0813 01:14:42.144084 2722 kubelet.go:2405] "Pod admission denied" podUID="9ab6e41f-0bb2-4943-8e62-e7d8d8d9069a" pod="tigera-operator/tigera-operator-747864d56d-q2x7g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:42.266115 kubelet[2722]: I0813 01:14:42.265568 2722 kubelet.go:2405] "Pod admission denied" podUID="40fdea9a-b3e2-41e3-991d-9ac94a71a1b2" pod="tigera-operator/tigera-operator-747864d56d-dzn8r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:42.393623 kubelet[2722]: I0813 01:14:42.393569 2722 kubelet.go:2405] "Pod admission denied" podUID="6eedb5e1-fe97-4b42-94dc-53d9b85ce093" pod="tigera-operator/tigera-operator-747864d56d-rrfjr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:42.496916 kubelet[2722]: I0813 01:14:42.496847 2722 kubelet.go:2405] "Pod admission denied" podUID="549fda0b-520d-4502-bc11-85b944f83a65" pod="tigera-operator/tigera-operator-747864d56d-h2lw2" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:42.591429 kubelet[2722]: I0813 01:14:42.590815 2722 kubelet.go:2405] "Pod admission denied" podUID="5ce2e9cd-8cd2-4fc5-9931-590db7ba2911" pod="tigera-operator/tigera-operator-747864d56d-2srvf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:42.699169 kubelet[2722]: I0813 01:14:42.699138 2722 kubelet.go:2405] "Pod admission denied" podUID="74089c60-e83b-4641-af47-e45314c1c162" pod="tigera-operator/tigera-operator-747864d56d-v5hr6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:42.792202 kubelet[2722]: I0813 01:14:42.792153 2722 kubelet.go:2405] "Pod admission denied" podUID="5aa21dda-b284-4727-a587-6884fa45296e" pod="tigera-operator/tigera-operator-747864d56d-b6prd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:42.890978 kubelet[2722]: I0813 01:14:42.890846 2722 kubelet.go:2405] "Pod admission denied" podUID="3169e568-cab3-46af-a26d-7ea67f22aafb" pod="tigera-operator/tigera-operator-747864d56d-s6glc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:42.992738 kubelet[2722]: I0813 01:14:42.992687 2722 kubelet.go:2405] "Pod admission denied" podUID="b4a60515-4cff-4cea-8c78-6aa1ef10d42c" pod="tigera-operator/tigera-operator-747864d56d-stns7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:43.049497 kubelet[2722]: I0813 01:14:43.049187 2722 kubelet.go:2405] "Pod admission denied" podUID="ee6dbdcd-b1ab-4da7-907f-99708d6ce6dc" pod="tigera-operator/tigera-operator-747864d56d-7ncd8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:43.143432 kubelet[2722]: I0813 01:14:43.143303 2722 kubelet.go:2405] "Pod admission denied" podUID="5d7d021a-afdb-4dda-a8b9-65ace9b42cc6" pod="tigera-operator/tigera-operator-747864d56d-8pfjj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:43.241170 kubelet[2722]: I0813 01:14:43.241125 2722 kubelet.go:2405] "Pod admission denied" podUID="761eb32a-4f8b-4dc0-967c-5939e3b8bec5" pod="tigera-operator/tigera-operator-747864d56d-dch6p" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:43.343815 kubelet[2722]: I0813 01:14:43.343757 2722 kubelet.go:2405] "Pod admission denied" podUID="1c4c008a-62f1-4735-882c-19d4f9e2ce9f" pod="tigera-operator/tigera-operator-747864d56d-k6qzb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:43.441932 kubelet[2722]: I0813 01:14:43.441199 2722 kubelet.go:2405] "Pod admission denied" podUID="bc54721b-eb25-4ecd-bcd5-15ef6568b247" pod="tigera-operator/tigera-operator-747864d56d-xwh26" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:43.516045 kubelet[2722]: I0813 01:14:43.515996 2722 kubelet.go:2405] "Pod admission denied" podUID="af425878-45c9-43da-8601-0a4c52a3f35c" pod="tigera-operator/tigera-operator-747864d56d-pz6kx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:43.641224 kubelet[2722]: I0813 01:14:43.641172 2722 kubelet.go:2405] "Pod admission denied" podUID="b7d146f2-4a0c-4a4d-8002-93eabdbaac47" pod="tigera-operator/tigera-operator-747864d56d-44x62" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:43.769979 kubelet[2722]: I0813 01:14:43.768576 2722 kubelet.go:2405] "Pod admission denied" podUID="f8c69f62-c707-4a7b-ab1c-2bd10ca892eb" pod="tigera-operator/tigera-operator-747864d56d-hvt88" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:43.993252 kubelet[2722]: I0813 01:14:43.993182 2722 kubelet.go:2405] "Pod admission denied" podUID="97f36657-0ba2-43b3-850f-a9772edf203c" pod="tigera-operator/tigera-operator-747864d56d-mn29b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:44.073395 kubelet[2722]: E0813 01:14:44.073248 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:14:44.132568 kubelet[2722]: I0813 01:14:44.131608 2722 kubelet.go:2405] "Pod admission denied" podUID="33aad336-d9bb-46bb-b3c5-79f88a9fe498" pod="tigera-operator/tigera-operator-747864d56d-4njwk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:44.222490 kubelet[2722]: I0813 01:14:44.222443 2722 kubelet.go:2405] "Pod admission denied" podUID="cabea71a-b169-4783-bf19-7f660591c346" pod="tigera-operator/tigera-operator-747864d56d-5tg5c" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:44.373331 kubelet[2722]: I0813 01:14:44.371633 2722 kubelet.go:2405] "Pod admission denied" podUID="15316398-a0c1-4e08-8e60-adb211a3d2d8" pod="tigera-operator/tigera-operator-747864d56d-65km5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:44.502516 kubelet[2722]: I0813 01:14:44.502439 2722 kubelet.go:2405] "Pod admission denied" podUID="4c68d476-bdb9-43fb-af84-3d56a4686b58" pod="tigera-operator/tigera-operator-747864d56d-d5gvb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:44.599350 kubelet[2722]: I0813 01:14:44.599296 2722 kubelet.go:2405] "Pod admission denied" podUID="fcdf0bf8-a373-4e08-b627-f2c77e6a3a7c" pod="tigera-operator/tigera-operator-747864d56d-qdcmz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:44.702803 kubelet[2722]: I0813 01:14:44.702750 2722 kubelet.go:2405] "Pod admission denied" podUID="714e53a9-7ab4-4dbe-a532-e567900c76ec" pod="tigera-operator/tigera-operator-747864d56d-glzfs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:44.812666 kubelet[2722]: I0813 01:14:44.812600 2722 kubelet.go:2405] "Pod admission denied" podUID="f2b66594-7262-4394-8ad6-c0c824432e41" pod="tigera-operator/tigera-operator-747864d56d-c2s84" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:45.000968 kubelet[2722]: I0813 01:14:45.000583 2722 kubelet.go:2405] "Pod admission denied" podUID="8287dea1-a21e-40fe-9a8c-bc2bb4f8b6f2" pod="tigera-operator/tigera-operator-747864d56d-5fx9d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:45.103508 kubelet[2722]: I0813 01:14:45.103403 2722 kubelet.go:2405] "Pod admission denied" podUID="d2c93a6a-06dc-43f7-a3ee-15ce5cbab98f" pod="tigera-operator/tigera-operator-747864d56d-6j4zj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:45.221761 kubelet[2722]: I0813 01:14:45.221360 2722 kubelet.go:2405] "Pod admission denied" podUID="7172e189-2a60-44f2-b20c-4a6d9f1fbfa3" pod="tigera-operator/tigera-operator-747864d56d-vntb4" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:45.402585 kubelet[2722]: I0813 01:14:45.402388 2722 kubelet.go:2405] "Pod admission denied" podUID="32c172ce-f979-4723-9359-eb75d4b43dc9" pod="tigera-operator/tigera-operator-747864d56d-nnkkl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:45.515433 kubelet[2722]: I0813 01:14:45.515367 2722 kubelet.go:2405] "Pod admission denied" podUID="f54830ab-0b35-40ba-8d27-ab538dc0a750" pod="tigera-operator/tigera-operator-747864d56d-7vfqd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:45.596661 kubelet[2722]: I0813 01:14:45.596613 2722 kubelet.go:2405] "Pod admission denied" podUID="80fd5c46-71b3-464a-9497-fbe15c5b3c7e" pod="tigera-operator/tigera-operator-747864d56d-zbns8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:45.739296 kubelet[2722]: I0813 01:14:45.739238 2722 kubelet.go:2405] "Pod admission denied" podUID="143aabad-f875-48b6-a961-ff40e13a67df" pod="tigera-operator/tigera-operator-747864d56d-6ftnc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:45.859630 kubelet[2722]: I0813 01:14:45.859539 2722 kubelet.go:2405] "Pod admission denied" podUID="d9c6253d-6541-4e2b-8ab4-812529c4db13" pod="tigera-operator/tigera-operator-747864d56d-6nhjw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:46.047057 kubelet[2722]: I0813 01:14:46.044580 2722 kubelet.go:2405] "Pod admission denied" podUID="788e0be9-f192-4feb-a883-51846612cd5b" pod="tigera-operator/tigera-operator-747864d56d-t24j4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:46.155219 kubelet[2722]: I0813 01:14:46.155163 2722 kubelet.go:2405] "Pod admission denied" podUID="117e299b-9bd3-48cf-9f9f-ca0f0644b0d6" pod="tigera-operator/tigera-operator-747864d56d-b6jxj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:46.269943 kubelet[2722]: I0813 01:14:46.269872 2722 kubelet.go:2405] "Pod admission denied" podUID="4db81a96-caef-4cdc-b6c5-3db3703ae325" pod="tigera-operator/tigera-operator-747864d56d-qnvxs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:46.372971 kubelet[2722]: I0813 01:14:46.372772 2722 kubelet.go:2405] "Pod admission denied" podUID="941481f7-42a9-4026-8b9e-ed2ed9073409" pod="tigera-operator/tigera-operator-747864d56d-2cm99" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:46.517849 kubelet[2722]: I0813 01:14:46.517778 2722 kubelet.go:2405] "Pod admission denied" podUID="eaa16ef0-230e-45b0-aac9-ec07c9b9d6ad" pod="tigera-operator/tigera-operator-747864d56d-b5l8h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:46.606222 kubelet[2722]: I0813 01:14:46.606119 2722 kubelet.go:2405] "Pod admission denied" podUID="d347475e-0f64-43d0-89fc-0ca62a18bafa" pod="tigera-operator/tigera-operator-747864d56d-mlx4d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:46.727930 kubelet[2722]: I0813 01:14:46.725470 2722 kubelet.go:2405] "Pod admission denied" podUID="83d5053d-1512-4a04-9233-598926821a2a" pod="tigera-operator/tigera-operator-747864d56d-x785c" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:46.854755 kubelet[2722]: I0813 01:14:46.854694 2722 kubelet.go:2405] "Pod admission denied" podUID="9a20622d-753b-4816-9150-11c1ebefe472" pod="tigera-operator/tigera-operator-747864d56d-c7b95" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:47.021011 kubelet[2722]: I0813 01:14:47.018938 2722 kubelet.go:2405] "Pod admission denied" podUID="89371725-ee2f-4ec7-8144-e6e147d34cb3" pod="tigera-operator/tigera-operator-747864d56d-dgkjz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:47.156094 kubelet[2722]: I0813 01:14:47.156048 2722 kubelet.go:2405] "Pod admission denied" podUID="89b97801-23db-4505-9667-5b8db73f6a73" pod="tigera-operator/tigera-operator-747864d56d-bdk4f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:47.336544 kubelet[2722]: I0813 01:14:47.336188 2722 kubelet.go:2405] "Pod admission denied" podUID="52304708-61fb-4b7a-8106-57711d00fada" pod="tigera-operator/tigera-operator-747864d56d-pmf9n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:47.448884 kubelet[2722]: I0813 01:14:47.448826 2722 kubelet.go:2405] "Pod admission denied" podUID="030b2705-11d0-4de2-bae2-86fbbb76ec3e" pod="tigera-operator/tigera-operator-747864d56d-6h9rh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:47.573242 kubelet[2722]: I0813 01:14:47.572334 2722 kubelet.go:2405] "Pod admission denied" podUID="fd72be3d-b2cd-4e71-8a09-56636be6e492" pod="tigera-operator/tigera-operator-747864d56d-ftstp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:47.649988 kubelet[2722]: I0813 01:14:47.649669 2722 kubelet.go:2405] "Pod admission denied" podUID="f6248ce3-001d-410f-932a-376ba6f9ddde" pod="tigera-operator/tigera-operator-747864d56d-2qr9p" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:47.775761 kubelet[2722]: I0813 01:14:47.775685 2722 kubelet.go:2405] "Pod admission denied" podUID="b1e014a7-2208-46dc-a028-282eaf98a1bc" pod="tigera-operator/tigera-operator-747864d56d-c7xww" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:48.011747 kubelet[2722]: I0813 01:14:48.010651 2722 kubelet.go:2405] "Pod admission denied" podUID="5b4741cd-8c06-4b61-94ed-c47ca7acae65" pod="tigera-operator/tigera-operator-747864d56d-lbh94" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:48.057951 containerd[1550]: time="2025-08-13T01:14:48.057886415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7lv4,Uid:6b834979-32a4-464b-9898-ef87b1042a9e,Namespace:calico-system,Attempt:0,}" Aug 13 01:14:48.058463 kubelet[2722]: E0813 01:14:48.058404 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:14:48.059918 containerd[1550]: time="2025-08-13T01:14:48.059234868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p259x,Uid:a5b0b8ae-a381-43cc-8adc-4e3ee01749bd,Namespace:kube-system,Attempt:0,}" Aug 13 01:14:48.067501 containerd[1550]: time="2025-08-13T01:14:48.067455912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cddc95b58-6t6z7,Uid:2dab385f-2367-4e01-8d78-2247bcba7bcc,Namespace:calico-system,Attempt:0,}" Aug 13 01:14:48.207974 containerd[1550]: time="2025-08-13T01:14:48.205168167Z" level=error msg="Failed to destroy network for sandbox \"9fa58ba89cf332033ce06803583fb2f20a23645ff7766d41ab43aac117765edd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:48.208238 systemd[1]: run-netns-cni\x2db592db8f\x2ded18\x2db8d3\x2d820a\x2d5764a282c9cd.mount: Deactivated successfully. Aug 13 01:14:48.210745 containerd[1550]: time="2025-08-13T01:14:48.210691105Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7lv4,Uid:6b834979-32a4-464b-9898-ef87b1042a9e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fa58ba89cf332033ce06803583fb2f20a23645ff7766d41ab43aac117765edd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:48.218404 kubelet[2722]: E0813 01:14:48.217383 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fa58ba89cf332033ce06803583fb2f20a23645ff7766d41ab43aac117765edd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:48.218404 kubelet[2722]: E0813 01:14:48.217446 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fa58ba89cf332033ce06803583fb2f20a23645ff7766d41ab43aac117765edd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:14:48.218404 kubelet[2722]: E0813 01:14:48.217467 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9fa58ba89cf332033ce06803583fb2f20a23645ff7766d41ab43aac117765edd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:14:48.218404 kubelet[2722]: E0813 
01:14:48.217515 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l7lv4_calico-system(6b834979-32a4-464b-9898-ef87b1042a9e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l7lv4_calico-system(6b834979-32a4-464b-9898-ef87b1042a9e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9fa58ba89cf332033ce06803583fb2f20a23645ff7766d41ab43aac117765edd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l7lv4" podUID="6b834979-32a4-464b-9898-ef87b1042a9e" Aug 13 01:14:48.223833 kubelet[2722]: I0813 01:14:48.223046 2722 kubelet.go:2405] "Pod admission denied" podUID="d1c8fa42-e3c1-42bd-9fab-842f8a1ba9ed" pod="tigera-operator/tigera-operator-747864d56d-n5r25" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:48.235326 containerd[1550]: time="2025-08-13T01:14:48.235128200Z" level=error msg="Failed to destroy network for sandbox \"4e063a3298955a6af85cc498ad20998c9950fa91725aece4e79b899c52f3385d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:48.236287 containerd[1550]: time="2025-08-13T01:14:48.236241234Z" level=error msg="Failed to destroy network for sandbox \"c85127c5cf9e9d85d797d9d1e395eb2fddec906b6c48369c91686ff35797218a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:48.238549 systemd[1]: run-netns-cni\x2de44483e0\x2d8f48\x2d6c59\x2dac62\x2dded2887a0a53.mount: Deactivated successfully. 
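The admission-denial storm and the earlier ImagePullBackOff at 01:14:40 (writing /usr/bin/calico-node under /var/lib/containerd/tmpmounts failed with "no space left on device") both point at an exhausted filesystem on this node. A quick confirmation sketch using only the standard library; the paths are the containerd and kubelet defaults, the former referenced directly by the log, and the 10% threshold is purely illustrative:

    #!/usr/bin/env python3
    """Report free space on the filesystems behind the containerd and kubelet state dirs."""
    import shutil

    PATHS = ["/var/lib/containerd", "/var/lib/kubelet"]  # default state directories
    THRESHOLD = 0.10  # illustrative: flag anything under 10% free

    for path in PATHS:
        try:
            usage = shutil.disk_usage(path)
        except FileNotFoundError:
            print(f"{path}: not present on this host")
            continue
        free_frac = usage.free / usage.total
        status = "LOW" if free_frac < THRESHOLD else "ok"
        print(f"{path}: {usage.free / 2**30:.1f} GiB free of "
              f"{usage.total / 2**30:.1f} GiB ({free_frac:.1%}) {status}")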
Aug 13 01:14:48.239690 containerd[1550]: time="2025-08-13T01:14:48.238864700Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cddc95b58-6t6z7,Uid:2dab385f-2367-4e01-8d78-2247bcba7bcc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c85127c5cf9e9d85d797d9d1e395eb2fddec906b6c48369c91686ff35797218a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:48.239954 kubelet[2722]: E0813 01:14:48.239736 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c85127c5cf9e9d85d797d9d1e395eb2fddec906b6c48369c91686ff35797218a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:48.240960 kubelet[2722]: E0813 01:14:48.240097 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c85127c5cf9e9d85d797d9d1e395eb2fddec906b6c48369c91686ff35797218a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:14:48.240960 kubelet[2722]: E0813 01:14:48.240130 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c85127c5cf9e9d85d797d9d1e395eb2fddec906b6c48369c91686ff35797218a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:14:48.240960 kubelet[2722]: E0813 01:14:48.240175 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-cddc95b58-6t6z7_calico-system(2dab385f-2367-4e01-8d78-2247bcba7bcc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-cddc95b58-6t6z7_calico-system(2dab385f-2367-4e01-8d78-2247bcba7bcc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c85127c5cf9e9d85d797d9d1e395eb2fddec906b6c48369c91686ff35797218a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" podUID="2dab385f-2367-4e01-8d78-2247bcba7bcc" Aug 13 01:14:48.241432 containerd[1550]: time="2025-08-13T01:14:48.241324886Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p259x,Uid:a5b0b8ae-a381-43cc-8adc-4e3ee01749bd,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e063a3298955a6af85cc498ad20998c9950fa91725aece4e79b899c52f3385d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:48.241541 kubelet[2722]: E0813 01:14:48.241479 2722 log.go:32] 
"RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e063a3298955a6af85cc498ad20998c9950fa91725aece4e79b899c52f3385d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:48.241597 kubelet[2722]: E0813 01:14:48.241552 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e063a3298955a6af85cc498ad20998c9950fa91725aece4e79b899c52f3385d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:14:48.241597 kubelet[2722]: E0813 01:14:48.241574 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4e063a3298955a6af85cc498ad20998c9950fa91725aece4e79b899c52f3385d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:14:48.241657 kubelet[2722]: E0813 01:14:48.241620 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-p259x_kube-system(a5b0b8ae-a381-43cc-8adc-4e3ee01749bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-p259x_kube-system(a5b0b8ae-a381-43cc-8adc-4e3ee01749bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4e063a3298955a6af85cc498ad20998c9950fa91725aece4e79b899c52f3385d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-p259x" podUID="a5b0b8ae-a381-43cc-8adc-4e3ee01749bd" Aug 13 01:14:48.313316 kubelet[2722]: I0813 01:14:48.312628 2722 kubelet.go:2405] "Pod admission denied" podUID="a5a68725-2a88-4918-8cf1-b55b94fd1be7" pod="tigera-operator/tigera-operator-747864d56d-nkvkp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:48.453990 kubelet[2722]: I0813 01:14:48.453937 2722 kubelet.go:2405] "Pod admission denied" podUID="400ff655-a9f5-4384-80a0-93fca9732cb0" pod="tigera-operator/tigera-operator-747864d56d-mst7w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:48.524207 kubelet[2722]: I0813 01:14:48.524157 2722 kubelet.go:2405] "Pod admission denied" podUID="b54c071a-3cc6-4be3-8ac5-cf411ecaf023" pod="tigera-operator/tigera-operator-747864d56d-t2fkv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:48.682791 kubelet[2722]: I0813 01:14:48.679173 2722 kubelet.go:2405] "Pod admission denied" podUID="6ee5a68d-f700-42ae-b745-b9ed89004ecf" pod="tigera-operator/tigera-operator-747864d56d-xdj7n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:48.770178 kubelet[2722]: I0813 01:14:48.770133 2722 kubelet.go:2405] "Pod admission denied" podUID="35bb5083-8999-4f8e-acf2-336f7a43d769" pod="tigera-operator/tigera-operator-747864d56d-6shdz" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:14:48.850325 kubelet[2722]: I0813 01:14:48.850273 2722 kubelet.go:2405] "Pod admission denied" podUID="c88c6893-f1fd-4d17-9a85-5d60b7098e5a" pod="tigera-operator/tigera-operator-747864d56d-9x4df" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:14:49.064105 systemd[1]: run-netns-cni\x2db398aaa6\x2d997f\x2dc2ee\x2d60dd\x2deefb154f0a57.mount: Deactivated successfully. Aug 13 01:14:49.172314 systemd[1]: Started sshd@7-172.233.214.103:22-147.75.109.163:39284.service - OpenSSH per-connection server daemon (147.75.109.163:39284). Aug 13 01:14:49.506232 sshd[5025]: Accepted publickey for core from 147.75.109.163 port 39284 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:14:49.508054 sshd-session[5025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:14:49.514278 systemd-logind[1522]: New session 8 of user core. Aug 13 01:14:49.522033 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 13 01:14:49.806563 sshd[5027]: Connection closed by 147.75.109.163 port 39284 Aug 13 01:14:49.805304 sshd-session[5025]: pam_unix(sshd:session): session closed for user core Aug 13 01:14:49.812054 systemd-logind[1522]: Session 8 logged out. Waiting for processes to exit. Aug 13 01:14:49.813520 systemd[1]: sshd@7-172.233.214.103:22-147.75.109.163:39284.service: Deactivated successfully. Aug 13 01:14:49.816228 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 01:14:49.817571 systemd-logind[1522]: Removed session 8. Aug 13 01:14:50.058098 kubelet[2722]: E0813 01:14:50.057586 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:14:50.060705 containerd[1550]: time="2025-08-13T01:14:50.060660203Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fgsjn,Uid:27718112-1bb9-402a-89c8-f4890dedf664,Namespace:kube-system,Attempt:0,}" Aug 13 01:14:50.104196 containerd[1550]: time="2025-08-13T01:14:50.104150969Z" level=error msg="Failed to destroy network for sandbox \"4c2f693d944a9b4d761cd3c4e4baf1796329ae86ff49751134d89bec4aeaf204\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:50.106323 systemd[1]: run-netns-cni\x2dc9e5eb00\x2d716f\x2d3830\x2d7590\x2d8fae661e4797.mount: Deactivated successfully. 
Aug 13 01:14:50.107310 containerd[1550]: time="2025-08-13T01:14:50.107064533Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fgsjn,Uid:27718112-1bb9-402a-89c8-f4890dedf664,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c2f693d944a9b4d761cd3c4e4baf1796329ae86ff49751134d89bec4aeaf204\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:50.108006 kubelet[2722]: E0813 01:14:50.107973 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c2f693d944a9b4d761cd3c4e4baf1796329ae86ff49751134d89bec4aeaf204\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:14:50.108160 kubelet[2722]: E0813 01:14:50.108130 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c2f693d944a9b4d761cd3c4e4baf1796329ae86ff49751134d89bec4aeaf204\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:14:50.108240 kubelet[2722]: E0813 01:14:50.108161 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4c2f693d944a9b4d761cd3c4e4baf1796329ae86ff49751134d89bec4aeaf204\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:14:50.108240 kubelet[2722]: E0813 01:14:50.108215 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fgsjn_kube-system(27718112-1bb9-402a-89c8-f4890dedf664)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fgsjn_kube-system(27718112-1bb9-402a-89c8-f4890dedf664)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4c2f693d944a9b4d761cd3c4e4baf1796329ae86ff49751134d89bec4aeaf204\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fgsjn" podUID="27718112-1bb9-402a-89c8-f4890dedf664" Aug 13 01:14:51.058637 kubelet[2722]: E0813 01:14:51.058346 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:14:51.059834 kubelet[2722]: E0813 01:14:51.059743 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on 
/var/lib/containerd/tmpmounts/containerd-mount2063325269: write /var/lib/containerd/tmpmounts/containerd-mount2063325269/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-hq29b" podUID="3c0f3b86-7d63-44df-843e-763eb95a8b94" Aug 13 01:14:54.067719 kubelet[2722]: E0813 01:14:54.065550 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:14:54.870311 systemd[1]: Started sshd@8-172.233.214.103:22-147.75.109.163:39298.service - OpenSSH per-connection server daemon (147.75.109.163:39298). Aug 13 01:14:55.211951 sshd[5068]: Accepted publickey for core from 147.75.109.163 port 39298 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:14:55.213680 sshd-session[5068]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:14:55.218996 systemd-logind[1522]: New session 9 of user core. Aug 13 01:14:55.225068 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 01:14:55.515527 sshd[5070]: Connection closed by 147.75.109.163 port 39298 Aug 13 01:14:55.516634 sshd-session[5068]: pam_unix(sshd:session): session closed for user core Aug 13 01:14:55.522143 systemd[1]: sshd@8-172.233.214.103:22-147.75.109.163:39298.service: Deactivated successfully. Aug 13 01:14:55.524755 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 01:14:55.526687 systemd-logind[1522]: Session 9 logged out. Waiting for processes to exit. Aug 13 01:14:55.528353 systemd-logind[1522]: Removed session 9. Aug 13 01:15:00.058672 kubelet[2722]: E0813 01:15:00.058373 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:15:00.061410 containerd[1550]: time="2025-08-13T01:15:00.061367928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7lv4,Uid:6b834979-32a4-464b-9898-ef87b1042a9e,Namespace:calico-system,Attempt:0,}" Aug 13 01:15:00.062010 containerd[1550]: time="2025-08-13T01:15:00.061852755Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p259x,Uid:a5b0b8ae-a381-43cc-8adc-4e3ee01749bd,Namespace:kube-system,Attempt:0,}" Aug 13 01:15:00.127389 containerd[1550]: time="2025-08-13T01:15:00.127321277Z" level=error msg="Failed to destroy network for sandbox \"6541fb5f4a7da96f8412e0ef289535b603ce29e02b100955ac53e97323562bfd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:00.129633 containerd[1550]: time="2025-08-13T01:15:00.129606477Z" level=error msg="Failed to destroy network for sandbox \"a205c42ff923d8c9e3af1108e6b80b484a2b59103024271006615046b86aab1e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:00.131164 containerd[1550]: time="2025-08-13T01:15:00.131052711Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p259x,Uid:a5b0b8ae-a381-43cc-8adc-4e3ee01749bd,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6541fb5f4a7da96f8412e0ef289535b603ce29e02b100955ac53e97323562bfd\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:00.131522 kubelet[2722]: E0813 01:15:00.131486 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6541fb5f4a7da96f8412e0ef289535b603ce29e02b100955ac53e97323562bfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:00.131522 kubelet[2722]: E0813 01:15:00.131551 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6541fb5f4a7da96f8412e0ef289535b603ce29e02b100955ac53e97323562bfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:15:00.131797 kubelet[2722]: E0813 01:15:00.131577 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6541fb5f4a7da96f8412e0ef289535b603ce29e02b100955ac53e97323562bfd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:15:00.131797 kubelet[2722]: E0813 01:15:00.131638 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-p259x_kube-system(a5b0b8ae-a381-43cc-8adc-4e3ee01749bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-p259x_kube-system(a5b0b8ae-a381-43cc-8adc-4e3ee01749bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6541fb5f4a7da96f8412e0ef289535b603ce29e02b100955ac53e97323562bfd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-p259x" podUID="a5b0b8ae-a381-43cc-8adc-4e3ee01749bd" Aug 13 01:15:00.131867 containerd[1550]: time="2025-08-13T01:15:00.131674867Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7lv4,Uid:6b834979-32a4-464b-9898-ef87b1042a9e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a205c42ff923d8c9e3af1108e6b80b484a2b59103024271006615046b86aab1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:00.132434 kubelet[2722]: E0813 01:15:00.132005 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a205c42ff923d8c9e3af1108e6b80b484a2b59103024271006615046b86aab1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:00.132434 kubelet[2722]: E0813 01:15:00.132046 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: 
code = Unknown desc = failed to setup network for sandbox \"a205c42ff923d8c9e3af1108e6b80b484a2b59103024271006615046b86aab1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:15:00.132434 kubelet[2722]: E0813 01:15:00.132063 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a205c42ff923d8c9e3af1108e6b80b484a2b59103024271006615046b86aab1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:15:00.132434 kubelet[2722]: E0813 01:15:00.132093 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l7lv4_calico-system(6b834979-32a4-464b-9898-ef87b1042a9e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l7lv4_calico-system(6b834979-32a4-464b-9898-ef87b1042a9e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a205c42ff923d8c9e3af1108e6b80b484a2b59103024271006615046b86aab1e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l7lv4" podUID="6b834979-32a4-464b-9898-ef87b1042a9e" Aug 13 01:15:00.133464 systemd[1]: run-netns-cni\x2d824bd5e1\x2d6a38\x2d21ce\x2dc645\x2d3d6227fce539.mount: Deactivated successfully. Aug 13 01:15:00.137278 systemd[1]: run-netns-cni\x2db5f6f3ab\x2d154f\x2d3b59\x2d17dd\x2d0a02c74bd676.mount: Deactivated successfully. Aug 13 01:15:00.578215 systemd[1]: Started sshd@9-172.233.214.103:22-147.75.109.163:38590.service - OpenSSH per-connection server daemon (147.75.109.163:38590). Aug 13 01:15:00.939618 sshd[5134]: Accepted publickey for core from 147.75.109.163 port 38590 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:15:00.941161 sshd-session[5134]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:15:00.945952 systemd-logind[1522]: New session 10 of user core. Aug 13 01:15:00.953231 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 01:15:01.239961 sshd[5138]: Connection closed by 147.75.109.163 port 38590 Aug 13 01:15:01.240694 sshd-session[5134]: pam_unix(sshd:session): session closed for user core Aug 13 01:15:01.244742 systemd-logind[1522]: Session 10 logged out. Waiting for processes to exit. Aug 13 01:15:01.245129 systemd[1]: sshd@9-172.233.214.103:22-147.75.109.163:38590.service: Deactivated successfully. Aug 13 01:15:01.247258 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 01:15:01.248588 systemd-logind[1522]: Removed session 10. 
Aug 13 01:15:02.059476 kubelet[2722]: E0813 01:15:02.058871 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:15:02.060614 containerd[1550]: time="2025-08-13T01:15:02.060203776Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fgsjn,Uid:27718112-1bb9-402a-89c8-f4890dedf664,Namespace:kube-system,Attempt:0,}" Aug 13 01:15:02.121680 containerd[1550]: time="2025-08-13T01:15:02.121626036Z" level=error msg="Failed to destroy network for sandbox \"b4c1c4b00b10af8bcbd1704a2c73f839f04dcde4166ee9ed2e014213db1548e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:02.124726 systemd[1]: run-netns-cni\x2d0d03918a\x2d147c\x2dd444\x2d21a0\x2d7100ddc21cf2.mount: Deactivated successfully. Aug 13 01:15:02.125364 containerd[1550]: time="2025-08-13T01:15:02.124957241Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fgsjn,Uid:27718112-1bb9-402a-89c8-f4890dedf664,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4c1c4b00b10af8bcbd1704a2c73f839f04dcde4166ee9ed2e014213db1548e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:02.125537 kubelet[2722]: E0813 01:15:02.125482 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4c1c4b00b10af8bcbd1704a2c73f839f04dcde4166ee9ed2e014213db1548e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:02.125654 kubelet[2722]: E0813 01:15:02.125557 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4c1c4b00b10af8bcbd1704a2c73f839f04dcde4166ee9ed2e014213db1548e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:15:02.125654 kubelet[2722]: E0813 01:15:02.125587 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b4c1c4b00b10af8bcbd1704a2c73f839f04dcde4166ee9ed2e014213db1548e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:15:02.125706 kubelet[2722]: E0813 01:15:02.125650 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fgsjn_kube-system(27718112-1bb9-402a-89c8-f4890dedf664)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fgsjn_kube-system(27718112-1bb9-402a-89c8-f4890dedf664)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b4c1c4b00b10af8bcbd1704a2c73f839f04dcde4166ee9ed2e014213db1548e5\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fgsjn" podUID="27718112-1bb9-402a-89c8-f4890dedf664" Aug 13 01:15:03.058432 containerd[1550]: time="2025-08-13T01:15:03.058309052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cddc95b58-6t6z7,Uid:2dab385f-2367-4e01-8d78-2247bcba7bcc,Namespace:calico-system,Attempt:0,}" Aug 13 01:15:03.058613 kubelet[2722]: E0813 01:15:03.058556 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2063325269: write /var/lib/containerd/tmpmounts/containerd-mount2063325269/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-hq29b" podUID="3c0f3b86-7d63-44df-843e-763eb95a8b94" Aug 13 01:15:03.116279 containerd[1550]: time="2025-08-13T01:15:03.116219692Z" level=error msg="Failed to destroy network for sandbox \"6d42940ad7e26d3cc8e232ebc39086902b9b969432a9684a738720c4698665b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:03.118487 systemd[1]: run-netns-cni\x2d584617bb\x2d653c\x2d030e\x2dea48\x2d2c67ad8bef00.mount: Deactivated successfully. Aug 13 01:15:03.119557 containerd[1550]: time="2025-08-13T01:15:03.119509808Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cddc95b58-6t6z7,Uid:2dab385f-2367-4e01-8d78-2247bcba7bcc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d42940ad7e26d3cc8e232ebc39086902b9b969432a9684a738720c4698665b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:03.120472 kubelet[2722]: E0813 01:15:03.119789 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d42940ad7e26d3cc8e232ebc39086902b9b969432a9684a738720c4698665b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:03.120472 kubelet[2722]: E0813 01:15:03.119860 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d42940ad7e26d3cc8e232ebc39086902b9b969432a9684a738720c4698665b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:15:03.120472 kubelet[2722]: E0813 01:15:03.119883 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"6d42940ad7e26d3cc8e232ebc39086902b9b969432a9684a738720c4698665b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:15:03.120472 kubelet[2722]: E0813 01:15:03.120007 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-cddc95b58-6t6z7_calico-system(2dab385f-2367-4e01-8d78-2247bcba7bcc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-cddc95b58-6t6z7_calico-system(2dab385f-2367-4e01-8d78-2247bcba7bcc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d42940ad7e26d3cc8e232ebc39086902b9b969432a9684a738720c4698665b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" podUID="2dab385f-2367-4e01-8d78-2247bcba7bcc" Aug 13 01:15:06.303182 systemd[1]: Started sshd@10-172.233.214.103:22-147.75.109.163:38592.service - OpenSSH per-connection server daemon (147.75.109.163:38592). Aug 13 01:15:06.646413 sshd[5203]: Accepted publickey for core from 147.75.109.163 port 38592 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:15:06.647846 sshd-session[5203]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:15:06.653963 systemd-logind[1522]: New session 11 of user core. Aug 13 01:15:06.665021 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 13 01:15:06.951051 sshd[5205]: Connection closed by 147.75.109.163 port 38592 Aug 13 01:15:06.951744 sshd-session[5203]: pam_unix(sshd:session): session closed for user core Aug 13 01:15:06.960590 systemd[1]: sshd@10-172.233.214.103:22-147.75.109.163:38592.service: Deactivated successfully. Aug 13 01:15:06.963357 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 01:15:06.964556 systemd-logind[1522]: Session 11 logged out. Waiting for processes to exit. Aug 13 01:15:06.967182 systemd-logind[1522]: Removed session 11. Aug 13 01:15:07.015327 systemd[1]: Started sshd@11-172.233.214.103:22-147.75.109.163:38594.service - OpenSSH per-connection server daemon (147.75.109.163:38594). Aug 13 01:15:07.342857 sshd[5218]: Accepted publickey for core from 147.75.109.163 port 38594 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:15:07.344866 sshd-session[5218]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:15:07.351061 systemd-logind[1522]: New session 12 of user core. Aug 13 01:15:07.357023 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 13 01:15:07.660661 sshd[5220]: Connection closed by 147.75.109.163 port 38594 Aug 13 01:15:07.661320 sshd-session[5218]: pam_unix(sshd:session): session closed for user core Aug 13 01:15:07.665982 systemd[1]: sshd@11-172.233.214.103:22-147.75.109.163:38594.service: Deactivated successfully. Aug 13 01:15:07.668070 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 01:15:07.669304 systemd-logind[1522]: Session 12 logged out. Waiting for processes to exit. Aug 13 01:15:07.670576 systemd-logind[1522]: Removed session 12. 
Aug 13 01:15:07.723725 systemd[1]: Started sshd@12-172.233.214.103:22-147.75.109.163:38596.service - OpenSSH per-connection server daemon (147.75.109.163:38596). Aug 13 01:15:08.065785 sshd[5230]: Accepted publickey for core from 147.75.109.163 port 38596 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:15:08.068214 sshd-session[5230]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:15:08.074260 systemd-logind[1522]: New session 13 of user core. Aug 13 01:15:08.081047 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 01:15:08.374090 sshd[5232]: Connection closed by 147.75.109.163 port 38596 Aug 13 01:15:08.375856 sshd-session[5230]: pam_unix(sshd:session): session closed for user core Aug 13 01:15:08.379822 systemd-logind[1522]: Session 13 logged out. Waiting for processes to exit. Aug 13 01:15:08.380755 systemd[1]: sshd@12-172.233.214.103:22-147.75.109.163:38596.service: Deactivated successfully. Aug 13 01:15:08.383154 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 01:15:08.386776 systemd-logind[1522]: Removed session 13. Aug 13 01:15:12.058729 containerd[1550]: time="2025-08-13T01:15:12.058142857Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7lv4,Uid:6b834979-32a4-464b-9898-ef87b1042a9e,Namespace:calico-system,Attempt:0,}" Aug 13 01:15:12.118691 containerd[1550]: time="2025-08-13T01:15:12.118630801Z" level=error msg="Failed to destroy network for sandbox \"e1ba6790f3a77314c9974b6abd5a8f67425684f64e3ea4fa7399376b7b7161cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:12.120682 systemd[1]: run-netns-cni\x2d5795e536\x2d60bc\x2d3c54\x2d9740\x2d670262e3985b.mount: Deactivated successfully. 
Aug 13 01:15:12.122508 containerd[1550]: time="2025-08-13T01:15:12.122447737Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7lv4,Uid:6b834979-32a4-464b-9898-ef87b1042a9e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1ba6790f3a77314c9974b6abd5a8f67425684f64e3ea4fa7399376b7b7161cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:12.122802 kubelet[2722]: E0813 01:15:12.122724 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1ba6790f3a77314c9974b6abd5a8f67425684f64e3ea4fa7399376b7b7161cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:12.123212 kubelet[2722]: E0813 01:15:12.122820 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1ba6790f3a77314c9974b6abd5a8f67425684f64e3ea4fa7399376b7b7161cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:15:12.123212 kubelet[2722]: E0813 01:15:12.122859 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1ba6790f3a77314c9974b6abd5a8f67425684f64e3ea4fa7399376b7b7161cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:15:12.123212 kubelet[2722]: E0813 01:15:12.122968 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-l7lv4_calico-system(6b834979-32a4-464b-9898-ef87b1042a9e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-l7lv4_calico-system(6b834979-32a4-464b-9898-ef87b1042a9e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e1ba6790f3a77314c9974b6abd5a8f67425684f64e3ea4fa7399376b7b7161cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-l7lv4" podUID="6b834979-32a4-464b-9898-ef87b1042a9e" Aug 13 01:15:13.058233 kubelet[2722]: E0813 01:15:13.058184 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:15:13.058873 containerd[1550]: time="2025-08-13T01:15:13.058793744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p259x,Uid:a5b0b8ae-a381-43cc-8adc-4e3ee01749bd,Namespace:kube-system,Attempt:0,}" Aug 13 01:15:13.116409 containerd[1550]: time="2025-08-13T01:15:13.116340053Z" level=error msg="Failed to destroy network for sandbox \"284be5a6572348e4f9b443ae6ee4d8c576c733d794f64c4b6f125954284c35bf\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:13.118683 systemd[1]: run-netns-cni\x2db4670391\x2d57c0\x2d2355\x2df94c\x2d67ef6f979400.mount: Deactivated successfully. Aug 13 01:15:13.119050 containerd[1550]: time="2025-08-13T01:15:13.119005323Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p259x,Uid:a5b0b8ae-a381-43cc-8adc-4e3ee01749bd,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"284be5a6572348e4f9b443ae6ee4d8c576c733d794f64c4b6f125954284c35bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:13.119874 kubelet[2722]: E0813 01:15:13.119777 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"284be5a6572348e4f9b443ae6ee4d8c576c733d794f64c4b6f125954284c35bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:13.119983 kubelet[2722]: E0813 01:15:13.119908 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"284be5a6572348e4f9b443ae6ee4d8c576c733d794f64c4b6f125954284c35bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:15:13.119983 kubelet[2722]: E0813 01:15:13.119943 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"284be5a6572348e4f9b443ae6ee4d8c576c733d794f64c4b6f125954284c35bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:15:13.120379 kubelet[2722]: E0813 01:15:13.120022 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-p259x_kube-system(a5b0b8ae-a381-43cc-8adc-4e3ee01749bd)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-p259x_kube-system(a5b0b8ae-a381-43cc-8adc-4e3ee01749bd)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"284be5a6572348e4f9b443ae6ee4d8c576c733d794f64c4b6f125954284c35bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-p259x" podUID="a5b0b8ae-a381-43cc-8adc-4e3ee01749bd" Aug 13 01:15:13.443556 systemd[1]: Started sshd@13-172.233.214.103:22-147.75.109.163:57748.service - OpenSSH per-connection server daemon (147.75.109.163:57748). 
Aug 13 01:15:13.791384 sshd[5297]: Accepted publickey for core from 147.75.109.163 port 57748 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:15:13.793029 sshd-session[5297]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:15:13.798722 systemd-logind[1522]: New session 14 of user core. Aug 13 01:15:13.804031 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 13 01:15:14.097625 sshd[5299]: Connection closed by 147.75.109.163 port 57748 Aug 13 01:15:14.098617 sshd-session[5297]: pam_unix(sshd:session): session closed for user core Aug 13 01:15:14.103256 systemd[1]: sshd@13-172.233.214.103:22-147.75.109.163:57748.service: Deactivated successfully. Aug 13 01:15:14.105108 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 01:15:14.106800 systemd-logind[1522]: Session 14 logged out. Waiting for processes to exit. Aug 13 01:15:14.108341 systemd-logind[1522]: Removed session 14. Aug 13 01:15:16.062225 containerd[1550]: time="2025-08-13T01:15:16.062140828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 01:15:16.064522 containerd[1550]: time="2025-08-13T01:15:16.063146834Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cddc95b58-6t6z7,Uid:2dab385f-2367-4e01-8d78-2247bcba7bcc,Namespace:calico-system,Attempt:0,}" Aug 13 01:15:16.139956 containerd[1550]: time="2025-08-13T01:15:16.139852187Z" level=error msg="Failed to destroy network for sandbox \"8b7ededdaf5072ba6883778d87dbc1ccada38e333d5de877c3ea91e01c1e459c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:16.142048 containerd[1550]: time="2025-08-13T01:15:16.141957870Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cddc95b58-6t6z7,Uid:2dab385f-2367-4e01-8d78-2247bcba7bcc,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b7ededdaf5072ba6883778d87dbc1ccada38e333d5de877c3ea91e01c1e459c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:16.143794 kubelet[2722]: E0813 01:15:16.142475 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b7ededdaf5072ba6883778d87dbc1ccada38e333d5de877c3ea91e01c1e459c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:16.143794 kubelet[2722]: E0813 01:15:16.142556 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b7ededdaf5072ba6883778d87dbc1ccada38e333d5de877c3ea91e01c1e459c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:15:16.143794 kubelet[2722]: E0813 01:15:16.142601 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"8b7ededdaf5072ba6883778d87dbc1ccada38e333d5de877c3ea91e01c1e459c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:15:16.143794 kubelet[2722]: E0813 01:15:16.142664 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-cddc95b58-6t6z7_calico-system(2dab385f-2367-4e01-8d78-2247bcba7bcc)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-cddc95b58-6t6z7_calico-system(2dab385f-2367-4e01-8d78-2247bcba7bcc)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b7ededdaf5072ba6883778d87dbc1ccada38e333d5de877c3ea91e01c1e459c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" podUID="2dab385f-2367-4e01-8d78-2247bcba7bcc" Aug 13 01:15:16.143617 systemd[1]: run-netns-cni\x2d49dcf11c\x2de290\x2ded67\x2d37d4\x2ddef845c172af.mount: Deactivated successfully. Aug 13 01:15:17.058678 kubelet[2722]: E0813 01:15:17.058472 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:15:17.063197 containerd[1550]: time="2025-08-13T01:15:17.062874038Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fgsjn,Uid:27718112-1bb9-402a-89c8-f4890dedf664,Namespace:kube-system,Attempt:0,}" Aug 13 01:15:17.140920 containerd[1550]: time="2025-08-13T01:15:17.140545132Z" level=error msg="Failed to destroy network for sandbox \"694f0c9d7bec3411eb8353460e2ebc3ff433c7f7e4ee122a9012b98e8530bc91\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:17.142669 systemd[1]: run-netns-cni\x2d5e6a42ff\x2ddb3a\x2d51c6\x2ddb79\x2d82e26d739c9b.mount: Deactivated successfully. 
Aug 13 01:15:17.146534 containerd[1550]: time="2025-08-13T01:15:17.146500611Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fgsjn,Uid:27718112-1bb9-402a-89c8-f4890dedf664,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"694f0c9d7bec3411eb8353460e2ebc3ff433c7f7e4ee122a9012b98e8530bc91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:17.146777 kubelet[2722]: E0813 01:15:17.146739 2722 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"694f0c9d7bec3411eb8353460e2ebc3ff433c7f7e4ee122a9012b98e8530bc91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:15:17.147160 kubelet[2722]: E0813 01:15:17.146804 2722 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"694f0c9d7bec3411eb8353460e2ebc3ff433c7f7e4ee122a9012b98e8530bc91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:15:17.147160 kubelet[2722]: E0813 01:15:17.146824 2722 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"694f0c9d7bec3411eb8353460e2ebc3ff433c7f7e4ee122a9012b98e8530bc91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:15:17.147160 kubelet[2722]: E0813 01:15:17.146929 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-fgsjn_kube-system(27718112-1bb9-402a-89c8-f4890dedf664)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-fgsjn_kube-system(27718112-1bb9-402a-89c8-f4890dedf664)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"694f0c9d7bec3411eb8353460e2ebc3ff433c7f7e4ee122a9012b98e8530bc91\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-fgsjn" podUID="27718112-1bb9-402a-89c8-f4890dedf664" Aug 13 01:15:19.163085 systemd[1]: Started sshd@14-172.233.214.103:22-147.75.109.163:55236.service - OpenSSH per-connection server daemon (147.75.109.163:55236). Aug 13 01:15:19.509388 sshd[5369]: Accepted publickey for core from 147.75.109.163 port 55236 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:15:19.511433 sshd-session[5369]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:15:19.517177 systemd-logind[1522]: New session 15 of user core. Aug 13 01:15:19.524326 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 13 01:15:19.637284 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1724375529.mount: Deactivated successfully. 
Aug 13 01:15:19.668921 containerd[1550]: time="2025-08-13T01:15:19.667797797Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 01:15:19.668921 containerd[1550]: time="2025-08-13T01:15:19.668334895Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:15:19.671396 containerd[1550]: time="2025-08-13T01:15:19.671189746Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:15:19.675750 containerd[1550]: time="2025-08-13T01:15:19.674328955Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:15:19.675954 containerd[1550]: time="2025-08-13T01:15:19.674676455Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 3.612477967s" Aug 13 01:15:19.675954 containerd[1550]: time="2025-08-13T01:15:19.675951910Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 13 01:15:19.718595 containerd[1550]: time="2025-08-13T01:15:19.718547860Z" level=info msg="CreateContainer within sandbox \"230d5e2260a9c5817bd0783127b59ee5e78b885b601c7a71a82f5c041382166d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 01:15:19.731761 containerd[1550]: time="2025-08-13T01:15:19.731653616Z" level=info msg="Container 912e55c883b9193775a8e2855a8f299720d449e5b9a7826e028fd24a993416d7: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:15:19.738765 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1809244155.mount: Deactivated successfully. Aug 13 01:15:19.747655 containerd[1550]: time="2025-08-13T01:15:19.747632963Z" level=info msg="CreateContainer within sandbox \"230d5e2260a9c5817bd0783127b59ee5e78b885b601c7a71a82f5c041382166d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"912e55c883b9193775a8e2855a8f299720d449e5b9a7826e028fd24a993416d7\"" Aug 13 01:15:19.749918 containerd[1550]: time="2025-08-13T01:15:19.748333120Z" level=info msg="StartContainer for \"912e55c883b9193775a8e2855a8f299720d449e5b9a7826e028fd24a993416d7\"" Aug 13 01:15:19.751523 containerd[1550]: time="2025-08-13T01:15:19.751504020Z" level=info msg="connecting to shim 912e55c883b9193775a8e2855a8f299720d449e5b9a7826e028fd24a993416d7" address="unix:///run/containerd/s/67393f8f5ec0478f793c124a7121aab7064377f0424e98086f7c743d7914b797" protocol=ttrpc version=3 Aug 13 01:15:19.802430 systemd[1]: Started cri-containerd-912e55c883b9193775a8e2855a8f299720d449e5b9a7826e028fd24a993416d7.scope - libcontainer container 912e55c883b9193775a8e2855a8f299720d449e5b9a7826e028fd24a993416d7. 
Aug 13 01:15:19.869453 sshd[5371]: Connection closed by 147.75.109.163 port 55236 Aug 13 01:15:19.870289 sshd-session[5369]: pam_unix(sshd:session): session closed for user core Aug 13 01:15:19.878374 systemd[1]: sshd@14-172.233.214.103:22-147.75.109.163:55236.service: Deactivated successfully. Aug 13 01:15:19.878778 systemd-logind[1522]: Session 15 logged out. Waiting for processes to exit. Aug 13 01:15:19.881749 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 01:15:19.883945 containerd[1550]: time="2025-08-13T01:15:19.883849182Z" level=info msg="StartContainer for \"912e55c883b9193775a8e2855a8f299720d449e5b9a7826e028fd24a993416d7\" returns successfully" Aug 13 01:15:19.885740 systemd-logind[1522]: Removed session 15. Aug 13 01:15:19.987759 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 01:15:19.987891 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Aug 13 01:15:20.041867 kubelet[2722]: I0813 01:15:20.041837 2722 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:15:20.042461 kubelet[2722]: I0813 01:15:20.042431 2722 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:15:20.045282 kubelet[2722]: I0813 01:15:20.045261 2722 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:15:20.047313 kubelet[2722]: I0813 01:15:20.047279 2722 image_gc_manager.go:514] "Removing image to free bytes" imageID="sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93" size=25052538 runtimeHandler="" Aug 13 01:15:20.047537 containerd[1550]: time="2025-08-13T01:15:20.047465144Z" level=info msg="RemoveImage \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 01:15:20.048771 containerd[1550]: time="2025-08-13T01:15:20.048742760Z" level=info msg="ImageDelete event name:\"quay.io/tigera/operator:v1.38.3\"" Aug 13 01:15:20.049662 containerd[1550]: time="2025-08-13T01:15:20.049621107Z" level=info msg="ImageDelete event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\"" Aug 13 01:15:20.050063 containerd[1550]: time="2025-08-13T01:15:20.050046685Z" level=info msg="RemoveImage \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" returns successfully" Aug 13 01:15:20.050389 containerd[1550]: time="2025-08-13T01:15:20.050304574Z" level=info msg="ImageDelete event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 01:15:20.071207 kubelet[2722]: I0813 01:15:20.070567 2722 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:15:20.071207 kubelet[2722]: I0813 01:15:20.070667 2722 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-674b8bbfcf-p259x","kube-system/coredns-674b8bbfcf-fgsjn","calico-system/calico-kube-controllers-cddc95b58-6t6z7","calico-system/csi-node-driver-l7lv4","calico-system/calico-typha-55bf5cd98c-8lqpc","calico-system/calico-node-hq29b","kube-system/kube-controller-manager-172-233-214-103","kube-system/kube-proxy-tb5sq","kube-system/kube-apiserver-172-233-214-103","kube-system/kube-scheduler-172-233-214-103"] Aug 13 01:15:20.071207 kubelet[2722]: E0813 01:15:20.070708 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:15:20.071207 kubelet[2722]: E0813 01:15:20.070718 2722 
eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:15:20.071207 kubelet[2722]: E0813 01:15:20.070724 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:15:20.071207 kubelet[2722]: E0813 01:15:20.070730 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:15:20.071207 kubelet[2722]: E0813 01:15:20.070739 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-55bf5cd98c-8lqpc" Aug 13 01:15:20.071207 kubelet[2722]: E0813 01:15:20.070748 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-hq29b" Aug 13 01:15:20.071207 kubelet[2722]: E0813 01:15:20.070756 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-214-103" Aug 13 01:15:20.071207 kubelet[2722]: E0813 01:15:20.070763 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-tb5sq" Aug 13 01:15:20.071207 kubelet[2722]: E0813 01:15:20.070769 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-214-103" Aug 13 01:15:20.071207 kubelet[2722]: E0813 01:15:20.070778 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-214-103" Aug 13 01:15:20.071207 kubelet[2722]: I0813 01:15:20.070787 2722 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:15:20.741991 kubelet[2722]: I0813 01:15:20.741281 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hq29b" podStartSLOduration=2.229970656 podStartE2EDuration="3m10.741267428s" podCreationTimestamp="2025-08-13 01:12:10 +0000 UTC" firstStartedPulling="2025-08-13 01:12:11.165983304 +0000 UTC m=+17.203912613" lastFinishedPulling="2025-08-13 01:15:19.677280076 +0000 UTC m=+205.715209385" observedRunningTime="2025-08-13 01:15:20.729098188 +0000 UTC m=+206.767027497" watchObservedRunningTime="2025-08-13 01:15:20.741267428 +0000 UTC m=+206.779196737" Aug 13 01:15:21.016123 containerd[1550]: time="2025-08-13T01:15:21.015944895Z" level=info msg="TaskExit event in podsandbox handler container_id:\"912e55c883b9193775a8e2855a8f299720d449e5b9a7826e028fd24a993416d7\" id:\"9bf1b8fbd260c97da330e7a06bb2306671839141702cc37fa9d1bf750555f16b\" pid:5461 exit_status:1 exited_at:{seconds:1755047721 nanos:15486867}" Aug 13 01:15:21.794675 containerd[1550]: time="2025-08-13T01:15:21.794629426Z" level=info msg="TaskExit event in podsandbox handler container_id:\"912e55c883b9193775a8e2855a8f299720d449e5b9a7826e028fd24a993416d7\" id:\"b98a5a2cc05f48e498f42e13fde4d52de1d750fa7e03133c0169662726db1587\" pid:5609 exit_status:1 exited_at:{seconds:1755047721 nanos:793621669}" Aug 13 01:15:21.934821 systemd-networkd[1449]: vxlan.calico: Link UP Aug 13 01:15:21.936773 systemd-networkd[1449]: vxlan.calico: Gained carrier Aug 13 01:15:23.467339 systemd-networkd[1449]: vxlan.calico: Gained IPv6LL Aug 13 01:15:24.067248 containerd[1550]: time="2025-08-13T01:15:24.067160053Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7lv4,Uid:6b834979-32a4-464b-9898-ef87b1042a9e,Namespace:calico-system,Attempt:0,}" Aug 13 
01:15:24.243201 systemd-networkd[1449]: calia2a44f9fd9b: Link UP Aug 13 01:15:24.245423 systemd-networkd[1449]: calia2a44f9fd9b: Gained carrier Aug 13 01:15:24.293566 containerd[1550]: 2025-08-13 01:15:24.128 [INFO][5688] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--233--214--103-k8s-csi--node--driver--l7lv4-eth0 csi-node-driver- calico-system 6b834979-32a4-464b-9898-ef87b1042a9e 769 0 2025-08-13 01:12:10 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-233-214-103 csi-node-driver-l7lv4 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calia2a44f9fd9b [] [] }} ContainerID="0dff1dceb9d29d9aa936f5866440bfb03e2d7170cf78eb92d6c03672d85f20a7" Namespace="calico-system" Pod="csi-node-driver-l7lv4" WorkloadEndpoint="172--233--214--103-k8s-csi--node--driver--l7lv4-" Aug 13 01:15:24.293566 containerd[1550]: 2025-08-13 01:15:24.129 [INFO][5688] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0dff1dceb9d29d9aa936f5866440bfb03e2d7170cf78eb92d6c03672d85f20a7" Namespace="calico-system" Pod="csi-node-driver-l7lv4" WorkloadEndpoint="172--233--214--103-k8s-csi--node--driver--l7lv4-eth0" Aug 13 01:15:24.293566 containerd[1550]: 2025-08-13 01:15:24.184 [INFO][5699] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0dff1dceb9d29d9aa936f5866440bfb03e2d7170cf78eb92d6c03672d85f20a7" HandleID="k8s-pod-network.0dff1dceb9d29d9aa936f5866440bfb03e2d7170cf78eb92d6c03672d85f20a7" Workload="172--233--214--103-k8s-csi--node--driver--l7lv4-eth0" Aug 13 01:15:24.299013 containerd[1550]: 2025-08-13 01:15:24.187 [INFO][5699] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0dff1dceb9d29d9aa936f5866440bfb03e2d7170cf78eb92d6c03672d85f20a7" HandleID="k8s-pod-network.0dff1dceb9d29d9aa936f5866440bfb03e2d7170cf78eb92d6c03672d85f20a7" Workload="172--233--214--103-k8s-csi--node--driver--l7lv4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f160), Attrs:map[string]string{"namespace":"calico-system", "node":"172-233-214-103", "pod":"csi-node-driver-l7lv4", "timestamp":"2025-08-13 01:15:24.184461817 +0000 UTC"}, Hostname:"172-233-214-103", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:15:24.299013 containerd[1550]: 2025-08-13 01:15:24.187 [INFO][5699] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:15:24.299013 containerd[1550]: 2025-08-13 01:15:24.187 [INFO][5699] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 01:15:24.299013 containerd[1550]: 2025-08-13 01:15:24.187 [INFO][5699] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-233-214-103' Aug 13 01:15:24.299013 containerd[1550]: 2025-08-13 01:15:24.196 [INFO][5699] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0dff1dceb9d29d9aa936f5866440bfb03e2d7170cf78eb92d6c03672d85f20a7" host="172-233-214-103" Aug 13 01:15:24.299013 containerd[1550]: 2025-08-13 01:15:24.206 [INFO][5699] ipam/ipam.go 394: Looking up existing affinities for host host="172-233-214-103" Aug 13 01:15:24.299013 containerd[1550]: 2025-08-13 01:15:24.210 [INFO][5699] ipam/ipam.go 511: Trying affinity for 192.168.26.128/26 host="172-233-214-103" Aug 13 01:15:24.299013 containerd[1550]: 2025-08-13 01:15:24.212 [INFO][5699] ipam/ipam.go 158: Attempting to load block cidr=192.168.26.128/26 host="172-233-214-103" Aug 13 01:15:24.299013 containerd[1550]: 2025-08-13 01:15:24.214 [INFO][5699] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.26.128/26 host="172-233-214-103" Aug 13 01:15:24.299013 containerd[1550]: 2025-08-13 01:15:24.214 [INFO][5699] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.26.128/26 handle="k8s-pod-network.0dff1dceb9d29d9aa936f5866440bfb03e2d7170cf78eb92d6c03672d85f20a7" host="172-233-214-103" Aug 13 01:15:24.299241 containerd[1550]: 2025-08-13 01:15:24.216 [INFO][5699] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0dff1dceb9d29d9aa936f5866440bfb03e2d7170cf78eb92d6c03672d85f20a7 Aug 13 01:15:24.299241 containerd[1550]: 2025-08-13 01:15:24.223 [INFO][5699] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.26.128/26 handle="k8s-pod-network.0dff1dceb9d29d9aa936f5866440bfb03e2d7170cf78eb92d6c03672d85f20a7" host="172-233-214-103" Aug 13 01:15:24.299241 containerd[1550]: 2025-08-13 01:15:24.229 [INFO][5699] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.26.129/26] block=192.168.26.128/26 handle="k8s-pod-network.0dff1dceb9d29d9aa936f5866440bfb03e2d7170cf78eb92d6c03672d85f20a7" host="172-233-214-103" Aug 13 01:15:24.299241 containerd[1550]: 2025-08-13 01:15:24.229 [INFO][5699] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.26.129/26] handle="k8s-pod-network.0dff1dceb9d29d9aa936f5866440bfb03e2d7170cf78eb92d6c03672d85f20a7" host="172-233-214-103" Aug 13 01:15:24.299241 containerd[1550]: 2025-08-13 01:15:24.229 [INFO][5699] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
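The IPAM sequence above shows the host 172-233-214-103 confirming its affinity for the block 192.168.26.128/26 and claiming 192.168.26.129 from it; the later assignments in this log (192.168.26.130 for coredns-674b8bbfcf-p259x and 192.168.26.131 for calico-kube-controllers-cddc95b58-6t6z7) come from the same block, so each CmdAdd only has to confirm the existing affinity. A minimal containment check with Go's net/netip, purely illustrative and not Calico's implementation:

package main

import (
	"fmt"
	"net/netip"
)

func main() {
	// Block the host holds an affinity for, and the address IPAM claimed, as logged above.
	block := netip.MustParsePrefix("192.168.26.128/26")
	addr := netip.MustParseAddr("192.168.26.129")

	// A /26 spans 64 addresses, 192.168.26.128 through 192.168.26.191.
	fmt.Println(block.Contains(addr)) // true
}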
Aug 13 01:15:24.299241 containerd[1550]: 2025-08-13 01:15:24.229 [INFO][5699] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.26.129/26] IPv6=[] ContainerID="0dff1dceb9d29d9aa936f5866440bfb03e2d7170cf78eb92d6c03672d85f20a7" HandleID="k8s-pod-network.0dff1dceb9d29d9aa936f5866440bfb03e2d7170cf78eb92d6c03672d85f20a7" Workload="172--233--214--103-k8s-csi--node--driver--l7lv4-eth0" Aug 13 01:15:24.299408 containerd[1550]: 2025-08-13 01:15:24.234 [INFO][5688] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0dff1dceb9d29d9aa936f5866440bfb03e2d7170cf78eb92d6c03672d85f20a7" Namespace="calico-system" Pod="csi-node-driver-l7lv4" WorkloadEndpoint="172--233--214--103-k8s-csi--node--driver--l7lv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--214--103-k8s-csi--node--driver--l7lv4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6b834979-32a4-464b-9898-ef87b1042a9e", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 12, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-214-103", ContainerID:"", Pod:"csi-node-driver-l7lv4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.26.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia2a44f9fd9b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:15:24.299462 containerd[1550]: 2025-08-13 01:15:24.235 [INFO][5688] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.129/32] ContainerID="0dff1dceb9d29d9aa936f5866440bfb03e2d7170cf78eb92d6c03672d85f20a7" Namespace="calico-system" Pod="csi-node-driver-l7lv4" WorkloadEndpoint="172--233--214--103-k8s-csi--node--driver--l7lv4-eth0" Aug 13 01:15:24.299462 containerd[1550]: 2025-08-13 01:15:24.235 [INFO][5688] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia2a44f9fd9b ContainerID="0dff1dceb9d29d9aa936f5866440bfb03e2d7170cf78eb92d6c03672d85f20a7" Namespace="calico-system" Pod="csi-node-driver-l7lv4" WorkloadEndpoint="172--233--214--103-k8s-csi--node--driver--l7lv4-eth0" Aug 13 01:15:24.299462 containerd[1550]: 2025-08-13 01:15:24.247 [INFO][5688] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0dff1dceb9d29d9aa936f5866440bfb03e2d7170cf78eb92d6c03672d85f20a7" Namespace="calico-system" Pod="csi-node-driver-l7lv4" WorkloadEndpoint="172--233--214--103-k8s-csi--node--driver--l7lv4-eth0" Aug 13 01:15:24.299526 containerd[1550]: 2025-08-13 01:15:24.250 [INFO][5688] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0dff1dceb9d29d9aa936f5866440bfb03e2d7170cf78eb92d6c03672d85f20a7" 
Namespace="calico-system" Pod="csi-node-driver-l7lv4" WorkloadEndpoint="172--233--214--103-k8s-csi--node--driver--l7lv4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--214--103-k8s-csi--node--driver--l7lv4-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"6b834979-32a4-464b-9898-ef87b1042a9e", ResourceVersion:"769", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 12, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-214-103", ContainerID:"0dff1dceb9d29d9aa936f5866440bfb03e2d7170cf78eb92d6c03672d85f20a7", Pod:"csi-node-driver-l7lv4", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.26.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calia2a44f9fd9b", MAC:"02:0f:24:0a:61:1d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:15:24.299575 containerd[1550]: 2025-08-13 01:15:24.281 [INFO][5688] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0dff1dceb9d29d9aa936f5866440bfb03e2d7170cf78eb92d6c03672d85f20a7" Namespace="calico-system" Pod="csi-node-driver-l7lv4" WorkloadEndpoint="172--233--214--103-k8s-csi--node--driver--l7lv4-eth0" Aug 13 01:15:24.377739 containerd[1550]: time="2025-08-13T01:15:24.374030102Z" level=info msg="connecting to shim 0dff1dceb9d29d9aa936f5866440bfb03e2d7170cf78eb92d6c03672d85f20a7" address="unix:///run/containerd/s/1497e14d92f5122ad6ded4a95bdde47a0ad64c4f37361c1ac61f1102365c3ea3" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:15:24.448122 systemd[1]: Started cri-containerd-0dff1dceb9d29d9aa936f5866440bfb03e2d7170cf78eb92d6c03672d85f20a7.scope - libcontainer container 0dff1dceb9d29d9aa936f5866440bfb03e2d7170cf78eb92d6c03672d85f20a7. Aug 13 01:15:24.491884 containerd[1550]: time="2025-08-13T01:15:24.491808334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-l7lv4,Uid:6b834979-32a4-464b-9898-ef87b1042a9e,Namespace:calico-system,Attempt:0,} returns sandbox id \"0dff1dceb9d29d9aa936f5866440bfb03e2d7170cf78eb92d6c03672d85f20a7\"" Aug 13 01:15:24.493570 containerd[1550]: time="2025-08-13T01:15:24.493543328Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 01:15:24.934579 systemd[1]: Started sshd@15-172.233.214.103:22-147.75.109.163:55252.service - OpenSSH per-connection server daemon (147.75.109.163:55252). 
Aug 13 01:15:25.298376 sshd[5770]: Accepted publickey for core from 147.75.109.163 port 55252 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:15:25.302823 sshd-session[5770]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:15:25.316763 systemd-logind[1522]: New session 16 of user core. Aug 13 01:15:25.323084 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 13 01:15:25.414628 containerd[1550]: time="2025-08-13T01:15:25.414571583Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:15:25.415802 containerd[1550]: time="2025-08-13T01:15:25.415637290Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Aug 13 01:15:25.417863 containerd[1550]: time="2025-08-13T01:15:25.417774303Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:15:25.420188 containerd[1550]: time="2025-08-13T01:15:25.420148036Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:15:25.421617 containerd[1550]: time="2025-08-13T01:15:25.421586272Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 927.949314ms" Aug 13 01:15:25.421683 containerd[1550]: time="2025-08-13T01:15:25.421620772Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Aug 13 01:15:25.427499 containerd[1550]: time="2025-08-13T01:15:25.427453114Z" level=info msg="CreateContainer within sandbox \"0dff1dceb9d29d9aa936f5866440bfb03e2d7170cf78eb92d6c03672d85f20a7\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 01:15:25.440503 containerd[1550]: time="2025-08-13T01:15:25.440039087Z" level=info msg="Container 15a2beb7207e8ebc198817b02439b18e7197a40bde4e171a85947c08303731d5: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:15:25.446079 containerd[1550]: time="2025-08-13T01:15:25.446020959Z" level=info msg="CreateContainer within sandbox \"0dff1dceb9d29d9aa936f5866440bfb03e2d7170cf78eb92d6c03672d85f20a7\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"15a2beb7207e8ebc198817b02439b18e7197a40bde4e171a85947c08303731d5\"" Aug 13 01:15:25.447315 containerd[1550]: time="2025-08-13T01:15:25.447268395Z" level=info msg="StartContainer for \"15a2beb7207e8ebc198817b02439b18e7197a40bde4e171a85947c08303731d5\"" Aug 13 01:15:25.449116 containerd[1550]: time="2025-08-13T01:15:25.449036730Z" level=info msg="connecting to shim 15a2beb7207e8ebc198817b02439b18e7197a40bde4e171a85947c08303731d5" address="unix:///run/containerd/s/1497e14d92f5122ad6ded4a95bdde47a0ad64c4f37361c1ac61f1102365c3ea3" protocol=ttrpc version=3 Aug 13 01:15:25.474032 systemd[1]: Started cri-containerd-15a2beb7207e8ebc198817b02439b18e7197a40bde4e171a85947c08303731d5.scope - libcontainer container 
15a2beb7207e8ebc198817b02439b18e7197a40bde4e171a85947c08303731d5. Aug 13 01:15:25.515080 systemd-networkd[1449]: calia2a44f9fd9b: Gained IPv6LL Aug 13 01:15:25.546340 containerd[1550]: time="2025-08-13T01:15:25.546197670Z" level=info msg="StartContainer for \"15a2beb7207e8ebc198817b02439b18e7197a40bde4e171a85947c08303731d5\" returns successfully" Aug 13 01:15:25.549043 containerd[1550]: time="2025-08-13T01:15:25.548960622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 01:15:25.679811 sshd[5776]: Connection closed by 147.75.109.163 port 55252 Aug 13 01:15:25.683098 sshd-session[5770]: pam_unix(sshd:session): session closed for user core Aug 13 01:15:25.687911 systemd[1]: sshd@15-172.233.214.103:22-147.75.109.163:55252.service: Deactivated successfully. Aug 13 01:15:25.688930 systemd-logind[1522]: Session 16 logged out. Waiting for processes to exit. Aug 13 01:15:25.691399 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 01:15:25.694620 systemd-logind[1522]: Removed session 16. Aug 13 01:15:26.742444 containerd[1550]: time="2025-08-13T01:15:26.742370977Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:15:26.743578 containerd[1550]: time="2025-08-13T01:15:26.743541614Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Aug 13 01:15:26.744868 containerd[1550]: time="2025-08-13T01:15:26.743962153Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:15:26.745536 containerd[1550]: time="2025-08-13T01:15:26.745482878Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:15:26.746263 containerd[1550]: time="2025-08-13T01:15:26.746230876Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 1.197106215s" Aug 13 01:15:26.746346 containerd[1550]: time="2025-08-13T01:15:26.746330275Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Aug 13 01:15:26.749810 containerd[1550]: time="2025-08-13T01:15:26.749780076Z" level=info msg="CreateContainer within sandbox \"0dff1dceb9d29d9aa936f5866440bfb03e2d7170cf78eb92d6c03672d85f20a7\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 13 01:15:26.757929 containerd[1550]: time="2025-08-13T01:15:26.757247164Z" level=info msg="Container 99f7ec0f62c5fa82bbd81a0d7632f480ccca69e71255d284e5c799fa5735769b: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:15:26.761850 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount657491527.mount: Deactivated successfully. 
Aug 13 01:15:26.767593 containerd[1550]: time="2025-08-13T01:15:26.767529094Z" level=info msg="CreateContainer within sandbox \"0dff1dceb9d29d9aa936f5866440bfb03e2d7170cf78eb92d6c03672d85f20a7\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"99f7ec0f62c5fa82bbd81a0d7632f480ccca69e71255d284e5c799fa5735769b\"" Aug 13 01:15:26.768831 containerd[1550]: time="2025-08-13T01:15:26.768795999Z" level=info msg="StartContainer for \"99f7ec0f62c5fa82bbd81a0d7632f480ccca69e71255d284e5c799fa5735769b\"" Aug 13 01:15:26.771277 containerd[1550]: time="2025-08-13T01:15:26.771233772Z" level=info msg="connecting to shim 99f7ec0f62c5fa82bbd81a0d7632f480ccca69e71255d284e5c799fa5735769b" address="unix:///run/containerd/s/1497e14d92f5122ad6ded4a95bdde47a0ad64c4f37361c1ac61f1102365c3ea3" protocol=ttrpc version=3 Aug 13 01:15:26.803085 systemd[1]: Started cri-containerd-99f7ec0f62c5fa82bbd81a0d7632f480ccca69e71255d284e5c799fa5735769b.scope - libcontainer container 99f7ec0f62c5fa82bbd81a0d7632f480ccca69e71255d284e5c799fa5735769b. Aug 13 01:15:26.857960 containerd[1550]: time="2025-08-13T01:15:26.857839889Z" level=info msg="StartContainer for \"99f7ec0f62c5fa82bbd81a0d7632f480ccca69e71255d284e5c799fa5735769b\" returns successfully" Aug 13 01:15:27.058434 kubelet[2722]: E0813 01:15:27.058284 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:15:27.059341 containerd[1550]: time="2025-08-13T01:15:27.059063390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p259x,Uid:a5b0b8ae-a381-43cc-8adc-4e3ee01749bd,Namespace:kube-system,Attempt:0,}" Aug 13 01:15:27.169833 systemd-networkd[1449]: cali8068681e21f: Link UP Aug 13 01:15:27.172713 systemd-networkd[1449]: cali8068681e21f: Gained carrier Aug 13 01:15:27.197040 containerd[1550]: 2025-08-13 01:15:27.096 [INFO][5862] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--233--214--103-k8s-coredns--674b8bbfcf--p259x-eth0 coredns-674b8bbfcf- kube-system a5b0b8ae-a381-43cc-8adc-4e3ee01749bd 864 0 2025-08-13 01:11:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-233-214-103 coredns-674b8bbfcf-p259x eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali8068681e21f [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="f7a216e31e33980cc9a0f5c7ec73ef0410e24776f5f78e49494a2edd4b3a1cb9" Namespace="kube-system" Pod="coredns-674b8bbfcf-p259x" WorkloadEndpoint="172--233--214--103-k8s-coredns--674b8bbfcf--p259x-" Aug 13 01:15:27.197040 containerd[1550]: 2025-08-13 01:15:27.096 [INFO][5862] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f7a216e31e33980cc9a0f5c7ec73ef0410e24776f5f78e49494a2edd4b3a1cb9" Namespace="kube-system" Pod="coredns-674b8bbfcf-p259x" WorkloadEndpoint="172--233--214--103-k8s-coredns--674b8bbfcf--p259x-eth0" Aug 13 01:15:27.197040 containerd[1550]: 2025-08-13 01:15:27.132 [INFO][5874] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f7a216e31e33980cc9a0f5c7ec73ef0410e24776f5f78e49494a2edd4b3a1cb9" HandleID="k8s-pod-network.f7a216e31e33980cc9a0f5c7ec73ef0410e24776f5f78e49494a2edd4b3a1cb9" Workload="172--233--214--103-k8s-coredns--674b8bbfcf--p259x-eth0" 
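The repeated "Nameserver limits exceeded" errors from kubelet's dns.go mean the node's resolver configuration lists more nameservers than a pod's resolv.conf may carry, so kubelet applies only the first entries (here 172.232.0.13, 172.232.0.22 and 172.232.0.9) and reports the rest as omitted. A rough sketch of that truncation, assuming the conventional limit of three nameservers and a hypothetical fourth entry; this is illustrative, not kubelet's code:

package main

import "fmt"

// truncateNameservers keeps at most limit entries, mirroring the behaviour the
// "Nameserver limits exceeded" message above describes.
func truncateNameservers(servers []string, limit int) []string {
	if len(servers) <= limit {
		return servers
	}
	return servers[:limit]
}

func main() {
	// The first three entries match the applied nameserver line in the log; the
	// fourth is a hypothetical extra entry that would be dropped.
	servers := []string{"172.232.0.13", "172.232.0.22", "172.232.0.9", "10.0.0.53"}
	fmt.Println(truncateNameservers(servers, 3))
}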
Aug 13 01:15:27.197262 containerd[1550]: 2025-08-13 01:15:27.132 [INFO][5874] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f7a216e31e33980cc9a0f5c7ec73ef0410e24776f5f78e49494a2edd4b3a1cb9" HandleID="k8s-pod-network.f7a216e31e33980cc9a0f5c7ec73ef0410e24776f5f78e49494a2edd4b3a1cb9" Workload="172--233--214--103-k8s-coredns--674b8bbfcf--p259x-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f0f0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-233-214-103", "pod":"coredns-674b8bbfcf-p259x", "timestamp":"2025-08-13 01:15:27.13213383 +0000 UTC"}, Hostname:"172-233-214-103", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:15:27.197262 containerd[1550]: 2025-08-13 01:15:27.132 [INFO][5874] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:15:27.197262 containerd[1550]: 2025-08-13 01:15:27.132 [INFO][5874] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:15:27.197262 containerd[1550]: 2025-08-13 01:15:27.132 [INFO][5874] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-233-214-103' Aug 13 01:15:27.197262 containerd[1550]: 2025-08-13 01:15:27.139 [INFO][5874] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f7a216e31e33980cc9a0f5c7ec73ef0410e24776f5f78e49494a2edd4b3a1cb9" host="172-233-214-103" Aug 13 01:15:27.197262 containerd[1550]: 2025-08-13 01:15:27.143 [INFO][5874] ipam/ipam.go 394: Looking up existing affinities for host host="172-233-214-103" Aug 13 01:15:27.197262 containerd[1550]: 2025-08-13 01:15:27.147 [INFO][5874] ipam/ipam.go 511: Trying affinity for 192.168.26.128/26 host="172-233-214-103" Aug 13 01:15:27.197262 containerd[1550]: 2025-08-13 01:15:27.148 [INFO][5874] ipam/ipam.go 158: Attempting to load block cidr=192.168.26.128/26 host="172-233-214-103" Aug 13 01:15:27.197262 containerd[1550]: 2025-08-13 01:15:27.150 [INFO][5874] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.26.128/26 host="172-233-214-103" Aug 13 01:15:27.197262 containerd[1550]: 2025-08-13 01:15:27.150 [INFO][5874] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.26.128/26 handle="k8s-pod-network.f7a216e31e33980cc9a0f5c7ec73ef0410e24776f5f78e49494a2edd4b3a1cb9" host="172-233-214-103" Aug 13 01:15:27.197485 containerd[1550]: 2025-08-13 01:15:27.151 [INFO][5874] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f7a216e31e33980cc9a0f5c7ec73ef0410e24776f5f78e49494a2edd4b3a1cb9 Aug 13 01:15:27.197485 containerd[1550]: 2025-08-13 01:15:27.154 [INFO][5874] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.26.128/26 handle="k8s-pod-network.f7a216e31e33980cc9a0f5c7ec73ef0410e24776f5f78e49494a2edd4b3a1cb9" host="172-233-214-103" Aug 13 01:15:27.197485 containerd[1550]: 2025-08-13 01:15:27.159 [INFO][5874] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.26.130/26] block=192.168.26.128/26 handle="k8s-pod-network.f7a216e31e33980cc9a0f5c7ec73ef0410e24776f5f78e49494a2edd4b3a1cb9" host="172-233-214-103" Aug 13 01:15:27.197485 containerd[1550]: 2025-08-13 01:15:27.159 [INFO][5874] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.26.130/26] handle="k8s-pod-network.f7a216e31e33980cc9a0f5c7ec73ef0410e24776f5f78e49494a2edd4b3a1cb9" host="172-233-214-103" Aug 13 01:15:27.197485 containerd[1550]: 2025-08-13 01:15:27.160 [INFO][5874] 
ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:15:27.197485 containerd[1550]: 2025-08-13 01:15:27.160 [INFO][5874] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.26.130/26] IPv6=[] ContainerID="f7a216e31e33980cc9a0f5c7ec73ef0410e24776f5f78e49494a2edd4b3a1cb9" HandleID="k8s-pod-network.f7a216e31e33980cc9a0f5c7ec73ef0410e24776f5f78e49494a2edd4b3a1cb9" Workload="172--233--214--103-k8s-coredns--674b8bbfcf--p259x-eth0" Aug 13 01:15:27.197611 containerd[1550]: 2025-08-13 01:15:27.162 [INFO][5862] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f7a216e31e33980cc9a0f5c7ec73ef0410e24776f5f78e49494a2edd4b3a1cb9" Namespace="kube-system" Pod="coredns-674b8bbfcf-p259x" WorkloadEndpoint="172--233--214--103-k8s-coredns--674b8bbfcf--p259x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--214--103-k8s-coredns--674b8bbfcf--p259x-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a5b0b8ae-a381-43cc-8adc-4e3ee01749bd", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 11, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-214-103", ContainerID:"", Pod:"coredns-674b8bbfcf-p259x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8068681e21f", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:15:27.197611 containerd[1550]: 2025-08-13 01:15:27.163 [INFO][5862] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.130/32] ContainerID="f7a216e31e33980cc9a0f5c7ec73ef0410e24776f5f78e49494a2edd4b3a1cb9" Namespace="kube-system" Pod="coredns-674b8bbfcf-p259x" WorkloadEndpoint="172--233--214--103-k8s-coredns--674b8bbfcf--p259x-eth0" Aug 13 01:15:27.197611 containerd[1550]: 2025-08-13 01:15:27.163 [INFO][5862] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8068681e21f ContainerID="f7a216e31e33980cc9a0f5c7ec73ef0410e24776f5f78e49494a2edd4b3a1cb9" Namespace="kube-system" Pod="coredns-674b8bbfcf-p259x" WorkloadEndpoint="172--233--214--103-k8s-coredns--674b8bbfcf--p259x-eth0" Aug 13 01:15:27.197611 containerd[1550]: 2025-08-13 01:15:27.174 [INFO][5862] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f7a216e31e33980cc9a0f5c7ec73ef0410e24776f5f78e49494a2edd4b3a1cb9" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-p259x" WorkloadEndpoint="172--233--214--103-k8s-coredns--674b8bbfcf--p259x-eth0" Aug 13 01:15:27.197611 containerd[1550]: 2025-08-13 01:15:27.174 [INFO][5862] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f7a216e31e33980cc9a0f5c7ec73ef0410e24776f5f78e49494a2edd4b3a1cb9" Namespace="kube-system" Pod="coredns-674b8bbfcf-p259x" WorkloadEndpoint="172--233--214--103-k8s-coredns--674b8bbfcf--p259x-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--214--103-k8s-coredns--674b8bbfcf--p259x-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"a5b0b8ae-a381-43cc-8adc-4e3ee01749bd", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 11, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-214-103", ContainerID:"f7a216e31e33980cc9a0f5c7ec73ef0410e24776f5f78e49494a2edd4b3a1cb9", Pod:"coredns-674b8bbfcf-p259x", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali8068681e21f", MAC:"fe:3c:12:61:93:27", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:15:27.197611 containerd[1550]: 2025-08-13 01:15:27.184 [INFO][5862] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f7a216e31e33980cc9a0f5c7ec73ef0410e24776f5f78e49494a2edd4b3a1cb9" Namespace="kube-system" Pod="coredns-674b8bbfcf-p259x" WorkloadEndpoint="172--233--214--103-k8s-coredns--674b8bbfcf--p259x-eth0" Aug 13 01:15:27.235920 containerd[1550]: time="2025-08-13T01:15:27.235522821Z" level=info msg="connecting to shim f7a216e31e33980cc9a0f5c7ec73ef0410e24776f5f78e49494a2edd4b3a1cb9" address="unix:///run/containerd/s/7ca7769c0543c3808f1ef87d7050944d46ff0d102ae1d39382c0e8e7ba940fd9" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:15:27.268032 systemd[1]: Started cri-containerd-f7a216e31e33980cc9a0f5c7ec73ef0410e24776f5f78e49494a2edd4b3a1cb9.scope - libcontainer container f7a216e31e33980cc9a0f5c7ec73ef0410e24776f5f78e49494a2edd4b3a1cb9. 
Aug 13 01:15:27.304176 kubelet[2722]: I0813 01:15:27.304129 2722 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 13 01:15:27.304313 kubelet[2722]: I0813 01:15:27.304187 2722 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 13 01:15:27.323985 containerd[1550]: time="2025-08-13T01:15:27.323831417Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-p259x,Uid:a5b0b8ae-a381-43cc-8adc-4e3ee01749bd,Namespace:kube-system,Attempt:0,} returns sandbox id \"f7a216e31e33980cc9a0f5c7ec73ef0410e24776f5f78e49494a2edd4b3a1cb9\"" Aug 13 01:15:27.327171 kubelet[2722]: E0813 01:15:27.327130 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:15:27.329415 containerd[1550]: time="2025-08-13T01:15:27.329344220Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Aug 13 01:15:27.763013 kubelet[2722]: I0813 01:15:27.762867 2722 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-l7lv4" podStartSLOduration=195.508918027 podStartE2EDuration="3m17.762844551s" podCreationTimestamp="2025-08-13 01:12:10 +0000 UTC" firstStartedPulling="2025-08-13 01:15:24.493320629 +0000 UTC m=+210.531249938" lastFinishedPulling="2025-08-13 01:15:26.747247163 +0000 UTC m=+212.785176462" observedRunningTime="2025-08-13 01:15:27.735494809 +0000 UTC m=+213.773424118" watchObservedRunningTime="2025-08-13 01:15:27.762844551 +0000 UTC m=+213.800773860" Aug 13 01:15:28.148114 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount838094397.mount: Deactivated successfully. 
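The pod_startup_latency_tracker entry above for csi-node-driver-l7lv4 is internally consistent: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration is that figure minus the image-pull window (lastFinishedPulling minus firstStartedPulling), which is why it is so much smaller. A sketch of the arithmetic using the monotonic m=+ offsets from the entry (it reproduces the logged numbers but is not kubelet's code):

package main

import (
	"fmt"
	"time"
)

func main() {
	// Values copied from the pod_startup_latency_tracker entry above.
	e2e, _ := time.ParseDuration("3m17.762844551s")                // podStartE2EDuration
	firstStartedPulling, _ := time.ParseDuration("210.531249938s") // m=+ offset
	lastFinishedPulling, _ := time.ParseDuration("212.785176462s") // m=+ offset

	pullWindow := lastFinishedPulling - firstStartedPulling
	slo := e2e - pullWindow

	fmt.Println(pullWindow) // 2.253926524s spent pulling images
	fmt.Println(slo)        // 3m15.508918027s, i.e. the logged podStartSLOduration of 195.508918027
}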
Aug 13 01:15:28.899314 containerd[1550]: time="2025-08-13T01:15:28.899255418Z" level=error msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.12.0\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/92/fs/coredns: no space left on device" Aug 13 01:15:28.900133 containerd[1550]: time="2025-08-13T01:15:28.899291447Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Aug 13 01:15:28.900167 kubelet[2722]: E0813 01:15:28.899510 2722 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.12.0\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/92/fs/coredns: no space left on device" image="registry.k8s.io/coredns/coredns:v1.12.0" Aug 13 01:15:28.900167 kubelet[2722]: E0813 01:15:28.899560 2722 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.12.0\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/92/fs/coredns: no space left on device" image="registry.k8s.io/coredns/coredns:v1.12.0" Aug 13 01:15:28.900167 kubelet[2722]: E0813 01:15:28.899740 2722 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:coredns,Image:registry.k8s.io/coredns/coredns:v1.12.0,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{73400320 0} {} 70Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l85x2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod coredns-674b8bbfcf-p259x_kube-system(a5b0b8ae-a381-43cc-8adc-4e3ee01749bd): ErrImagePull: failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.12.0\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/92/fs/coredns: no space left on device" logger="UnhandledError" Aug 13 01:15:28.901021 kubelet[2722]: E0813 01:15:28.900974 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with ErrImagePull: \"failed to pull and unpack image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/92/fs/coredns: no space left on device\"" pod="kube-system/coredns-674b8bbfcf-p259x" podUID="a5b0b8ae-a381-43cc-8adc-4e3ee01749bd" Aug 13 01:15:28.971040 systemd-networkd[1449]: cali8068681e21f: Gained IPv6LL Aug 13 01:15:29.058509 containerd[1550]: time="2025-08-13T01:15:29.058466568Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cddc95b58-6t6z7,Uid:2dab385f-2367-4e01-8d78-2247bcba7bcc,Namespace:calico-system,Attempt:0,}" Aug 13 01:15:29.149423 systemd-networkd[1449]: calie826c3d2c8f: Link UP Aug 13 01:15:29.149935 systemd-networkd[1449]: calie826c3d2c8f: Gained carrier Aug 13 01:15:29.167855 containerd[1550]: 2025-08-13 01:15:29.088 [INFO][5994] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--233--214--103-k8s-calico--kube--controllers--cddc95b58--6t6z7-eth0 calico-kube-controllers-cddc95b58- calico-system 2dab385f-2367-4e01-8d78-2247bcba7bcc 873 0 2025-08-13 01:12:11 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:cddc95b58 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-233-214-103 calico-kube-controllers-cddc95b58-6t6z7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie826c3d2c8f [] [] }} ContainerID="3081300b551bd4cc5ef03ea88a4fd9c85de353943ac2593781bad12ca70787db" Namespace="calico-system" Pod="calico-kube-controllers-cddc95b58-6t6z7" WorkloadEndpoint="172--233--214--103-k8s-calico--kube--controllers--cddc95b58--6t6z7-" Aug 13 01:15:29.167855 containerd[1550]: 2025-08-13 01:15:29.088 [INFO][5994] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="3081300b551bd4cc5ef03ea88a4fd9c85de353943ac2593781bad12ca70787db" Namespace="calico-system" Pod="calico-kube-controllers-cddc95b58-6t6z7" WorkloadEndpoint="172--233--214--103-k8s-calico--kube--controllers--cddc95b58--6t6z7-eth0" Aug 13 01:15:29.167855 containerd[1550]: 2025-08-13 01:15:29.106 [INFO][6006] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3081300b551bd4cc5ef03ea88a4fd9c85de353943ac2593781bad12ca70787db" HandleID="k8s-pod-network.3081300b551bd4cc5ef03ea88a4fd9c85de353943ac2593781bad12ca70787db" Workload="172--233--214--103-k8s-calico--kube--controllers--cddc95b58--6t6z7-eth0" Aug 13 01:15:29.167855 containerd[1550]: 2025-08-13 01:15:29.107 [INFO][6006] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="3081300b551bd4cc5ef03ea88a4fd9c85de353943ac2593781bad12ca70787db" HandleID="k8s-pod-network.3081300b551bd4cc5ef03ea88a4fd9c85de353943ac2593781bad12ca70787db" Workload="172--233--214--103-k8s-calico--kube--controllers--cddc95b58--6t6z7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4f70), Attrs:map[string]string{"namespace":"calico-system", "node":"172-233-214-103", "pod":"calico-kube-controllers-cddc95b58-6t6z7", "timestamp":"2025-08-13 01:15:29.106868084 +0000 UTC"}, Hostname:"172-233-214-103", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:15:29.167855 containerd[1550]: 2025-08-13 01:15:29.107 [INFO][6006] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:15:29.167855 containerd[1550]: 2025-08-13 01:15:29.107 [INFO][6006] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 01:15:29.167855 containerd[1550]: 2025-08-13 01:15:29.107 [INFO][6006] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-233-214-103' Aug 13 01:15:29.167855 containerd[1550]: 2025-08-13 01:15:29.111 [INFO][6006] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3081300b551bd4cc5ef03ea88a4fd9c85de353943ac2593781bad12ca70787db" host="172-233-214-103" Aug 13 01:15:29.167855 containerd[1550]: 2025-08-13 01:15:29.115 [INFO][6006] ipam/ipam.go 394: Looking up existing affinities for host host="172-233-214-103" Aug 13 01:15:29.167855 containerd[1550]: 2025-08-13 01:15:29.120 [INFO][6006] ipam/ipam.go 511: Trying affinity for 192.168.26.128/26 host="172-233-214-103" Aug 13 01:15:29.167855 containerd[1550]: 2025-08-13 01:15:29.123 [INFO][6006] ipam/ipam.go 158: Attempting to load block cidr=192.168.26.128/26 host="172-233-214-103" Aug 13 01:15:29.167855 containerd[1550]: 2025-08-13 01:15:29.127 [INFO][6006] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.26.128/26 host="172-233-214-103" Aug 13 01:15:29.167855 containerd[1550]: 2025-08-13 01:15:29.127 [INFO][6006] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.26.128/26 handle="k8s-pod-network.3081300b551bd4cc5ef03ea88a4fd9c85de353943ac2593781bad12ca70787db" host="172-233-214-103" Aug 13 01:15:29.167855 containerd[1550]: 2025-08-13 01:15:29.130 [INFO][6006] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3081300b551bd4cc5ef03ea88a4fd9c85de353943ac2593781bad12ca70787db Aug 13 01:15:29.167855 containerd[1550]: 2025-08-13 01:15:29.135 [INFO][6006] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.26.128/26 handle="k8s-pod-network.3081300b551bd4cc5ef03ea88a4fd9c85de353943ac2593781bad12ca70787db" host="172-233-214-103" Aug 13 01:15:29.167855 containerd[1550]: 2025-08-13 01:15:29.142 [INFO][6006] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.26.131/26] block=192.168.26.128/26 handle="k8s-pod-network.3081300b551bd4cc5ef03ea88a4fd9c85de353943ac2593781bad12ca70787db" host="172-233-214-103" Aug 13 01:15:29.167855 containerd[1550]: 2025-08-13 01:15:29.142 [INFO][6006] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.26.131/26] handle="k8s-pod-network.3081300b551bd4cc5ef03ea88a4fd9c85de353943ac2593781bad12ca70787db" host="172-233-214-103" Aug 13 01:15:29.167855 containerd[1550]: 2025-08-13 01:15:29.142 [INFO][6006] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 01:15:29.167855 containerd[1550]: 2025-08-13 01:15:29.142 [INFO][6006] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.26.131/26] IPv6=[] ContainerID="3081300b551bd4cc5ef03ea88a4fd9c85de353943ac2593781bad12ca70787db" HandleID="k8s-pod-network.3081300b551bd4cc5ef03ea88a4fd9c85de353943ac2593781bad12ca70787db" Workload="172--233--214--103-k8s-calico--kube--controllers--cddc95b58--6t6z7-eth0" Aug 13 01:15:29.168370 containerd[1550]: 2025-08-13 01:15:29.147 [INFO][5994] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3081300b551bd4cc5ef03ea88a4fd9c85de353943ac2593781bad12ca70787db" Namespace="calico-system" Pod="calico-kube-controllers-cddc95b58-6t6z7" WorkloadEndpoint="172--233--214--103-k8s-calico--kube--controllers--cddc95b58--6t6z7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--214--103-k8s-calico--kube--controllers--cddc95b58--6t6z7-eth0", GenerateName:"calico-kube-controllers-cddc95b58-", Namespace:"calico-system", SelfLink:"", UID:"2dab385f-2367-4e01-8d78-2247bcba7bcc", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 12, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cddc95b58", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-214-103", ContainerID:"", Pod:"calico-kube-controllers-cddc95b58-6t6z7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.26.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie826c3d2c8f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:15:29.168370 containerd[1550]: 2025-08-13 01:15:29.147 [INFO][5994] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.131/32] ContainerID="3081300b551bd4cc5ef03ea88a4fd9c85de353943ac2593781bad12ca70787db" Namespace="calico-system" Pod="calico-kube-controllers-cddc95b58-6t6z7" WorkloadEndpoint="172--233--214--103-k8s-calico--kube--controllers--cddc95b58--6t6z7-eth0" Aug 13 01:15:29.168370 containerd[1550]: 2025-08-13 01:15:29.147 [INFO][5994] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie826c3d2c8f ContainerID="3081300b551bd4cc5ef03ea88a4fd9c85de353943ac2593781bad12ca70787db" Namespace="calico-system" Pod="calico-kube-controllers-cddc95b58-6t6z7" WorkloadEndpoint="172--233--214--103-k8s-calico--kube--controllers--cddc95b58--6t6z7-eth0" Aug 13 01:15:29.168370 containerd[1550]: 2025-08-13 01:15:29.150 [INFO][5994] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3081300b551bd4cc5ef03ea88a4fd9c85de353943ac2593781bad12ca70787db" Namespace="calico-system" Pod="calico-kube-controllers-cddc95b58-6t6z7" WorkloadEndpoint="172--233--214--103-k8s-calico--kube--controllers--cddc95b58--6t6z7-eth0" Aug 13 01:15:29.168370 containerd[1550]: 2025-08-13 01:15:29.150 
[INFO][5994] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3081300b551bd4cc5ef03ea88a4fd9c85de353943ac2593781bad12ca70787db" Namespace="calico-system" Pod="calico-kube-controllers-cddc95b58-6t6z7" WorkloadEndpoint="172--233--214--103-k8s-calico--kube--controllers--cddc95b58--6t6z7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--214--103-k8s-calico--kube--controllers--cddc95b58--6t6z7-eth0", GenerateName:"calico-kube-controllers-cddc95b58-", Namespace:"calico-system", SelfLink:"", UID:"2dab385f-2367-4e01-8d78-2247bcba7bcc", ResourceVersion:"873", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 12, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"cddc95b58", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-214-103", ContainerID:"3081300b551bd4cc5ef03ea88a4fd9c85de353943ac2593781bad12ca70787db", Pod:"calico-kube-controllers-cddc95b58-6t6z7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.26.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie826c3d2c8f", MAC:"1a:3d:58:df:b0:d8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:15:29.168370 containerd[1550]: 2025-08-13 01:15:29.161 [INFO][5994] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3081300b551bd4cc5ef03ea88a4fd9c85de353943ac2593781bad12ca70787db" Namespace="calico-system" Pod="calico-kube-controllers-cddc95b58-6t6z7" WorkloadEndpoint="172--233--214--103-k8s-calico--kube--controllers--cddc95b58--6t6z7-eth0" Aug 13 01:15:29.204919 containerd[1550]: time="2025-08-13T01:15:29.203822944Z" level=info msg="connecting to shim 3081300b551bd4cc5ef03ea88a4fd9c85de353943ac2593781bad12ca70787db" address="unix:///run/containerd/s/062337958d3c5eabee4f7fff584bff87d748920c968c54e02389ac71d715d7d2" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:15:29.243693 systemd[1]: Started cri-containerd-3081300b551bd4cc5ef03ea88a4fd9c85de353943ac2593781bad12ca70787db.scope - libcontainer container 3081300b551bd4cc5ef03ea88a4fd9c85de353943ac2593781bad12ca70787db. 
Aug 13 01:15:29.298118 containerd[1550]: time="2025-08-13T01:15:29.298091081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-cddc95b58-6t6z7,Uid:2dab385f-2367-4e01-8d78-2247bcba7bcc,Namespace:calico-system,Attempt:0,} returns sandbox id \"3081300b551bd4cc5ef03ea88a4fd9c85de353943ac2593781bad12ca70787db\"" Aug 13 01:15:29.302293 containerd[1550]: time="2025-08-13T01:15:29.302275369Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 01:15:29.727689 kubelet[2722]: E0813 01:15:29.727635 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:15:29.728348 kubelet[2722]: E0813 01:15:29.728310 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": ErrImagePull: failed to pull and unpack image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/92/fs/coredns: no space left on device\"" pod="kube-system/coredns-674b8bbfcf-p259x" podUID="a5b0b8ae-a381-43cc-8adc-4e3ee01749bd" Aug 13 01:15:30.097563 kubelet[2722]: I0813 01:15:30.097234 2722 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:15:30.097563 kubelet[2722]: I0813 01:15:30.097286 2722 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:15:30.099829 kubelet[2722]: I0813 01:15:30.099771 2722 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:15:30.117107 kubelet[2722]: I0813 01:15:30.117088 2722 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:15:30.117241 kubelet[2722]: I0813 01:15:30.117220 2722 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-674b8bbfcf-fgsjn","calico-system/calico-kube-controllers-cddc95b58-6t6z7","kube-system/coredns-674b8bbfcf-p259x","calico-system/calico-typha-55bf5cd98c-8lqpc","calico-system/calico-node-hq29b","kube-system/kube-controller-manager-172-233-214-103","kube-system/kube-proxy-tb5sq","calico-system/csi-node-driver-l7lv4","kube-system/kube-apiserver-172-233-214-103","kube-system/kube-scheduler-172-233-214-103"] Aug 13 01:15:30.117335 kubelet[2722]: E0813 01:15:30.117250 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:15:30.117335 kubelet[2722]: E0813 01:15:30.117259 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:15:30.117335 kubelet[2722]: E0813 01:15:30.117266 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:15:30.117335 kubelet[2722]: E0813 01:15:30.117276 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-55bf5cd98c-8lqpc" Aug 13 01:15:30.117335 kubelet[2722]: E0813 01:15:30.117284 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-hq29b" Aug 13 01:15:30.117335 kubelet[2722]: E0813 
01:15:30.117291 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-214-103" Aug 13 01:15:30.117335 kubelet[2722]: E0813 01:15:30.117333 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-tb5sq" Aug 13 01:15:30.117335 kubelet[2722]: E0813 01:15:30.117345 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:15:30.117335 kubelet[2722]: E0813 01:15:30.117353 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-214-103" Aug 13 01:15:30.117663 kubelet[2722]: E0813 01:15:30.117360 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-214-103" Aug 13 01:15:30.117663 kubelet[2722]: I0813 01:15:30.117370 2722 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:15:30.488796 containerd[1550]: time="2025-08-13T01:15:30.488743037Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Aug 13 01:15:30.489657 containerd[1550]: time="2025-08-13T01:15:30.489018396Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/94/fs/usr/bin/kube-controllers: no space left on device" Aug 13 01:15:30.490145 kubelet[2722]: E0813 01:15:30.490068 2722 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/94/fs/usr/bin/kube-controllers: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 01:15:30.490145 kubelet[2722]: E0813 01:15:30.490117 2722 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/94/fs/usr/bin/kube-controllers: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 01:15:30.490630 kubelet[2722]: E0813 01:15:30.490240 2722 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hcb9j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-cddc95b58-6t6z7_calico-system(2dab385f-2367-4e01-8d78-2247bcba7bcc): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/94/fs/usr/bin/kube-controllers: no space left on device" logger="UnhandledError" Aug 13 01:15:30.491578 kubelet[2722]: E0813 01:15:30.491438 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/94/fs/usr/bin/kube-controllers: no space left on device\"" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" podUID="2dab385f-2367-4e01-8d78-2247bcba7bcc" Aug 13 
01:15:30.730256 kubelet[2722]: E0813 01:15:30.730212 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/94/fs/usr/bin/kube-controllers: no space left on device\"" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" podUID="2dab385f-2367-4e01-8d78-2247bcba7bcc" Aug 13 01:15:30.737109 systemd[1]: Started sshd@16-172.233.214.103:22-147.75.109.163:60100.service - OpenSSH per-connection server daemon (147.75.109.163:60100). Aug 13 01:15:31.058026 kubelet[2722]: E0813 01:15:31.057993 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:15:31.058451 containerd[1550]: time="2025-08-13T01:15:31.058387869Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fgsjn,Uid:27718112-1bb9-402a-89c8-f4890dedf664,Namespace:kube-system,Attempt:0,}" Aug 13 01:15:31.076882 sshd[6074]: Accepted publickey for core from 147.75.109.163 port 60100 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:15:31.080558 sshd-session[6074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:15:31.083793 systemd-networkd[1449]: calie826c3d2c8f: Gained IPv6LL Aug 13 01:15:31.092871 systemd-logind[1522]: New session 17 of user core. Aug 13 01:15:31.095181 systemd[1]: Started session-17.scope - Session 17 of User core. 
Aug 13 01:15:31.161807 systemd-networkd[1449]: calia24ac674ac3: Link UP Aug 13 01:15:31.163163 systemd-networkd[1449]: calia24ac674ac3: Gained carrier Aug 13 01:15:31.182799 containerd[1550]: 2025-08-13 01:15:31.105 [INFO][6078] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--233--214--103-k8s-coredns--674b8bbfcf--fgsjn-eth0 coredns-674b8bbfcf- kube-system 27718112-1bb9-402a-89c8-f4890dedf664 872 0 2025-08-13 01:11:59 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-233-214-103 coredns-674b8bbfcf-fgsjn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calia24ac674ac3 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="d140e07d481e230b6eab8d1d78ee9722cb52792f2caf4ec12885971f46275440" Namespace="kube-system" Pod="coredns-674b8bbfcf-fgsjn" WorkloadEndpoint="172--233--214--103-k8s-coredns--674b8bbfcf--fgsjn-" Aug 13 01:15:31.182799 containerd[1550]: 2025-08-13 01:15:31.105 [INFO][6078] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d140e07d481e230b6eab8d1d78ee9722cb52792f2caf4ec12885971f46275440" Namespace="kube-system" Pod="coredns-674b8bbfcf-fgsjn" WorkloadEndpoint="172--233--214--103-k8s-coredns--674b8bbfcf--fgsjn-eth0" Aug 13 01:15:31.182799 containerd[1550]: 2025-08-13 01:15:31.129 [INFO][6092] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d140e07d481e230b6eab8d1d78ee9722cb52792f2caf4ec12885971f46275440" HandleID="k8s-pod-network.d140e07d481e230b6eab8d1d78ee9722cb52792f2caf4ec12885971f46275440" Workload="172--233--214--103-k8s-coredns--674b8bbfcf--fgsjn-eth0" Aug 13 01:15:31.182799 containerd[1550]: 2025-08-13 01:15:31.129 [INFO][6092] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="d140e07d481e230b6eab8d1d78ee9722cb52792f2caf4ec12885971f46275440" HandleID="k8s-pod-network.d140e07d481e230b6eab8d1d78ee9722cb52792f2caf4ec12885971f46275440" Workload="172--233--214--103-k8s-coredns--674b8bbfcf--fgsjn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024eff0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-233-214-103", "pod":"coredns-674b8bbfcf-fgsjn", "timestamp":"2025-08-13 01:15:31.129226978 +0000 UTC"}, Hostname:"172-233-214-103", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:15:31.182799 containerd[1550]: 2025-08-13 01:15:31.129 [INFO][6092] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:15:31.182799 containerd[1550]: 2025-08-13 01:15:31.129 [INFO][6092] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 01:15:31.182799 containerd[1550]: 2025-08-13 01:15:31.129 [INFO][6092] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-233-214-103' Aug 13 01:15:31.182799 containerd[1550]: 2025-08-13 01:15:31.134 [INFO][6092] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d140e07d481e230b6eab8d1d78ee9722cb52792f2caf4ec12885971f46275440" host="172-233-214-103" Aug 13 01:15:31.182799 containerd[1550]: 2025-08-13 01:15:31.138 [INFO][6092] ipam/ipam.go 394: Looking up existing affinities for host host="172-233-214-103" Aug 13 01:15:31.182799 containerd[1550]: 2025-08-13 01:15:31.141 [INFO][6092] ipam/ipam.go 511: Trying affinity for 192.168.26.128/26 host="172-233-214-103" Aug 13 01:15:31.182799 containerd[1550]: 2025-08-13 01:15:31.142 [INFO][6092] ipam/ipam.go 158: Attempting to load block cidr=192.168.26.128/26 host="172-233-214-103" Aug 13 01:15:31.182799 containerd[1550]: 2025-08-13 01:15:31.144 [INFO][6092] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.26.128/26 host="172-233-214-103" Aug 13 01:15:31.182799 containerd[1550]: 2025-08-13 01:15:31.144 [INFO][6092] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.26.128/26 handle="k8s-pod-network.d140e07d481e230b6eab8d1d78ee9722cb52792f2caf4ec12885971f46275440" host="172-233-214-103" Aug 13 01:15:31.182799 containerd[1550]: 2025-08-13 01:15:31.145 [INFO][6092] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.d140e07d481e230b6eab8d1d78ee9722cb52792f2caf4ec12885971f46275440 Aug 13 01:15:31.182799 containerd[1550]: 2025-08-13 01:15:31.149 [INFO][6092] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.26.128/26 handle="k8s-pod-network.d140e07d481e230b6eab8d1d78ee9722cb52792f2caf4ec12885971f46275440" host="172-233-214-103" Aug 13 01:15:31.182799 containerd[1550]: 2025-08-13 01:15:31.153 [INFO][6092] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.26.132/26] block=192.168.26.128/26 handle="k8s-pod-network.d140e07d481e230b6eab8d1d78ee9722cb52792f2caf4ec12885971f46275440" host="172-233-214-103" Aug 13 01:15:31.182799 containerd[1550]: 2025-08-13 01:15:31.153 [INFO][6092] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.26.132/26] handle="k8s-pod-network.d140e07d481e230b6eab8d1d78ee9722cb52792f2caf4ec12885971f46275440" host="172-233-214-103" Aug 13 01:15:31.182799 containerd[1550]: 2025-08-13 01:15:31.153 [INFO][6092] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 01:15:31.182799 containerd[1550]: 2025-08-13 01:15:31.153 [INFO][6092] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.26.132/26] IPv6=[] ContainerID="d140e07d481e230b6eab8d1d78ee9722cb52792f2caf4ec12885971f46275440" HandleID="k8s-pod-network.d140e07d481e230b6eab8d1d78ee9722cb52792f2caf4ec12885971f46275440" Workload="172--233--214--103-k8s-coredns--674b8bbfcf--fgsjn-eth0" Aug 13 01:15:31.183246 containerd[1550]: 2025-08-13 01:15:31.157 [INFO][6078] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d140e07d481e230b6eab8d1d78ee9722cb52792f2caf4ec12885971f46275440" Namespace="kube-system" Pod="coredns-674b8bbfcf-fgsjn" WorkloadEndpoint="172--233--214--103-k8s-coredns--674b8bbfcf--fgsjn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--214--103-k8s-coredns--674b8bbfcf--fgsjn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"27718112-1bb9-402a-89c8-f4890dedf664", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 11, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-214-103", ContainerID:"", Pod:"coredns-674b8bbfcf-fgsjn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia24ac674ac3", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:15:31.183246 containerd[1550]: 2025-08-13 01:15:31.157 [INFO][6078] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.26.132/32] ContainerID="d140e07d481e230b6eab8d1d78ee9722cb52792f2caf4ec12885971f46275440" Namespace="kube-system" Pod="coredns-674b8bbfcf-fgsjn" WorkloadEndpoint="172--233--214--103-k8s-coredns--674b8bbfcf--fgsjn-eth0" Aug 13 01:15:31.183246 containerd[1550]: 2025-08-13 01:15:31.157 [INFO][6078] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calia24ac674ac3 ContainerID="d140e07d481e230b6eab8d1d78ee9722cb52792f2caf4ec12885971f46275440" Namespace="kube-system" Pod="coredns-674b8bbfcf-fgsjn" WorkloadEndpoint="172--233--214--103-k8s-coredns--674b8bbfcf--fgsjn-eth0" Aug 13 01:15:31.183246 containerd[1550]: 2025-08-13 01:15:31.163 [INFO][6078] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d140e07d481e230b6eab8d1d78ee9722cb52792f2caf4ec12885971f46275440" Namespace="kube-system" Pod="coredns-674b8bbfcf-fgsjn" 
WorkloadEndpoint="172--233--214--103-k8s-coredns--674b8bbfcf--fgsjn-eth0" Aug 13 01:15:31.183246 containerd[1550]: 2025-08-13 01:15:31.164 [INFO][6078] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d140e07d481e230b6eab8d1d78ee9722cb52792f2caf4ec12885971f46275440" Namespace="kube-system" Pod="coredns-674b8bbfcf-fgsjn" WorkloadEndpoint="172--233--214--103-k8s-coredns--674b8bbfcf--fgsjn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--214--103-k8s-coredns--674b8bbfcf--fgsjn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"27718112-1bb9-402a-89c8-f4890dedf664", ResourceVersion:"872", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 11, 59, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-214-103", ContainerID:"d140e07d481e230b6eab8d1d78ee9722cb52792f2caf4ec12885971f46275440", Pod:"coredns-674b8bbfcf-fgsjn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.26.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calia24ac674ac3", MAC:"4e:8b:39:78:c6:9c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:15:31.183246 containerd[1550]: 2025-08-13 01:15:31.176 [INFO][6078] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d140e07d481e230b6eab8d1d78ee9722cb52792f2caf4ec12885971f46275440" Namespace="kube-system" Pod="coredns-674b8bbfcf-fgsjn" WorkloadEndpoint="172--233--214--103-k8s-coredns--674b8bbfcf--fgsjn-eth0" Aug 13 01:15:31.220164 containerd[1550]: time="2025-08-13T01:15:31.220102253Z" level=info msg="connecting to shim d140e07d481e230b6eab8d1d78ee9722cb52792f2caf4ec12885971f46275440" address="unix:///run/containerd/s/b3f7cada4c052050ca2cf19c2e0c0bbb620654ae5d9b87acf6dc424c7cbbe76d" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:15:31.249035 systemd[1]: Started cri-containerd-d140e07d481e230b6eab8d1d78ee9722cb52792f2caf4ec12885971f46275440.scope - libcontainer container d140e07d481e230b6eab8d1d78ee9722cb52792f2caf4ec12885971f46275440. 
Aug 13 01:15:31.324522 containerd[1550]: time="2025-08-13T01:15:31.324179503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-fgsjn,Uid:27718112-1bb9-402a-89c8-f4890dedf664,Namespace:kube-system,Attempt:0,} returns sandbox id \"d140e07d481e230b6eab8d1d78ee9722cb52792f2caf4ec12885971f46275440\"" Aug 13 01:15:31.328474 kubelet[2722]: E0813 01:15:31.328292 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:15:31.329850 containerd[1550]: time="2025-08-13T01:15:31.329803998Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Aug 13 01:15:31.423331 sshd[6089]: Connection closed by 147.75.109.163 port 60100 Aug 13 01:15:31.422485 sshd-session[6074]: pam_unix(sshd:session): session closed for user core Aug 13 01:15:31.428727 systemd[1]: sshd@16-172.233.214.103:22-147.75.109.163:60100.service: Deactivated successfully. Aug 13 01:15:31.431739 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 01:15:31.432778 systemd-logind[1522]: Session 17 logged out. Waiting for processes to exit. Aug 13 01:15:31.435582 systemd-logind[1522]: Removed session 17. Aug 13 01:15:32.075793 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2764267306.mount: Deactivated successfully. Aug 13 01:15:32.843672 containerd[1550]: time="2025-08-13T01:15:32.843585384Z" level=error msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.12.0\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/108/fs/coredns: no space left on device" Aug 13 01:15:32.844628 containerd[1550]: time="2025-08-13T01:15:32.843687733Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Aug 13 01:15:32.844664 kubelet[2722]: E0813 01:15:32.844114 2722 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.12.0\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/108/fs/coredns: no space left on device" image="registry.k8s.io/coredns/coredns:v1.12.0" Aug 13 01:15:32.844664 kubelet[2722]: E0813 01:15:32.844165 2722 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.12.0\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/108/fs/coredns: no space left on device" image="registry.k8s.io/coredns/coredns:v1.12.0" Aug 13 01:15:32.845617 kubelet[2722]: E0813 01:15:32.845044 2722 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:coredns,Image:registry.k8s.io/coredns/coredns:v1.12.0,Command:[],Args:[-conf 
/etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{73400320 0} {} 70Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnhgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod coredns-674b8bbfcf-fgsjn_kube-system(27718112-1bb9-402a-89c8-f4890dedf664): ErrImagePull: failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.12.0\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/108/fs/coredns: no space left on device" logger="UnhandledError" Aug 13 01:15:32.847447 kubelet[2722]: E0813 01:15:32.847408 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with ErrImagePull: \"failed to pull and unpack image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/108/fs/coredns: no space left on device\"" pod="kube-system/coredns-674b8bbfcf-fgsjn" podUID="27718112-1bb9-402a-89c8-f4890dedf664" Aug 13 01:15:33.195189 systemd-networkd[1449]: calia24ac674ac3: Gained IPv6LL Aug 13 01:15:33.738913 kubelet[2722]: E0813 01:15:33.738443 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:15:33.739397 kubelet[2722]: E0813 01:15:33.739359 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"coredns\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": ErrImagePull: failed to pull and unpack image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/108/fs/coredns: no space left on device\"" pod="kube-system/coredns-674b8bbfcf-fgsjn" podUID="27718112-1bb9-402a-89c8-f4890dedf664" Aug 13 01:15:36.494492 systemd[1]: Started sshd@17-172.233.214.103:22-147.75.109.163:60108.service - OpenSSH per-connection server daemon (147.75.109.163:60108). Aug 13 01:15:36.849721 sshd[6232]: Accepted publickey for core from 147.75.109.163 port 60108 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:15:36.853213 sshd-session[6232]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:15:36.869411 systemd-logind[1522]: New session 18 of user core. Aug 13 01:15:36.875441 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 13 01:15:37.176393 sshd[6234]: Connection closed by 147.75.109.163 port 60108 Aug 13 01:15:37.178079 sshd-session[6232]: pam_unix(sshd:session): session closed for user core Aug 13 01:15:37.182894 systemd-logind[1522]: Session 18 logged out. Waiting for processes to exit. Aug 13 01:15:37.183424 systemd[1]: sshd@17-172.233.214.103:22-147.75.109.163:60108.service: Deactivated successfully. Aug 13 01:15:37.187032 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 01:15:37.188818 systemd-logind[1522]: Removed session 18. Aug 13 01:15:40.138467 kubelet[2722]: I0813 01:15:40.138405 2722 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:15:40.138467 kubelet[2722]: I0813 01:15:40.138468 2722 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:15:40.142632 kubelet[2722]: I0813 01:15:40.142582 2722 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:15:40.162579 kubelet[2722]: I0813 01:15:40.162532 2722 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:15:40.162694 kubelet[2722]: I0813 01:15:40.162681 2722 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-674b8bbfcf-p259x","kube-system/coredns-674b8bbfcf-fgsjn","calico-system/calico-kube-controllers-cddc95b58-6t6z7","calico-system/calico-typha-55bf5cd98c-8lqpc","calico-system/calico-node-hq29b","kube-system/kube-controller-manager-172-233-214-103","kube-system/kube-proxy-tb5sq","kube-system/kube-apiserver-172-233-214-103","calico-system/csi-node-driver-l7lv4","kube-system/kube-scheduler-172-233-214-103"] Aug 13 01:15:40.162824 kubelet[2722]: E0813 01:15:40.162710 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:15:40.162824 kubelet[2722]: E0813 01:15:40.162718 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:15:40.162824 kubelet[2722]: E0813 01:15:40.162725 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:15:40.162824 kubelet[2722]: E0813 01:15:40.162735 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" 
pod="calico-system/calico-typha-55bf5cd98c-8lqpc" Aug 13 01:15:40.162824 kubelet[2722]: E0813 01:15:40.162744 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-hq29b" Aug 13 01:15:40.162824 kubelet[2722]: E0813 01:15:40.162753 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-214-103" Aug 13 01:15:40.162824 kubelet[2722]: E0813 01:15:40.162760 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-tb5sq" Aug 13 01:15:40.162824 kubelet[2722]: E0813 01:15:40.162768 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-214-103" Aug 13 01:15:40.162824 kubelet[2722]: E0813 01:15:40.162790 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:15:40.162824 kubelet[2722]: E0813 01:15:40.162797 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-214-103" Aug 13 01:15:40.162824 kubelet[2722]: I0813 01:15:40.162805 2722 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:15:42.240070 systemd[1]: Started sshd@18-172.233.214.103:22-147.75.109.163:41218.service - OpenSSH per-connection server daemon (147.75.109.163:41218). Aug 13 01:15:42.577453 sshd[6251]: Accepted publickey for core from 147.75.109.163 port 41218 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:15:42.580045 sshd-session[6251]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:15:42.586130 systemd-logind[1522]: New session 19 of user core. Aug 13 01:15:42.591194 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 13 01:15:42.935889 sshd[6258]: Connection closed by 147.75.109.163 port 41218 Aug 13 01:15:42.936937 sshd-session[6251]: pam_unix(sshd:session): session closed for user core Aug 13 01:15:42.944483 systemd[1]: sshd@18-172.233.214.103:22-147.75.109.163:41218.service: Deactivated successfully. Aug 13 01:15:42.947801 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 01:15:42.953012 systemd-logind[1522]: Session 19 logged out. Waiting for processes to exit. Aug 13 01:15:42.954996 systemd-logind[1522]: Removed session 19. Aug 13 01:15:44.069479 kubelet[2722]: E0813 01:15:44.069038 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:15:44.079504 containerd[1550]: time="2025-08-13T01:15:44.078961106Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Aug 13 01:15:44.815155 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount832537505.mount: Deactivated successfully. 
Aug 13 01:15:45.725418 containerd[1550]: time="2025-08-13T01:15:45.725329115Z" level=error msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.12.0\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/121/fs/coredns: no space left on device" Aug 13 01:15:45.726151 containerd[1550]: time="2025-08-13T01:15:45.725449534Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Aug 13 01:15:45.726191 kubelet[2722]: E0813 01:15:45.725680 2722 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.12.0\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/121/fs/coredns: no space left on device" image="registry.k8s.io/coredns/coredns:v1.12.0" Aug 13 01:15:45.726191 kubelet[2722]: E0813 01:15:45.725746 2722 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.12.0\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/121/fs/coredns: no space left on device" image="registry.k8s.io/coredns/coredns:v1.12.0" Aug 13 01:15:45.726191 kubelet[2722]: E0813 01:15:45.726053 2722 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:coredns,Image:registry.k8s.io/coredns/coredns:v1.12.0,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{73400320 0} {} 70Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l85x2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod coredns-674b8bbfcf-p259x_kube-system(a5b0b8ae-a381-43cc-8adc-4e3ee01749bd): ErrImagePull: failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.12.0\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/121/fs/coredns: no space left on device" logger="UnhandledError" Aug 13 01:15:45.726851 containerd[1550]: time="2025-08-13T01:15:45.726446863Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 01:15:45.728101 kubelet[2722]: E0813 01:15:45.728031 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with ErrImagePull: \"failed to pull and unpack image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/121/fs/coredns: no space left on device\"" pod="kube-system/coredns-674b8bbfcf-p259x" podUID="a5b0b8ae-a381-43cc-8adc-4e3ee01749bd" Aug 13 01:15:46.884860 containerd[1550]: time="2025-08-13T01:15:46.884795296Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/122/fs/usr/bin/kube-controllers: no space left on device" Aug 13 01:15:46.885416 containerd[1550]: time="2025-08-13T01:15:46.884891936Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Aug 13 01:15:46.885448 kubelet[2722]: E0813 01:15:46.885096 2722 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/122/fs/usr/bin/kube-controllers: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 01:15:46.885448 kubelet[2722]: E0813 01:15:46.885166 2722 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write 
/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/122/fs/usr/bin/kube-controllers: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 01:15:46.886034 kubelet[2722]: E0813 01:15:46.885941 2722 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hcb9j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-cddc95b58-6t6z7_calico-system(2dab385f-2367-4e01-8d78-2247bcba7bcc): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/122/fs/usr/bin/kube-controllers: no space left on device" logger="UnhandledError" Aug 13 01:15:46.887162 kubelet[2722]: E0813 01:15:46.887137 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to extract layer 
sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/122/fs/usr/bin/kube-controllers: no space left on device\"" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" podUID="2dab385f-2367-4e01-8d78-2247bcba7bcc" Aug 13 01:15:46.904499 containerd[1550]: time="2025-08-13T01:15:46.904435525Z" level=error msg="Fail to write \"stdout\" log to log file \"/var/log/pods/calico-system_calico-node-hq29b_3c0f3b86-7d63-44df-843e-763eb95a8b94/calico-node/0.log\"" error="write /var/log/pods/calico-system_calico-node-hq29b_3c0f3b86-7d63-44df-843e-763eb95a8b94/calico-node/0.log: no space left on device" Aug 13 01:15:48.003952 systemd[1]: Started sshd@19-172.233.214.103:22-147.75.109.163:41220.service - OpenSSH per-connection server daemon (147.75.109.163:41220). Aug 13 01:15:48.058866 kubelet[2722]: E0813 01:15:48.057830 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:15:48.060358 containerd[1550]: time="2025-08-13T01:15:48.060326565Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Aug 13 01:15:48.344489 sshd[6326]: Accepted publickey for core from 147.75.109.163 port 41220 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:15:48.346610 sshd-session[6326]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:15:48.355232 systemd-logind[1522]: New session 20 of user core. Aug 13 01:15:48.360998 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 13 01:15:48.641275 sshd[6328]: Connection closed by 147.75.109.163 port 41220 Aug 13 01:15:48.642338 sshd-session[6326]: pam_unix(sshd:session): session closed for user core Aug 13 01:15:48.645929 systemd[1]: sshd@19-172.233.214.103:22-147.75.109.163:41220.service: Deactivated successfully. Aug 13 01:15:48.649563 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 01:15:48.649676 systemd-logind[1522]: Session 20 logged out. Waiting for processes to exit. Aug 13 01:15:48.654417 systemd-logind[1522]: Removed session 20. Aug 13 01:15:48.796093 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3623849008.mount: Deactivated successfully. 
Aug 13 01:15:49.566989 containerd[1550]: time="2025-08-13T01:15:49.566932098Z" level=error msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.12.0\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/135/fs/coredns: no space left on device" Aug 13 01:15:49.567574 containerd[1550]: time="2025-08-13T01:15:49.567033138Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Aug 13 01:15:49.567610 kubelet[2722]: E0813 01:15:49.567143 2722 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.12.0\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/135/fs/coredns: no space left on device" image="registry.k8s.io/coredns/coredns:v1.12.0" Aug 13 01:15:49.567610 kubelet[2722]: E0813 01:15:49.567183 2722 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.12.0\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/135/fs/coredns: no space left on device" image="registry.k8s.io/coredns/coredns:v1.12.0" Aug 13 01:15:49.567610 kubelet[2722]: E0813 01:15:49.567479 2722 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:coredns,Image:registry.k8s.io/coredns/coredns:v1.12.0,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{73400320 0} {} 70Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnhgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod coredns-674b8bbfcf-fgsjn_kube-system(27718112-1bb9-402a-89c8-f4890dedf664): ErrImagePull: failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.12.0\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/135/fs/coredns: no space left on device" logger="UnhandledError" Aug 13 01:15:49.569348 kubelet[2722]: E0813 01:15:49.569299 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with ErrImagePull: \"failed to pull and unpack image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/135/fs/coredns: no space left on device\"" pod="kube-system/coredns-674b8bbfcf-fgsjn" podUID="27718112-1bb9-402a-89c8-f4890dedf664" Aug 13 01:15:50.181007 kubelet[2722]: I0813 01:15:50.180969 2722 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:15:50.181007 kubelet[2722]: I0813 01:15:50.181015 2722 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:15:50.182783 kubelet[2722]: I0813 01:15:50.182756 2722 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:15:50.200506 kubelet[2722]: I0813 01:15:50.200479 2722 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:15:50.200654 kubelet[2722]: I0813 01:15:50.200616 2722 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-674b8bbfcf-fgsjn","calico-system/calico-kube-controllers-cddc95b58-6t6z7","kube-system/coredns-674b8bbfcf-p259x","calico-system/calico-typha-55bf5cd98c-8lqpc","calico-system/calico-node-hq29b","kube-system/kube-controller-manager-172-233-214-103","kube-system/kube-proxy-tb5sq","kube-system/kube-apiserver-172-233-214-103","calico-system/csi-node-driver-l7lv4","kube-system/kube-scheduler-172-233-214-103"] Aug 13 01:15:50.200654 kubelet[2722]: E0813 01:15:50.200651 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:15:50.200834 kubelet[2722]: E0813 01:15:50.200659 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:15:50.200834 kubelet[2722]: E0813 01:15:50.200664 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:15:50.200834 
kubelet[2722]: E0813 01:15:50.200672 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-55bf5cd98c-8lqpc" Aug 13 01:15:50.200834 kubelet[2722]: E0813 01:15:50.200679 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-hq29b" Aug 13 01:15:50.200834 kubelet[2722]: E0813 01:15:50.200685 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-214-103" Aug 13 01:15:50.200834 kubelet[2722]: E0813 01:15:50.200692 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-tb5sq" Aug 13 01:15:50.200834 kubelet[2722]: E0813 01:15:50.200697 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-214-103" Aug 13 01:15:50.200834 kubelet[2722]: E0813 01:15:50.200706 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:15:50.200834 kubelet[2722]: E0813 01:15:50.200712 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-214-103" Aug 13 01:15:50.200834 kubelet[2722]: I0813 01:15:50.200719 2722 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:15:51.790700 containerd[1550]: time="2025-08-13T01:15:51.790534838Z" level=info msg="TaskExit event in podsandbox handler container_id:\"912e55c883b9193775a8e2855a8f299720d449e5b9a7826e028fd24a993416d7\" id:\"fd720a524ad170c3b2c5db718a321faed2a27e0d508622ea046f3a67a5a94d85\" pid:6405 exited_at:{seconds:1755047751 nanos:788513732}" Aug 13 01:15:53.711633 systemd[1]: Started sshd@20-172.233.214.103:22-147.75.109.163:54012.service - OpenSSH per-connection server daemon (147.75.109.163:54012). Aug 13 01:15:54.053330 sshd[6418]: Accepted publickey for core from 147.75.109.163 port 54012 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:15:54.057800 sshd-session[6418]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:15:54.069180 systemd-logind[1522]: New session 21 of user core. Aug 13 01:15:54.077832 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 13 01:15:54.378492 sshd[6422]: Connection closed by 147.75.109.163 port 54012 Aug 13 01:15:54.380032 sshd-session[6418]: pam_unix(sshd:session): session closed for user core Aug 13 01:15:54.385517 systemd[1]: sshd@20-172.233.214.103:22-147.75.109.163:54012.service: Deactivated successfully. Aug 13 01:15:54.388515 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 01:15:54.390231 systemd-logind[1522]: Session 21 logged out. Waiting for processes to exit. Aug 13 01:15:54.393805 systemd-logind[1522]: Removed session 21. 
Aug 13 01:15:57.059275 kubelet[2722]: E0813 01:15:57.058717 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/122/fs/usr/bin/kube-controllers: no space left on device\"" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" podUID="2dab385f-2367-4e01-8d78-2247bcba7bcc" Aug 13 01:15:59.057970 kubelet[2722]: E0813 01:15:59.057659 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:15:59.058749 kubelet[2722]: E0813 01:15:59.058727 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": ErrImagePull: failed to pull and unpack image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/121/fs/coredns: no space left on device\"" pod="kube-system/coredns-674b8bbfcf-p259x" podUID="a5b0b8ae-a381-43cc-8adc-4e3ee01749bd" Aug 13 01:15:59.444114 systemd[1]: Started sshd@21-172.233.214.103:22-147.75.109.163:43184.service - OpenSSH per-connection server daemon (147.75.109.163:43184). Aug 13 01:15:59.782753 sshd[6435]: Accepted publickey for core from 147.75.109.163 port 43184 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:15:59.784813 sshd-session[6435]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:15:59.792790 systemd-logind[1522]: New session 22 of user core. Aug 13 01:15:59.798183 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 13 01:16:00.090036 sshd[6437]: Connection closed by 147.75.109.163 port 43184 Aug 13 01:16:00.092435 sshd-session[6435]: pam_unix(sshd:session): session closed for user core Aug 13 01:16:00.098719 systemd-logind[1522]: Session 22 logged out. Waiting for processes to exit. Aug 13 01:16:00.098799 systemd[1]: sshd@21-172.233.214.103:22-147.75.109.163:43184.service: Deactivated successfully. Aug 13 01:16:00.101420 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 01:16:00.103439 systemd-logind[1522]: Removed session 22. 
Aug 13 01:16:00.224134 kubelet[2722]: I0813 01:16:00.224104 2722 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:16:00.224134 kubelet[2722]: I0813 01:16:00.224145 2722 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:16:00.226784 kubelet[2722]: I0813 01:16:00.226758 2722 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:16:00.240918 kubelet[2722]: I0813 01:16:00.240731 2722 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:16:00.240918 kubelet[2722]: I0813 01:16:00.240842 2722 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-674b8bbfcf-p259x","kube-system/coredns-674b8bbfcf-fgsjn","calico-system/calico-kube-controllers-cddc95b58-6t6z7","calico-system/calico-typha-55bf5cd98c-8lqpc","calico-system/calico-node-hq29b","kube-system/kube-controller-manager-172-233-214-103","kube-system/kube-proxy-tb5sq","kube-system/kube-apiserver-172-233-214-103","calico-system/csi-node-driver-l7lv4","kube-system/kube-scheduler-172-233-214-103"] Aug 13 01:16:00.240918 kubelet[2722]: E0813 01:16:00.240869 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:16:00.240918 kubelet[2722]: E0813 01:16:00.240877 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:16:00.240918 kubelet[2722]: E0813 01:16:00.240882 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:16:00.240918 kubelet[2722]: E0813 01:16:00.240891 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-55bf5cd98c-8lqpc" Aug 13 01:16:00.241232 kubelet[2722]: E0813 01:16:00.241169 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-hq29b" Aug 13 01:16:00.241232 kubelet[2722]: E0813 01:16:00.241183 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-214-103" Aug 13 01:16:00.241232 kubelet[2722]: E0813 01:16:00.241190 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-tb5sq" Aug 13 01:16:00.241232 kubelet[2722]: E0813 01:16:00.241197 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-214-103" Aug 13 01:16:00.241232 kubelet[2722]: E0813 01:16:00.241208 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:16:00.241232 kubelet[2722]: E0813 01:16:00.241216 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-214-103" Aug 13 01:16:00.241416 kubelet[2722]: I0813 01:16:00.241335 2722 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:16:04.067930 kubelet[2722]: E0813 01:16:04.067445 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:16:04.070469 kubelet[2722]: E0813 01:16:04.070285 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"coredns\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": ErrImagePull: failed to pull and unpack image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/135/fs/coredns: no space left on device\"" pod="kube-system/coredns-674b8bbfcf-fgsjn" podUID="27718112-1bb9-402a-89c8-f4890dedf664" Aug 13 01:16:05.059291 kubelet[2722]: E0813 01:16:05.059236 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:16:05.154345 systemd[1]: Started sshd@22-172.233.214.103:22-147.75.109.163:43192.service - OpenSSH per-connection server daemon (147.75.109.163:43192). Aug 13 01:16:05.502524 sshd[6457]: Accepted publickey for core from 147.75.109.163 port 43192 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:16:05.504022 sshd-session[6457]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:16:05.510797 systemd-logind[1522]: New session 23 of user core. Aug 13 01:16:05.518044 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 13 01:16:05.823447 sshd[6459]: Connection closed by 147.75.109.163 port 43192 Aug 13 01:16:05.825507 sshd-session[6457]: pam_unix(sshd:session): session closed for user core Aug 13 01:16:05.831167 systemd[1]: sshd@22-172.233.214.103:22-147.75.109.163:43192.service: Deactivated successfully. Aug 13 01:16:05.833888 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 01:16:05.835483 systemd-logind[1522]: Session 23 logged out. Waiting for processes to exit. Aug 13 01:16:05.837468 systemd-logind[1522]: Removed session 23. 
Aug 13 01:16:08.062465 containerd[1550]: time="2025-08-13T01:16:08.062136316Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 01:16:09.029601 containerd[1550]: time="2025-08-13T01:16:09.029542248Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/136/fs/usr/bin/kube-controllers: no space left on device" Aug 13 01:16:09.029968 containerd[1550]: time="2025-08-13T01:16:09.029663688Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Aug 13 01:16:09.030012 kubelet[2722]: E0813 01:16:09.029830 2722 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/136/fs/usr/bin/kube-controllers: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 01:16:09.030012 kubelet[2722]: E0813 01:16:09.029876 2722 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/136/fs/usr/bin/kube-controllers: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 01:16:09.031085 kubelet[2722]: E0813 01:16:09.030026 2722 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hcb9j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-cddc95b58-6t6z7_calico-system(2dab385f-2367-4e01-8d78-2247bcba7bcc): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/136/fs/usr/bin/kube-controllers: no space left on device" logger="UnhandledError" Aug 13 01:16:09.031300 kubelet[2722]: E0813 01:16:09.031245 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/136/fs/usr/bin/kube-controllers: no space left on device\"" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" podUID="2dab385f-2367-4e01-8d78-2247bcba7bcc" Aug 13 01:16:09.057838 kubelet[2722]: E0813 01:16:09.057817 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:16:09.058131 kubelet[2722]: E0813 01:16:09.058116 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:16:10.058862 kubelet[2722]: E0813 01:16:10.058824 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:16:10.264425 kubelet[2722]: I0813 01:16:10.264389 2722 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:16:10.264425 kubelet[2722]: I0813 01:16:10.264437 2722 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:16:10.267212 kubelet[2722]: I0813 01:16:10.267199 2722 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:16:10.281304 kubelet[2722]: I0813 01:16:10.281282 2722 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:16:10.281399 
kubelet[2722]: I0813 01:16:10.281382 2722 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-674b8bbfcf-fgsjn","calico-system/calico-kube-controllers-cddc95b58-6t6z7","kube-system/coredns-674b8bbfcf-p259x","calico-system/calico-typha-55bf5cd98c-8lqpc","calico-system/calico-node-hq29b","kube-system/kube-controller-manager-172-233-214-103","kube-system/kube-proxy-tb5sq","kube-system/kube-apiserver-172-233-214-103","calico-system/csi-node-driver-l7lv4","kube-system/kube-scheduler-172-233-214-103"] Aug 13 01:16:10.281467 kubelet[2722]: E0813 01:16:10.281412 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:16:10.281467 kubelet[2722]: E0813 01:16:10.281421 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:16:10.281467 kubelet[2722]: E0813 01:16:10.281428 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:16:10.281467 kubelet[2722]: E0813 01:16:10.281436 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-55bf5cd98c-8lqpc" Aug 13 01:16:10.281467 kubelet[2722]: E0813 01:16:10.281444 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-hq29b" Aug 13 01:16:10.281467 kubelet[2722]: E0813 01:16:10.281450 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-214-103" Aug 13 01:16:10.281467 kubelet[2722]: E0813 01:16:10.281459 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-tb5sq" Aug 13 01:16:10.281467 kubelet[2722]: E0813 01:16:10.281466 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-214-103" Aug 13 01:16:10.281612 kubelet[2722]: E0813 01:16:10.281478 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:16:10.281612 kubelet[2722]: E0813 01:16:10.281485 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-214-103" Aug 13 01:16:10.281612 kubelet[2722]: I0813 01:16:10.281492 2722 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:16:10.886020 systemd[1]: Started sshd@23-172.233.214.103:22-147.75.109.163:44598.service - OpenSSH per-connection server daemon (147.75.109.163:44598). Aug 13 01:16:11.226727 sshd[6475]: Accepted publickey for core from 147.75.109.163 port 44598 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:16:11.228190 sshd-session[6475]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:16:11.232687 systemd-logind[1522]: New session 24 of user core. Aug 13 01:16:11.240043 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 13 01:16:11.540372 sshd[6479]: Connection closed by 147.75.109.163 port 44598 Aug 13 01:16:11.542016 sshd-session[6475]: pam_unix(sshd:session): session closed for user core Aug 13 01:16:11.545754 systemd[1]: sshd@23-172.233.214.103:22-147.75.109.163:44598.service: Deactivated successfully. Aug 13 01:16:11.548024 systemd[1]: session-24.scope: Deactivated successfully. 
Aug 13 01:16:11.548891 systemd-logind[1522]: Session 24 logged out. Waiting for processes to exit. Aug 13 01:16:11.550570 systemd-logind[1522]: Removed session 24. Aug 13 01:16:13.058123 kubelet[2722]: E0813 01:16:13.057685 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:16:13.060240 kubelet[2722]: E0813 01:16:13.058851 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:16:13.061844 containerd[1550]: time="2025-08-13T01:16:13.061685791Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Aug 13 01:16:13.892635 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3741160084.mount: Deactivated successfully. Aug 13 01:16:14.695218 containerd[1550]: time="2025-08-13T01:16:14.695060522Z" level=error msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.12.0\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/149/fs/coredns: no space left on device" Aug 13 01:16:14.696432 containerd[1550]: time="2025-08-13T01:16:14.695087182Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Aug 13 01:16:14.696484 kubelet[2722]: E0813 01:16:14.695574 2722 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.12.0\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/149/fs/coredns: no space left on device" image="registry.k8s.io/coredns/coredns:v1.12.0" Aug 13 01:16:14.696484 kubelet[2722]: E0813 01:16:14.695620 2722 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.12.0\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/149/fs/coredns: no space left on device" image="registry.k8s.io/coredns/coredns:v1.12.0" Aug 13 01:16:14.696484 kubelet[2722]: E0813 01:16:14.695758 2722 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:coredns,Image:registry.k8s.io/coredns/coredns:v1.12.0,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{73400320 0} {} 70Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l85x2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod coredns-674b8bbfcf-p259x_kube-system(a5b0b8ae-a381-43cc-8adc-4e3ee01749bd): ErrImagePull: failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.12.0\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/149/fs/coredns: no space left on device" logger="UnhandledError" Aug 13 01:16:14.697390 kubelet[2722]: E0813 01:16:14.697360 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with ErrImagePull: \"failed to pull and unpack image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/149/fs/coredns: no space left on device\"" pod="kube-system/coredns-674b8bbfcf-p259x" podUID="a5b0b8ae-a381-43cc-8adc-4e3ee01749bd" Aug 13 01:16:16.058809 kubelet[2722]: E0813 01:16:16.057936 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:16:16.060006 containerd[1550]: time="2025-08-13T01:16:16.059958853Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Aug 13 01:16:16.607974 systemd[1]: Started sshd@24-172.233.214.103:22-147.75.109.163:44604.service - OpenSSH per-connection server daemon (147.75.109.163:44604). Aug 13 01:16:16.811376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2743334404.mount: Deactivated successfully. 
Aug 13 01:16:16.945111 sshd[6544]: Accepted publickey for core from 147.75.109.163 port 44604 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:16:16.950627 sshd-session[6544]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:16:16.962391 systemd-logind[1522]: New session 25 of user core. Aug 13 01:16:16.970206 systemd[1]: Started session-25.scope - Session 25 of User core. Aug 13 01:16:17.295765 sshd[6558]: Connection closed by 147.75.109.163 port 44604 Aug 13 01:16:17.296551 sshd-session[6544]: pam_unix(sshd:session): session closed for user core Aug 13 01:16:17.303769 systemd[1]: sshd@24-172.233.214.103:22-147.75.109.163:44604.service: Deactivated successfully. Aug 13 01:16:17.307522 systemd[1]: session-25.scope: Deactivated successfully. Aug 13 01:16:17.309389 systemd-logind[1522]: Session 25 logged out. Waiting for processes to exit. Aug 13 01:16:17.311535 systemd-logind[1522]: Removed session 25. Aug 13 01:16:17.685154 containerd[1550]: time="2025-08-13T01:16:17.684981868Z" level=error msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.12.0\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/162/fs/coredns: no space left on device" Aug 13 01:16:17.685154 containerd[1550]: time="2025-08-13T01:16:17.685063568Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Aug 13 01:16:17.685811 kubelet[2722]: E0813 01:16:17.685298 2722 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.12.0\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/162/fs/coredns: no space left on device" image="registry.k8s.io/coredns/coredns:v1.12.0" Aug 13 01:16:17.685811 kubelet[2722]: E0813 01:16:17.685369 2722 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.12.0\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/162/fs/coredns: no space left on device" image="registry.k8s.io/coredns/coredns:v1.12.0" Aug 13 01:16:17.685811 kubelet[2722]: E0813 01:16:17.685519 2722 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:coredns,Image:registry.k8s.io/coredns/coredns:v1.12.0,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{73400320 0} {} 70Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnhgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod coredns-674b8bbfcf-fgsjn_kube-system(27718112-1bb9-402a-89c8-f4890dedf664): ErrImagePull: failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.12.0\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/162/fs/coredns: no space left on device" logger="UnhandledError" Aug 13 01:16:17.686958 kubelet[2722]: E0813 01:16:17.686849 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with ErrImagePull: \"failed to pull and unpack image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/162/fs/coredns: no space left on device\"" pod="kube-system/coredns-674b8bbfcf-fgsjn" podUID="27718112-1bb9-402a-89c8-f4890dedf664" Aug 13 01:16:20.311563 kubelet[2722]: I0813 01:16:20.311508 2722 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:16:20.311563 kubelet[2722]: I0813 01:16:20.311575 2722 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:16:20.314510 kubelet[2722]: I0813 01:16:20.314492 2722 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:16:20.331818 kubelet[2722]: I0813 01:16:20.331776 2722 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:16:20.332076 kubelet[2722]: I0813 01:16:20.331958 2722 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" 
pods=["kube-system/coredns-674b8bbfcf-fgsjn","calico-system/calico-kube-controllers-cddc95b58-6t6z7","kube-system/coredns-674b8bbfcf-p259x","calico-system/calico-typha-55bf5cd98c-8lqpc","calico-system/calico-node-hq29b","kube-system/kube-controller-manager-172-233-214-103","kube-system/kube-proxy-tb5sq","kube-system/kube-apiserver-172-233-214-103","calico-system/csi-node-driver-l7lv4","kube-system/kube-scheduler-172-233-214-103"] Aug 13 01:16:20.332076 kubelet[2722]: E0813 01:16:20.331996 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:16:20.332076 kubelet[2722]: E0813 01:16:20.332006 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:16:20.332076 kubelet[2722]: E0813 01:16:20.332013 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:16:20.332076 kubelet[2722]: E0813 01:16:20.332026 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-55bf5cd98c-8lqpc" Aug 13 01:16:20.332076 kubelet[2722]: E0813 01:16:20.332037 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-hq29b" Aug 13 01:16:20.332076 kubelet[2722]: E0813 01:16:20.332044 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-214-103" Aug 13 01:16:20.332076 kubelet[2722]: E0813 01:16:20.332053 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-tb5sq" Aug 13 01:16:20.332076 kubelet[2722]: E0813 01:16:20.332060 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-214-103" Aug 13 01:16:20.332076 kubelet[2722]: E0813 01:16:20.332071 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:16:20.332076 kubelet[2722]: E0813 01:16:20.332078 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-214-103" Aug 13 01:16:20.332076 kubelet[2722]: I0813 01:16:20.332088 2722 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:16:21.783734 containerd[1550]: time="2025-08-13T01:16:21.783680883Z" level=info msg="TaskExit event in podsandbox handler container_id:\"912e55c883b9193775a8e2855a8f299720d449e5b9a7826e028fd24a993416d7\" id:\"1e152aa297a3d2f3c57421a4865f029e83925b0323bdb2a226df01ee936b9883\" pid:6622 exited_at:{seconds:1755047781 nanos:782993243}" Aug 13 01:16:22.367496 systemd[1]: Started sshd@25-172.233.214.103:22-147.75.109.163:41188.service - OpenSSH per-connection server daemon (147.75.109.163:41188). Aug 13 01:16:22.715056 sshd[6635]: Accepted publickey for core from 147.75.109.163 port 41188 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:16:22.720958 sshd-session[6635]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:16:22.727947 systemd-logind[1522]: New session 26 of user core. Aug 13 01:16:22.731056 systemd[1]: Started session-26.scope - Session 26 of User core. 
Aug 13 01:16:23.039405 sshd[6637]: Connection closed by 147.75.109.163 port 41188 Aug 13 01:16:23.039268 sshd-session[6635]: pam_unix(sshd:session): session closed for user core Aug 13 01:16:23.048210 systemd[1]: sshd@25-172.233.214.103:22-147.75.109.163:41188.service: Deactivated successfully. Aug 13 01:16:23.048775 systemd-logind[1522]: Session 26 logged out. Waiting for processes to exit. Aug 13 01:16:23.051625 systemd[1]: session-26.scope: Deactivated successfully. Aug 13 01:16:23.056810 systemd-logind[1522]: Removed session 26. Aug 13 01:16:24.067506 kubelet[2722]: E0813 01:16:24.067440 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/136/fs/usr/bin/kube-controllers: no space left on device\"" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" podUID="2dab385f-2367-4e01-8d78-2247bcba7bcc" Aug 13 01:16:28.059014 kubelet[2722]: E0813 01:16:28.058531 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:16:28.061553 kubelet[2722]: E0813 01:16:28.060230 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": ErrImagePull: failed to pull and unpack image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/149/fs/coredns: no space left on device\"" pod="kube-system/coredns-674b8bbfcf-p259x" podUID="a5b0b8ae-a381-43cc-8adc-4e3ee01749bd" Aug 13 01:16:28.104389 systemd[1]: Started sshd@26-172.233.214.103:22-147.75.109.163:44168.service - OpenSSH per-connection server daemon (147.75.109.163:44168). Aug 13 01:16:28.445548 sshd[6650]: Accepted publickey for core from 147.75.109.163 port 44168 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:16:28.448561 sshd-session[6650]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:16:28.455307 systemd-logind[1522]: New session 27 of user core. Aug 13 01:16:28.461015 systemd[1]: Started session-27.scope - Session 27 of User core. Aug 13 01:16:28.790458 sshd[6655]: Connection closed by 147.75.109.163 port 44168 Aug 13 01:16:28.790563 sshd-session[6650]: pam_unix(sshd:session): session closed for user core Aug 13 01:16:28.798843 systemd-logind[1522]: Session 27 logged out. Waiting for processes to exit. Aug 13 01:16:28.801084 systemd[1]: sshd@26-172.233.214.103:22-147.75.109.163:44168.service: Deactivated successfully. Aug 13 01:16:28.803856 systemd[1]: session-27.scope: Deactivated successfully. Aug 13 01:16:28.806549 systemd-logind[1522]: Removed session 27. 
Aug 13 01:16:30.359921 kubelet[2722]: I0813 01:16:30.359873 2722 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:16:30.360345 kubelet[2722]: I0813 01:16:30.359952 2722 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:16:30.363039 kubelet[2722]: I0813 01:16:30.363013 2722 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:16:30.377136 kubelet[2722]: I0813 01:16:30.377076 2722 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:16:30.377339 kubelet[2722]: I0813 01:16:30.377244 2722 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-cddc95b58-6t6z7","kube-system/coredns-674b8bbfcf-p259x","kube-system/coredns-674b8bbfcf-fgsjn","calico-system/calico-typha-55bf5cd98c-8lqpc","calico-system/calico-node-hq29b","kube-system/kube-controller-manager-172-233-214-103","kube-system/kube-proxy-tb5sq","kube-system/kube-apiserver-172-233-214-103","calico-system/csi-node-driver-l7lv4","kube-system/kube-scheduler-172-233-214-103"] Aug 13 01:16:30.377339 kubelet[2722]: E0813 01:16:30.377311 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:16:30.377339 kubelet[2722]: E0813 01:16:30.377328 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:16:30.377339 kubelet[2722]: E0813 01:16:30.377337 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:16:30.377578 kubelet[2722]: E0813 01:16:30.377349 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-55bf5cd98c-8lqpc" Aug 13 01:16:30.377578 kubelet[2722]: E0813 01:16:30.377362 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-hq29b" Aug 13 01:16:30.377578 kubelet[2722]: E0813 01:16:30.377371 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-214-103" Aug 13 01:16:30.377578 kubelet[2722]: E0813 01:16:30.377382 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-tb5sq" Aug 13 01:16:30.377578 kubelet[2722]: E0813 01:16:30.377391 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-214-103" Aug 13 01:16:30.377578 kubelet[2722]: E0813 01:16:30.377404 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:16:30.377578 kubelet[2722]: E0813 01:16:30.377413 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-214-103" Aug 13 01:16:30.377578 kubelet[2722]: I0813 01:16:30.377424 2722 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:16:32.058918 kubelet[2722]: E0813 01:16:32.057944 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:16:32.060474 kubelet[2722]: E0813 01:16:32.060436 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"coredns\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": ErrImagePull: failed to pull and unpack image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/162/fs/coredns: no space left on device\"" pod="kube-system/coredns-674b8bbfcf-fgsjn" podUID="27718112-1bb9-402a-89c8-f4890dedf664" Aug 13 01:16:33.856108 systemd[1]: Started sshd@27-172.233.214.103:22-147.75.109.163:44184.service - OpenSSH per-connection server daemon (147.75.109.163:44184). Aug 13 01:16:34.195919 sshd[6670]: Accepted publickey for core from 147.75.109.163 port 44184 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:16:34.197608 sshd-session[6670]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:16:34.203546 systemd-logind[1522]: New session 28 of user core. Aug 13 01:16:34.207038 systemd[1]: Started session-28.scope - Session 28 of User core. Aug 13 01:16:34.509709 sshd[6672]: Connection closed by 147.75.109.163 port 44184 Aug 13 01:16:34.510196 sshd-session[6670]: pam_unix(sshd:session): session closed for user core Aug 13 01:16:34.515105 systemd-logind[1522]: Session 28 logged out. Waiting for processes to exit. Aug 13 01:16:34.515856 systemd[1]: sshd@27-172.233.214.103:22-147.75.109.163:44184.service: Deactivated successfully. Aug 13 01:16:34.519519 systemd[1]: session-28.scope: Deactivated successfully. Aug 13 01:16:34.522813 systemd-logind[1522]: Removed session 28. Aug 13 01:16:35.060124 kubelet[2722]: E0813 01:16:35.059259 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/136/fs/usr/bin/kube-controllers: no space left on device\"" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" podUID="2dab385f-2367-4e01-8d78-2247bcba7bcc" Aug 13 01:16:39.572090 systemd[1]: Started sshd@28-172.233.214.103:22-147.75.109.163:38858.service - OpenSSH per-connection server daemon (147.75.109.163:38858). Aug 13 01:16:39.913027 sshd[6684]: Accepted publickey for core from 147.75.109.163 port 38858 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:16:39.914365 sshd-session[6684]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:16:39.919582 systemd-logind[1522]: New session 29 of user core. Aug 13 01:16:39.925003 systemd[1]: Started session-29.scope - Session 29 of User core. Aug 13 01:16:40.221483 sshd[6686]: Connection closed by 147.75.109.163 port 38858 Aug 13 01:16:40.223093 sshd-session[6684]: pam_unix(sshd:session): session closed for user core Aug 13 01:16:40.226676 systemd[1]: sshd@28-172.233.214.103:22-147.75.109.163:38858.service: Deactivated successfully. Aug 13 01:16:40.228797 systemd[1]: session-29.scope: Deactivated successfully. Aug 13 01:16:40.229853 systemd-logind[1522]: Session 29 logged out. Waiting for processes to exit. Aug 13 01:16:40.231753 systemd-logind[1522]: Removed session 29. 
Aug 13 01:16:40.398200 kubelet[2722]: I0813 01:16:40.398174 2722 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:16:40.398919 kubelet[2722]: I0813 01:16:40.398301 2722 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:16:40.399602 kubelet[2722]: I0813 01:16:40.399590 2722 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:16:40.412129 kubelet[2722]: I0813 01:16:40.412110 2722 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:16:40.412256 kubelet[2722]: I0813 01:16:40.412229 2722 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-674b8bbfcf-fgsjn","calico-system/calico-kube-controllers-cddc95b58-6t6z7","kube-system/coredns-674b8bbfcf-p259x","calico-system/calico-typha-55bf5cd98c-8lqpc","calico-system/calico-node-hq29b","kube-system/kube-controller-manager-172-233-214-103","kube-system/kube-proxy-tb5sq","kube-system/kube-apiserver-172-233-214-103","calico-system/csi-node-driver-l7lv4","kube-system/kube-scheduler-172-233-214-103"] Aug 13 01:16:40.412345 kubelet[2722]: E0813 01:16:40.412267 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:16:40.412345 kubelet[2722]: E0813 01:16:40.412277 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:16:40.412345 kubelet[2722]: E0813 01:16:40.412283 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:16:40.412345 kubelet[2722]: E0813 01:16:40.412292 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-55bf5cd98c-8lqpc" Aug 13 01:16:40.412345 kubelet[2722]: E0813 01:16:40.412300 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-hq29b" Aug 13 01:16:40.412345 kubelet[2722]: E0813 01:16:40.412307 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-214-103" Aug 13 01:16:40.412345 kubelet[2722]: E0813 01:16:40.412314 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-tb5sq" Aug 13 01:16:40.412345 kubelet[2722]: E0813 01:16:40.412321 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-214-103" Aug 13 01:16:40.412345 kubelet[2722]: E0813 01:16:40.412330 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:16:40.412345 kubelet[2722]: E0813 01:16:40.412337 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-214-103" Aug 13 01:16:40.412345 kubelet[2722]: I0813 01:16:40.412346 2722 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:16:42.058235 kubelet[2722]: E0813 01:16:42.058009 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:16:42.060236 kubelet[2722]: E0813 01:16:42.059813 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"coredns\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": ErrImagePull: failed to pull and unpack image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/149/fs/coredns: no space left on device\"" pod="kube-system/coredns-674b8bbfcf-p259x" podUID="a5b0b8ae-a381-43cc-8adc-4e3ee01749bd" Aug 13 01:16:43.058549 kubelet[2722]: E0813 01:16:43.058246 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:16:43.061927 kubelet[2722]: E0813 01:16:43.059887 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": ErrImagePull: failed to pull and unpack image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/162/fs/coredns: no space left on device\"" pod="kube-system/coredns-674b8bbfcf-fgsjn" podUID="27718112-1bb9-402a-89c8-f4890dedf664" Aug 13 01:16:45.296793 systemd[1]: Started sshd@29-172.233.214.103:22-147.75.109.163:38862.service - OpenSSH per-connection server daemon (147.75.109.163:38862). Aug 13 01:16:45.641022 sshd[6705]: Accepted publickey for core from 147.75.109.163 port 38862 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:16:45.643385 sshd-session[6705]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:16:45.650352 systemd-logind[1522]: New session 30 of user core. Aug 13 01:16:45.655060 systemd[1]: Started session-30.scope - Session 30 of User core. Aug 13 01:16:45.965421 sshd[6707]: Connection closed by 147.75.109.163 port 38862 Aug 13 01:16:45.966135 sshd-session[6705]: pam_unix(sshd:session): session closed for user core Aug 13 01:16:45.971842 systemd-logind[1522]: Session 30 logged out. Waiting for processes to exit. Aug 13 01:16:45.972380 systemd[1]: sshd@29-172.233.214.103:22-147.75.109.163:38862.service: Deactivated successfully. Aug 13 01:16:45.975622 systemd[1]: session-30.scope: Deactivated successfully. Aug 13 01:16:45.984201 systemd-logind[1522]: Removed session 30. 
Aug 13 01:16:49.501711 containerd[1550]: time="2025-08-13T01:16:49.501518775Z" level=warning msg="container event discarded" container=3ba1029d91379fdc42075b6f50d5e113a9faa23d7721323205dcca36b4e6d87e type=CONTAINER_CREATED_EVENT Aug 13 01:16:49.513296 containerd[1550]: time="2025-08-13T01:16:49.513199378Z" level=warning msg="container event discarded" container=3ba1029d91379fdc42075b6f50d5e113a9faa23d7721323205dcca36b4e6d87e type=CONTAINER_STARTED_EVENT Aug 13 01:16:49.530416 containerd[1550]: time="2025-08-13T01:16:49.530374908Z" level=warning msg="container event discarded" container=5620fab08ad904335078b51fa0df0fee22c83b5de80a53cce6f3990cfc2e3a63 type=CONTAINER_CREATED_EVENT Aug 13 01:16:49.567640 containerd[1550]: time="2025-08-13T01:16:49.567543385Z" level=warning msg="container event discarded" container=a81325bae0f4ca129126a3448b2a21b3f1d04a81b9c063a8cf754bb2db92b5d7 type=CONTAINER_CREATED_EVENT Aug 13 01:16:49.567640 containerd[1550]: time="2025-08-13T01:16:49.567599325Z" level=warning msg="container event discarded" container=a81325bae0f4ca129126a3448b2a21b3f1d04a81b9c063a8cf754bb2db92b5d7 type=CONTAINER_STARTED_EVENT Aug 13 01:16:49.567640 containerd[1550]: time="2025-08-13T01:16:49.567609835Z" level=warning msg="container event discarded" container=69422833489f143c5ab0ead4825cb997ea65d2f1bd3d74c6635cf58a4b5f493d type=CONTAINER_CREATED_EVENT Aug 13 01:16:49.567640 containerd[1550]: time="2025-08-13T01:16:49.567616365Z" level=warning msg="container event discarded" container=69422833489f143c5ab0ead4825cb997ea65d2f1bd3d74c6635cf58a4b5f493d type=CONTAINER_STARTED_EVENT Aug 13 01:16:49.597170 containerd[1550]: time="2025-08-13T01:16:49.597051968Z" level=warning msg="container event discarded" container=adf1b5f86017b32e545fa339781f00af34c441e58ec4dd6b6df57708b8aa50f8 type=CONTAINER_CREATED_EVENT Aug 13 01:16:49.611396 containerd[1550]: time="2025-08-13T01:16:49.611351519Z" level=warning msg="container event discarded" container=a827b4fab1567fd044afa9fef69027605ada1093d1778f4ec69b61f82c013c30 type=CONTAINER_CREATED_EVENT Aug 13 01:16:49.641654 containerd[1550]: time="2025-08-13T01:16:49.641588111Z" level=warning msg="container event discarded" container=5620fab08ad904335078b51fa0df0fee22c83b5de80a53cce6f3990cfc2e3a63 type=CONTAINER_STARTED_EVENT Aug 13 01:16:49.721513 containerd[1550]: time="2025-08-13T01:16:49.721420523Z" level=warning msg="container event discarded" container=a827b4fab1567fd044afa9fef69027605ada1093d1778f4ec69b61f82c013c30 type=CONTAINER_STARTED_EVENT Aug 13 01:16:49.737735 containerd[1550]: time="2025-08-13T01:16:49.737683223Z" level=warning msg="container event discarded" container=adf1b5f86017b32e545fa339781f00af34c441e58ec4dd6b6df57708b8aa50f8 type=CONTAINER_STARTED_EVENT Aug 13 01:16:50.061471 containerd[1550]: time="2025-08-13T01:16:50.061397411Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 01:16:50.454502 kubelet[2722]: I0813 01:16:50.454441 2722 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:16:50.454502 kubelet[2722]: I0813 01:16:50.454505 2722 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:16:50.457045 kubelet[2722]: I0813 01:16:50.457022 2722 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:16:50.484675 kubelet[2722]: I0813 01:16:50.484625 2722 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:16:50.485340 kubelet[2722]: I0813 
01:16:50.484987 2722 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-674b8bbfcf-p259x","kube-system/coredns-674b8bbfcf-fgsjn","calico-system/calico-kube-controllers-cddc95b58-6t6z7","calico-system/calico-typha-55bf5cd98c-8lqpc","calico-system/calico-node-hq29b","kube-system/kube-controller-manager-172-233-214-103","kube-system/kube-proxy-tb5sq","kube-system/kube-apiserver-172-233-214-103","calico-system/csi-node-driver-l7lv4","kube-system/kube-scheduler-172-233-214-103"] Aug 13 01:16:50.485340 kubelet[2722]: E0813 01:16:50.485030 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:16:50.485340 kubelet[2722]: E0813 01:16:50.485041 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:16:50.485340 kubelet[2722]: E0813 01:16:50.485049 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:16:50.485340 kubelet[2722]: E0813 01:16:50.485060 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-55bf5cd98c-8lqpc" Aug 13 01:16:50.485340 kubelet[2722]: E0813 01:16:50.485072 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-hq29b" Aug 13 01:16:50.485340 kubelet[2722]: E0813 01:16:50.485081 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-214-103" Aug 13 01:16:50.485340 kubelet[2722]: E0813 01:16:50.485090 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-tb5sq" Aug 13 01:16:50.485340 kubelet[2722]: E0813 01:16:50.485098 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-214-103" Aug 13 01:16:50.485340 kubelet[2722]: E0813 01:16:50.485108 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:16:50.485340 kubelet[2722]: E0813 01:16:50.485116 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-214-103" Aug 13 01:16:50.485340 kubelet[2722]: I0813 01:16:50.485124 2722 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:16:51.030738 systemd[1]: Started sshd@30-172.233.214.103:22-147.75.109.163:49004.service - OpenSSH per-connection server daemon (147.75.109.163:49004). 
Aug 13 01:16:51.231653 containerd[1550]: time="2025-08-13T01:16:51.231479212Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/163/fs/usr/bin/kube-controllers: no space left on device" Aug 13 01:16:51.231653 containerd[1550]: time="2025-08-13T01:16:51.231531922Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Aug 13 01:16:51.232208 kubelet[2722]: E0813 01:16:51.231917 2722 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/163/fs/usr/bin/kube-controllers: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 01:16:51.232208 kubelet[2722]: E0813 01:16:51.232008 2722 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/163/fs/usr/bin/kube-controllers: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 01:16:51.232930 kubelet[2722]: E0813 01:16:51.232540 2722 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hcb9j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-cddc95b58-6t6z7_calico-system(2dab385f-2367-4e01-8d78-2247bcba7bcc): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/163/fs/usr/bin/kube-controllers: no space left on device" logger="UnhandledError" Aug 13 01:16:51.233941 kubelet[2722]: E0813 01:16:51.233821 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/163/fs/usr/bin/kube-controllers: no space left on device\"" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" podUID="2dab385f-2367-4e01-8d78-2247bcba7bcc" Aug 13 01:16:51.275118 containerd[1550]: time="2025-08-13T01:16:51.274174007Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/kube-system_kube-apiserver-172-233-214-103_2dd8b161f65c7eea6bb980d72abd4859/kube-apiserver/0.log\"" error="write /var/log/pods/kube-system_kube-apiserver-172-233-214-103_2dd8b161f65c7eea6bb980d72abd4859/kube-apiserver/0.log: no space left on device" Aug 13 01:16:51.275118 containerd[1550]: time="2025-08-13T01:16:51.274272157Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/kube-system_kube-apiserver-172-233-214-103_2dd8b161f65c7eea6bb980d72abd4859/kube-apiserver/0.log\"" error="write /var/log/pods/kube-system_kube-apiserver-172-233-214-103_2dd8b161f65c7eea6bb980d72abd4859/kube-apiserver/0.log: no space left on device" Aug 13 01:16:51.275118 containerd[1550]: time="2025-08-13T01:16:51.274297417Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/kube-system_kube-apiserver-172-233-214-103_2dd8b161f65c7eea6bb980d72abd4859/kube-apiserver/0.log\"" error="write /var/log/pods/kube-system_kube-apiserver-172-233-214-103_2dd8b161f65c7eea6bb980d72abd4859/kube-apiserver/0.log: no space left on device" Aug 13 01:16:51.275118 containerd[1550]: time="2025-08-13T01:16:51.274320267Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/kube-system_kube-apiserver-172-233-214-103_2dd8b161f65c7eea6bb980d72abd4859/kube-apiserver/0.log\"" error="write /var/log/pods/kube-system_kube-apiserver-172-233-214-103_2dd8b161f65c7eea6bb980d72abd4859/kube-apiserver/0.log: no space left on device" Aug 13 01:16:51.390041 
sshd[6723]: Accepted publickey for core from 147.75.109.163 port 49004 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:16:51.393553 sshd-session[6723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:16:51.401721 systemd-logind[1522]: New session 31 of user core. Aug 13 01:16:51.409797 systemd[1]: Started session-31.scope - Session 31 of User core. Aug 13 01:16:51.744091 sshd[6727]: Connection closed by 147.75.109.163 port 49004 Aug 13 01:16:51.744571 sshd-session[6723]: pam_unix(sshd:session): session closed for user core Aug 13 01:16:51.749451 systemd-logind[1522]: Session 31 logged out. Waiting for processes to exit. Aug 13 01:16:51.752080 systemd[1]: sshd@30-172.233.214.103:22-147.75.109.163:49004.service: Deactivated successfully. Aug 13 01:16:51.754843 systemd[1]: session-31.scope: Deactivated successfully. Aug 13 01:16:51.757628 systemd-logind[1522]: Removed session 31. Aug 13 01:16:51.808091 containerd[1550]: time="2025-08-13T01:16:51.808033973Z" level=info msg="TaskExit event in podsandbox handler container_id:\"912e55c883b9193775a8e2855a8f299720d449e5b9a7826e028fd24a993416d7\" id:\"972b564f7d54abfb9d486fc695575383ceda395dda9bab7658b280a4b7cb4481\" pid:6748 exited_at:{seconds:1755047811 nanos:807483743}" Aug 13 01:16:54.075838 kubelet[2722]: I0813 01:16:54.075204 2722 image_gc_manager.go:391] "Disk usage on image filesystem is over the high threshold, trying to free bytes down to the low threshold" usage=100 highThreshold=85 amountToFree=411531673 lowThreshold=80 Aug 13 01:16:54.075838 kubelet[2722]: E0813 01:16:54.075245 2722 kubelet.go:1596] "Image garbage collection failed multiple times in a row" err="Failed to garbage collect required amount of images. Attempted to free 411531673 bytes, but only found 0 bytes eligible to free." Aug 13 01:16:56.061778 kubelet[2722]: E0813 01:16:56.061745 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:16:56.062584 kubelet[2722]: E0813 01:16:56.062338 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": ErrImagePull: failed to pull and unpack image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/162/fs/coredns: no space left on device\"" pod="kube-system/coredns-674b8bbfcf-fgsjn" podUID="27718112-1bb9-402a-89c8-f4890dedf664" Aug 13 01:16:56.800692 systemd[1]: Started sshd@31-172.233.214.103:22-147.75.109.163:49018.service - OpenSSH per-connection server daemon (147.75.109.163:49018). 
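The image garbage-collection entries above give concrete numbers: usage is reported as 100% of the image filesystem, the high and low thresholds are 85% and 80%, and the kubelet wants to free 411531673 bytes but finds nothing eligible. The Python sketch below reproduces that arithmetic under the assumption that GC aims to bring usage down to the low threshold; the implied filesystem capacity is derived from the log's figures rather than measured, so treat it as illustrative only.

#!/usr/bin/env python3
# Reproduce the amountToFree figure from the kubelet image GC log entry,
# assuming GC tries to bring usage down to the low threshold.
low_threshold_pct = 80          # percent, from the log
usage_pct = 100                 # percent, from the log
amount_to_free = 411_531_673    # bytes, from the log

# With the filesystem reported 100% full, amountToFree = capacity * (usage - low) / 100,
# so the capacity implied by the log's numbers is:
capacity = amount_to_free * 100 / (usage_pct - low_threshold_pct)
used = capacity * usage_pct / 100
bytes_to_free = used - capacity * low_threshold_pct / 100
print(f"implied image filesystem capacity ~= {capacity / 1e9:.2f} GB")
print(f"bytes to free to reach {low_threshold_pct}%: {bytes_to_free:.0f}")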
Aug 13 01:16:57.057729 kubelet[2722]: E0813 01:16:57.057632 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:16:57.058588 containerd[1550]: time="2025-08-13T01:16:57.058561987Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Aug 13 01:16:57.135940 sshd[6775]: Accepted publickey for core from 147.75.109.163 port 49018 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:16:57.137317 sshd-session[6775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:16:57.142940 systemd-logind[1522]: New session 32 of user core. Aug 13 01:16:57.147026 systemd[1]: Started session-32.scope - Session 32 of User core. Aug 13 01:16:57.445924 sshd[6777]: Connection closed by 147.75.109.163 port 49018 Aug 13 01:16:57.446588 sshd-session[6775]: pam_unix(sshd:session): session closed for user core Aug 13 01:16:57.451228 systemd-logind[1522]: Session 32 logged out. Waiting for processes to exit. Aug 13 01:16:57.451384 systemd[1]: sshd@31-172.233.214.103:22-147.75.109.163:49018.service: Deactivated successfully. Aug 13 01:16:57.453273 systemd[1]: session-32.scope: Deactivated successfully. Aug 13 01:16:57.454758 systemd-logind[1522]: Removed session 32. Aug 13 01:16:58.006675 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1352846186.mount: Deactivated successfully. Aug 13 01:16:58.866303 containerd[1550]: time="2025-08-13T01:16:58.866226723Z" level=error msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.12.0\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/176/fs/coredns: no space left on device" Aug 13 01:16:58.866950 containerd[1550]: time="2025-08-13T01:16:58.866322833Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Aug 13 01:16:58.867012 kubelet[2722]: E0813 01:16:58.866596 2722 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.12.0\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/176/fs/coredns: no space left on device" image="registry.k8s.io/coredns/coredns:v1.12.0" Aug 13 01:16:58.867012 kubelet[2722]: E0813 01:16:58.866643 2722 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.12.0\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/176/fs/coredns: no space left on device" image="registry.k8s.io/coredns/coredns:v1.12.0" Aug 13 01:16:58.867012 kubelet[2722]: E0813 01:16:58.866781 2722 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:coredns,Image:registry.k8s.io/coredns/coredns:v1.12.0,Command:[],Args:[-conf 
/etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{73400320 0} {} 70Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l85x2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod coredns-674b8bbfcf-p259x_kube-system(a5b0b8ae-a381-43cc-8adc-4e3ee01749bd): ErrImagePull: failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.12.0\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/176/fs/coredns: no space left on device" logger="UnhandledError" Aug 13 01:16:58.868145 kubelet[2722]: E0813 01:16:58.868078 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with ErrImagePull: \"failed to pull and unpack image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/176/fs/coredns: no space left on device\"" pod="kube-system/coredns-674b8bbfcf-p259x" podUID="a5b0b8ae-a381-43cc-8adc-4e3ee01749bd" Aug 13 01:17:00.279552 containerd[1550]: time="2025-08-13T01:17:00.279475648Z" level=warning msg="container event discarded" container=42adf78598739d12d2f5ef7d6fd4bbb4703c1d7f7a7cc89b6747188980e70497 type=CONTAINER_CREATED_EVENT Aug 13 01:17:00.279552 containerd[1550]: time="2025-08-13T01:17:00.279537588Z" level=warning msg="container event discarded" container=42adf78598739d12d2f5ef7d6fd4bbb4703c1d7f7a7cc89b6747188980e70497 type=CONTAINER_STARTED_EVENT Aug 13 
01:17:00.513874 kubelet[2722]: I0813 01:17:00.513827 2722 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:17:00.513874 kubelet[2722]: I0813 01:17:00.513886 2722 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:17:00.515785 kubelet[2722]: I0813 01:17:00.515676 2722 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:17:00.527683 kubelet[2722]: I0813 01:17:00.527662 2722 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:17:00.527799 kubelet[2722]: I0813 01:17:00.527767 2722 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-674b8bbfcf-fgsjn","calico-system/calico-kube-controllers-cddc95b58-6t6z7","kube-system/coredns-674b8bbfcf-p259x","calico-system/calico-typha-55bf5cd98c-8lqpc","calico-system/calico-node-hq29b","kube-system/kube-controller-manager-172-233-214-103","kube-system/kube-proxy-tb5sq","kube-system/kube-apiserver-172-233-214-103","calico-system/csi-node-driver-l7lv4","kube-system/kube-scheduler-172-233-214-103"] Aug 13 01:17:00.527890 kubelet[2722]: E0813 01:17:00.527810 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:17:00.527890 kubelet[2722]: E0813 01:17:00.527819 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:17:00.527890 kubelet[2722]: E0813 01:17:00.527825 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:17:00.527890 kubelet[2722]: E0813 01:17:00.527835 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-55bf5cd98c-8lqpc" Aug 13 01:17:00.527890 kubelet[2722]: E0813 01:17:00.527845 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-hq29b" Aug 13 01:17:00.527890 kubelet[2722]: E0813 01:17:00.527853 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-214-103" Aug 13 01:17:00.527890 kubelet[2722]: E0813 01:17:00.527861 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-tb5sq" Aug 13 01:17:00.527890 kubelet[2722]: E0813 01:17:00.527868 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-214-103" Aug 13 01:17:00.527890 kubelet[2722]: E0813 01:17:00.527877 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:17:00.527890 kubelet[2722]: E0813 01:17:00.527885 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-214-103" Aug 13 01:17:00.527890 kubelet[2722]: I0813 01:17:00.527910 2722 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:17:00.596555 containerd[1550]: time="2025-08-13T01:17:00.596441344Z" level=warning msg="container event discarded" container=d48c69dbba2bdff219723ad14648f4bc7dc3ea2c8a8efa21048893f4f70ca9d9 type=CONTAINER_CREATED_EVENT Aug 13 01:17:00.596555 containerd[1550]: time="2025-08-13T01:17:00.596503743Z" level=warning msg="container event discarded" 
container=d48c69dbba2bdff219723ad14648f4bc7dc3ea2c8a8efa21048893f4f70ca9d9 type=CONTAINER_STARTED_EVENT Aug 13 01:17:00.619912 containerd[1550]: time="2025-08-13T01:17:00.619833052Z" level=warning msg="container event discarded" container=d4453309d9b381ba2cb4f9b0fbe1f6fc972e542a1d59193e86118d95825b0320 type=CONTAINER_CREATED_EVENT Aug 13 01:17:00.684285 containerd[1550]: time="2025-08-13T01:17:00.684226963Z" level=warning msg="container event discarded" container=d4453309d9b381ba2cb4f9b0fbe1f6fc972e542a1d59193e86118d95825b0320 type=CONTAINER_STARTED_EVENT Aug 13 01:17:01.945890 containerd[1550]: time="2025-08-13T01:17:01.945750115Z" level=warning msg="container event discarded" container=b83d995895675db204a196b526eb0e4b20615732507801c51c0045b4f35997dd type=CONTAINER_CREATED_EVENT Aug 13 01:17:01.997511 containerd[1550]: time="2025-08-13T01:17:01.997437322Z" level=warning msg="container event discarded" container=b83d995895675db204a196b526eb0e4b20615732507801c51c0045b4f35997dd type=CONTAINER_STARTED_EVENT Aug 13 01:17:02.510349 systemd[1]: Started sshd@32-172.233.214.103:22-147.75.109.163:42488.service - OpenSSH per-connection server daemon (147.75.109.163:42488). Aug 13 01:17:02.852592 sshd[6849]: Accepted publickey for core from 147.75.109.163 port 42488 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:17:02.853880 sshd-session[6849]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:17:02.858116 systemd-logind[1522]: New session 33 of user core. Aug 13 01:17:02.864020 systemd[1]: Started session-33.scope - Session 33 of User core. Aug 13 01:17:03.159142 sshd[6851]: Connection closed by 147.75.109.163 port 42488 Aug 13 01:17:03.160761 sshd-session[6849]: pam_unix(sshd:session): session closed for user core Aug 13 01:17:03.164741 systemd[1]: sshd@32-172.233.214.103:22-147.75.109.163:42488.service: Deactivated successfully. Aug 13 01:17:03.167497 systemd[1]: session-33.scope: Deactivated successfully. Aug 13 01:17:03.168970 systemd-logind[1522]: Session 33 logged out. Waiting for processes to exit. Aug 13 01:17:03.172559 systemd-logind[1522]: Removed session 33. Aug 13 01:17:06.060567 kubelet[2722]: E0813 01:17:06.060098 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/163/fs/usr/bin/kube-controllers: no space left on device\"" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" podUID="2dab385f-2367-4e01-8d78-2247bcba7bcc" Aug 13 01:17:08.223332 systemd[1]: Started sshd@33-172.233.214.103:22-147.75.109.163:48570.service - OpenSSH per-connection server daemon (147.75.109.163:48570). Aug 13 01:17:08.563371 sshd[6863]: Accepted publickey for core from 147.75.109.163 port 48570 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:17:08.564887 sshd-session[6863]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:17:08.570854 systemd-logind[1522]: New session 34 of user core. Aug 13 01:17:08.577121 systemd[1]: Started session-34.scope - Session 34 of User core. 
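Every ErrImagePull in this section fails on the same underlying condition: no free space on the overlayfs snapshotter filesystem, so this is node-local disk pressure rather than a registry problem. A minimal Linux-only Go sketch for confirming what containerd is reporting, using plain statfs(2) against the mount path taken from the log (nothing here is containerd API):

package main

import (
	"fmt"
	"syscall"
)

func main() {
	// Path taken from the ErrImagePull messages in this log.
	const path = "/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs"

	var st syscall.Statfs_t
	if err := syscall.Statfs(path, &st); err != nil {
		fmt.Println("statfs:", err)
		return
	}
	bsize := uint64(st.Bsize)
	free := st.Bavail * bsize  // bytes available to unprivileged processes
	total := st.Blocks * bsize // total size of the filesystem
	fmt.Printf("free: %d of %d bytes (%.1f%% free)\n",
		free, total, 100*float64(free)/float64(total))
}

Running df -h on the same path gives the same answer; the sketch only makes explicit that the write error in the pull path refers to the node's own storage.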
Aug 13 01:17:08.875174 sshd[6865]: Connection closed by 147.75.109.163 port 48570 Aug 13 01:17:08.876382 sshd-session[6863]: pam_unix(sshd:session): session closed for user core Aug 13 01:17:08.882374 systemd[1]: sshd@33-172.233.214.103:22-147.75.109.163:48570.service: Deactivated successfully. Aug 13 01:17:08.887130 systemd[1]: session-34.scope: Deactivated successfully. Aug 13 01:17:08.889176 systemd-logind[1522]: Session 34 logged out. Waiting for processes to exit. Aug 13 01:17:08.890828 systemd-logind[1522]: Removed session 34. Aug 13 01:17:10.063859 kubelet[2722]: E0813 01:17:10.063806 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:17:10.065864 containerd[1550]: time="2025-08-13T01:17:10.065794386Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Aug 13 01:17:10.550217 kubelet[2722]: I0813 01:17:10.550185 2722 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:17:10.550217 kubelet[2722]: I0813 01:17:10.550224 2722 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:17:10.553645 kubelet[2722]: I0813 01:17:10.553329 2722 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:17:10.564846 kubelet[2722]: I0813 01:17:10.564827 2722 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:17:10.564974 kubelet[2722]: I0813 01:17:10.564953 2722 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-674b8bbfcf-p259x","kube-system/coredns-674b8bbfcf-fgsjn","calico-system/calico-kube-controllers-cddc95b58-6t6z7","calico-system/calico-typha-55bf5cd98c-8lqpc","calico-system/calico-node-hq29b","kube-system/kube-controller-manager-172-233-214-103","kube-system/kube-proxy-tb5sq","kube-system/kube-apiserver-172-233-214-103","calico-system/csi-node-driver-l7lv4","kube-system/kube-scheduler-172-233-214-103"] Aug 13 01:17:10.565031 kubelet[2722]: E0813 01:17:10.564983 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:17:10.565031 kubelet[2722]: E0813 01:17:10.564991 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:17:10.565031 kubelet[2722]: E0813 01:17:10.564998 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:17:10.565031 kubelet[2722]: E0813 01:17:10.565006 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-55bf5cd98c-8lqpc" Aug 13 01:17:10.565031 kubelet[2722]: E0813 01:17:10.565013 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-hq29b" Aug 13 01:17:10.565031 kubelet[2722]: E0813 01:17:10.565019 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-214-103" Aug 13 01:17:10.565031 kubelet[2722]: E0813 01:17:10.565025 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-tb5sq" Aug 13 01:17:10.565031 kubelet[2722]: E0813 01:17:10.565031 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" 
pod="kube-system/kube-apiserver-172-233-214-103" Aug 13 01:17:10.565170 kubelet[2722]: E0813 01:17:10.565039 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:17:10.565170 kubelet[2722]: E0813 01:17:10.565045 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-214-103" Aug 13 01:17:10.565170 kubelet[2722]: I0813 01:17:10.565053 2722 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:17:10.779564 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2146664723.mount: Deactivated successfully. Aug 13 01:17:10.898535 containerd[1550]: time="2025-08-13T01:17:10.898297192Z" level=warning msg="container event discarded" container=f47ba15e1c906c0e4a4e96b52a0a92b5bd9b6708de325076d6845d41ce57c618 type=CONTAINER_CREATED_EVENT Aug 13 01:17:10.898535 containerd[1550]: time="2025-08-13T01:17:10.898366601Z" level=warning msg="container event discarded" container=f47ba15e1c906c0e4a4e96b52a0a92b5bd9b6708de325076d6845d41ce57c618 type=CONTAINER_STARTED_EVENT Aug 13 01:17:11.174822 containerd[1550]: time="2025-08-13T01:17:11.174663926Z" level=warning msg="container event discarded" container=230d5e2260a9c5817bd0783127b59ee5e78b885b601c7a71a82f5c041382166d type=CONTAINER_CREATED_EVENT Aug 13 01:17:11.174822 containerd[1550]: time="2025-08-13T01:17:11.174696366Z" level=warning msg="container event discarded" container=230d5e2260a9c5817bd0783127b59ee5e78b885b601c7a71a82f5c041382166d type=CONTAINER_STARTED_EVENT Aug 13 01:17:11.549221 containerd[1550]: time="2025-08-13T01:17:11.549160117Z" level=error msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.12.0\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/189/fs/coredns: no space left on device" Aug 13 01:17:11.549457 containerd[1550]: time="2025-08-13T01:17:11.549249197Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Aug 13 01:17:11.550140 kubelet[2722]: E0813 01:17:11.550080 2722 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.12.0\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/189/fs/coredns: no space left on device" image="registry.k8s.io/coredns/coredns:v1.12.0" Aug 13 01:17:11.550618 kubelet[2722]: E0813 01:17:11.550425 2722 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.12.0\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/189/fs/coredns: no space left on device" image="registry.k8s.io/coredns/coredns:v1.12.0" Aug 13 01:17:11.550938 kubelet[2722]: E0813 01:17:11.550868 2722 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:coredns,Image:registry.k8s.io/coredns/coredns:v1.12.0,Command:[],Args:[-conf 
/etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{73400320 0} {} 70Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnhgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod coredns-674b8bbfcf-fgsjn_kube-system(27718112-1bb9-402a-89c8-f4890dedf664): ErrImagePull: failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.12.0\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/189/fs/coredns: no space left on device" logger="UnhandledError" Aug 13 01:17:11.552195 kubelet[2722]: E0813 01:17:11.552168 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with ErrImagePull: \"failed to pull and unpack image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/189/fs/coredns: no space left on device\"" pod="kube-system/coredns-674b8bbfcf-fgsjn" podUID="27718112-1bb9-402a-89c8-f4890dedf664" Aug 13 01:17:12.061738 kubelet[2722]: E0813 01:17:12.061705 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:17:12.062516 kubelet[2722]: E0813 01:17:12.062489 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with ImagePullBackOff: \"Back-off pulling image 
\\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": ErrImagePull: failed to pull and unpack image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/176/fs/coredns: no space left on device\"" pod="kube-system/coredns-674b8bbfcf-p259x" podUID="a5b0b8ae-a381-43cc-8adc-4e3ee01749bd" Aug 13 01:17:12.062779 kubelet[2722]: E0813 01:17:12.062754 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:17:12.445644 containerd[1550]: time="2025-08-13T01:17:12.445576414Z" level=warning msg="container event discarded" container=2de1488f2b9e06fbb97e8b977f3a3bafb0a5c60120679a25b0e2c96e545add39 type=CONTAINER_CREATED_EVENT Aug 13 01:17:12.522970 containerd[1550]: time="2025-08-13T01:17:12.522877059Z" level=warning msg="container event discarded" container=2de1488f2b9e06fbb97e8b977f3a3bafb0a5c60120679a25b0e2c96e545add39 type=CONTAINER_STARTED_EVENT Aug 13 01:17:13.046483 containerd[1550]: time="2025-08-13T01:17:13.046422114Z" level=warning msg="container event discarded" container=49fe246f999d64e040b53746c2793e83baae491c225e0bc1e02c089474f32e8e type=CONTAINER_CREATED_EVENT Aug 13 01:17:13.058405 kubelet[2722]: E0813 01:17:13.058051 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:17:13.123986 containerd[1550]: time="2025-08-13T01:17:13.123870069Z" level=warning msg="container event discarded" container=49fe246f999d64e040b53746c2793e83baae491c225e0bc1e02c089474f32e8e type=CONTAINER_STARTED_EVENT Aug 13 01:17:13.256639 containerd[1550]: time="2025-08-13T01:17:13.256374296Z" level=warning msg="container event discarded" container=49fe246f999d64e040b53746c2793e83baae491c225e0bc1e02c089474f32e8e type=CONTAINER_STOPPED_EVENT Aug 13 01:17:13.935732 systemd[1]: Started sshd@34-172.233.214.103:22-147.75.109.163:48582.service - OpenSSH per-connection server daemon (147.75.109.163:48582). Aug 13 01:17:14.272474 sshd[6929]: Accepted publickey for core from 147.75.109.163 port 48582 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:17:14.273738 sshd-session[6929]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:17:14.277953 systemd-logind[1522]: New session 35 of user core. Aug 13 01:17:14.282234 systemd[1]: Started session-35.scope - Session 35 of User core. Aug 13 01:17:14.619474 sshd[6931]: Connection closed by 147.75.109.163 port 48582 Aug 13 01:17:14.619526 sshd-session[6929]: pam_unix(sshd:session): session closed for user core Aug 13 01:17:14.623851 systemd-logind[1522]: Session 35 logged out. Waiting for processes to exit. Aug 13 01:17:14.624091 systemd[1]: sshd@34-172.233.214.103:22-147.75.109.163:48582.service: Deactivated successfully. Aug 13 01:17:14.626590 systemd[1]: session-35.scope: Deactivated successfully. Aug 13 01:17:14.628840 systemd-logind[1522]: Removed session 35. 
Aug 13 01:17:15.993164 containerd[1550]: time="2025-08-13T01:17:15.993010918Z" level=warning msg="container event discarded" container=1ed66b1e64a3e12a6ca570671127cd4085864a1f9adce2a8a3563d52ce2ecb22 type=CONTAINER_CREATED_EVENT Aug 13 01:17:16.080466 containerd[1550]: time="2025-08-13T01:17:16.080391461Z" level=warning msg="container event discarded" container=1ed66b1e64a3e12a6ca570671127cd4085864a1f9adce2a8a3563d52ce2ecb22 type=CONTAINER_STARTED_EVENT Aug 13 01:17:16.658714 containerd[1550]: time="2025-08-13T01:17:16.658652740Z" level=warning msg="container event discarded" container=1ed66b1e64a3e12a6ca570671127cd4085864a1f9adce2a8a3563d52ce2ecb22 type=CONTAINER_STOPPED_EVENT Aug 13 01:17:19.057970 kubelet[2722]: E0813 01:17:19.057935 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:17:19.683673 systemd[1]: Started sshd@35-172.233.214.103:22-147.75.109.163:53966.service - OpenSSH per-connection server daemon (147.75.109.163:53966). Aug 13 01:17:20.020135 sshd[6943]: Accepted publickey for core from 147.75.109.163 port 53966 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:17:20.022153 sshd-session[6943]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:17:20.027264 systemd-logind[1522]: New session 36 of user core. Aug 13 01:17:20.035059 systemd[1]: Started session-36.scope - Session 36 of User core. Aug 13 01:17:20.059581 kubelet[2722]: E0813 01:17:20.059536 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/163/fs/usr/bin/kube-controllers: no space left on device\"" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" podUID="2dab385f-2367-4e01-8d78-2247bcba7bcc" Aug 13 01:17:20.320132 sshd[6945]: Connection closed by 147.75.109.163 port 53966 Aug 13 01:17:20.321074 sshd-session[6943]: pam_unix(sshd:session): session closed for user core Aug 13 01:17:20.324296 systemd-logind[1522]: Session 36 logged out. Waiting for processes to exit. Aug 13 01:17:20.324525 systemd[1]: sshd@35-172.233.214.103:22-147.75.109.163:53966.service: Deactivated successfully. Aug 13 01:17:20.326371 systemd[1]: session-36.scope: Deactivated successfully. Aug 13 01:17:20.328113 systemd-logind[1522]: Removed session 36. 
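The recurring dns.go:153 warnings mean the node's resolv.conf lists more nameservers than the limit of three that the kubelet (matching the glibc resolver) will pass through, so the extras are dropped and only the three addresses shown in the "applied nameserver line" are used. A small sketch of that check, assuming a standard /etc/resolv.conf layout; the limit constant and path are written out here for illustration rather than taken from kubelet source:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

const maxNameservers = 3 // the limit the warnings above refer to

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		fmt.Println("open:", err)
		return
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if len(servers) > maxNameservers {
		fmt.Printf("%d nameservers found; only the first %d are applied: %v\n",
			len(servers), maxNameservers, servers[:maxNameservers])
	} else {
		fmt.Println("nameservers within limit:", servers)
	}
}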
Aug 13 01:17:20.586409 kubelet[2722]: I0813 01:17:20.586257 2722 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:17:20.586409 kubelet[2722]: I0813 01:17:20.586294 2722 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:17:20.589917 kubelet[2722]: I0813 01:17:20.589878 2722 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:17:20.610610 kubelet[2722]: I0813 01:17:20.610592 2722 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:17:20.610707 kubelet[2722]: I0813 01:17:20.610689 2722 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-674b8bbfcf-fgsjn","calico-system/calico-kube-controllers-cddc95b58-6t6z7","kube-system/coredns-674b8bbfcf-p259x","calico-system/calico-typha-55bf5cd98c-8lqpc","calico-system/calico-node-hq29b","kube-system/kube-controller-manager-172-233-214-103","kube-system/kube-proxy-tb5sq","kube-system/kube-apiserver-172-233-214-103","calico-system/csi-node-driver-l7lv4","kube-system/kube-scheduler-172-233-214-103"] Aug 13 01:17:20.610781 kubelet[2722]: E0813 01:17:20.610715 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:17:20.610781 kubelet[2722]: E0813 01:17:20.610724 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:17:20.610781 kubelet[2722]: E0813 01:17:20.610729 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:17:20.610781 kubelet[2722]: E0813 01:17:20.610736 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-55bf5cd98c-8lqpc" Aug 13 01:17:20.610781 kubelet[2722]: E0813 01:17:20.610743 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-hq29b" Aug 13 01:17:20.610781 kubelet[2722]: E0813 01:17:20.610750 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-214-103" Aug 13 01:17:20.610781 kubelet[2722]: E0813 01:17:20.610756 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-tb5sq" Aug 13 01:17:20.610781 kubelet[2722]: E0813 01:17:20.610762 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-214-103" Aug 13 01:17:20.610781 kubelet[2722]: E0813 01:17:20.610770 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:17:20.610781 kubelet[2722]: E0813 01:17:20.610775 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-214-103" Aug 13 01:17:20.610781 kubelet[2722]: I0813 01:17:20.610783 2722 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:17:21.771378 containerd[1550]: time="2025-08-13T01:17:21.771340053Z" level=info msg="TaskExit event in podsandbox handler container_id:\"912e55c883b9193775a8e2855a8f299720d449e5b9a7826e028fd24a993416d7\" id:\"e500f187f0bdff7fcd469dcfb2987cb7003a3649c6aa62647a5b2c764f47d7f4\" pid:6968 exited_at:{seconds:1755047841 nanos:771057713}" Aug 13 01:17:24.065795 kubelet[2722]: E0813 
01:17:24.065161 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:17:24.066416 kubelet[2722]: E0813 01:17:24.065945 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": ErrImagePull: failed to pull and unpack image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/189/fs/coredns: no space left on device\"" pod="kube-system/coredns-674b8bbfcf-fgsjn" podUID="27718112-1bb9-402a-89c8-f4890dedf664" Aug 13 01:17:25.382249 systemd[1]: Started sshd@36-172.233.214.103:22-147.75.109.163:53968.service - OpenSSH per-connection server daemon (147.75.109.163:53968). Aug 13 01:17:25.711996 sshd[6980]: Accepted publickey for core from 147.75.109.163 port 53968 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:17:25.714190 sshd-session[6980]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:17:25.719984 systemd-logind[1522]: New session 37 of user core. Aug 13 01:17:25.725010 systemd[1]: Started session-37.scope - Session 37 of User core. Aug 13 01:17:26.019168 sshd[6982]: Connection closed by 147.75.109.163 port 53968 Aug 13 01:17:26.020549 sshd-session[6980]: pam_unix(sshd:session): session closed for user core Aug 13 01:17:26.025997 systemd-logind[1522]: Session 37 logged out. Waiting for processes to exit. Aug 13 01:17:26.026338 systemd[1]: sshd@36-172.233.214.103:22-147.75.109.163:53968.service: Deactivated successfully. Aug 13 01:17:26.028483 systemd[1]: session-37.scope: Deactivated successfully. Aug 13 01:17:26.030824 systemd-logind[1522]: Removed session 37. 
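The alternation between ErrImagePull and ImagePullBackOff for the same pods is the kubelet's per-image back-off: after each failed pull it waits before retrying, doubling the delay up to a cap, which is why the same error keeps reappearing at growing intervals rather than continuously. The exact parameters are not visible in this log; the sketch below only illustrates the commonly cited defaults of a 10s initial delay doubling up to 5m, and both numbers should be treated as assumptions:

package main

import (
	"fmt"
	"time"
)

func main() {
	const (
		initial  = 10 * time.Second // assumed initial back-off
		maxDelay = 5 * time.Minute  // assumed ceiling
	)
	delay := initial
	for attempt := 1; attempt <= 8; attempt++ {
		fmt.Printf("retry %d after %v\n", attempt, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}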
Aug 13 01:17:27.058935 kubelet[2722]: E0813 01:17:27.058198 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:17:27.060986 kubelet[2722]: E0813 01:17:27.060958 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": ErrImagePull: failed to pull and unpack image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/176/fs/coredns: no space left on device\"" pod="kube-system/coredns-674b8bbfcf-p259x" podUID="a5b0b8ae-a381-43cc-8adc-4e3ee01749bd" Aug 13 01:17:28.416799 containerd[1550]: time="2025-08-13T01:17:28.416681485Z" level=warning msg="container event discarded" container=b83d995895675db204a196b526eb0e4b20615732507801c51c0045b4f35997dd type=CONTAINER_STOPPED_EVENT Aug 13 01:17:28.452241 containerd[1550]: time="2025-08-13T01:17:28.452111998Z" level=warning msg="container event discarded" container=42adf78598739d12d2f5ef7d6fd4bbb4703c1d7f7a7cc89b6747188980e70497 type=CONTAINER_STOPPED_EVENT Aug 13 01:17:29.211429 containerd[1550]: time="2025-08-13T01:17:29.211328389Z" level=warning msg="container event discarded" container=b83d995895675db204a196b526eb0e4b20615732507801c51c0045b4f35997dd type=CONTAINER_DELETED_EVENT Aug 13 01:17:29.500462 containerd[1550]: time="2025-08-13T01:17:29.500011194Z" level=warning msg="container event discarded" container=42adf78598739d12d2f5ef7d6fd4bbb4703c1d7f7a7cc89b6747188980e70497 type=CONTAINER_DELETED_EVENT Aug 13 01:17:30.633559 kubelet[2722]: I0813 01:17:30.633510 2722 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:17:30.633559 kubelet[2722]: I0813 01:17:30.633577 2722 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:17:30.635528 kubelet[2722]: I0813 01:17:30.635477 2722 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:17:30.656755 kubelet[2722]: I0813 01:17:30.656727 2722 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:17:30.656944 kubelet[2722]: I0813 01:17:30.656889 2722 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-cddc95b58-6t6z7","kube-system/coredns-674b8bbfcf-p259x","kube-system/coredns-674b8bbfcf-fgsjn","calico-system/calico-typha-55bf5cd98c-8lqpc","calico-system/calico-node-hq29b","kube-system/kube-controller-manager-172-233-214-103","kube-system/kube-proxy-tb5sq","kube-system/kube-apiserver-172-233-214-103","calico-system/csi-node-driver-l7lv4","kube-system/kube-scheduler-172-233-214-103"] Aug 13 01:17:30.657059 kubelet[2722]: E0813 01:17:30.656960 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:17:30.657059 kubelet[2722]: E0813 01:17:30.656972 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:17:30.657059 kubelet[2722]: E0813 01:17:30.656979 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 
01:17:30.657059 kubelet[2722]: E0813 01:17:30.656989 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-55bf5cd98c-8lqpc" Aug 13 01:17:30.657059 kubelet[2722]: E0813 01:17:30.657015 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-hq29b" Aug 13 01:17:30.657059 kubelet[2722]: E0813 01:17:30.657024 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-214-103" Aug 13 01:17:30.657059 kubelet[2722]: E0813 01:17:30.657033 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-tb5sq" Aug 13 01:17:30.657059 kubelet[2722]: E0813 01:17:30.657041 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-214-103" Aug 13 01:17:30.657059 kubelet[2722]: E0813 01:17:30.657052 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:17:30.657059 kubelet[2722]: E0813 01:17:30.657061 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-214-103" Aug 13 01:17:30.657059 kubelet[2722]: I0813 01:17:30.657072 2722 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:17:31.085117 systemd[1]: Started sshd@37-172.233.214.103:22-147.75.109.163:50352.service - OpenSSH per-connection server daemon (147.75.109.163:50352). Aug 13 01:17:31.420748 sshd[6995]: Accepted publickey for core from 147.75.109.163 port 50352 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:17:31.422393 sshd-session[6995]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:17:31.427337 systemd-logind[1522]: New session 38 of user core. Aug 13 01:17:31.434074 systemd[1]: Started session-38.scope - Session 38 of User core. Aug 13 01:17:31.734065 sshd[6997]: Connection closed by 147.75.109.163 port 50352 Aug 13 01:17:31.735185 sshd-session[6995]: pam_unix(sshd:session): session closed for user core Aug 13 01:17:31.739752 systemd[1]: sshd@37-172.233.214.103:22-147.75.109.163:50352.service: Deactivated successfully. Aug 13 01:17:31.741678 systemd[1]: session-38.scope: Deactivated successfully. Aug 13 01:17:31.744435 systemd-logind[1522]: Session 38 logged out. Waiting for processes to exit. Aug 13 01:17:31.748021 systemd-logind[1522]: Removed session 38. 
Aug 13 01:17:33.058299 kubelet[2722]: E0813 01:17:33.057784 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:17:34.068293 kubelet[2722]: E0813 01:17:34.067907 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/163/fs/usr/bin/kube-controllers: no space left on device\"" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" podUID="2dab385f-2367-4e01-8d78-2247bcba7bcc" Aug 13 01:17:36.058926 kubelet[2722]: E0813 01:17:36.058395 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:17:36.797698 systemd[1]: Started sshd@38-172.233.214.103:22-147.75.109.163:50366.service - OpenSSH per-connection server daemon (147.75.109.163:50366). Aug 13 01:17:37.145768 sshd[7009]: Accepted publickey for core from 147.75.109.163 port 50366 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:17:37.147452 sshd-session[7009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:17:37.154282 systemd-logind[1522]: New session 39 of user core. Aug 13 01:17:37.161052 systemd[1]: Started session-39.scope - Session 39 of User core. Aug 13 01:17:37.476440 sshd[7011]: Connection closed by 147.75.109.163 port 50366 Aug 13 01:17:37.478169 sshd-session[7009]: pam_unix(sshd:session): session closed for user core Aug 13 01:17:37.483226 systemd[1]: sshd@38-172.233.214.103:22-147.75.109.163:50366.service: Deactivated successfully. Aug 13 01:17:37.486774 systemd[1]: session-39.scope: Deactivated successfully. Aug 13 01:17:37.488060 systemd-logind[1522]: Session 39 logged out. Waiting for processes to exit. Aug 13 01:17:37.490995 systemd-logind[1522]: Removed session 39. 
Aug 13 01:17:38.059991 kubelet[2722]: E0813 01:17:38.058122 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:17:38.062530 kubelet[2722]: E0813 01:17:38.061208 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": ErrImagePull: failed to pull and unpack image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/189/fs/coredns: no space left on device\"" pod="kube-system/coredns-674b8bbfcf-fgsjn" podUID="27718112-1bb9-402a-89c8-f4890dedf664" Aug 13 01:17:40.672140 kubelet[2722]: I0813 01:17:40.672114 2722 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:17:40.672140 kubelet[2722]: I0813 01:17:40.672152 2722 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:17:40.673373 kubelet[2722]: I0813 01:17:40.673358 2722 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:17:40.690132 kubelet[2722]: I0813 01:17:40.690115 2722 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:17:40.690230 kubelet[2722]: I0813 01:17:40.690215 2722 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-674b8bbfcf-fgsjn","calico-system/calico-kube-controllers-cddc95b58-6t6z7","kube-system/coredns-674b8bbfcf-p259x","calico-system/calico-typha-55bf5cd98c-8lqpc","calico-system/calico-node-hq29b","kube-system/kube-controller-manager-172-233-214-103","kube-system/kube-proxy-tb5sq","kube-system/kube-apiserver-172-233-214-103","calico-system/csi-node-driver-l7lv4","kube-system/kube-scheduler-172-233-214-103"] Aug 13 01:17:40.690319 kubelet[2722]: E0813 01:17:40.690243 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:17:40.690319 kubelet[2722]: E0813 01:17:40.690249 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:17:40.690319 kubelet[2722]: E0813 01:17:40.690255 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:17:40.690319 kubelet[2722]: E0813 01:17:40.690262 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-55bf5cd98c-8lqpc" Aug 13 01:17:40.690319 kubelet[2722]: E0813 01:17:40.690269 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-hq29b" Aug 13 01:17:40.690319 kubelet[2722]: E0813 01:17:40.690276 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-214-103" Aug 13 01:17:40.690319 kubelet[2722]: E0813 01:17:40.690282 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-tb5sq" Aug 13 01:17:40.690319 kubelet[2722]: E0813 01:17:40.690287 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-214-103" 
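Each eviction pass above ends the same way: the eviction manager ranks the pods by ephemeral-storage usage but then declines every one of them, because the control-plane pods are static pods and the remaining workloads (CoreDNS, Calico) run in system priority classes, and the kubelet will not evict critical pods to reclaim disk. A minimal sketch of that gate, loosely modeled on the kubelet's critical-pod check; the priority values are the standard system classes and the class assignments are the usual defaults for these workloads, assumed rather than read from this cluster:

package main

import "fmt"

// Standard priority-class values; system-node-critical sits just above
// system-cluster-critical.
const (
	systemClusterCritical int32 = 2_000_000_000
	systemNodeCritical    int32 = 2_000_001_000
)

type pod struct {
	name     string
	isStatic bool   // static/mirror pods (kube-apiserver, kube-scheduler, ...)
	priority *int32 // resolved from priorityClassName, nil if none
}

// isCritical is a rough stand-in for the kubelet's check: static pods and pods
// at or above system-cluster-critical are never evicted by the eviction manager.
func isCritical(p pod) bool {
	if p.isStatic {
		return true
	}
	return p.priority != nil && *p.priority >= systemClusterCritical
}

func main() {
	clusterCritical := systemClusterCritical // assumed class for CoreDNS
	nodeCritical := systemNodeCritical       // assumed class for calico-node
	pods := []pod{
		{name: "kube-system/kube-apiserver-172-233-214-103", isStatic: true},
		{name: "kube-system/coredns-674b8bbfcf-fgsjn", priority: &clusterCritical},
		{name: "calico-system/calico-node-hq29b", priority: &nodeCritical},
	}
	for _, p := range pods {
		if isCritical(p) {
			fmt.Println("cannot evict a critical pod:", p.name)
		}
	}
}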
Aug 13 01:17:40.690319 kubelet[2722]: E0813 01:17:40.690295 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:17:40.690319 kubelet[2722]: E0813 01:17:40.690301 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-214-103" Aug 13 01:17:40.690319 kubelet[2722]: I0813 01:17:40.690308 2722 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:17:41.058375 kubelet[2722]: E0813 01:17:41.058333 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:17:41.058912 kubelet[2722]: E0813 01:17:41.058881 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": ErrImagePull: failed to pull and unpack image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/176/fs/coredns: no space left on device\"" pod="kube-system/coredns-674b8bbfcf-p259x" podUID="a5b0b8ae-a381-43cc-8adc-4e3ee01749bd" Aug 13 01:17:42.542171 systemd[1]: Started sshd@39-172.233.214.103:22-147.75.109.163:55758.service - OpenSSH per-connection server daemon (147.75.109.163:55758). Aug 13 01:17:42.879270 sshd[7023]: Accepted publickey for core from 147.75.109.163 port 55758 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:17:42.881102 sshd-session[7023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:17:42.886011 systemd-logind[1522]: New session 40 of user core. Aug 13 01:17:42.893047 systemd[1]: Started session-40.scope - Session 40 of User core. Aug 13 01:17:43.189119 sshd[7025]: Connection closed by 147.75.109.163 port 55758 Aug 13 01:17:43.189689 sshd-session[7023]: pam_unix(sshd:session): session closed for user core Aug 13 01:17:43.194486 systemd-logind[1522]: Session 40 logged out. Waiting for processes to exit. Aug 13 01:17:43.195326 systemd[1]: sshd@39-172.233.214.103:22-147.75.109.163:55758.service: Deactivated successfully. Aug 13 01:17:43.198788 systemd[1]: session-40.scope: Deactivated successfully. Aug 13 01:17:43.202542 systemd-logind[1522]: Removed session 40. Aug 13 01:17:48.253122 systemd[1]: Started sshd@40-172.233.214.103:22-147.75.109.163:40898.service - OpenSSH per-connection server daemon (147.75.109.163:40898). Aug 13 01:17:48.594824 sshd[7037]: Accepted publickey for core from 147.75.109.163 port 40898 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:17:48.596177 sshd-session[7037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:17:48.602966 systemd-logind[1522]: New session 41 of user core. Aug 13 01:17:48.607030 systemd[1]: Started session-41.scope - Session 41 of User core. Aug 13 01:17:48.913662 sshd[7039]: Connection closed by 147.75.109.163 port 40898 Aug 13 01:17:48.915322 sshd-session[7037]: pam_unix(sshd:session): session closed for user core Aug 13 01:17:48.919111 systemd-logind[1522]: Session 41 logged out. Waiting for processes to exit. 
Aug 13 01:17:48.919821 systemd[1]: sshd@40-172.233.214.103:22-147.75.109.163:40898.service: Deactivated successfully. Aug 13 01:17:48.921643 systemd[1]: session-41.scope: Deactivated successfully. Aug 13 01:17:48.923684 systemd-logind[1522]: Removed session 41. Aug 13 01:17:49.058597 kubelet[2722]: E0813 01:17:49.058562 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/163/fs/usr/bin/kube-controllers: no space left on device\"" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" podUID="2dab385f-2367-4e01-8d78-2247bcba7bcc" Aug 13 01:17:50.714349 kubelet[2722]: I0813 01:17:50.714300 2722 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:17:50.714349 kubelet[2722]: I0813 01:17:50.714347 2722 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:17:50.715786 kubelet[2722]: I0813 01:17:50.715768 2722 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:17:50.727644 kubelet[2722]: I0813 01:17:50.727626 2722 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:17:50.727735 kubelet[2722]: I0813 01:17:50.727717 2722 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-674b8bbfcf-fgsjn","calico-system/calico-kube-controllers-cddc95b58-6t6z7","kube-system/coredns-674b8bbfcf-p259x","calico-system/calico-typha-55bf5cd98c-8lqpc","calico-system/calico-node-hq29b","kube-system/kube-controller-manager-172-233-214-103","kube-system/kube-proxy-tb5sq","kube-system/kube-apiserver-172-233-214-103","calico-system/csi-node-driver-l7lv4","kube-system/kube-scheduler-172-233-214-103"] Aug 13 01:17:50.727826 kubelet[2722]: E0813 01:17:50.727747 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:17:50.727826 kubelet[2722]: E0813 01:17:50.727755 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:17:50.727826 kubelet[2722]: E0813 01:17:50.727761 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:17:50.727826 kubelet[2722]: E0813 01:17:50.727770 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-55bf5cd98c-8lqpc" Aug 13 01:17:50.727826 kubelet[2722]: E0813 01:17:50.727778 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-hq29b" Aug 13 01:17:50.727826 kubelet[2722]: E0813 01:17:50.727785 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-214-103" Aug 13 01:17:50.727826 kubelet[2722]: E0813 01:17:50.727792 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-tb5sq" Aug 13 01:17:50.727826 kubelet[2722]: E0813 01:17:50.727799 2722 eviction_manager.go:610] "Eviction manager: 
cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-214-103" Aug 13 01:17:50.727826 kubelet[2722]: E0813 01:17:50.727807 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:17:50.727826 kubelet[2722]: E0813 01:17:50.727823 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-214-103" Aug 13 01:17:50.727826 kubelet[2722]: I0813 01:17:50.727832 2722 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:17:51.058022 kubelet[2722]: E0813 01:17:51.057819 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:17:51.059109 kubelet[2722]: E0813 01:17:51.058851 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": ErrImagePull: failed to pull and unpack image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/189/fs/coredns: no space left on device\"" pod="kube-system/coredns-674b8bbfcf-fgsjn" podUID="27718112-1bb9-402a-89c8-f4890dedf664" Aug 13 01:17:51.756796 containerd[1550]: time="2025-08-13T01:17:51.756743354Z" level=info msg="TaskExit event in podsandbox handler container_id:\"912e55c883b9193775a8e2855a8f299720d449e5b9a7826e028fd24a993416d7\" id:\"c65dd6e91f38301d90f620438a648c3a83b8f52a5b869e580319a8b95fb54e72\" pid:7063 exited_at:{seconds:1755047871 nanos:756478974}" Aug 13 01:17:53.973918 systemd[1]: Started sshd@41-172.233.214.103:22-147.75.109.163:40910.service - OpenSSH per-connection server daemon (147.75.109.163:40910). Aug 13 01:17:54.308003 sshd[7075]: Accepted publickey for core from 147.75.109.163 port 40910 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:17:54.308807 sshd-session[7075]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:17:54.314703 systemd-logind[1522]: New session 42 of user core. Aug 13 01:17:54.322015 systemd[1]: Started session-42.scope - Session 42 of User core. Aug 13 01:17:54.607984 sshd[7079]: Connection closed by 147.75.109.163 port 40910 Aug 13 01:17:54.608683 sshd-session[7075]: pam_unix(sshd:session): session closed for user core Aug 13 01:17:54.612143 systemd[1]: sshd@41-172.233.214.103:22-147.75.109.163:40910.service: Deactivated successfully. Aug 13 01:17:54.614067 systemd[1]: session-42.scope: Deactivated successfully. Aug 13 01:17:54.614939 systemd-logind[1522]: Session 42 logged out. Waiting for processes to exit. Aug 13 01:17:54.616556 systemd-logind[1522]: Removed session 42. 
Aug 13 01:17:55.058109 kubelet[2722]: E0813 01:17:55.058047 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:17:55.059604 kubelet[2722]: E0813 01:17:55.059457 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": ErrImagePull: failed to pull and unpack image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/176/fs/coredns: no space left on device\"" pod="kube-system/coredns-674b8bbfcf-p259x" podUID="a5b0b8ae-a381-43cc-8adc-4e3ee01749bd" Aug 13 01:17:59.668253 systemd[1]: Started sshd@42-172.233.214.103:22-147.75.109.163:39304.service - OpenSSH per-connection server daemon (147.75.109.163:39304). Aug 13 01:17:59.995938 sshd[7090]: Accepted publickey for core from 147.75.109.163 port 39304 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:17:59.997608 sshd-session[7090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:18:00.003296 systemd-logind[1522]: New session 43 of user core. Aug 13 01:18:00.010013 systemd[1]: Started session-43.scope - Session 43 of User core. Aug 13 01:18:00.296343 sshd[7092]: Connection closed by 147.75.109.163 port 39304 Aug 13 01:18:00.298076 sshd-session[7090]: pam_unix(sshd:session): session closed for user core Aug 13 01:18:00.302430 systemd-logind[1522]: Session 43 logged out. Waiting for processes to exit. Aug 13 01:18:00.302859 systemd[1]: sshd@42-172.233.214.103:22-147.75.109.163:39304.service: Deactivated successfully. Aug 13 01:18:00.304934 systemd[1]: session-43.scope: Deactivated successfully. Aug 13 01:18:00.306347 systemd-logind[1522]: Removed session 43. 
Aug 13 01:18:00.749513 kubelet[2722]: I0813 01:18:00.749454 2722 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:18:00.749513 kubelet[2722]: I0813 01:18:00.749502 2722 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:18:00.752939 kubelet[2722]: I0813 01:18:00.752433 2722 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:18:00.768358 kubelet[2722]: I0813 01:18:00.768335 2722 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:18:00.768582 kubelet[2722]: I0813 01:18:00.768537 2722 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-674b8bbfcf-fgsjn","calico-system/calico-kube-controllers-cddc95b58-6t6z7","kube-system/coredns-674b8bbfcf-p259x","calico-system/calico-typha-55bf5cd98c-8lqpc","calico-system/calico-node-hq29b","kube-system/kube-controller-manager-172-233-214-103","kube-system/kube-proxy-tb5sq","kube-system/kube-apiserver-172-233-214-103","calico-system/csi-node-driver-l7lv4","kube-system/kube-scheduler-172-233-214-103"] Aug 13 01:18:00.768582 kubelet[2722]: E0813 01:18:00.768578 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:18:00.768582 kubelet[2722]: E0813 01:18:00.768589 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:18:00.768582 kubelet[2722]: E0813 01:18:00.768596 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:18:00.768845 kubelet[2722]: E0813 01:18:00.768606 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-55bf5cd98c-8lqpc" Aug 13 01:18:00.768845 kubelet[2722]: E0813 01:18:00.768619 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-hq29b" Aug 13 01:18:00.768845 kubelet[2722]: E0813 01:18:00.768629 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-214-103" Aug 13 01:18:00.768845 kubelet[2722]: E0813 01:18:00.768637 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-tb5sq" Aug 13 01:18:00.768845 kubelet[2722]: E0813 01:18:00.768646 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-214-103" Aug 13 01:18:00.768845 kubelet[2722]: E0813 01:18:00.768657 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:18:00.768845 kubelet[2722]: E0813 01:18:00.768666 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-214-103" Aug 13 01:18:00.768845 kubelet[2722]: I0813 01:18:00.768675 2722 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:18:02.063423 kubelet[2722]: E0813 01:18:02.063356 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to 
extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/163/fs/usr/bin/kube-controllers: no space left on device\"" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" podUID="2dab385f-2367-4e01-8d78-2247bcba7bcc" Aug 13 01:18:04.066997 kubelet[2722]: E0813 01:18:04.066957 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:18:04.068860 kubelet[2722]: E0813 01:18:04.068289 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": ErrImagePull: failed to pull and unpack image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/189/fs/coredns: no space left on device\"" pod="kube-system/coredns-674b8bbfcf-fgsjn" podUID="27718112-1bb9-402a-89c8-f4890dedf664" Aug 13 01:18:05.361241 systemd[1]: Started sshd@43-172.233.214.103:22-147.75.109.163:39320.service - OpenSSH per-connection server daemon (147.75.109.163:39320). Aug 13 01:18:05.707714 sshd[7111]: Accepted publickey for core from 147.75.109.163 port 39320 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:18:05.709330 sshd-session[7111]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:18:05.714662 systemd-logind[1522]: New session 44 of user core. Aug 13 01:18:05.724041 systemd[1]: Started session-44.scope - Session 44 of User core. Aug 13 01:18:06.018207 sshd[7113]: Connection closed by 147.75.109.163 port 39320 Aug 13 01:18:06.019213 sshd-session[7111]: pam_unix(sshd:session): session closed for user core Aug 13 01:18:06.023946 systemd[1]: sshd@43-172.233.214.103:22-147.75.109.163:39320.service: Deactivated successfully. Aug 13 01:18:06.026177 systemd[1]: session-44.scope: Deactivated successfully. Aug 13 01:18:06.027601 systemd-logind[1522]: Session 44 logged out. Waiting for processes to exit. Aug 13 01:18:06.029122 systemd-logind[1522]: Removed session 44. 
Aug 13 01:18:08.058547 kubelet[2722]: E0813 01:18:08.058454 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:18:08.060310 kubelet[2722]: E0813 01:18:08.059231 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": ErrImagePull: failed to pull and unpack image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/176/fs/coredns: no space left on device\"" pod="kube-system/coredns-674b8bbfcf-p259x" podUID="a5b0b8ae-a381-43cc-8adc-4e3ee01749bd" Aug 13 01:18:10.795370 kubelet[2722]: I0813 01:18:10.795300 2722 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:18:10.795370 kubelet[2722]: I0813 01:18:10.795369 2722 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:18:10.800395 kubelet[2722]: I0813 01:18:10.800265 2722 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:18:10.817890 kubelet[2722]: I0813 01:18:10.817858 2722 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:18:10.818024 kubelet[2722]: I0813 01:18:10.817991 2722 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-674b8bbfcf-fgsjn","calico-system/calico-kube-controllers-cddc95b58-6t6z7","kube-system/coredns-674b8bbfcf-p259x","calico-system/calico-typha-55bf5cd98c-8lqpc","calico-system/calico-node-hq29b","kube-system/kube-controller-manager-172-233-214-103","kube-system/kube-proxy-tb5sq","kube-system/kube-apiserver-172-233-214-103","calico-system/csi-node-driver-l7lv4","kube-system/kube-scheduler-172-233-214-103"] Aug 13 01:18:10.818101 kubelet[2722]: E0813 01:18:10.818026 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:18:10.818101 kubelet[2722]: E0813 01:18:10.818035 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:18:10.818101 kubelet[2722]: E0813 01:18:10.818040 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:18:10.818101 kubelet[2722]: E0813 01:18:10.818049 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-55bf5cd98c-8lqpc" Aug 13 01:18:10.818101 kubelet[2722]: E0813 01:18:10.818057 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-hq29b" Aug 13 01:18:10.818101 kubelet[2722]: E0813 01:18:10.818064 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-214-103" Aug 13 01:18:10.818101 kubelet[2722]: E0813 01:18:10.818071 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-tb5sq" Aug 13 01:18:10.818101 kubelet[2722]: E0813 01:18:10.818079 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-214-103" 
Aug 13 01:18:10.818101 kubelet[2722]: E0813 01:18:10.818089 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:18:10.818101 kubelet[2722]: E0813 01:18:10.818097 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-214-103" Aug 13 01:18:10.818101 kubelet[2722]: I0813 01:18:10.818106 2722 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:18:11.083889 systemd[1]: Started sshd@44-172.233.214.103:22-147.75.109.163:46944.service - OpenSSH per-connection server daemon (147.75.109.163:46944). Aug 13 01:18:11.415702 sshd[7124]: Accepted publickey for core from 147.75.109.163 port 46944 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:18:11.417374 sshd-session[7124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:18:11.423611 systemd-logind[1522]: New session 45 of user core. Aug 13 01:18:11.429042 systemd[1]: Started session-45.scope - Session 45 of User core. Aug 13 01:18:11.743935 sshd[7128]: Connection closed by 147.75.109.163 port 46944 Aug 13 01:18:11.745700 sshd-session[7124]: pam_unix(sshd:session): session closed for user core Aug 13 01:18:11.751722 systemd-logind[1522]: Session 45 logged out. Waiting for processes to exit. Aug 13 01:18:11.752543 systemd[1]: sshd@44-172.233.214.103:22-147.75.109.163:46944.service: Deactivated successfully. Aug 13 01:18:11.755692 systemd[1]: session-45.scope: Deactivated successfully. Aug 13 01:18:11.759356 systemd-logind[1522]: Removed session 45. Aug 13 01:18:15.059505 containerd[1550]: time="2025-08-13T01:18:15.059427283Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 01:18:16.054191 containerd[1550]: time="2025-08-13T01:18:16.054110648Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/190/fs/usr/bin/kube-controllers: no space left on device" Aug 13 01:18:16.054417 containerd[1550]: time="2025-08-13T01:18:16.054208138Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Aug 13 01:18:16.054447 kubelet[2722]: E0813 01:18:16.054301 2722 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/190/fs/usr/bin/kube-controllers: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 01:18:16.054447 kubelet[2722]: E0813 01:18:16.054337 2722 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/190/fs/usr/bin/kube-controllers: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 
01:18:16.054777 kubelet[2722]: E0813 01:18:16.054443 2722 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hcb9j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-cddc95b58-6t6z7_calico-system(2dab385f-2367-4e01-8d78-2247bcba7bcc): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/190/fs/usr/bin/kube-controllers: no space left on device" logger="UnhandledError" Aug 13 01:18:16.055829 kubelet[2722]: E0813 01:18:16.055770 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/190/fs/usr/bin/kube-controllers: no space left on 
device\"" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" podUID="2dab385f-2367-4e01-8d78-2247bcba7bcc" Aug 13 01:18:16.811201 systemd[1]: Started sshd@45-172.233.214.103:22-147.75.109.163:46956.service - OpenSSH per-connection server daemon (147.75.109.163:46956). Aug 13 01:18:17.148021 sshd[7144]: Accepted publickey for core from 147.75.109.163 port 46956 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:18:17.149332 sshd-session[7144]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:18:17.157361 systemd-logind[1522]: New session 46 of user core. Aug 13 01:18:17.161040 systemd[1]: Started session-46.scope - Session 46 of User core. Aug 13 01:18:17.470269 sshd[7146]: Connection closed by 147.75.109.163 port 46956 Aug 13 01:18:17.470938 sshd-session[7144]: pam_unix(sshd:session): session closed for user core Aug 13 01:18:17.476620 systemd-logind[1522]: Session 46 logged out. Waiting for processes to exit. Aug 13 01:18:17.477096 systemd[1]: sshd@45-172.233.214.103:22-147.75.109.163:46956.service: Deactivated successfully. Aug 13 01:18:17.479318 systemd[1]: session-46.scope: Deactivated successfully. Aug 13 01:18:17.481280 systemd-logind[1522]: Removed session 46. Aug 13 01:18:18.058256 kubelet[2722]: E0813 01:18:18.058214 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:18:18.059171 kubelet[2722]: E0813 01:18:18.059090 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": ErrImagePull: failed to pull and unpack image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/189/fs/coredns: no space left on device\"" pod="kube-system/coredns-674b8bbfcf-fgsjn" podUID="27718112-1bb9-402a-89c8-f4890dedf664" Aug 13 01:18:19.057784 kubelet[2722]: E0813 01:18:19.057743 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:18:19.058739 containerd[1550]: time="2025-08-13T01:18:19.058691879Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Aug 13 01:18:20.068653 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1461634145.mount: Deactivated successfully. 
Aug 13 01:18:20.863085 kubelet[2722]: I0813 01:18:20.863026 2722 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:18:20.863085 kubelet[2722]: I0813 01:18:20.863061 2722 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:18:20.864716 kubelet[2722]: I0813 01:18:20.864686 2722 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:18:20.888298 kubelet[2722]: I0813 01:18:20.888231 2722 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:18:20.888602 kubelet[2722]: I0813 01:18:20.888328 2722 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-674b8bbfcf-fgsjn","calico-system/calico-kube-controllers-cddc95b58-6t6z7","kube-system/coredns-674b8bbfcf-p259x","calico-system/calico-typha-55bf5cd98c-8lqpc","calico-system/calico-node-hq29b","kube-system/kube-controller-manager-172-233-214-103","kube-system/kube-proxy-tb5sq","kube-system/kube-apiserver-172-233-214-103","calico-system/csi-node-driver-l7lv4","kube-system/kube-scheduler-172-233-214-103"] Aug 13 01:18:20.888602 kubelet[2722]: E0813 01:18:20.888354 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:18:20.888602 kubelet[2722]: E0813 01:18:20.888362 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:18:20.888602 kubelet[2722]: E0813 01:18:20.888368 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:18:20.888602 kubelet[2722]: E0813 01:18:20.888376 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-55bf5cd98c-8lqpc" Aug 13 01:18:20.888602 kubelet[2722]: E0813 01:18:20.888383 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-hq29b" Aug 13 01:18:20.888602 kubelet[2722]: E0813 01:18:20.888389 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-214-103" Aug 13 01:18:20.888602 kubelet[2722]: E0813 01:18:20.888396 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-tb5sq" Aug 13 01:18:20.888602 kubelet[2722]: E0813 01:18:20.888402 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-214-103" Aug 13 01:18:20.888602 kubelet[2722]: E0813 01:18:20.888410 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:18:20.888602 kubelet[2722]: E0813 01:18:20.888416 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-214-103" Aug 13 01:18:20.888602 kubelet[2722]: I0813 01:18:20.888424 2722 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:18:21.011750 containerd[1550]: time="2025-08-13T01:18:21.011620929Z" level=error msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.12.0\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write 
/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/203/fs/coredns: no space left on device" Aug 13 01:18:21.012247 containerd[1550]: time="2025-08-13T01:18:21.011665449Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Aug 13 01:18:21.012609 kubelet[2722]: E0813 01:18:21.012380 2722 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.12.0\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/203/fs/coredns: no space left on device" image="registry.k8s.io/coredns/coredns:v1.12.0" Aug 13 01:18:21.012609 kubelet[2722]: E0813 01:18:21.012417 2722 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.12.0\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/203/fs/coredns: no space left on device" image="registry.k8s.io/coredns/coredns:v1.12.0" Aug 13 01:18:21.012609 kubelet[2722]: E0813 01:18:21.012547 2722 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:coredns,Image:registry.k8s.io/coredns/coredns:v1.12.0,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{73400320 0} {} 70Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-l85x2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} 
start failed in pod coredns-674b8bbfcf-p259x_kube-system(a5b0b8ae-a381-43cc-8adc-4e3ee01749bd): ErrImagePull: failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.12.0\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/203/fs/coredns: no space left on device" logger="UnhandledError" Aug 13 01:18:21.013781 kubelet[2722]: E0813 01:18:21.013739 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with ErrImagePull: \"failed to pull and unpack image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/203/fs/coredns: no space left on device\"" pod="kube-system/coredns-674b8bbfcf-p259x" podUID="a5b0b8ae-a381-43cc-8adc-4e3ee01749bd" Aug 13 01:18:21.784067 containerd[1550]: time="2025-08-13T01:18:21.784017710Z" level=info msg="TaskExit event in podsandbox handler container_id:\"912e55c883b9193775a8e2855a8f299720d449e5b9a7826e028fd24a993416d7\" id:\"fa54fb41971df5eb474ea7a6bd00cdb623df78fb34c6c2273c9e25a50b8caf0e\" pid:7221 exited_at:{seconds:1755047901 nanos:782282740}" Aug 13 01:18:22.532194 systemd[1]: Started sshd@46-172.233.214.103:22-147.75.109.163:49944.service - OpenSSH per-connection server daemon (147.75.109.163:49944). Aug 13 01:18:22.862805 sshd[7234]: Accepted publickey for core from 147.75.109.163 port 49944 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:18:22.864609 sshd-session[7234]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:18:22.870983 systemd-logind[1522]: New session 47 of user core. Aug 13 01:18:22.879143 systemd[1]: Started session-47.scope - Session 47 of User core. Aug 13 01:18:23.181563 sshd[7236]: Connection closed by 147.75.109.163 port 49944 Aug 13 01:18:23.182388 sshd-session[7234]: pam_unix(sshd:session): session closed for user core Aug 13 01:18:23.188704 systemd-logind[1522]: Session 47 logged out. Waiting for processes to exit. Aug 13 01:18:23.189453 systemd[1]: sshd@46-172.233.214.103:22-147.75.109.163:49944.service: Deactivated successfully. Aug 13 01:18:23.192706 systemd[1]: session-47.scope: Deactivated successfully. Aug 13 01:18:23.196085 systemd-logind[1522]: Removed session 47. Aug 13 01:18:28.061446 kubelet[2722]: E0813 01:18:28.061384 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/190/fs/usr/bin/kube-controllers: no space left on device\"" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" podUID="2dab385f-2367-4e01-8d78-2247bcba7bcc" Aug 13 01:18:28.245086 systemd[1]: Started sshd@47-172.233.214.103:22-147.75.109.163:45484.service - OpenSSH per-connection server daemon (147.75.109.163:45484). 
Aug 13 01:18:28.589804 sshd[7248]: Accepted publickey for core from 147.75.109.163 port 45484 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:18:28.593113 sshd-session[7248]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:18:28.602084 systemd-logind[1522]: New session 48 of user core. Aug 13 01:18:28.609065 systemd[1]: Started session-48.scope - Session 48 of User core. Aug 13 01:18:28.899165 sshd[7250]: Connection closed by 147.75.109.163 port 45484 Aug 13 01:18:28.900280 sshd-session[7248]: pam_unix(sshd:session): session closed for user core Aug 13 01:18:28.904683 systemd-logind[1522]: Session 48 logged out. Waiting for processes to exit. Aug 13 01:18:28.905026 systemd[1]: sshd@47-172.233.214.103:22-147.75.109.163:45484.service: Deactivated successfully. Aug 13 01:18:28.908334 systemd[1]: session-48.scope: Deactivated successfully. Aug 13 01:18:28.910058 systemd-logind[1522]: Removed session 48. Aug 13 01:18:28.958072 systemd[1]: Started sshd@48-172.233.214.103:22-147.75.109.163:45490.service - OpenSSH per-connection server daemon (147.75.109.163:45490). Aug 13 01:18:29.297062 sshd[7262]: Accepted publickey for core from 147.75.109.163 port 45490 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:18:29.299207 sshd-session[7262]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:18:29.309430 systemd-logind[1522]: New session 49 of user core. Aug 13 01:18:29.316081 systemd[1]: Started session-49.scope - Session 49 of User core. Aug 13 01:18:29.635170 sshd[7269]: Connection closed by 147.75.109.163 port 45490 Aug 13 01:18:29.636118 sshd-session[7262]: pam_unix(sshd:session): session closed for user core Aug 13 01:18:29.642111 systemd[1]: sshd@48-172.233.214.103:22-147.75.109.163:45490.service: Deactivated successfully. Aug 13 01:18:29.645083 systemd[1]: session-49.scope: Deactivated successfully. Aug 13 01:18:29.648279 systemd-logind[1522]: Session 49 logged out. Waiting for processes to exit. Aug 13 01:18:29.650454 systemd-logind[1522]: Removed session 49. 
Aug 13 01:18:30.919422 kubelet[2722]: I0813 01:18:30.919381 2722 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:18:30.919422 kubelet[2722]: I0813 01:18:30.919416 2722 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:18:30.921078 kubelet[2722]: I0813 01:18:30.921050 2722 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:18:30.934705 kubelet[2722]: I0813 01:18:30.934681 2722 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:18:30.934814 kubelet[2722]: I0813 01:18:30.934790 2722 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-674b8bbfcf-p259x","kube-system/coredns-674b8bbfcf-fgsjn","calico-system/calico-kube-controllers-cddc95b58-6t6z7","calico-system/calico-typha-55bf5cd98c-8lqpc","calico-system/calico-node-hq29b","kube-system/kube-controller-manager-172-233-214-103","kube-system/kube-proxy-tb5sq","kube-system/kube-apiserver-172-233-214-103","calico-system/csi-node-driver-l7lv4","kube-system/kube-scheduler-172-233-214-103"] Aug 13 01:18:30.934956 kubelet[2722]: E0813 01:18:30.934820 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-p259x" Aug 13 01:18:30.934956 kubelet[2722]: E0813 01:18:30.934828 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-fgsjn" Aug 13 01:18:30.934956 kubelet[2722]: E0813 01:18:30.934834 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-cddc95b58-6t6z7" Aug 13 01:18:30.934956 kubelet[2722]: E0813 01:18:30.934842 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-55bf5cd98c-8lqpc" Aug 13 01:18:30.934956 kubelet[2722]: E0813 01:18:30.934849 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-hq29b" Aug 13 01:18:30.934956 kubelet[2722]: E0813 01:18:30.934855 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-214-103" Aug 13 01:18:30.934956 kubelet[2722]: E0813 01:18:30.934865 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-tb5sq" Aug 13 01:18:30.934956 kubelet[2722]: E0813 01:18:30.934882 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-214-103" Aug 13 01:18:30.934956 kubelet[2722]: E0813 01:18:30.934926 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-l7lv4" Aug 13 01:18:30.934956 kubelet[2722]: E0813 01:18:30.934938 2722 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-214-103" Aug 13 01:18:30.934956 kubelet[2722]: I0813 01:18:30.934950 2722 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:18:31.057498 kubelet[2722]: E0813 01:18:31.057455 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:18:32.061605 kubelet[2722]: E0813 01:18:32.061499 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some 
nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:18:32.064864 kubelet[2722]: E0813 01:18:32.064270 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": ErrImagePull: failed to pull and unpack image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/203/fs/coredns: no space left on device\"" pod="kube-system/coredns-674b8bbfcf-p259x" podUID="a5b0b8ae-a381-43cc-8adc-4e3ee01749bd" Aug 13 01:18:33.058385 kubelet[2722]: E0813 01:18:33.058316 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:18:33.059934 containerd[1550]: time="2025-08-13T01:18:33.059870082Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Aug 13 01:18:33.849479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2415763454.mount: Deactivated successfully. Aug 13 01:18:34.763096 containerd[1550]: time="2025-08-13T01:18:34.763025827Z" level=error msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.12.0\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/216/fs/coredns: no space left on device" Aug 13 01:18:34.764417 containerd[1550]: time="2025-08-13T01:18:34.763086857Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Aug 13 01:18:34.765525 kubelet[2722]: E0813 01:18:34.764852 2722 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.12.0\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/216/fs/coredns: no space left on device" image="registry.k8s.io/coredns/coredns:v1.12.0" Aug 13 01:18:34.765525 kubelet[2722]: E0813 01:18:34.765136 2722 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.12.0\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/216/fs/coredns: no space left on device" image="registry.k8s.io/coredns/coredns:v1.12.0" Aug 13 01:18:34.766927 kubelet[2722]: E0813 01:18:34.766228 2722 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:coredns,Image:registry.k8s.io/coredns/coredns:v1.12.0,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {} 170Mi 
BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{73400320 0} {} 70Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bnhgk,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod coredns-674b8bbfcf-fgsjn_kube-system(27718112-1bb9-402a-89c8-f4890dedf664): ErrImagePull: failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.12.0\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/216/fs/coredns: no space left on device" logger="UnhandledError" Aug 13 01:18:34.768177 kubelet[2722]: E0813 01:18:34.768152 2722 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with ErrImagePull: \"failed to pull and unpack image \\\"registry.k8s.io/coredns/coredns:v1.12.0\\\": failed to extract layer sha256:25359bcca1bb70511c264e8a14c78fbc75c226344e91d19f733ed309b336949f: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/216/fs/coredns: no space left on device\"" pod="kube-system/coredns-674b8bbfcf-fgsjn" podUID="27718112-1bb9-402a-89c8-f4890dedf664" Aug 13 01:18:35.058656 kubelet[2722]: E0813 01:18:35.058512 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:18:37.058107 kubelet[2722]: E0813 01:18:37.058046 2722 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9"