Aug 13 02:06:01.816966 kernel: Linux version 6.12.40-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 21:42:48 -00 2025 Aug 13 02:06:01.816985 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21 Aug 13 02:06:01.816994 kernel: BIOS-provided physical RAM map: Aug 13 02:06:01.817002 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable Aug 13 02:06:01.817007 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved Aug 13 02:06:01.817012 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Aug 13 02:06:01.817019 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable Aug 13 02:06:01.817025 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved Aug 13 02:06:01.817030 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Aug 13 02:06:01.817035 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Aug 13 02:06:01.817041 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Aug 13 02:06:01.817047 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Aug 13 02:06:01.817054 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable Aug 13 02:06:01.817060 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Aug 13 02:06:01.817066 kernel: NX (Execute Disable) protection: active Aug 13 02:06:01.817072 kernel: APIC: Static calls initialized Aug 13 02:06:01.817078 kernel: SMBIOS 2.8 present. 
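The e820 map above is the firmware's view of RAM on this roughly 4 GB KVM guest. Summing just the three ranges marked "usable" is a quick sanity check against the figures the kernel prints later ("Total pages: 1048443" and "Memory: 3961804K/4193772K available"); a minimal sketch in Python, with the start/end addresses copied from the map (end addresses are inclusive):

    # Sum the three "usable" e820 ranges listed above (end addresses inclusive).
    usable = [
        (0x0000000000000000, 0x000000000009f7ff),
        (0x0000000000100000, 0x000000007ffdcfff),
        (0x0000000100000000, 0x000000017fffffff),
    ]
    total_bytes = sum(end - start + 1 for start, end in usable)
    print(total_bytes)           # 4,294,428,672 bytes
    print(total_bytes // 1024)   # 4,193,778 KiB

That lands 6 KiB above the 4,193,772K (1,048,443 x 4 KiB) the kernel reports, which accounts for the 4 KiB page 0 that gets reserved plus the half page above 0x9f800 that cannot be used.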
Aug 13 02:06:01.817086 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified Aug 13 02:06:01.817092 kernel: DMI: Memory slots populated: 1/1 Aug 13 02:06:01.817098 kernel: Hypervisor detected: KVM Aug 13 02:06:01.817104 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Aug 13 02:06:01.817109 kernel: kvm-clock: using sched offset of 5495114960 cycles Aug 13 02:06:01.817115 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Aug 13 02:06:01.817122 kernel: tsc: Detected 2000.000 MHz processor Aug 13 02:06:01.817128 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 13 02:06:01.817135 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 13 02:06:01.817141 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000 Aug 13 02:06:01.817149 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Aug 13 02:06:01.817155 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 13 02:06:01.817161 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Aug 13 02:06:01.817167 kernel: Using GB pages for direct mapping Aug 13 02:06:01.817173 kernel: ACPI: Early table checksum verification disabled Aug 13 02:06:01.817179 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS ) Aug 13 02:06:01.817185 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 02:06:01.817191 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 02:06:01.817198 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 02:06:01.817205 kernel: ACPI: FACS 0x000000007FFE0000 000040 Aug 13 02:06:01.817211 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 02:06:01.817217 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 02:06:01.817224 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 02:06:01.817233 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 02:06:01.817240 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea] Aug 13 02:06:01.817250 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6] Aug 13 02:06:01.817257 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Aug 13 02:06:01.817265 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a] Aug 13 02:06:01.817272 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2] Aug 13 02:06:01.817280 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de] Aug 13 02:06:01.817287 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306] Aug 13 02:06:01.817295 kernel: No NUMA configuration found Aug 13 02:06:01.817302 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff] Aug 13 02:06:01.817312 kernel: NODE_DATA(0) allocated [mem 0x17fff8dc0-0x17fffffff] Aug 13 02:06:01.817319 kernel: Zone ranges: Aug 13 02:06:01.817327 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 13 02:06:01.817334 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Aug 13 02:06:01.817342 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff] Aug 13 02:06:01.817349 kernel: Device empty Aug 13 02:06:01.817357 kernel: Movable zone start for each node Aug 13 02:06:01.817364 kernel: Early memory node ranges Aug 13 02:06:01.817372 kernel: 
node 0: [mem 0x0000000000001000-0x000000000009efff] Aug 13 02:06:01.817379 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] Aug 13 02:06:01.817389 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff] Aug 13 02:06:01.817397 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff] Aug 13 02:06:01.817404 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 13 02:06:01.817411 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Aug 13 02:06:01.817419 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Aug 13 02:06:01.817426 kernel: ACPI: PM-Timer IO Port: 0x608 Aug 13 02:06:01.817432 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Aug 13 02:06:01.817438 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Aug 13 02:06:01.817444 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Aug 13 02:06:01.817452 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Aug 13 02:06:01.817459 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 13 02:06:01.817465 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Aug 13 02:06:01.817471 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Aug 13 02:06:01.817477 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 13 02:06:01.817484 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Aug 13 02:06:01.817490 kernel: TSC deadline timer available Aug 13 02:06:01.817496 kernel: CPU topo: Max. logical packages: 1 Aug 13 02:06:01.817502 kernel: CPU topo: Max. logical dies: 1 Aug 13 02:06:01.817510 kernel: CPU topo: Max. dies per package: 1 Aug 13 02:06:01.817516 kernel: CPU topo: Max. threads per core: 1 Aug 13 02:06:01.817523 kernel: CPU topo: Num. cores per package: 2 Aug 13 02:06:01.817529 kernel: CPU topo: Num. threads per package: 2 Aug 13 02:06:01.817535 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Aug 13 02:06:01.817541 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Aug 13 02:06:01.817548 kernel: kvm-guest: KVM setup pv remote TLB flush Aug 13 02:06:01.817554 kernel: kvm-guest: setup PV sched yield Aug 13 02:06:01.817560 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Aug 13 02:06:01.817568 kernel: Booting paravirtualized kernel on KVM Aug 13 02:06:01.817574 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 13 02:06:01.817579 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Aug 13 02:06:01.817585 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Aug 13 02:06:01.817613 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Aug 13 02:06:01.817619 kernel: pcpu-alloc: [0] 0 1 Aug 13 02:06:01.817624 kernel: kvm-guest: PV spinlocks enabled Aug 13 02:06:01.817629 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Aug 13 02:06:01.817635 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21 Aug 13 02:06:01.817643 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Aug 13 02:06:01.817648 kernel: random: crng init done Aug 13 02:06:01.817654 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Aug 13 02:06:01.817659 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 13 02:06:01.817665 kernel: Fallback order for Node 0: 0 Aug 13 02:06:01.817670 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443 Aug 13 02:06:01.817675 kernel: Policy zone: Normal Aug 13 02:06:01.817680 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 13 02:06:01.817687 kernel: software IO TLB: area num 2. Aug 13 02:06:01.817693 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 13 02:06:01.817698 kernel: ftrace: allocating 40098 entries in 157 pages Aug 13 02:06:01.817703 kernel: ftrace: allocated 157 pages with 5 groups Aug 13 02:06:01.817708 kernel: Dynamic Preempt: voluntary Aug 13 02:06:01.817714 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 13 02:06:01.817719 kernel: rcu: RCU event tracing is enabled. Aug 13 02:06:01.817725 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Aug 13 02:06:01.817730 kernel: Trampoline variant of Tasks RCU enabled. Aug 13 02:06:01.817736 kernel: Rude variant of Tasks RCU enabled. Aug 13 02:06:01.817743 kernel: Tracing variant of Tasks RCU enabled. Aug 13 02:06:01.817748 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Aug 13 02:06:01.817753 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 13 02:06:01.817759 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 02:06:01.817769 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 02:06:01.817776 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 02:06:01.817781 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Aug 13 02:06:01.817787 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Aug 13 02:06:01.817792 kernel: Console: colour VGA+ 80x25 Aug 13 02:06:01.817798 kernel: printk: legacy console [tty0] enabled Aug 13 02:06:01.817804 kernel: printk: legacy console [ttyS0] enabled Aug 13 02:06:01.817811 kernel: ACPI: Core revision 20240827 Aug 13 02:06:01.817816 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Aug 13 02:06:01.817822 kernel: APIC: Switch to symmetric I/O mode setup Aug 13 02:06:01.817827 kernel: x2apic enabled Aug 13 02:06:01.817833 kernel: APIC: Switched APIC routing to: physical x2apic Aug 13 02:06:01.817840 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Aug 13 02:06:01.817846 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Aug 13 02:06:01.817851 kernel: kvm-guest: setup PV IPIs Aug 13 02:06:01.817857 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Aug 13 02:06:01.817862 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns Aug 13 02:06:01.817868 kernel: Calibrating delay loop (skipped) preset value.. 
4000.00 BogoMIPS (lpj=2000000) Aug 13 02:06:01.817873 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Aug 13 02:06:01.817879 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Aug 13 02:06:01.817884 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Aug 13 02:06:01.817891 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 13 02:06:01.817897 kernel: Spectre V2 : Mitigation: Retpolines Aug 13 02:06:01.817903 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Aug 13 02:06:01.817908 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Aug 13 02:06:01.817914 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Aug 13 02:06:01.817919 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Aug 13 02:06:01.817925 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Aug 13 02:06:01.817931 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Aug 13 02:06:01.817938 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Aug 13 02:06:01.817943 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Aug 13 02:06:01.817949 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Aug 13 02:06:01.817955 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 13 02:06:01.817960 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 13 02:06:01.817966 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 13 02:06:01.817971 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Aug 13 02:06:01.817977 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 13 02:06:01.817982 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8 Aug 13 02:06:01.817989 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format. Aug 13 02:06:01.817995 kernel: Freeing SMP alternatives memory: 32K Aug 13 02:06:01.818000 kernel: pid_max: default: 32768 minimum: 301 Aug 13 02:06:01.818006 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Aug 13 02:06:01.818011 kernel: landlock: Up and running. Aug 13 02:06:01.818017 kernel: SELinux: Initializing. Aug 13 02:06:01.818022 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 13 02:06:01.818028 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 13 02:06:01.818033 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Aug 13 02:06:01.818040 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Aug 13 02:06:01.818046 kernel: ... version: 0 Aug 13 02:06:01.818051 kernel: ... bit width: 48 Aug 13 02:06:01.818057 kernel: ... generic registers: 6 Aug 13 02:06:01.818062 kernel: ... value mask: 0000ffffffffffff Aug 13 02:06:01.818068 kernel: ... max period: 00007fffffffffff Aug 13 02:06:01.818073 kernel: ... fixed-purpose events: 0 Aug 13 02:06:01.818079 kernel: ... event mask: 000000000000003f Aug 13 02:06:01.818084 kernel: signal: max sigframe size: 3376 Aug 13 02:06:01.818091 kernel: rcu: Hierarchical SRCU implementation. Aug 13 02:06:01.818097 kernel: rcu: Max phase no-delay instances is 400. 
Aug 13 02:06:01.818102 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Aug 13 02:06:01.818108 kernel: smp: Bringing up secondary CPUs ... Aug 13 02:06:01.818113 kernel: smpboot: x86: Booting SMP configuration: Aug 13 02:06:01.818119 kernel: .... node #0, CPUs: #1 Aug 13 02:06:01.818124 kernel: smp: Brought up 1 node, 2 CPUs Aug 13 02:06:01.818130 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS) Aug 13 02:06:01.818136 kernel: Memory: 3961804K/4193772K available (14336K kernel code, 2430K rwdata, 9960K rodata, 54444K init, 2524K bss, 227292K reserved, 0K cma-reserved) Aug 13 02:06:01.818142 kernel: devtmpfs: initialized Aug 13 02:06:01.818148 kernel: x86/mm: Memory block size: 128MB Aug 13 02:06:01.818154 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 13 02:06:01.818159 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 13 02:06:01.818165 kernel: pinctrl core: initialized pinctrl subsystem Aug 13 02:06:01.818170 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 13 02:06:01.818176 kernel: audit: initializing netlink subsys (disabled) Aug 13 02:06:01.818181 kernel: audit: type=2000 audit(1755050758.682:1): state=initialized audit_enabled=0 res=1 Aug 13 02:06:01.818187 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 13 02:06:01.818194 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 13 02:06:01.818199 kernel: cpuidle: using governor menu Aug 13 02:06:01.818205 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 13 02:06:01.818210 kernel: dca service started, version 1.12.1 Aug 13 02:06:01.818216 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Aug 13 02:06:01.818221 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Aug 13 02:06:01.818227 kernel: PCI: Using configuration type 1 for base access Aug 13 02:06:01.818233 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Aug 13 02:06:01.818238 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 13 02:06:01.818245 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Aug 13 02:06:01.818251 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 02:06:01.818256 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Aug 13 02:06:01.818262 kernel: ACPI: Added _OSI(Module Device) Aug 13 02:06:01.818267 kernel: ACPI: Added _OSI(Processor Device) Aug 13 02:06:01.818273 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 02:06:01.818278 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 13 02:06:01.818284 kernel: ACPI: Interpreter enabled Aug 13 02:06:01.818289 kernel: ACPI: PM: (supports S0 S3 S5) Aug 13 02:06:01.818296 kernel: ACPI: Using IOAPIC for interrupt routing Aug 13 02:06:01.818302 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 13 02:06:01.818308 kernel: PCI: Using E820 reservations for host bridge windows Aug 13 02:06:01.818313 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Aug 13 02:06:01.818319 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Aug 13 02:06:01.818447 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Aug 13 02:06:01.818540 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Aug 13 02:06:01.818648 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Aug 13 02:06:01.818660 kernel: PCI host bridge to bus 0000:00 Aug 13 02:06:01.818751 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Aug 13 02:06:01.818831 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Aug 13 02:06:01.818910 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Aug 13 02:06:01.818988 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Aug 13 02:06:01.819066 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Aug 13 02:06:01.819144 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window] Aug 13 02:06:01.819237 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Aug 13 02:06:01.819379 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Aug 13 02:06:01.819496 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Aug 13 02:06:01.819657 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Aug 13 02:06:01.819769 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Aug 13 02:06:01.819873 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Aug 13 02:06:01.821683 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Aug 13 02:06:01.821809 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint Aug 13 02:06:01.821919 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f] Aug 13 02:06:01.822024 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Aug 13 02:06:01.822129 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Aug 13 02:06:01.822242 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Aug 13 02:06:01.822346 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f] Aug 13 02:06:01.822455 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff] Aug 13 02:06:01.822575 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit 
pref] Aug 13 02:06:01.822713 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref] Aug 13 02:06:01.822829 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Aug 13 02:06:01.822934 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Aug 13 02:06:01.823046 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Aug 13 02:06:01.823156 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df] Aug 13 02:06:01.823259 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff] Aug 13 02:06:01.823370 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Aug 13 02:06:01.823474 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Aug 13 02:06:01.823483 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Aug 13 02:06:01.823490 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Aug 13 02:06:01.823497 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Aug 13 02:06:01.823504 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Aug 13 02:06:01.823513 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Aug 13 02:06:01.823520 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Aug 13 02:06:01.823526 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Aug 13 02:06:01.823533 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Aug 13 02:06:01.823539 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Aug 13 02:06:01.823546 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Aug 13 02:06:01.823552 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Aug 13 02:06:01.823559 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Aug 13 02:06:01.823565 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Aug 13 02:06:01.823574 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Aug 13 02:06:01.823580 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Aug 13 02:06:01.823602 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Aug 13 02:06:01.823609 kernel: iommu: Default domain type: Translated Aug 13 02:06:01.823616 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 13 02:06:01.823622 kernel: PCI: Using ACPI for IRQ routing Aug 13 02:06:01.823629 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 13 02:06:01.823636 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff] Aug 13 02:06:01.823642 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Aug 13 02:06:01.823752 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Aug 13 02:06:01.823856 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Aug 13 02:06:01.823959 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Aug 13 02:06:01.823968 kernel: vgaarb: loaded Aug 13 02:06:01.823975 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Aug 13 02:06:01.823981 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Aug 13 02:06:01.823988 kernel: clocksource: Switched to clocksource kvm-clock Aug 13 02:06:01.823995 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 02:06:01.824004 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 02:06:01.824011 kernel: pnp: PnP ACPI init Aug 13 02:06:01.824124 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Aug 13 02:06:01.824134 kernel: pnp: PnP ACPI: found 5 devices Aug 13 02:06:01.824141 kernel: clocksource: acpi_pm: mask: 0xffffff 
max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 13 02:06:01.824148 kernel: NET: Registered PF_INET protocol family Aug 13 02:06:01.824155 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 13 02:06:01.824161 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Aug 13 02:06:01.824171 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 02:06:01.824178 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 13 02:06:01.824184 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Aug 13 02:06:01.824191 kernel: TCP: Hash tables configured (established 32768 bind 32768) Aug 13 02:06:01.824198 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 13 02:06:01.824204 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 13 02:06:01.824211 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 02:06:01.824217 kernel: NET: Registered PF_XDP protocol family Aug 13 02:06:01.824315 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Aug 13 02:06:01.824413 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Aug 13 02:06:01.824507 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Aug 13 02:06:01.825713 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Aug 13 02:06:01.825824 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Aug 13 02:06:01.826482 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window] Aug 13 02:06:01.826496 kernel: PCI: CLS 0 bytes, default 64 Aug 13 02:06:01.826503 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Aug 13 02:06:01.826510 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB) Aug 13 02:06:01.826521 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns Aug 13 02:06:01.826527 kernel: Initialise system trusted keyrings Aug 13 02:06:01.826546 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Aug 13 02:06:01.826562 kernel: Key type asymmetric registered Aug 13 02:06:01.826568 kernel: Asymmetric key parser 'x509' registered Aug 13 02:06:01.826575 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Aug 13 02:06:01.826582 kernel: io scheduler mq-deadline registered Aug 13 02:06:01.826645 kernel: io scheduler kyber registered Aug 13 02:06:01.826652 kernel: io scheduler bfq registered Aug 13 02:06:01.826662 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 13 02:06:01.826669 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Aug 13 02:06:01.826676 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Aug 13 02:06:01.826682 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 02:06:01.826689 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 13 02:06:01.826696 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Aug 13 02:06:01.826702 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Aug 13 02:06:01.826709 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Aug 13 02:06:01.826828 kernel: rtc_cmos 00:03: RTC can wake from S4 Aug 13 02:06:01.826934 kernel: rtc_cmos 00:03: registered as rtc0 Aug 13 02:06:01.827032 kernel: rtc_cmos 00:03: setting system clock to 2025-08-13T02:06:01 UTC (1755050761) Aug 13 02:06:01.827134 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 
bytes nvram, hpet irqs Aug 13 02:06:01.827143 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Aug 13 02:06:01.827150 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Aug 13 02:06:01.827157 kernel: NET: Registered PF_INET6 protocol family Aug 13 02:06:01.827164 kernel: Segment Routing with IPv6 Aug 13 02:06:01.827170 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 02:06:01.827179 kernel: NET: Registered PF_PACKET protocol family Aug 13 02:06:01.827186 kernel: Key type dns_resolver registered Aug 13 02:06:01.827192 kernel: IPI shorthand broadcast: enabled Aug 13 02:06:01.827199 kernel: sched_clock: Marking stable (2594002860, 206728380)->(2838159890, -37428650) Aug 13 02:06:01.827206 kernel: registered taskstats version 1 Aug 13 02:06:01.827212 kernel: Loading compiled-in X.509 certificates Aug 13 02:06:01.827219 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.40-flatcar: dee0b464d3f7f8d09744a2392f69dde258bc95c0' Aug 13 02:06:01.827226 kernel: Demotion targets for Node 0: null Aug 13 02:06:01.827232 kernel: Key type .fscrypt registered Aug 13 02:06:01.827240 kernel: Key type fscrypt-provisioning registered Aug 13 02:06:01.827247 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 13 02:06:01.827254 kernel: ima: Allocated hash algorithm: sha1 Aug 13 02:06:01.827261 kernel: ima: No architecture policies found Aug 13 02:06:01.827267 kernel: clk: Disabling unused clocks Aug 13 02:06:01.827274 kernel: Warning: unable to open an initial console. Aug 13 02:06:01.827281 kernel: Freeing unused kernel image (initmem) memory: 54444K Aug 13 02:06:01.827288 kernel: Write protecting the kernel read-only data: 24576k Aug 13 02:06:01.827294 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K Aug 13 02:06:01.827303 kernel: Run /init as init process Aug 13 02:06:01.827309 kernel: with arguments: Aug 13 02:06:01.827316 kernel: /init Aug 13 02:06:01.827322 kernel: with environment: Aug 13 02:06:01.827329 kernel: HOME=/ Aug 13 02:06:01.827347 kernel: TERM=linux Aug 13 02:06:01.827356 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 02:06:01.827363 systemd[1]: Successfully made /usr/ read-only. Aug 13 02:06:01.827375 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 13 02:06:01.827383 systemd[1]: Detected virtualization kvm. Aug 13 02:06:01.827390 systemd[1]: Detected architecture x86-64. Aug 13 02:06:01.827397 systemd[1]: Running in initrd. Aug 13 02:06:01.827404 systemd[1]: No hostname configured, using default hostname. Aug 13 02:06:01.827411 systemd[1]: Hostname set to . Aug 13 02:06:01.827418 systemd[1]: Initializing machine ID from random generator. Aug 13 02:06:01.827425 systemd[1]: Queued start job for default target initrd.target. Aug 13 02:06:01.827434 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 02:06:01.827442 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 02:06:01.827449 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Aug 13 02:06:01.827458 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 02:06:01.827466 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 13 02:06:01.827474 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 13 02:06:01.827482 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 13 02:06:01.827492 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 13 02:06:01.827499 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 02:06:01.827506 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 02:06:01.827513 systemd[1]: Reached target paths.target - Path Units. Aug 13 02:06:01.827521 systemd[1]: Reached target slices.target - Slice Units. Aug 13 02:06:01.827540 systemd[1]: Reached target swap.target - Swaps. Aug 13 02:06:01.827563 systemd[1]: Reached target timers.target - Timer Units. Aug 13 02:06:01.827570 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 02:06:01.827580 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 02:06:01.827599 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 13 02:06:01.827606 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Aug 13 02:06:01.827614 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 02:06:01.827632 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 02:06:01.827639 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 02:06:01.827647 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 02:06:01.827656 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 13 02:06:01.827663 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 02:06:01.827671 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 13 02:06:01.827678 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Aug 13 02:06:01.827686 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 02:06:01.827693 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 02:06:01.827700 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 02:06:01.827710 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 02:06:01.827737 systemd-journald[206]: Collecting audit messages is disabled. Aug 13 02:06:01.827755 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 13 02:06:01.827765 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 02:06:01.827773 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 02:06:01.827780 systemd-journald[206]: Journal started Aug 13 02:06:01.827800 systemd-journald[206]: Runtime Journal (/run/log/journal/03047cfed2e4413e815dc4c0893d5e88) is 8M, max 78.5M, 70.5M free. Aug 13 02:06:01.826574 systemd-modules-load[207]: Inserted module 'overlay' Aug 13 02:06:01.835618 systemd[1]: Started systemd-journald.service - Journal Service. 
Aug 13 02:06:01.838701 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 02:06:01.911753 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 02:06:01.911772 kernel: Bridge firewalling registered Aug 13 02:06:01.846077 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 02:06:01.867025 systemd-modules-load[207]: Inserted module 'br_netfilter' Aug 13 02:06:01.917928 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 02:06:01.921088 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 02:06:01.924408 systemd-tmpfiles[219]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Aug 13 02:06:01.925475 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 02:06:01.928740 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 02:06:01.929449 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 02:06:01.932212 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 02:06:01.937797 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 02:06:01.947245 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 02:06:01.948703 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 02:06:01.952691 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 02:06:01.954937 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 02:06:01.956735 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 13 02:06:01.972268 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21 Aug 13 02:06:01.990760 systemd-resolved[242]: Positive Trust Anchors: Aug 13 02:06:01.990772 systemd-resolved[242]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 02:06:01.990795 systemd-resolved[242]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 02:06:01.994074 systemd-resolved[242]: Defaulting to hostname 'linux'. Aug 13 02:06:01.997439 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 02:06:01.998293 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Aug 13 02:06:02.050616 kernel: SCSI subsystem initialized Aug 13 02:06:02.059623 kernel: Loading iSCSI transport class v2.0-870. Aug 13 02:06:02.069613 kernel: iscsi: registered transport (tcp) Aug 13 02:06:02.088175 kernel: iscsi: registered transport (qla4xxx) Aug 13 02:06:02.088216 kernel: QLogic iSCSI HBA Driver Aug 13 02:06:02.105092 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 02:06:02.118121 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 02:06:02.120282 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 02:06:02.156964 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 13 02:06:02.158706 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 13 02:06:02.201614 kernel: raid6: avx2x4 gen() 33294 MB/s Aug 13 02:06:02.219611 kernel: raid6: avx2x2 gen() 36301 MB/s Aug 13 02:06:02.237932 kernel: raid6: avx2x1 gen() 22530 MB/s Aug 13 02:06:02.237946 kernel: raid6: using algorithm avx2x2 gen() 36301 MB/s Aug 13 02:06:02.256963 kernel: raid6: .... xor() 29913 MB/s, rmw enabled Aug 13 02:06:02.256993 kernel: raid6: using avx2x2 recovery algorithm Aug 13 02:06:02.275620 kernel: xor: automatically using best checksumming function avx Aug 13 02:06:02.404620 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 13 02:06:02.411169 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 13 02:06:02.413076 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 02:06:02.439472 systemd-udevd[455]: Using default interface naming scheme 'v255'. Aug 13 02:06:02.444352 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 02:06:02.447336 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 13 02:06:02.472576 dracut-pre-trigger[465]: rd.md=0: removing MD RAID activation Aug 13 02:06:02.495939 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 02:06:02.497876 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 02:06:02.555936 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 02:06:02.560096 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 13 02:06:02.618647 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 02:06:02.631868 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Aug 13 02:06:02.633620 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues Aug 13 02:06:02.633958 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 02:06:02.634072 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 02:06:02.635357 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 02:06:02.638920 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 02:06:02.652657 kernel: scsi host0: Virtio SCSI HBA Aug 13 02:06:02.662626 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Aug 13 02:06:02.745720 kernel: libata version 3.00 loaded. 
Aug 13 02:06:02.800675 kernel: ahci 0000:00:1f.2: version 3.0 Aug 13 02:06:02.803935 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Aug 13 02:06:02.803972 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Aug 13 02:06:02.804135 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Aug 13 02:06:02.804265 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Aug 13 02:06:02.807510 kernel: AES CTR mode by8 optimization enabled Aug 13 02:06:02.811639 kernel: sd 0:0:0:0: Power-on or device reset occurred Aug 13 02:06:02.811913 kernel: sd 0:0:0:0: [sda] 9297920 512-byte logical blocks: (4.76 GB/4.43 GiB) Aug 13 02:06:02.812057 kernel: sd 0:0:0:0: [sda] Write Protect is off Aug 13 02:06:02.812187 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Aug 13 02:06:02.812317 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Aug 13 02:06:02.817625 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 13 02:06:02.817652 kernel: GPT:9289727 != 9297919 Aug 13 02:06:02.817664 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 13 02:06:02.817674 kernel: GPT:9289727 != 9297919 Aug 13 02:06:02.817689 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 13 02:06:02.817698 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 02:06:02.817708 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Aug 13 02:06:02.864779 kernel: scsi host1: ahci Aug 13 02:06:02.865129 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 02:06:02.869646 kernel: scsi host2: ahci Aug 13 02:06:02.872881 kernel: scsi host3: ahci Aug 13 02:06:02.873077 kernel: scsi host4: ahci Aug 13 02:06:02.874776 kernel: scsi host5: ahci Aug 13 02:06:02.877625 kernel: scsi host6: ahci Aug 13 02:06:02.881827 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 lpm-pol 0 Aug 13 02:06:02.881850 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 lpm-pol 0 Aug 13 02:06:02.883848 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 lpm-pol 0 Aug 13 02:06:02.886612 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 lpm-pol 0 Aug 13 02:06:02.903800 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Aug 13 02:06:02.916656 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 lpm-pol 0 Aug 13 02:06:02.916672 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 lpm-pol 0 Aug 13 02:06:02.929662 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Aug 13 02:06:02.939999 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Aug 13 02:06:02.940637 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Aug 13 02:06:02.949371 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Aug 13 02:06:02.951653 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 13 02:06:02.989628 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 02:06:02.990044 disk-uuid[625]: Primary Header is updated. Aug 13 02:06:02.990044 disk-uuid[625]: Secondary Entries is updated. Aug 13 02:06:02.990044 disk-uuid[625]: Secondary Header is updated. 
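The GPT warnings above are a size mismatch rather than corruption: the disk reports 9,297,920 512-byte blocks, but the backup GPT header is recorded at LBA 9,289,727 instead of at the last LBA, 9,297,919, which reads as the virtual disk being 4 MiB larger than the image the partition table was written for. The arithmetic, using only the numbers from those messages:

    sector = 512
    disk_sectors = 9_297_920          # "9297920 512-byte logical blocks"
    backup_lba   = 9_289_727          # "GPT:9289727 != 9297919"
    last_lba     = disk_sectors - 1
    print(disk_sectors * sector)              # 4,760,535,040 bytes ~ 4.76 GB / 4.43 GiB
    print(last_lba - backup_lba)              # 8,192 sectors
    print((last_lba - backup_lba) * sector)   # 4,194,304 bytes = 4 MiB short of the real end

The disk-uuid step that follows rewrites both headers ("Primary Header is updated ... Secondary Header is updated"), and the warning is not repeated on the later partition rescans.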
Aug 13 02:06:03.216622 kernel: ata1: SATA link down (SStatus 0 SControl 300) Aug 13 02:06:03.225222 kernel: ata3: SATA link down (SStatus 0 SControl 300) Aug 13 02:06:03.225245 kernel: ata5: SATA link down (SStatus 0 SControl 300) Aug 13 02:06:03.225605 kernel: ata2: SATA link down (SStatus 0 SControl 300) Aug 13 02:06:03.228242 kernel: ata4: SATA link down (SStatus 0 SControl 300) Aug 13 02:06:03.228609 kernel: ata6: SATA link down (SStatus 0 SControl 300) Aug 13 02:06:03.245842 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 13 02:06:03.247129 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 02:06:03.248049 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 02:06:03.249333 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 02:06:03.251490 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 13 02:06:03.279162 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 13 02:06:04.015080 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 02:06:04.015161 disk-uuid[626]: The operation has completed successfully. Aug 13 02:06:04.064850 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 02:06:04.065022 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 13 02:06:04.094775 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 13 02:06:04.109631 sh[655]: Success Aug 13 02:06:04.127975 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 02:06:04.128009 kernel: device-mapper: uevent: version 1.0.3 Aug 13 02:06:04.128759 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Aug 13 02:06:04.139620 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Aug 13 02:06:04.180974 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 13 02:06:04.184658 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 13 02:06:04.195284 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 13 02:06:04.206691 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Aug 13 02:06:04.206717 kernel: BTRFS: device fsid 0c0338fb-9434-41c1-99a2-737cbe2351c4 devid 1 transid 44 /dev/mapper/usr (254:0) scanned by mount (667) Aug 13 02:06:04.209765 kernel: BTRFS info (device dm-0): first mount of filesystem 0c0338fb-9434-41c1-99a2-737cbe2351c4 Aug 13 02:06:04.213574 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 13 02:06:04.213627 kernel: BTRFS info (device dm-0): using free-space-tree Aug 13 02:06:04.223219 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 13 02:06:04.224108 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Aug 13 02:06:04.225036 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 13 02:06:04.225704 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 13 02:06:04.228696 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
Aug 13 02:06:04.265672 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (702) Aug 13 02:06:04.265759 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 02:06:04.269822 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 02:06:04.269869 kernel: BTRFS info (device sda6): using free-space-tree Aug 13 02:06:04.281660 kernel: BTRFS info (device sda6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 02:06:04.282961 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 13 02:06:04.285847 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 13 02:06:04.377489 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 02:06:04.384890 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 02:06:04.395261 ignition[759]: Ignition 2.21.0 Aug 13 02:06:04.395276 ignition[759]: Stage: fetch-offline Aug 13 02:06:04.395311 ignition[759]: no configs at "/usr/lib/ignition/base.d" Aug 13 02:06:04.395322 ignition[759]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 02:06:04.395410 ignition[759]: parsed url from cmdline: "" Aug 13 02:06:04.398694 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 02:06:04.395414 ignition[759]: no config URL provided Aug 13 02:06:04.395419 ignition[759]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 02:06:04.395427 ignition[759]: no config at "/usr/lib/ignition/user.ign" Aug 13 02:06:04.395432 ignition[759]: failed to fetch config: resource requires networking Aug 13 02:06:04.395619 ignition[759]: Ignition finished successfully Aug 13 02:06:04.422837 systemd-networkd[841]: lo: Link UP Aug 13 02:06:04.422848 systemd-networkd[841]: lo: Gained carrier Aug 13 02:06:04.424246 systemd-networkd[841]: Enumeration completed Aug 13 02:06:04.424664 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 02:06:04.424696 systemd-networkd[841]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 02:06:04.424700 systemd-networkd[841]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 02:06:04.426236 systemd-networkd[841]: eth0: Link UP Aug 13 02:06:04.426430 systemd-networkd[841]: eth0: Gained carrier Aug 13 02:06:04.426439 systemd-networkd[841]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 02:06:04.427640 systemd[1]: Reached target network.target - Network. Aug 13 02:06:04.428913 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Aug 13 02:06:04.452823 ignition[845]: Ignition 2.21.0 Aug 13 02:06:04.452833 ignition[845]: Stage: fetch Aug 13 02:06:04.452957 ignition[845]: no configs at "/usr/lib/ignition/base.d" Aug 13 02:06:04.452967 ignition[845]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 02:06:04.453046 ignition[845]: parsed url from cmdline: "" Aug 13 02:06:04.453050 ignition[845]: no config URL provided Aug 13 02:06:04.453054 ignition[845]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 02:06:04.453062 ignition[845]: no config at "/usr/lib/ignition/user.ign" Aug 13 02:06:04.453093 ignition[845]: PUT http://169.254.169.254/v1/token: attempt #1 Aug 13 02:06:04.453260 ignition[845]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Aug 13 02:06:04.653977 ignition[845]: PUT http://169.254.169.254/v1/token: attempt #2 Aug 13 02:06:04.654205 ignition[845]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Aug 13 02:06:04.967664 systemd-networkd[841]: eth0: DHCPv4 address 172.236.122.171/24, gateway 172.236.122.1 acquired from 23.194.118.51 Aug 13 02:06:05.054451 ignition[845]: PUT http://169.254.169.254/v1/token: attempt #3 Aug 13 02:06:05.161017 ignition[845]: PUT result: OK Aug 13 02:06:05.161610 ignition[845]: GET http://169.254.169.254/v1/user-data: attempt #1 Aug 13 02:06:05.314209 ignition[845]: GET result: OK Aug 13 02:06:05.314333 ignition[845]: parsing config with SHA512: c0dc84a223f7452bb0bc705e2bb202bb20f20084dd8f4e6a1558e6feb46a2e7437921a793019d80087e999b80e96084be6afcc2df5b4ba8c1cf556db3e62096f Aug 13 02:06:05.320486 unknown[845]: fetched base config from "system" Aug 13 02:06:05.320496 unknown[845]: fetched base config from "system" Aug 13 02:06:05.320789 ignition[845]: fetch: fetch complete Aug 13 02:06:05.320502 unknown[845]: fetched user config from "akamai" Aug 13 02:06:05.320794 ignition[845]: fetch: fetch passed Aug 13 02:06:05.320836 ignition[845]: Ignition finished successfully Aug 13 02:06:05.324785 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 13 02:06:05.326324 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 13 02:06:05.370145 ignition[852]: Ignition 2.21.0 Aug 13 02:06:05.370156 ignition[852]: Stage: kargs Aug 13 02:06:05.370283 ignition[852]: no configs at "/usr/lib/ignition/base.d" Aug 13 02:06:05.370294 ignition[852]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 02:06:05.370980 ignition[852]: kargs: kargs passed Aug 13 02:06:05.373851 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 13 02:06:05.371153 ignition[852]: Ignition finished successfully Aug 13 02:06:05.376064 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 13 02:06:05.398813 ignition[859]: Ignition 2.21.0 Aug 13 02:06:05.398827 ignition[859]: Stage: disks Aug 13 02:06:05.398960 ignition[859]: no configs at "/usr/lib/ignition/base.d" Aug 13 02:06:05.398970 ignition[859]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 02:06:05.400014 ignition[859]: disks: disks passed Aug 13 02:06:05.401403 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 13 02:06:05.400050 ignition[859]: Ignition finished successfully Aug 13 02:06:05.402540 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 13 02:06:05.403417 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
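The fetch stage above is a token-then-user-data exchange against the link-local metadata service: a PUT to http://169.254.169.254/v1/token fails twice with "network is unreachable" while eth0 is still unconfigured, succeeds once DHCP assigns 172.236.122.171, and the subsequent GET of /v1/user-data returns a config whose SHA512 Ignition logs before parsing. A rough sketch of that flow follows; only the URLs, the HTTP methods, and the retry behaviour come from the log, while the header names and the expiry value are assumptions.

    # Rough sketch of the exchange logged above. The URLs, the PUT-then-GET
    # order, and the retry-until-networking behaviour are taken from the log;
    # the header names and the 3600 s expiry are assumptions.
    import time
    import hashlib
    import urllib.request

    BASE = "http://169.254.169.254/v1"

    def fetch_user_data(retries=3, delay=2.0):
        token = None
        for _ in range(retries):
            try:
                req = urllib.request.Request(
                    f"{BASE}/token",
                    method="PUT",
                    headers={"Metadata-Token-Expiry-Seconds": "3600"},  # assumed
                )
                with urllib.request.urlopen(req, timeout=5) as resp:
                    token = resp.read().decode().strip()
                break
            except OSError:
                time.sleep(delay)  # "network is unreachable" until DHCP finishes
        if token is None:
            raise RuntimeError("metadata token not available")
        req = urllib.request.Request(
            f"{BASE}/user-data",
            headers={"Metadata-Token": token},  # assumed
        )
        with urllib.request.urlopen(req, timeout=5) as resp:
            data = resp.read()
        return data, hashlib.sha512(data).hexdigest()  # digest matches the one Ignition logs

Ignition itself is a Go binary with provider-specific logic; this is only the shape of the exchange visible in the messages above.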
Aug 13 02:06:05.404431 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 02:06:05.405567 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 02:06:05.406764 systemd[1]: Reached target basic.target - Basic System. Aug 13 02:06:05.408525 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 13 02:06:05.432836 systemd-fsck[868]: ROOT: clean, 15/553520 files, 52789/553472 blocks Aug 13 02:06:05.434993 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 13 02:06:05.436655 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 13 02:06:05.464913 systemd-networkd[841]: eth0: Gained IPv6LL Aug 13 02:06:05.543606 kernel: EXT4-fs (sda9): mounted filesystem 069caac6-7833-4acd-8940-01a7ff7d1281 r/w with ordered data mode. Quota mode: none. Aug 13 02:06:05.544058 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 13 02:06:05.544962 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 13 02:06:05.546797 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 02:06:05.549672 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 13 02:06:05.551361 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Aug 13 02:06:05.551960 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 02:06:05.551984 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 02:06:05.556647 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 13 02:06:05.558803 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 13 02:06:05.566625 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (876) Aug 13 02:06:05.570082 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 02:06:05.570108 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 02:06:05.571873 kernel: BTRFS info (device sda6): using free-space-tree Aug 13 02:06:05.577146 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 02:06:05.603801 initrd-setup-root[900]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 02:06:05.608015 initrd-setup-root[907]: cut: /sysroot/etc/group: No such file or directory Aug 13 02:06:05.612441 initrd-setup-root[914]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 02:06:05.616804 initrd-setup-root[921]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 02:06:05.693790 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 13 02:06:05.695453 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 13 02:06:05.697162 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 13 02:06:05.713184 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 13 02:06:05.715604 kernel: BTRFS info (device sda6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 02:06:05.728742 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Aug 13 02:06:05.737810 ignition[989]: INFO : Ignition 2.21.0 Aug 13 02:06:05.737810 ignition[989]: INFO : Stage: mount Aug 13 02:06:05.739064 ignition[989]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 02:06:05.739064 ignition[989]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 02:06:05.739064 ignition[989]: INFO : mount: mount passed Aug 13 02:06:05.739064 ignition[989]: INFO : Ignition finished successfully Aug 13 02:06:05.739773 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 13 02:06:05.742620 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 13 02:06:06.545772 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 02:06:06.567756 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1001) Aug 13 02:06:06.567790 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 02:06:06.571600 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 02:06:06.571616 kernel: BTRFS info (device sda6): using free-space-tree Aug 13 02:06:06.576822 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 02:06:06.603561 ignition[1017]: INFO : Ignition 2.21.0 Aug 13 02:06:06.603561 ignition[1017]: INFO : Stage: files Aug 13 02:06:06.604859 ignition[1017]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 02:06:06.604859 ignition[1017]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 02:06:06.604859 ignition[1017]: DEBUG : files: compiled without relabeling support, skipping Aug 13 02:06:06.606995 ignition[1017]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 02:06:06.606995 ignition[1017]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 02:06:06.608734 ignition[1017]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 02:06:06.608734 ignition[1017]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 02:06:06.608734 ignition[1017]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 02:06:06.607446 unknown[1017]: wrote ssh authorized keys file for user: core Aug 13 02:06:06.611618 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Aug 13 02:06:06.611618 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Aug 13 02:06:06.834829 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 13 02:06:07.870492 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Aug 13 02:06:07.872694 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Aug 13 02:06:07.872694 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 02:06:07.872694 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 02:06:07.872694 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 02:06:07.872694 ignition[1017]: INFO : 
files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 02:06:07.872694 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 02:06:07.872694 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 02:06:07.872694 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 02:06:07.879485 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 02:06:07.879485 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 02:06:07.879485 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 02:06:07.879485 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 02:06:07.879485 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 02:06:07.879485 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Aug 13 02:06:08.448078 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Aug 13 02:06:08.973124 ignition[1017]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 02:06:08.973124 ignition[1017]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Aug 13 02:06:08.975421 ignition[1017]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 02:06:08.977729 ignition[1017]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 02:06:08.977729 ignition[1017]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Aug 13 02:06:08.977729 ignition[1017]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Aug 13 02:06:08.977729 ignition[1017]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Aug 13 02:06:08.977729 ignition[1017]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Aug 13 02:06:08.977729 ignition[1017]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Aug 13 02:06:08.977729 ignition[1017]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Aug 13 02:06:08.977729 ignition[1017]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 02:06:08.977729 ignition[1017]: INFO : files: createResultFile: createFiles: op(10): [started] 
writing file "/sysroot/etc/.ignition-result.json" Aug 13 02:06:08.977729 ignition[1017]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 02:06:08.977729 ignition[1017]: INFO : files: files passed Aug 13 02:06:08.977729 ignition[1017]: INFO : Ignition finished successfully Aug 13 02:06:08.978874 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 13 02:06:08.981976 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 13 02:06:08.985014 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 13 02:06:08.994176 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 02:06:08.994298 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 13 02:06:09.001619 initrd-setup-root-after-ignition[1048]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 02:06:09.002530 initrd-setup-root-after-ignition[1052]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 02:06:09.003457 initrd-setup-root-after-ignition[1048]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 13 02:06:09.004495 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 02:06:09.005280 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 13 02:06:09.007112 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 13 02:06:09.053778 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 02:06:09.053900 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 13 02:06:09.055172 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 13 02:06:09.056131 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 13 02:06:09.057271 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 13 02:06:09.057982 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 13 02:06:09.095726 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 02:06:09.098106 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 13 02:06:09.114435 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 13 02:06:09.115298 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 02:06:09.116551 systemd[1]: Stopped target timers.target - Timer Units. Aug 13 02:06:09.117707 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 02:06:09.117840 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 02:06:09.119045 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 13 02:06:09.119804 systemd[1]: Stopped target basic.target - Basic System. Aug 13 02:06:09.120947 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 13 02:06:09.122009 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 02:06:09.123075 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 13 02:06:09.124255 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Aug 13 02:06:09.125435 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Aug 13 02:06:09.126610 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 02:06:09.127836 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 13 02:06:09.129008 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 13 02:06:09.130207 systemd[1]: Stopped target swap.target - Swaps. Aug 13 02:06:09.131291 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 02:06:09.131421 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 13 02:06:09.132576 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 13 02:06:09.133334 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 02:06:09.134358 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 13 02:06:09.134457 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 02:06:09.135533 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 02:06:09.135684 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 13 02:06:09.137151 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 02:06:09.137257 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 02:06:09.138036 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 02:06:09.138162 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 13 02:06:09.140681 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 13 02:06:09.141344 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 02:06:09.141449 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 02:06:09.145566 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 13 02:06:09.146872 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 02:06:09.146981 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 02:06:09.147620 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 02:06:09.147712 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 02:06:09.153881 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 02:06:09.153982 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 13 02:06:09.169041 ignition[1072]: INFO : Ignition 2.21.0 Aug 13 02:06:09.169041 ignition[1072]: INFO : Stage: umount Aug 13 02:06:09.171649 ignition[1072]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 02:06:09.171649 ignition[1072]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 02:06:09.174665 ignition[1072]: INFO : umount: umount passed Aug 13 02:06:09.174665 ignition[1072]: INFO : Ignition finished successfully Aug 13 02:06:09.174170 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 02:06:09.174303 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 13 02:06:09.175347 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 02:06:09.175409 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 13 02:06:09.198033 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 02:06:09.198097 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 13 02:06:09.199149 systemd[1]: ignition-fetch.service: Deactivated successfully. 
Aug 13 02:06:09.199199 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 13 02:06:09.200200 systemd[1]: Stopped target network.target - Network. Aug 13 02:06:09.201153 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 02:06:09.201204 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 02:06:09.202201 systemd[1]: Stopped target paths.target - Path Units. Aug 13 02:06:09.203172 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 02:06:09.206630 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 02:06:09.207796 systemd[1]: Stopped target slices.target - Slice Units. Aug 13 02:06:09.208785 systemd[1]: Stopped target sockets.target - Socket Units. Aug 13 02:06:09.209943 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 02:06:09.209984 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 02:06:09.211390 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 02:06:09.211427 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 02:06:09.212560 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 02:06:09.212659 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 13 02:06:09.213781 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 13 02:06:09.213824 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 13 02:06:09.215024 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 13 02:06:09.216101 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 13 02:06:09.218190 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 02:06:09.219031 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 02:06:09.219138 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 13 02:06:09.221158 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 02:06:09.221266 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 13 02:06:09.225358 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Aug 13 02:06:09.226099 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 02:06:09.226184 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 02:06:09.227737 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 13 02:06:09.227791 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 02:06:09.230389 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Aug 13 02:06:09.230680 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 02:06:09.230795 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 13 02:06:09.233150 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Aug 13 02:06:09.233574 systemd[1]: Stopped target network-pre.target - Preparation for Network. Aug 13 02:06:09.234835 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 02:06:09.234886 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 13 02:06:09.236716 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 13 02:06:09.237912 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. 
Aug 13 02:06:09.237963 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 02:06:09.239974 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 02:06:09.240036 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 02:06:09.241325 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 02:06:09.241373 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 13 02:06:09.242535 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 02:06:09.245056 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 13 02:06:09.263215 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 02:06:09.263336 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 02:06:09.264827 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 02:06:09.264973 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 02:06:09.266209 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 02:06:09.266275 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 13 02:06:09.267513 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 02:06:09.267547 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 02:06:09.268660 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 02:06:09.268707 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 13 02:06:09.270344 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 02:06:09.270390 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 13 02:06:09.271467 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 02:06:09.271517 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 02:06:09.274690 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 13 02:06:09.275254 systemd[1]: systemd-network-generator.service: Deactivated successfully. Aug 13 02:06:09.275308 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 02:06:09.276708 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 02:06:09.276755 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 02:06:09.279754 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 02:06:09.279801 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 02:06:09.288084 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 02:06:09.288209 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 13 02:06:09.289518 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 13 02:06:09.291309 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 02:06:09.312314 systemd[1]: Switching root. Aug 13 02:06:09.347784 systemd-journald[206]: Journal stopped Aug 13 02:06:10.414914 systemd-journald[206]: Received SIGTERM from PID 1 (systemd). 
Aug 13 02:06:10.414941 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 02:06:10.414953 kernel: SELinux: policy capability open_perms=1 Aug 13 02:06:10.414965 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 02:06:10.414973 kernel: SELinux: policy capability always_check_network=0 Aug 13 02:06:10.414982 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 02:06:10.414991 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 02:06:10.415000 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 02:06:10.415008 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 02:06:10.415016 kernel: SELinux: policy capability userspace_initial_context=0 Aug 13 02:06:10.415027 kernel: audit: type=1403 audit(1755050769.486:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 02:06:10.415037 systemd[1]: Successfully loaded SELinux policy in 53.042ms. Aug 13 02:06:10.415047 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.109ms. Aug 13 02:06:10.415057 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 13 02:06:10.415067 systemd[1]: Detected virtualization kvm. Aug 13 02:06:10.415079 systemd[1]: Detected architecture x86-64. Aug 13 02:06:10.415088 systemd[1]: Detected first boot. Aug 13 02:06:10.415097 systemd[1]: Initializing machine ID from random generator. Aug 13 02:06:10.415107 zram_generator::config[1116]: No configuration found. Aug 13 02:06:10.415116 kernel: Guest personality initialized and is inactive Aug 13 02:06:10.415125 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Aug 13 02:06:10.415134 kernel: Initialized host personality Aug 13 02:06:10.415145 kernel: NET: Registered PF_VSOCK protocol family Aug 13 02:06:10.415154 systemd[1]: Populated /etc with preset unit settings. Aug 13 02:06:10.415164 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Aug 13 02:06:10.415173 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 13 02:06:10.415182 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 13 02:06:10.415192 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 13 02:06:10.415201 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 13 02:06:10.415212 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 13 02:06:10.415222 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 13 02:06:10.415232 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 13 02:06:10.415241 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 13 02:06:10.415251 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 13 02:06:10.415260 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 13 02:06:10.415269 systemd[1]: Created slice user.slice - User and Session Slice. Aug 13 02:06:10.415280 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
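"Initializing machine ID from random generator" above amounts to creating a random 128-bit ID stored as 32 lowercase hex characters (the machine-id(5) format); the value later shows up in the journal path (6a19af7b...). A tiny sketch of that formatting follows; treating the random ID as a v4 UUID is an assumption about systemd's behavior, not something stated in the log.

```python
import secrets
import uuid

def random_machine_id() -> str:
    # 128 random bits, stamped with v4 UUID version/variant bits, printed as 32 hex chars.
    return uuid.UUID(bytes=secrets.token_bytes(16), version=4).hex

print(random_machine_id())  # same shape as the 6a19af7b... ID in the journal path above
```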
Aug 13 02:06:10.415290 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 02:06:10.415299 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 13 02:06:10.415309 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 13 02:06:10.415321 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 13 02:06:10.415331 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 02:06:10.415341 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 13 02:06:10.415351 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 02:06:10.415362 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 02:06:10.415372 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 13 02:06:10.415381 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 13 02:06:10.415391 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 13 02:06:10.415401 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 13 02:06:10.415410 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 02:06:10.415421 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 02:06:10.415430 systemd[1]: Reached target slices.target - Slice Units. Aug 13 02:06:10.415442 systemd[1]: Reached target swap.target - Swaps. Aug 13 02:06:10.415451 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 13 02:06:10.415461 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 13 02:06:10.415470 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Aug 13 02:06:10.415481 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 02:06:10.415492 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 02:06:10.415502 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 02:06:10.415512 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 13 02:06:10.415521 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 13 02:06:10.415531 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 13 02:06:10.415541 systemd[1]: Mounting media.mount - External Media Directory... Aug 13 02:06:10.415551 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 02:06:10.415560 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 13 02:06:10.415572 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 13 02:06:10.415581 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 13 02:06:10.415862 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 02:06:10.415877 systemd[1]: Reached target machines.target - Containers. Aug 13 02:06:10.415888 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... 
Aug 13 02:06:10.415898 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 02:06:10.417642 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 02:06:10.417657 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 13 02:06:10.417671 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 02:06:10.417681 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 02:06:10.417690 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 02:06:10.417700 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 13 02:06:10.417710 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 02:06:10.417720 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 02:06:10.417729 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 13 02:06:10.417739 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 13 02:06:10.417749 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 13 02:06:10.417760 systemd[1]: Stopped systemd-fsck-usr.service. Aug 13 02:06:10.417771 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 02:06:10.417781 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 02:06:10.417790 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 02:06:10.417800 kernel: fuse: init (API version 7.41) Aug 13 02:06:10.417809 kernel: ACPI: bus type drm_connector registered Aug 13 02:06:10.417818 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 02:06:10.417828 kernel: loop: module loaded Aug 13 02:06:10.417839 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 13 02:06:10.417850 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Aug 13 02:06:10.417859 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 02:06:10.417869 systemd[1]: verity-setup.service: Deactivated successfully. Aug 13 02:06:10.417879 systemd[1]: Stopped verity-setup.service. Aug 13 02:06:10.417889 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 02:06:10.417899 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 13 02:06:10.417909 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 13 02:06:10.417920 systemd[1]: Mounted media.mount - External Media Directory. Aug 13 02:06:10.417930 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 13 02:06:10.417940 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 13 02:06:10.417949 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 13 02:06:10.417959 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 13 02:06:10.417968 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Aug 13 02:06:10.417978 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 02:06:10.417987 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 13 02:06:10.417997 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 02:06:10.418008 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 02:06:10.418018 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 02:06:10.418027 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 02:06:10.418037 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 02:06:10.418069 systemd-journald[1196]: Collecting audit messages is disabled. Aug 13 02:06:10.418094 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 02:06:10.418105 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 02:06:10.418114 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 13 02:06:10.418124 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 02:06:10.418134 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 02:06:10.418143 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 02:06:10.418159 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 02:06:10.418169 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 13 02:06:10.418179 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Aug 13 02:06:10.418191 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 02:06:10.418201 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 13 02:06:10.418211 systemd-journald[1196]: Journal started Aug 13 02:06:10.418231 systemd-journald[1196]: Runtime Journal (/run/log/journal/6a19af7ba1ac448dbcaf909544a10119) is 8M, max 78.5M, 70.5M free. Aug 13 02:06:10.052541 systemd[1]: Queued start job for default target multi-user.target. Aug 13 02:06:10.068146 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Aug 13 02:06:10.068668 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 13 02:06:10.426544 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 13 02:06:10.426574 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 02:06:10.428521 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 02:06:10.431612 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Aug 13 02:06:10.437642 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 13 02:06:10.440615 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 02:06:10.450078 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 13 02:06:10.450107 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 02:06:10.455610 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... 
Aug 13 02:06:10.458636 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 02:06:10.462687 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 02:06:10.470946 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 13 02:06:10.475654 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 13 02:06:10.481612 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 02:06:10.482159 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 02:06:10.483241 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 13 02:06:10.484236 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 13 02:06:10.486816 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 13 02:06:10.513545 kernel: loop0: detected capacity change from 0 to 113872 Aug 13 02:06:10.514098 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 13 02:06:10.518320 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 13 02:06:10.523771 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Aug 13 02:06:10.529007 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 02:06:10.542636 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 02:06:10.550781 systemd-journald[1196]: Time spent on flushing to /var/log/journal/6a19af7ba1ac448dbcaf909544a10119 is 23.077ms for 1000 entries. Aug 13 02:06:10.550781 systemd-journald[1196]: System Journal (/var/log/journal/6a19af7ba1ac448dbcaf909544a10119) is 8M, max 195.6M, 187.6M free. Aug 13 02:06:10.581828 systemd-journald[1196]: Received client request to flush runtime journal. Aug 13 02:06:10.581873 kernel: loop1: detected capacity change from 0 to 224512 Aug 13 02:06:10.560332 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Aug 13 02:06:10.564970 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 13 02:06:10.569867 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 02:06:10.583336 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 13 02:06:10.608725 kernel: loop2: detected capacity change from 0 to 146240 Aug 13 02:06:10.609712 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. Aug 13 02:06:10.610066 systemd-tmpfiles[1257]: ACLs are not supported, ignoring. Aug 13 02:06:10.624395 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 02:06:10.646611 kernel: loop3: detected capacity change from 0 to 8 Aug 13 02:06:10.661608 kernel: loop4: detected capacity change from 0 to 113872 Aug 13 02:06:10.674611 kernel: loop5: detected capacity change from 0 to 224512 Aug 13 02:06:10.696611 kernel: loop6: detected capacity change from 0 to 146240 Aug 13 02:06:10.715667 kernel: loop7: detected capacity change from 0 to 8 Aug 13 02:06:10.716174 (sd-merge)[1266]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. Aug 13 02:06:10.717009 (sd-merge)[1266]: Merged extensions into '/usr'. Aug 13 02:06:10.723287 systemd[1]: Reload requested from client PID 1223 ('systemd-sysext') (unit systemd-sysext.service)... 
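The sd-merge messages above show systemd-sysext picking up the extension images ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai') and merging them into /usr. As a rough illustration, the sketch below only lists candidate .raw images or directories in the documented default search paths; the merge itself (an overlay over /usr) is left to systemd-sysext.

```python
from pathlib import Path

# Default sysext image search locations (assumed from systemd-sysext documentation).
SEARCH_DIRS = [Path("/etc/extensions"), Path("/run/extensions"), Path("/var/lib/extensions")]

def list_sysext_images() -> list[Path]:
    images = []
    for d in SEARCH_DIRS:
        if d.is_dir():
            for entry in sorted(d.iterdir()):
                # Symlinks such as /etc/extensions/kubernetes.raw -> /opt/extensions/... resolve here.
                if entry.name.endswith(".raw") or entry.is_dir():
                    images.append(entry.resolve())
    return images

for img in list_sysext_images():
    print(img)
```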
Aug 13 02:06:10.723385 systemd[1]: Reloading... Aug 13 02:06:10.811708 zram_generator::config[1292]: No configuration found. Aug 13 02:06:10.926664 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 02:06:10.980511 ldconfig[1219]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 02:06:11.005797 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 02:06:11.006545 systemd[1]: Reloading finished in 282 ms. Aug 13 02:06:11.039159 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 02:06:11.040211 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 02:06:11.051846 systemd[1]: Starting ensure-sysext.service... Aug 13 02:06:11.055717 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 02:06:11.083686 systemd[1]: Reload requested from client PID 1335 ('systemctl') (unit ensure-sysext.service)... Aug 13 02:06:11.083700 systemd[1]: Reloading... Aug 13 02:06:11.101088 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Aug 13 02:06:11.101484 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Aug 13 02:06:11.101889 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 02:06:11.102870 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 02:06:11.107318 systemd-tmpfiles[1336]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 02:06:11.107671 systemd-tmpfiles[1336]: ACLs are not supported, ignoring. Aug 13 02:06:11.108132 systemd-tmpfiles[1336]: ACLs are not supported, ignoring. Aug 13 02:06:11.115850 systemd-tmpfiles[1336]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 02:06:11.116873 systemd-tmpfiles[1336]: Skipping /boot Aug 13 02:06:11.137007 systemd-tmpfiles[1336]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 02:06:11.137077 systemd-tmpfiles[1336]: Skipping /boot Aug 13 02:06:11.154612 zram_generator::config[1362]: No configuration found. Aug 13 02:06:11.249844 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 02:06:11.319676 systemd[1]: Reloading finished in 235 ms. Aug 13 02:06:11.340337 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 13 02:06:11.350394 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 02:06:11.357792 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 02:06:11.361351 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 02:06:11.369775 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 02:06:11.372372 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 02:06:11.377873 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Aug 13 02:06:11.382542 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 02:06:11.386106 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 02:06:11.386264 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 02:06:11.390996 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 02:06:11.393482 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 02:06:11.396678 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 02:06:11.397777 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 02:06:11.398065 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 02:06:11.398158 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 02:06:11.411078 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 02:06:11.414773 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 02:06:11.414975 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 02:06:11.426821 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 02:06:11.427062 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 02:06:11.430093 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 02:06:11.441968 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 02:06:11.442665 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 02:06:11.442763 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 02:06:11.442885 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 02:06:11.445000 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 02:06:11.446924 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 02:06:11.448245 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 02:06:11.449656 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 02:06:11.457950 systemd-udevd[1413]: Using default interface naming scheme 'v255'. Aug 13 02:06:11.460930 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 02:06:11.467752 systemd[1]: Finished ensure-sysext.service. Aug 13 02:06:11.469230 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 02:06:11.469428 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. 
Aug 13 02:06:11.472106 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 02:06:11.476772 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 13 02:06:11.480289 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 02:06:11.482090 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 02:06:11.483035 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 02:06:11.483225 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 02:06:11.485562 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 02:06:11.495642 augenrules[1449]: No rules Aug 13 02:06:11.496506 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 02:06:11.496773 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 02:06:11.501893 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 02:06:11.524577 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 02:06:11.526869 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 02:06:11.532364 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 02:06:11.534641 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 02:06:11.537278 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 02:06:11.640879 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 13 02:06:11.700631 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Aug 13 02:06:11.703705 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 02:06:11.732235 systemd-networkd[1461]: lo: Link UP Aug 13 02:06:11.732495 systemd-networkd[1461]: lo: Gained carrier Aug 13 02:06:11.733409 systemd-networkd[1461]: Enumeration completed Aug 13 02:06:11.733666 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 02:06:11.736521 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Aug 13 02:06:11.739819 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 02:06:11.752614 kernel: ACPI: button: Power Button [PWRF] Aug 13 02:06:11.759268 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 13 02:06:11.760341 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 02:06:11.785428 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Aug 13 02:06:11.791063 systemd-resolved[1411]: Positive Trust Anchors: Aug 13 02:06:11.791295 systemd-resolved[1411]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 02:06:11.791361 systemd-resolved[1411]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 02:06:11.795781 systemd-resolved[1411]: Defaulting to hostname 'linux'. Aug 13 02:06:11.798766 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 02:06:11.799361 systemd[1]: Reached target network.target - Network. Aug 13 02:06:11.799868 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 02:06:11.800631 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 02:06:11.801477 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 13 02:06:11.802427 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 02:06:11.804018 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Aug 13 02:06:11.804742 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 13 02:06:11.806756 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 02:06:11.807322 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 02:06:11.807924 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 02:06:11.807958 systemd[1]: Reached target paths.target - Path Units. Aug 13 02:06:11.808438 systemd[1]: Reached target timers.target - Timer Units. Aug 13 02:06:11.810089 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 13 02:06:11.813164 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 02:06:11.819894 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Aug 13 02:06:11.820965 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Aug 13 02:06:11.822663 systemd[1]: Reached target ssh-access.target - SSH Access Available. Aug 13 02:06:11.830659 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 13 02:06:11.831659 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Aug 13 02:06:11.833343 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 02:06:11.836400 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 02:06:11.838122 systemd[1]: Reached target basic.target - Basic System. Aug 13 02:06:11.839744 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 13 02:06:11.839772 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 13 02:06:11.840819 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 02:06:11.845784 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... 
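The resolver log above lists the root zone trust anchor as a DS record (". IN DS 20326 8 2 e06d44b8..."). The short parser below just splits that record into its named fields; the algorithm and digest-type names are the standard IANA assignments (8 = RSA/SHA-256, 2 = SHA-256).

```python
DS_RECORD = (". IN DS 20326 8 2 "
             "e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d")

def parse_ds(record: str) -> dict:
    owner, _cls, _rtype, key_tag, algorithm, digest_type, digest = record.split()
    return {
        "owner": owner,
        "key_tag": int(key_tag),
        "algorithm": {8: "RSASHA256"}.get(int(algorithm), algorithm),
        "digest_type": {2: "SHA-256"}.get(int(digest_type), digest_type),
        "digest": digest,
    }

print(parse_ds(DS_RECORD))
```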
Aug 13 02:06:11.848782 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 13 02:06:11.852667 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 13 02:06:11.857170 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 02:06:11.862147 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 02:06:11.862759 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 02:06:11.864832 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Aug 13 02:06:11.872487 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Aug 13 02:06:11.874779 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Aug 13 02:06:11.873133 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 02:06:11.876864 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 13 02:06:11.880763 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 13 02:06:11.883017 jq[1518]: false Aug 13 02:06:11.883786 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 02:06:11.897655 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 13 02:06:11.900794 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 02:06:11.901195 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 02:06:11.906535 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 02:06:11.912670 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 02:06:11.916961 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 02:06:11.917884 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 02:06:11.919650 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 02:06:11.940108 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 02:06:11.940399 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 13 02:06:11.944708 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Refreshing passwd entry cache Aug 13 02:06:11.943698 oslogin_cache_refresh[1520]: Refreshing passwd entry cache Aug 13 02:06:11.951347 extend-filesystems[1519]: Found /dev/sda6 Aug 13 02:06:11.955352 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Failure getting users, quitting Aug 13 02:06:11.955352 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Aug 13 02:06:11.955352 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Refreshing group entry cache Aug 13 02:06:11.955223 oslogin_cache_refresh[1520]: Failure getting users, quitting Aug 13 02:06:11.955236 oslogin_cache_refresh[1520]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Aug 13 02:06:11.955271 oslogin_cache_refresh[1520]: Refreshing group entry cache Aug 13 02:06:11.956815 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Failure getting groups, quitting Aug 13 02:06:11.956815 google_oslogin_nss_cache[1520]: oslogin_cache_refresh[1520]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 13 02:06:11.956540 oslogin_cache_refresh[1520]: Failure getting groups, quitting Aug 13 02:06:11.956549 oslogin_cache_refresh[1520]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 13 02:06:11.966661 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Aug 13 02:06:11.968440 extend-filesystems[1519]: Found /dev/sda9 Aug 13 02:06:11.973821 jq[1533]: true Aug 13 02:06:11.969763 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Aug 13 02:06:11.969927 (ntainerd)[1542]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 02:06:11.982256 extend-filesystems[1519]: Checking size of /dev/sda9 Aug 13 02:06:11.984948 update_engine[1529]: I20250813 02:06:11.984673 1529 main.cc:92] Flatcar Update Engine starting Aug 13 02:06:11.989643 tar[1537]: linux-amd64/LICENSE Aug 13 02:06:11.999215 tar[1537]: linux-amd64/helm Aug 13 02:06:12.000139 systemd-networkd[1461]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 02:06:12.000155 systemd-networkd[1461]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 02:06:12.002771 systemd-networkd[1461]: eth0: Link UP Aug 13 02:06:12.002936 systemd-networkd[1461]: eth0: Gained carrier Aug 13 02:06:12.002958 systemd-networkd[1461]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 02:06:12.033094 dbus-daemon[1516]: [system] SELinux support is enabled Aug 13 02:06:12.034004 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 02:06:12.038432 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 02:06:12.039646 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 02:06:12.040340 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 02:06:12.040355 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 02:06:12.056905 jq[1555]: true Aug 13 02:06:12.063014 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 02:06:12.063385 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 13 02:06:12.076072 systemd[1]: Started update-engine.service - Update Engine. Aug 13 02:06:12.077446 update_engine[1529]: I20250813 02:06:12.076956 1529 update_check_scheduler.cc:74] Next update check in 4m52s Aug 13 02:06:12.113830 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 13 02:06:12.139330 coreos-metadata[1515]: Aug 13 02:06:12.139 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Aug 13 02:06:12.159534 extend-filesystems[1519]: Resized partition /dev/sda9 Aug 13 02:06:12.159462 systemd-logind[1527]: New seat seat0. 
Aug 13 02:06:12.166875 extend-filesystems[1588]: resize2fs 1.47.2 (1-Jan-2025) Aug 13 02:06:12.176370 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 555003 blocks Aug 13 02:06:12.176392 kernel: EXT4-fs (sda9): resized filesystem to 555003 Aug 13 02:06:12.167998 systemd[1]: Started systemd-logind.service - User Login Management. Aug 13 02:06:12.176863 extend-filesystems[1588]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Aug 13 02:06:12.176863 extend-filesystems[1588]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 13 02:06:12.176863 extend-filesystems[1588]: The filesystem on /dev/sda9 is now 555003 (4k) blocks long. Aug 13 02:06:12.180287 extend-filesystems[1519]: Resized filesystem in /dev/sda9 Aug 13 02:06:12.179462 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 02:06:12.184175 bash[1587]: Updated "/home/core/.ssh/authorized_keys" Aug 13 02:06:12.179739 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 02:06:12.182798 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 02:06:12.185821 systemd[1]: Starting sshkeys.service... Aug 13 02:06:12.190438 containerd[1542]: time="2025-08-13T02:06:12Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Aug 13 02:06:12.195494 containerd[1542]: time="2025-08-13T02:06:12.195462810Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Aug 13 02:06:12.221856 containerd[1542]: time="2025-08-13T02:06:12.221815920Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="7.4µs" Aug 13 02:06:12.221856 containerd[1542]: time="2025-08-13T02:06:12.221842580Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Aug 13 02:06:12.221856 containerd[1542]: time="2025-08-13T02:06:12.221858730Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Aug 13 02:06:12.222026 containerd[1542]: time="2025-08-13T02:06:12.222000880Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Aug 13 02:06:12.222026 containerd[1542]: time="2025-08-13T02:06:12.222023380Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Aug 13 02:06:12.222073 containerd[1542]: time="2025-08-13T02:06:12.222044510Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 13 02:06:12.222131 containerd[1542]: time="2025-08-13T02:06:12.222105860Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 13 02:06:12.222131 containerd[1542]: time="2025-08-13T02:06:12.222124740Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 13 02:06:12.222348 containerd[1542]: time="2025-08-13T02:06:12.222321500Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 13 02:06:12.222348 containerd[1542]: 
time="2025-08-13T02:06:12.222342620Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 13 02:06:12.222383 containerd[1542]: time="2025-08-13T02:06:12.222352590Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 13 02:06:12.222383 containerd[1542]: time="2025-08-13T02:06:12.222360610Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Aug 13 02:06:12.222471 containerd[1542]: time="2025-08-13T02:06:12.222448130Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Aug 13 02:06:12.222744 containerd[1542]: time="2025-08-13T02:06:12.222718470Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 13 02:06:12.222768 containerd[1542]: time="2025-08-13T02:06:12.222755750Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 13 02:06:12.222797 containerd[1542]: time="2025-08-13T02:06:12.222765360Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Aug 13 02:06:12.226150 containerd[1542]: time="2025-08-13T02:06:12.226120890Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Aug 13 02:06:12.227807 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 13 02:06:12.231080 containerd[1542]: time="2025-08-13T02:06:12.231049840Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Aug 13 02:06:12.231178 containerd[1542]: time="2025-08-13T02:06:12.231152120Z" level=info msg="metadata content store policy set" policy=shared Aug 13 02:06:12.233358 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Aug 13 02:06:12.239392 containerd[1542]: time="2025-08-13T02:06:12.239357100Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Aug 13 02:06:12.239428 containerd[1542]: time="2025-08-13T02:06:12.239410930Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Aug 13 02:06:12.239462 containerd[1542]: time="2025-08-13T02:06:12.239431050Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Aug 13 02:06:12.239462 containerd[1542]: time="2025-08-13T02:06:12.239442110Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Aug 13 02:06:12.239497 containerd[1542]: time="2025-08-13T02:06:12.239484130Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Aug 13 02:06:12.239515 containerd[1542]: time="2025-08-13T02:06:12.239495500Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Aug 13 02:06:12.239515 containerd[1542]: time="2025-08-13T02:06:12.239506550Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Aug 13 02:06:12.239546 containerd[1542]: time="2025-08-13T02:06:12.239516080Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Aug 13 02:06:12.239546 containerd[1542]: time="2025-08-13T02:06:12.239525150Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Aug 13 02:06:12.239546 containerd[1542]: time="2025-08-13T02:06:12.239534260Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Aug 13 02:06:12.239546 containerd[1542]: time="2025-08-13T02:06:12.239541620Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Aug 13 02:06:12.239662 containerd[1542]: time="2025-08-13T02:06:12.239551180Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Aug 13 02:06:12.239729 containerd[1542]: time="2025-08-13T02:06:12.239678860Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Aug 13 02:06:12.239729 containerd[1542]: time="2025-08-13T02:06:12.239702280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Aug 13 02:06:12.239729 containerd[1542]: time="2025-08-13T02:06:12.239714300Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Aug 13 02:06:12.239729 containerd[1542]: time="2025-08-13T02:06:12.239723400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Aug 13 02:06:12.239851 containerd[1542]: time="2025-08-13T02:06:12.239731460Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Aug 13 02:06:12.239851 containerd[1542]: time="2025-08-13T02:06:12.239740900Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Aug 13 02:06:12.239851 containerd[1542]: time="2025-08-13T02:06:12.239750440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Aug 13 02:06:12.239851 containerd[1542]: time="2025-08-13T02:06:12.239758510Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Aug 
13 02:06:12.239851 containerd[1542]: time="2025-08-13T02:06:12.239767650Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Aug 13 02:06:12.239851 containerd[1542]: time="2025-08-13T02:06:12.239776200Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Aug 13 02:06:12.239851 containerd[1542]: time="2025-08-13T02:06:12.239784940Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Aug 13 02:06:12.239851 containerd[1542]: time="2025-08-13T02:06:12.239835440Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Aug 13 02:06:12.239851 containerd[1542]: time="2025-08-13T02:06:12.239845870Z" level=info msg="Start snapshots syncer" Aug 13 02:06:12.239999 containerd[1542]: time="2025-08-13T02:06:12.239914400Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Aug 13 02:06:12.240619 containerd[1542]: time="2025-08-13T02:06:12.240141290Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Aug 13 02:06:12.240619 containerd[1542]: time="2025-08-13T02:06:12.240186120Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Aug 13 02:06:12.240920 containerd[1542]: time="2025-08-13T02:06:12.240642450Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Aug 13 02:06:12.240920 containerd[1542]: time="2025-08-13T02:06:12.240771560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Aug 13 02:06:12.240920 containerd[1542]: time="2025-08-13T02:06:12.240789680Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Aug 13 02:06:12.240920 containerd[1542]: time="2025-08-13T02:06:12.240798310Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Aug 13 02:06:12.240920 containerd[1542]: time="2025-08-13T02:06:12.240806570Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Aug 13 02:06:12.240920 containerd[1542]: time="2025-08-13T02:06:12.240817890Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Aug 13 02:06:12.240920 containerd[1542]: time="2025-08-13T02:06:12.240830840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Aug 13 02:06:12.240920 containerd[1542]: time="2025-08-13T02:06:12.240839980Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Aug 13 02:06:12.240920 containerd[1542]: time="2025-08-13T02:06:12.240858300Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Aug 13 02:06:12.240920 containerd[1542]: time="2025-08-13T02:06:12.240867240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Aug 13 02:06:12.240920 containerd[1542]: time="2025-08-13T02:06:12.240876520Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Aug 13 02:06:12.244018 containerd[1542]: time="2025-08-13T02:06:12.242949890Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 13 02:06:12.244018 containerd[1542]: time="2025-08-13T02:06:12.242993370Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 13 02:06:12.244018 containerd[1542]: time="2025-08-13T02:06:12.243002310Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 13 02:06:12.244018 containerd[1542]: time="2025-08-13T02:06:12.243011060Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 13 02:06:12.244018 containerd[1542]: time="2025-08-13T02:06:12.243018070Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Aug 13 02:06:12.244018 containerd[1542]: time="2025-08-13T02:06:12.243026090Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Aug 13 02:06:12.244018 containerd[1542]: time="2025-08-13T02:06:12.243034800Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Aug 13 02:06:12.244018 containerd[1542]: time="2025-08-13T02:06:12.243049380Z" level=info msg="runtime interface created" Aug 13 02:06:12.244018 containerd[1542]: time="2025-08-13T02:06:12.243054270Z" level=info msg="created NRI interface" Aug 13 02:06:12.244018 containerd[1542]: time="2025-08-13T02:06:12.243061020Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Aug 13 02:06:12.244018 containerd[1542]: time="2025-08-13T02:06:12.243070820Z" level=info msg="Connect containerd service" Aug 13 02:06:12.244018 containerd[1542]: time="2025-08-13T02:06:12.243089950Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 
13 02:06:12.249810 containerd[1542]: time="2025-08-13T02:06:12.249440940Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 02:06:12.289622 kernel: EDAC MC: Ver: 3.0.0 Aug 13 02:06:12.362955 coreos-metadata[1595]: Aug 13 02:06:12.362 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Aug 13 02:06:12.453760 systemd-logind[1527]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 02:06:12.478203 systemd-logind[1527]: Watching system buttons on /dev/input/event2 (Power Button) Aug 13 02:06:12.500090 containerd[1542]: time="2025-08-13T02:06:12.500052590Z" level=info msg="Start subscribing containerd event" Aug 13 02:06:12.500728 containerd[1542]: time="2025-08-13T02:06:12.500704360Z" level=info msg="Start recovering state" Aug 13 02:06:12.501758 containerd[1542]: time="2025-08-13T02:06:12.501733670Z" level=info msg="Start event monitor" Aug 13 02:06:12.501816 containerd[1542]: time="2025-08-13T02:06:12.501759110Z" level=info msg="Start cni network conf syncer for default" Aug 13 02:06:12.501816 containerd[1542]: time="2025-08-13T02:06:12.501812930Z" level=info msg="Start streaming server" Aug 13 02:06:12.501864 containerd[1542]: time="2025-08-13T02:06:12.501824050Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Aug 13 02:06:12.501864 containerd[1542]: time="2025-08-13T02:06:12.501831990Z" level=info msg="runtime interface starting up..." Aug 13 02:06:12.501897 containerd[1542]: time="2025-08-13T02:06:12.501838080Z" level=info msg="starting plugins..." Aug 13 02:06:12.501915 containerd[1542]: time="2025-08-13T02:06:12.501900350Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Aug 13 02:06:12.502103 containerd[1542]: time="2025-08-13T02:06:12.501868880Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 02:06:12.502776 containerd[1542]: time="2025-08-13T02:06:12.502626220Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 02:06:12.508786 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 02:06:12.511450 containerd[1542]: time="2025-08-13T02:06:12.510697940Z" level=info msg="containerd successfully booted in 0.320636s" Aug 13 02:06:12.548454 sshd_keygen[1556]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 02:06:12.548404 locksmithd[1571]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 02:06:12.573108 dbus-daemon[1516]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1461 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Aug 13 02:06:12.573551 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 02:06:12.574443 systemd-networkd[1461]: eth0: DHCPv4 address 172.236.122.171/24, gateway 172.236.122.1 acquired from 23.194.118.51 Aug 13 02:06:12.578890 systemd-timesyncd[1445]: Network configuration changed, trying to establish connection. Aug 13 02:06:12.580843 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 02:06:12.585891 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Aug 13 02:06:12.605120 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
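Note on the containerd error above ("no network config found in /etc/cni/net.d ... cni plugin not initialized"): this is expected at this stage. The CRI plugin defers pod networking until a CNI configuration appears in /etc/cni/net.d, which on a Kubernetes node is normally written later by the cluster's CNI add-on. Purely to illustrate the file format containerd is waiting for (the file name, bridge name, and subnet below are assumptions, not values from this host), a minimal bridge conflist such as /etc/cni/net.d/10-containerd-net.conflist would look like:

    {
      "cniVersion": "1.0.0",
      "name": "containerd-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "ranges": [[{ "subnet": "10.88.0.0/16" }]]
          }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }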
Aug 13 02:06:12.616213 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 02:06:12.616456 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 02:06:12.628701 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 02:06:12.659951 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Aug 13 02:06:12.664004 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 13 02:06:12.669189 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 02:06:12.674251 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 02:06:12.678113 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 13 02:06:12.679881 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 02:06:12.719891 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 02:06:12.789623 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Aug 13 02:06:12.790048 dbus-daemon[1516]: [system] Successfully activated service 'org.freedesktop.hostname1' Aug 13 02:06:12.790826 dbus-daemon[1516]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1630 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Aug 13 02:06:12.798296 systemd-timesyncd[1445]: Contacted time server 192.189.65.186:123 (0.flatcar.pool.ntp.org). Aug 13 02:06:12.798343 systemd-timesyncd[1445]: Initial clock synchronization to Wed 2025-08-13 02:06:12.654393 UTC. Aug 13 02:06:12.847691 systemd[1]: Starting polkit.service - Authorization Manager... Aug 13 02:06:12.922689 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 02:06:12.955824 polkitd[1647]: Started polkitd version 126 Aug 13 02:06:12.959894 polkitd[1647]: Loading rules from directory /etc/polkit-1/rules.d Aug 13 02:06:12.960149 polkitd[1647]: Loading rules from directory /run/polkit-1/rules.d Aug 13 02:06:12.960182 polkitd[1647]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Aug 13 02:06:12.960393 polkitd[1647]: Loading rules from directory /usr/local/share/polkit-1/rules.d Aug 13 02:06:12.960411 polkitd[1647]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Aug 13 02:06:12.960445 polkitd[1647]: Loading rules from directory /usr/share/polkit-1/rules.d Aug 13 02:06:12.961291 polkitd[1647]: Finished loading, compiling and executing 2 rules Aug 13 02:06:12.961615 systemd[1]: Started polkit.service - Authorization Manager. Aug 13 02:06:12.961902 dbus-daemon[1516]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Aug 13 02:06:12.962293 polkitd[1647]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Aug 13 02:06:12.970942 systemd-resolved[1411]: System hostname changed to '172-236-122-171'. Aug 13 02:06:12.971556 systemd-hostnamed[1630]: Hostname set to <172-236-122-171> (transient) Aug 13 02:06:13.031807 tar[1537]: linux-amd64/README.md Aug 13 02:06:13.047914 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Aug 13 02:06:13.148397 coreos-metadata[1515]: Aug 13 02:06:13.148 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Aug 13 02:06:13.248804 coreos-metadata[1515]: Aug 13 02:06:13.248 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Aug 13 02:06:13.372421 coreos-metadata[1595]: Aug 13 02:06:13.372 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Aug 13 02:06:13.474625 coreos-metadata[1595]: Aug 13 02:06:13.474 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Aug 13 02:06:13.475521 coreos-metadata[1515]: Aug 13 02:06:13.475 INFO Fetch successful Aug 13 02:06:13.475612 coreos-metadata[1515]: Aug 13 02:06:13.475 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Aug 13 02:06:13.625444 coreos-metadata[1595]: Aug 13 02:06:13.625 INFO Fetch successful Aug 13 02:06:13.646928 update-ssh-keys[1665]: Updated "/home/core/.ssh/authorized_keys" Aug 13 02:06:13.647858 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 13 02:06:13.649772 systemd[1]: Finished sshkeys.service. Aug 13 02:06:13.784880 systemd-networkd[1461]: eth0: Gained IPv6LL Aug 13 02:06:13.785283 coreos-metadata[1515]: Aug 13 02:06:13.785 INFO Fetch successful Aug 13 02:06:13.786542 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 02:06:13.789986 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 02:06:13.798732 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 02:06:13.801651 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 02:06:13.834609 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 02:06:13.891262 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 13 02:06:13.892638 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 02:06:14.620783 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 02:06:14.622406 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 02:06:14.624694 systemd[1]: Startup finished in 2.654s (kernel) + 7.857s (initrd) + 5.190s (userspace) = 15.702s. Aug 13 02:06:14.664294 (kubelet)[1704]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 02:06:15.122837 kubelet[1704]: E0813 02:06:15.122781 1704 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 02:06:15.125959 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 02:06:15.126139 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 02:06:15.129929 systemd[1]: kubelet.service: Consumed 823ms CPU time, 262.8M memory peak. Aug 13 02:06:16.183564 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 02:06:16.184789 systemd[1]: Started sshd@0-172.236.122.171:22-147.75.109.163:41352.service - OpenSSH per-connection server daemon (147.75.109.163:41352). 
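Note on the kubelet failure above: this is the usual state of a freshly provisioned node. /var/lib/kubelet/config.yaml does not exist until kubeadm init/join (or whatever provisions this node) writes it, so the unit exits and systemd restarts it later. For illustration only, a minimal KubeletConfiguration consistent with what this log reports elsewhere (systemd cgroup driver, the containerd socket, the default hard-eviction thresholds) would be a sketch along these lines, not the file the provisioner will actually generate:

    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    evictionHard:
      memory.available: "100Mi"
      nodefs.available: "10%"
      nodefs.inodesFree: "5%"
      imagefs.available: "15%"
      imagefs.inodesFree: "5%"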
Aug 13 02:06:16.529829 sshd[1716]: Accepted publickey for core from 147.75.109.163 port 41352 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:06:16.531400 sshd-session[1716]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:06:16.537205 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 02:06:16.538484 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 02:06:16.545528 systemd-logind[1527]: New session 1 of user core. Aug 13 02:06:16.556488 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 02:06:16.559463 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 02:06:16.572744 (systemd)[1720]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 02:06:16.574988 systemd-logind[1527]: New session c1 of user core. Aug 13 02:06:16.705662 systemd[1720]: Queued start job for default target default.target. Aug 13 02:06:16.722679 systemd[1720]: Created slice app.slice - User Application Slice. Aug 13 02:06:16.722706 systemd[1720]: Reached target paths.target - Paths. Aug 13 02:06:16.722743 systemd[1720]: Reached target timers.target - Timers. Aug 13 02:06:16.724004 systemd[1720]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 02:06:16.733391 systemd[1720]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 02:06:16.733436 systemd[1720]: Reached target sockets.target - Sockets. Aug 13 02:06:16.733470 systemd[1720]: Reached target basic.target - Basic System. Aug 13 02:06:16.733509 systemd[1720]: Reached target default.target - Main User Target. Aug 13 02:06:16.733537 systemd[1720]: Startup finished in 153ms. Aug 13 02:06:16.733700 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 02:06:16.750694 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 02:06:17.004470 systemd[1]: Started sshd@1-172.236.122.171:22-147.75.109.163:41354.service - OpenSSH per-connection server daemon (147.75.109.163:41354). Aug 13 02:06:17.354409 sshd[1731]: Accepted publickey for core from 147.75.109.163 port 41354 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:06:17.356106 sshd-session[1731]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:06:17.365028 systemd-logind[1527]: New session 2 of user core. Aug 13 02:06:17.370841 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 02:06:17.597691 sshd[1733]: Connection closed by 147.75.109.163 port 41354 Aug 13 02:06:17.598293 sshd-session[1731]: pam_unix(sshd:session): session closed for user core Aug 13 02:06:17.602541 systemd-logind[1527]: Session 2 logged out. Waiting for processes to exit. Aug 13 02:06:17.603315 systemd[1]: sshd@1-172.236.122.171:22-147.75.109.163:41354.service: Deactivated successfully. Aug 13 02:06:17.605201 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 02:06:17.606915 systemd-logind[1527]: Removed session 2. Aug 13 02:06:17.664102 systemd[1]: Started sshd@2-172.236.122.171:22-147.75.109.163:41364.service - OpenSSH per-connection server daemon (147.75.109.163:41364). 
Aug 13 02:06:18.008405 sshd[1739]: Accepted publickey for core from 147.75.109.163 port 41364 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:06:18.010207 sshd-session[1739]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:06:18.015232 systemd-logind[1527]: New session 3 of user core. Aug 13 02:06:18.024714 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 02:06:18.251778 sshd[1741]: Connection closed by 147.75.109.163 port 41364 Aug 13 02:06:18.252484 sshd-session[1739]: pam_unix(sshd:session): session closed for user core Aug 13 02:06:18.257915 systemd[1]: sshd@2-172.236.122.171:22-147.75.109.163:41364.service: Deactivated successfully. Aug 13 02:06:18.267062 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 02:06:18.268281 systemd-logind[1527]: Session 3 logged out. Waiting for processes to exit. Aug 13 02:06:18.269740 systemd-logind[1527]: Removed session 3. Aug 13 02:06:18.319769 systemd[1]: Started sshd@3-172.236.122.171:22-147.75.109.163:43792.service - OpenSSH per-connection server daemon (147.75.109.163:43792). Aug 13 02:06:18.587258 systemd[1]: Started sshd@4-172.236.122.171:22-165.154.201.122:38694.service - OpenSSH per-connection server daemon (165.154.201.122:38694). Aug 13 02:06:18.665880 sshd[1747]: Accepted publickey for core from 147.75.109.163 port 43792 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:06:18.667564 sshd-session[1747]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:06:18.673301 systemd-logind[1527]: New session 4 of user core. Aug 13 02:06:18.678714 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 02:06:18.911980 sshd[1752]: Connection closed by 147.75.109.163 port 43792 Aug 13 02:06:18.913473 sshd-session[1747]: pam_unix(sshd:session): session closed for user core Aug 13 02:06:18.919395 systemd-logind[1527]: Session 4 logged out. Waiting for processes to exit. Aug 13 02:06:18.923407 systemd[1]: sshd@3-172.236.122.171:22-147.75.109.163:43792.service: Deactivated successfully. Aug 13 02:06:18.926346 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 02:06:18.928748 systemd-logind[1527]: Removed session 4. Aug 13 02:06:18.975800 systemd[1]: Started sshd@5-172.236.122.171:22-147.75.109.163:43806.service - OpenSSH per-connection server daemon (147.75.109.163:43806). Aug 13 02:06:19.323295 sshd[1758]: Accepted publickey for core from 147.75.109.163 port 43806 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:06:19.324995 sshd-session[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:06:19.330371 systemd-logind[1527]: New session 5 of user core. Aug 13 02:06:19.339725 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 02:06:19.528079 sudo[1761]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 02:06:19.528374 sudo[1761]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 02:06:19.543903 sudo[1761]: pam_unix(sudo:session): session closed for user root Aug 13 02:06:19.594944 sshd[1760]: Connection closed by 147.75.109.163 port 43806 Aug 13 02:06:19.595685 sshd-session[1758]: pam_unix(sshd:session): session closed for user core Aug 13 02:06:19.599481 systemd[1]: sshd@5-172.236.122.171:22-147.75.109.163:43806.service: Deactivated successfully. Aug 13 02:06:19.601161 systemd[1]: session-5.scope: Deactivated successfully. 
Aug 13 02:06:19.601859 systemd-logind[1527]: Session 5 logged out. Waiting for processes to exit. Aug 13 02:06:19.603029 systemd-logind[1527]: Removed session 5. Aug 13 02:06:19.661406 systemd[1]: Started sshd@6-172.236.122.171:22-147.75.109.163:43818.service - OpenSSH per-connection server daemon (147.75.109.163:43818). Aug 13 02:06:20.005068 sshd[1767]: Accepted publickey for core from 147.75.109.163 port 43818 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:06:20.007341 sshd-session[1767]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:06:20.012805 systemd-logind[1527]: New session 6 of user core. Aug 13 02:06:20.021726 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 13 02:06:20.137497 sshd[1750]: Received disconnect from 165.154.201.122 port 38694:11: Bye Bye [preauth] Aug 13 02:06:20.137497 sshd[1750]: Disconnected from authenticating user root 165.154.201.122 port 38694 [preauth] Aug 13 02:06:20.140274 systemd[1]: sshd@4-172.236.122.171:22-165.154.201.122:38694.service: Deactivated successfully. Aug 13 02:06:20.206223 sudo[1773]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 02:06:20.206529 sudo[1773]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 02:06:20.212706 sudo[1773]: pam_unix(sudo:session): session closed for user root Aug 13 02:06:20.218892 sudo[1772]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Aug 13 02:06:20.219215 sudo[1772]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 02:06:20.230043 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 02:06:20.269825 augenrules[1795]: No rules Aug 13 02:06:20.271742 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 02:06:20.272096 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 02:06:20.273749 sudo[1772]: pam_unix(sudo:session): session closed for user root Aug 13 02:06:20.325982 sshd[1769]: Connection closed by 147.75.109.163 port 43818 Aug 13 02:06:20.326424 sshd-session[1767]: pam_unix(sshd:session): session closed for user core Aug 13 02:06:20.330896 systemd-logind[1527]: Session 6 logged out. Waiting for processes to exit. Aug 13 02:06:20.331626 systemd[1]: sshd@6-172.236.122.171:22-147.75.109.163:43818.service: Deactivated successfully. Aug 13 02:06:20.333911 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 02:06:20.335859 systemd-logind[1527]: Removed session 6. Aug 13 02:06:20.384355 systemd[1]: Started sshd@7-172.236.122.171:22-147.75.109.163:43830.service - OpenSSH per-connection server daemon (147.75.109.163:43830). Aug 13 02:06:20.715501 sshd[1804]: Accepted publickey for core from 147.75.109.163 port 43830 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:06:20.717039 sshd-session[1804]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:06:20.722193 systemd-logind[1527]: New session 7 of user core. Aug 13 02:06:20.725711 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 02:06:20.910171 sudo[1807]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 02:06:20.910487 sudo[1807]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 02:06:21.183402 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Aug 13 02:06:21.201913 (dockerd)[1825]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 02:06:21.375962 dockerd[1825]: time="2025-08-13T02:06:21.375903332Z" level=info msg="Starting up" Aug 13 02:06:21.377280 dockerd[1825]: time="2025-08-13T02:06:21.377254455Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Aug 13 02:06:21.425776 dockerd[1825]: time="2025-08-13T02:06:21.425479834Z" level=info msg="Loading containers: start." Aug 13 02:06:21.436611 kernel: Initializing XFRM netlink socket Aug 13 02:06:21.664024 systemd-networkd[1461]: docker0: Link UP Aug 13 02:06:21.667099 dockerd[1825]: time="2025-08-13T02:06:21.667055851Z" level=info msg="Loading containers: done." Aug 13 02:06:21.679099 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1801132054-merged.mount: Deactivated successfully. Aug 13 02:06:21.682611 dockerd[1825]: time="2025-08-13T02:06:21.682561285Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 02:06:21.682733 dockerd[1825]: time="2025-08-13T02:06:21.682652269Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Aug 13 02:06:21.682759 dockerd[1825]: time="2025-08-13T02:06:21.682749346Z" level=info msg="Initializing buildkit" Aug 13 02:06:21.701525 dockerd[1825]: time="2025-08-13T02:06:21.701487826Z" level=info msg="Completed buildkit initialization" Aug 13 02:06:21.707857 dockerd[1825]: time="2025-08-13T02:06:21.707833091Z" level=info msg="Daemon has completed initialization" Aug 13 02:06:21.707996 dockerd[1825]: time="2025-08-13T02:06:21.707959962Z" level=info msg="API listen on /run/docker.sock" Aug 13 02:06:21.708066 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 02:06:22.535370 containerd[1542]: time="2025-08-13T02:06:22.535322585Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\"" Aug 13 02:06:23.345987 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3824875027.mount: Deactivated successfully. 
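Note on "API listen on /run/docker.sock" above: the daemon is serving its API only on the local Unix socket here. The usual way to confirm it from the host is over that socket, e.g. (a sketch, not commands run on this machine):

    docker version
    curl --unix-socket /run/docker.sock http://localhost/_ping   # expect: OK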
Aug 13 02:06:24.299498 containerd[1542]: time="2025-08-13T02:06:24.299405711Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 02:06:24.301020 containerd[1542]: time="2025-08-13T02:06:24.300829525Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.7: active requests=0, bytes read=28799994" Aug 13 02:06:24.301682 containerd[1542]: time="2025-08-13T02:06:24.301649219Z" level=info msg="ImageCreate event name:\"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 02:06:24.304096 containerd[1542]: time="2025-08-13T02:06:24.304066227Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 02:06:24.305056 containerd[1542]: time="2025-08-13T02:06:24.305017414Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.7\" with image id \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\", size \"28796794\" in 1.769647498s" Aug 13 02:06:24.305136 containerd[1542]: time="2025-08-13T02:06:24.305119749Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\" returns image reference \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\"" Aug 13 02:06:24.306347 containerd[1542]: time="2025-08-13T02:06:24.306170284Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\"" Aug 13 02:06:25.272652 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 02:06:25.275431 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 02:06:25.463708 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 02:06:25.472979 (kubelet)[2094]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 02:06:25.520736 kubelet[2094]: E0813 02:06:25.520670 2094 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 02:06:25.529501 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 02:06:25.529716 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 02:06:25.530096 systemd[1]: kubelet.service: Consumed 194ms CPU time, 109.3M memory peak. 
Aug 13 02:06:25.762610 containerd[1542]: time="2025-08-13T02:06:25.761888261Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 02:06:25.762610 containerd[1542]: time="2025-08-13T02:06:25.762562899Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.7: active requests=0, bytes read=24783636" Aug 13 02:06:25.763159 containerd[1542]: time="2025-08-13T02:06:25.763138468Z" level=info msg="ImageCreate event name:\"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 02:06:25.764772 containerd[1542]: time="2025-08-13T02:06:25.764753051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 02:06:25.765621 containerd[1542]: time="2025-08-13T02:06:25.765565608Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.7\" with image id \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\", size \"26385470\" in 1.459368631s" Aug 13 02:06:25.765658 containerd[1542]: time="2025-08-13T02:06:25.765623777Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\" returns image reference \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\"" Aug 13 02:06:25.766074 containerd[1542]: time="2025-08-13T02:06:25.766047029Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\"" Aug 13 02:06:26.868413 containerd[1542]: time="2025-08-13T02:06:26.868348092Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 02:06:26.869259 containerd[1542]: time="2025-08-13T02:06:26.869135969Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.7: active requests=0, bytes read=19176921" Aug 13 02:06:26.869686 containerd[1542]: time="2025-08-13T02:06:26.869647403Z" level=info msg="ImageCreate event name:\"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 02:06:26.871615 containerd[1542]: time="2025-08-13T02:06:26.871564072Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 02:06:26.872479 containerd[1542]: time="2025-08-13T02:06:26.872341670Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.7\" with image id \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\", size \"20778773\" in 1.106265376s" Aug 13 02:06:26.872479 containerd[1542]: time="2025-08-13T02:06:26.872368665Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\" returns image reference \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\"" Aug 13 02:06:26.872887 
containerd[1542]: time="2025-08-13T02:06:26.872865117Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\"" Aug 13 02:06:27.991852 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount268930578.mount: Deactivated successfully. Aug 13 02:06:28.298226 containerd[1542]: time="2025-08-13T02:06:28.298176194Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 02:06:28.299017 containerd[1542]: time="2025-08-13T02:06:28.298874037Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.7: active requests=0, bytes read=30895380" Aug 13 02:06:28.299548 containerd[1542]: time="2025-08-13T02:06:28.299514550Z" level=info msg="ImageCreate event name:\"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 02:06:28.301112 containerd[1542]: time="2025-08-13T02:06:28.301084912Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 02:06:28.301543 containerd[1542]: time="2025-08-13T02:06:28.301522967Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.7\" with image id \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\", repo tag \"registry.k8s.io/kube-proxy:v1.32.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\", size \"30894399\" in 1.428631907s" Aug 13 02:06:28.301630 containerd[1542]: time="2025-08-13T02:06:28.301615352Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\" returns image reference \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\"" Aug 13 02:06:28.302129 containerd[1542]: time="2025-08-13T02:06:28.302109281Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 02:06:29.020512 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3479934149.mount: Deactivated successfully. 
Aug 13 02:06:29.672205 containerd[1542]: time="2025-08-13T02:06:29.672154144Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 02:06:29.673149 containerd[1542]: time="2025-08-13T02:06:29.673042395Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Aug 13 02:06:29.673706 containerd[1542]: time="2025-08-13T02:06:29.673671437Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 02:06:29.676298 containerd[1542]: time="2025-08-13T02:06:29.676266428Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 02:06:29.676990 containerd[1542]: time="2025-08-13T02:06:29.676955133Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.374822516s" Aug 13 02:06:29.677032 containerd[1542]: time="2025-08-13T02:06:29.676990788Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 02:06:29.677542 containerd[1542]: time="2025-08-13T02:06:29.677498717Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 02:06:30.354962 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount272595765.mount: Deactivated successfully. 
Aug 13 02:06:30.358819 containerd[1542]: time="2025-08-13T02:06:30.358779156Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 02:06:30.359367 containerd[1542]: time="2025-08-13T02:06:30.359349006Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Aug 13 02:06:30.360610 containerd[1542]: time="2025-08-13T02:06:30.359822914Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 02:06:30.361618 containerd[1542]: time="2025-08-13T02:06:30.361579786Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 02:06:30.362382 containerd[1542]: time="2025-08-13T02:06:30.362359864Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 684.831089ms" Aug 13 02:06:30.362422 containerd[1542]: time="2025-08-13T02:06:30.362387832Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 02:06:30.362934 containerd[1542]: time="2025-08-13T02:06:30.362914423Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Aug 13 02:06:31.161838 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1989096164.mount: Deactivated successfully. Aug 13 02:06:31.585127 systemd[1]: Started sshd@8-172.236.122.171:22-78.128.112.74:59048.service - OpenSSH per-connection server daemon (78.128.112.74:59048). 
Aug 13 02:06:32.845708 containerd[1542]: time="2025-08-13T02:06:32.845301633Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 02:06:32.846995 containerd[1542]: time="2025-08-13T02:06:32.846780757Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Aug 13 02:06:32.847653 containerd[1542]: time="2025-08-13T02:06:32.847622279Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 02:06:32.849834 containerd[1542]: time="2025-08-13T02:06:32.849803193Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 02:06:32.850670 containerd[1542]: time="2025-08-13T02:06:32.850638923Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 2.487702559s" Aug 13 02:06:32.850720 containerd[1542]: time="2025-08-13T02:06:32.850670858Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Aug 13 02:06:34.317828 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 02:06:34.317971 systemd[1]: kubelet.service: Consumed 194ms CPU time, 109.3M memory peak. Aug 13 02:06:34.320388 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 02:06:34.342640 systemd[1]: Reload requested from client PID 2253 ('systemctl') (unit session-7.scope)... Aug 13 02:06:34.342652 systemd[1]: Reloading... Aug 13 02:06:34.489611 zram_generator::config[2314]: No configuration found. Aug 13 02:06:34.561104 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 02:06:34.662410 systemd[1]: Reloading finished in 319 ms. Aug 13 02:06:34.719058 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 02:06:34.719266 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 02:06:34.719676 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 02:06:34.719776 systemd[1]: kubelet.service: Consumed 127ms CPU time, 98.2M memory peak. Aug 13 02:06:34.721286 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 02:06:34.878898 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 02:06:34.882559 (kubelet)[2353]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 02:06:34.916223 kubelet[2353]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 02:06:34.916223 kubelet[2353]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Aug 13 02:06:34.916223 kubelet[2353]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 02:06:34.916522 kubelet[2353]: I0813 02:06:34.916228 2353 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 02:06:35.242720 kubelet[2353]: I0813 02:06:35.241802 2353 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 13 02:06:35.242720 kubelet[2353]: I0813 02:06:35.241826 2353 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 02:06:35.242720 kubelet[2353]: I0813 02:06:35.242130 2353 server.go:954] "Client rotation is on, will bootstrap in background" Aug 13 02:06:35.271954 kubelet[2353]: E0813 02:06:35.271926 2353 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.236.122.171:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.236.122.171:6443: connect: connection refused" logger="UnhandledError" Aug 13 02:06:35.272785 kubelet[2353]: I0813 02:06:35.272751 2353 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 02:06:35.279862 kubelet[2353]: I0813 02:06:35.279846 2353 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 13 02:06:35.284069 kubelet[2353]: I0813 02:06:35.284053 2353 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 02:06:35.284277 kubelet[2353]: I0813 02:06:35.284253 2353 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 02:06:35.284443 kubelet[2353]: I0813 02:06:35.284277 2353 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-236-122-171","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 02:06:35.285230 kubelet[2353]: I0813 02:06:35.285206 2353 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 02:06:35.285230 kubelet[2353]: I0813 02:06:35.285225 2353 container_manager_linux.go:304] "Creating device plugin manager" Aug 13 02:06:35.285346 kubelet[2353]: I0813 02:06:35.285323 2353 state_mem.go:36] "Initialized new in-memory state store" Aug 13 02:06:35.288969 kubelet[2353]: I0813 02:06:35.288949 2353 kubelet.go:446] "Attempting to sync node with API server" Aug 13 02:06:35.289021 kubelet[2353]: I0813 02:06:35.288980 2353 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 02:06:35.289021 kubelet[2353]: I0813 02:06:35.289000 2353 kubelet.go:352] "Adding apiserver pod source" Aug 13 02:06:35.289021 kubelet[2353]: I0813 02:06:35.289008 2353 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 02:06:35.296460 kubelet[2353]: W0813 02:06:35.296305 2353 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.236.122.171:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.236.122.171:6443: connect: connection refused Aug 13 02:06:35.296501 kubelet[2353]: E0813 02:06:35.296471 2353 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.236.122.171:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.236.122.171:6443: connect: connection refused" logger="UnhandledError" Aug 13 02:06:35.296803 kubelet[2353]: I0813 
02:06:35.296788 2353 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Aug 13 02:06:35.297191 kubelet[2353]: I0813 02:06:35.297178 2353 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 02:06:35.297290 kubelet[2353]: W0813 02:06:35.297279 2353 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 02:06:35.298673 kubelet[2353]: W0813 02:06:35.298492 2353 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.236.122.171:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-236-122-171&limit=500&resourceVersion=0": dial tcp 172.236.122.171:6443: connect: connection refused Aug 13 02:06:35.298673 kubelet[2353]: E0813 02:06:35.298522 2353 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.236.122.171:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-236-122-171&limit=500&resourceVersion=0\": dial tcp 172.236.122.171:6443: connect: connection refused" logger="UnhandledError" Aug 13 02:06:35.300002 kubelet[2353]: I0813 02:06:35.299139 2353 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 02:06:35.300002 kubelet[2353]: I0813 02:06:35.299184 2353 server.go:1287] "Started kubelet" Aug 13 02:06:35.300136 kubelet[2353]: I0813 02:06:35.300114 2353 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 02:06:35.300958 kubelet[2353]: I0813 02:06:35.300946 2353 server.go:479] "Adding debug handlers to kubelet server" Aug 13 02:06:35.304266 kubelet[2353]: I0813 02:06:35.303948 2353 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 02:06:35.304266 kubelet[2353]: I0813 02:06:35.304175 2353 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 02:06:35.304568 kubelet[2353]: I0813 02:06:35.304550 2353 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 02:06:35.305148 kubelet[2353]: I0813 02:06:35.305134 2353 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 02:06:35.306674 kubelet[2353]: E0813 02:06:35.304841 2353 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.236.122.171:6443/api/v1/namespaces/default/events\": dial tcp 172.236.122.171:6443: connect: connection refused" event="&Event{ObjectMeta:{172-236-122-171.185b316650929c80 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-236-122-171,UID:172-236-122-171,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-236-122-171,},FirstTimestamp:2025-08-13 02:06:35.299150976 +0000 UTC m=+0.411515390,LastTimestamp:2025-08-13 02:06:35.299150976 +0000 UTC m=+0.411515390,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-236-122-171,}" Aug 13 02:06:35.307399 kubelet[2353]: I0813 02:06:35.307388 2353 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 02:06:35.307637 kubelet[2353]: E0813 02:06:35.307624 2353 kubelet_node_status.go:466] "Error getting the 
current node from lister" err="node \"172-236-122-171\" not found" Aug 13 02:06:35.308053 kubelet[2353]: E0813 02:06:35.308034 2353 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.122.171:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-122-171?timeout=10s\": dial tcp 172.236.122.171:6443: connect: connection refused" interval="200ms" Aug 13 02:06:35.308731 kubelet[2353]: I0813 02:06:35.308710 2353 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 02:06:35.308885 kubelet[2353]: I0813 02:06:35.308754 2353 reconciler.go:26] "Reconciler: start to sync state" Aug 13 02:06:35.309099 kubelet[2353]: W0813 02:06:35.309068 2353 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.236.122.171:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.236.122.171:6443: connect: connection refused Aug 13 02:06:35.309133 kubelet[2353]: E0813 02:06:35.309103 2353 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.236.122.171:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.236.122.171:6443: connect: connection refused" logger="UnhandledError" Aug 13 02:06:35.309262 kubelet[2353]: I0813 02:06:35.309240 2353 factory.go:221] Registration of the systemd container factory successfully Aug 13 02:06:35.309310 kubelet[2353]: I0813 02:06:35.309288 2353 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 02:06:35.310372 kubelet[2353]: E0813 02:06:35.310250 2353 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 02:06:35.310752 kubelet[2353]: I0813 02:06:35.310720 2353 factory.go:221] Registration of the containerd container factory successfully Aug 13 02:06:35.322444 kubelet[2353]: I0813 02:06:35.322427 2353 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 02:06:35.322529 kubelet[2353]: I0813 02:06:35.322519 2353 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 02:06:35.322796 kubelet[2353]: I0813 02:06:35.322568 2353 state_mem.go:36] "Initialized new in-memory state store" Aug 13 02:06:35.324215 kubelet[2353]: I0813 02:06:35.324202 2353 policy_none.go:49] "None policy: Start" Aug 13 02:06:35.324295 kubelet[2353]: I0813 02:06:35.324285 2353 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 02:06:35.324433 kubelet[2353]: I0813 02:06:35.324362 2353 state_mem.go:35] "Initializing new in-memory state store" Aug 13 02:06:35.330224 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 02:06:35.334766 kubelet[2353]: I0813 02:06:35.334718 2353 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 02:06:35.337665 kubelet[2353]: I0813 02:06:35.337652 2353 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 02:06:35.338042 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. 
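
The etcd image pull earlier in the log reports 57,551,360 bytes read, an unpacked size of 57,680,541 bytes, and a duration of 2.487702559s. A quick back-of-the-envelope check of the implied transfer rate, using only the figures copied from those containerd lines (a sketch for orientation, not a measurement):

    # Back-of-the-envelope rate for the registry.k8s.io/etcd:3.5.16-0 pull above.
    # All three numbers are copied from the containerd log lines; nothing else
    # is measured here.
    bytes_read = 57_551_360        # "bytes read" from the stop-pulling message
    image_size = 57_680_541        # size reported in the "Pulled image" message
    duration_s = 2.487702559       # "... in 2.487702559s"

    MIB = 1024 ** 2
    print(f"transfer rate ~ {bytes_read / duration_s / MIB:.1f} MiB/s")
    print(f"reported size ~ {image_size / MIB:.1f} MiB")

That works out to roughly 22 MiB/s for this pull.
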
Aug 13 02:06:35.339605 kubelet[2353]: I0813 02:06:35.339431 2353 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 13 02:06:35.339605 kubelet[2353]: I0813 02:06:35.339453 2353 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 13 02:06:35.339605 kubelet[2353]: I0813 02:06:35.339459 2353 kubelet.go:2382] "Starting kubelet main sync loop" Aug 13 02:06:35.339605 kubelet[2353]: E0813 02:06:35.339499 2353 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 02:06:35.341400 kubelet[2353]: W0813 02:06:35.341115 2353 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.236.122.171:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.236.122.171:6443: connect: connection refused Aug 13 02:06:35.341400 kubelet[2353]: E0813 02:06:35.341141 2353 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.236.122.171:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.236.122.171:6443: connect: connection refused" logger="UnhandledError" Aug 13 02:06:35.343652 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 13 02:06:35.355542 kubelet[2353]: I0813 02:06:35.355516 2353 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 02:06:35.356027 kubelet[2353]: I0813 02:06:35.356016 2353 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 02:06:35.356161 kubelet[2353]: I0813 02:06:35.356122 2353 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 02:06:35.356393 kubelet[2353]: I0813 02:06:35.356382 2353 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 02:06:35.358340 kubelet[2353]: E0813 02:06:35.358324 2353 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 02:06:35.358484 kubelet[2353]: E0813 02:06:35.358473 2353 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-236-122-171\" not found" Aug 13 02:06:35.448708 systemd[1]: Created slice kubepods-burstable-pod503bc69c8c96522ebc10e0826e62ddfe.slice - libcontainer container kubepods-burstable-pod503bc69c8c96522ebc10e0826e62ddfe.slice. Aug 13 02:06:35.457461 kubelet[2353]: E0813 02:06:35.457436 2353 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-122-171\" not found" node="172-236-122-171" Aug 13 02:06:35.458534 kubelet[2353]: I0813 02:06:35.458446 2353 kubelet_node_status.go:75] "Attempting to register node" node="172-236-122-171" Aug 13 02:06:35.459504 systemd[1]: Created slice kubepods-burstable-pod6577143aeacdd825758eeac43577d224.slice - libcontainer container kubepods-burstable-pod6577143aeacdd825758eeac43577d224.slice. 
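
The Container Manager dump above (nodeConfig={...}) carries the kubelet's hard eviction thresholds: memory.available < 100Mi, nodefs.available < 10%, nodefs.inodesFree < 5%, imagefs.available < 15%, imagefs.inodesFree < 5%. The sketch below shows how such quantity-or-percentage thresholds are evaluated against node stats; the observed values are hypothetical placeholders, and this is an illustration of the rule shape, not the kubelet's eviction code.

    # Illustrative evaluation of hard-eviction thresholds of the shape logged in
    # the nodeConfig dump above. Observed values below are hypothetical; this is
    # not the kubelet's implementation.
    GIB = 1024 ** 3

    THRESHOLDS = {
        "memory.available":   {"quantity": 100 * 1024 ** 2},  # 100Mi
        "nodefs.available":   {"percentage": 0.10},
        "nodefs.inodesFree":  {"percentage": 0.05},
        "imagefs.available":  {"percentage": 0.15},
        "imagefs.inodesFree": {"percentage": 0.05},
    }

    # Hypothetical observations: (available, capacity) per eviction signal.
    OBSERVED = {
        "memory.available":   (0.3 * GIB, 4 * GIB),
        "nodefs.available":   (9 * GIB, 80 * GIB),
        "nodefs.inodesFree":  (400_000, 5_000_000),
        "imagefs.available":  (30 * GIB, 80 * GIB),
        "imagefs.inodesFree": (1_000_000, 5_000_000),
    }

    def breached(signal: str) -> bool:
        """True if 'available' falls below the quantity or percentage threshold."""
        available, capacity = OBSERVED[signal]
        rule = THRESHOLDS[signal]
        limit = rule.get("quantity", rule.get("percentage", 0.0) * capacity)
        return available < limit

    for sig in THRESHOLDS:
        print(f"{sig:20} breached={breached(sig)}")

At this point in the log the checks cannot run anyway: the eviction manager's summary-stats attempt above fails because the Node object has not been registered yet.
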
Aug 13 02:06:35.459997 kubelet[2353]: E0813 02:06:35.459878 2353 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.236.122.171:6443/api/v1/nodes\": dial tcp 172.236.122.171:6443: connect: connection refused" node="172-236-122-171" Aug 13 02:06:35.467738 kubelet[2353]: E0813 02:06:35.467720 2353 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-122-171\" not found" node="172-236-122-171" Aug 13 02:06:35.470300 systemd[1]: Created slice kubepods-burstable-pod253653dddecfabea52a5d44b8b9604cd.slice - libcontainer container kubepods-burstable-pod253653dddecfabea52a5d44b8b9604cd.slice. Aug 13 02:06:35.471942 kubelet[2353]: E0813 02:06:35.471919 2353 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-122-171\" not found" node="172-236-122-171" Aug 13 02:06:35.509222 kubelet[2353]: E0813 02:06:35.509147 2353 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.122.171:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-122-171?timeout=10s\": dial tcp 172.236.122.171:6443: connect: connection refused" interval="400ms" Aug 13 02:06:35.610202 kubelet[2353]: I0813 02:06:35.610135 2353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6577143aeacdd825758eeac43577d224-kubeconfig\") pod \"kube-controller-manager-172-236-122-171\" (UID: \"6577143aeacdd825758eeac43577d224\") " pod="kube-system/kube-controller-manager-172-236-122-171" Aug 13 02:06:35.610202 kubelet[2353]: I0813 02:06:35.610197 2353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/253653dddecfabea52a5d44b8b9604cd-kubeconfig\") pod \"kube-scheduler-172-236-122-171\" (UID: \"253653dddecfabea52a5d44b8b9604cd\") " pod="kube-system/kube-scheduler-172-236-122-171" Aug 13 02:06:35.610265 kubelet[2353]: I0813 02:06:35.610217 2353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/503bc69c8c96522ebc10e0826e62ddfe-ca-certs\") pod \"kube-apiserver-172-236-122-171\" (UID: \"503bc69c8c96522ebc10e0826e62ddfe\") " pod="kube-system/kube-apiserver-172-236-122-171" Aug 13 02:06:35.610265 kubelet[2353]: I0813 02:06:35.610235 2353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6577143aeacdd825758eeac43577d224-ca-certs\") pod \"kube-controller-manager-172-236-122-171\" (UID: \"6577143aeacdd825758eeac43577d224\") " pod="kube-system/kube-controller-manager-172-236-122-171" Aug 13 02:06:35.610265 kubelet[2353]: I0813 02:06:35.610252 2353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6577143aeacdd825758eeac43577d224-k8s-certs\") pod \"kube-controller-manager-172-236-122-171\" (UID: \"6577143aeacdd825758eeac43577d224\") " pod="kube-system/kube-controller-manager-172-236-122-171" Aug 13 02:06:35.610333 kubelet[2353]: I0813 02:06:35.610282 2353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/6577143aeacdd825758eeac43577d224-usr-share-ca-certificates\") pod \"kube-controller-manager-172-236-122-171\" (UID: \"6577143aeacdd825758eeac43577d224\") " pod="kube-system/kube-controller-manager-172-236-122-171" Aug 13 02:06:35.610333 kubelet[2353]: I0813 02:06:35.610300 2353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/503bc69c8c96522ebc10e0826e62ddfe-k8s-certs\") pod \"kube-apiserver-172-236-122-171\" (UID: \"503bc69c8c96522ebc10e0826e62ddfe\") " pod="kube-system/kube-apiserver-172-236-122-171" Aug 13 02:06:35.610333 kubelet[2353]: I0813 02:06:35.610315 2353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/503bc69c8c96522ebc10e0826e62ddfe-usr-share-ca-certificates\") pod \"kube-apiserver-172-236-122-171\" (UID: \"503bc69c8c96522ebc10e0826e62ddfe\") " pod="kube-system/kube-apiserver-172-236-122-171" Aug 13 02:06:35.610333 kubelet[2353]: I0813 02:06:35.610333 2353 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6577143aeacdd825758eeac43577d224-flexvolume-dir\") pod \"kube-controller-manager-172-236-122-171\" (UID: \"6577143aeacdd825758eeac43577d224\") " pod="kube-system/kube-controller-manager-172-236-122-171" Aug 13 02:06:35.662565 kubelet[2353]: I0813 02:06:35.662504 2353 kubelet_node_status.go:75] "Attempting to register node" node="172-236-122-171" Aug 13 02:06:35.662966 kubelet[2353]: E0813 02:06:35.662922 2353 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.236.122.171:6443/api/v1/nodes\": dial tcp 172.236.122.171:6443: connect: connection refused" node="172-236-122-171" Aug 13 02:06:35.759311 kubelet[2353]: E0813 02:06:35.758820 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:06:35.759950 containerd[1542]: time="2025-08-13T02:06:35.759636816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-236-122-171,Uid:503bc69c8c96522ebc10e0826e62ddfe,Namespace:kube-system,Attempt:0,}" Aug 13 02:06:35.771737 kubelet[2353]: E0813 02:06:35.771712 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:06:35.772338 kubelet[2353]: E0813 02:06:35.772323 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:06:35.772647 containerd[1542]: time="2025-08-13T02:06:35.772625104Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-236-122-171,Uid:253653dddecfabea52a5d44b8b9604cd,Namespace:kube-system,Attempt:0,}" Aug 13 02:06:35.775841 containerd[1542]: time="2025-08-13T02:06:35.774858444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-236-122-171,Uid:6577143aeacdd825758eeac43577d224,Namespace:kube-system,Attempt:0,}" Aug 13 02:06:35.779417 containerd[1542]: time="2025-08-13T02:06:35.779396448Z" level=info msg="connecting to shim 
137a58b372d61626b210f0cba11b764d0abfef60ec202176f36b2812433ed26d" address="unix:///run/containerd/s/a9ae9ac2a0200877ff0840cfad8fb9abcf461289f880c05010a50dea3ed6c2f7" namespace=k8s.io protocol=ttrpc version=3 Aug 13 02:06:35.808628 containerd[1542]: time="2025-08-13T02:06:35.808563700Z" level=info msg="connecting to shim e2205c91467123e706bf2d032faf1b746ba173fac74b989cb011b2ac1b42d4cb" address="unix:///run/containerd/s/9e9c5b2d619dc774393eb3c0cd6ffd5454ecac7e792d5cc69a97406fe3406b66" namespace=k8s.io protocol=ttrpc version=3 Aug 13 02:06:35.813828 systemd[1]: Started cri-containerd-137a58b372d61626b210f0cba11b764d0abfef60ec202176f36b2812433ed26d.scope - libcontainer container 137a58b372d61626b210f0cba11b764d0abfef60ec202176f36b2812433ed26d. Aug 13 02:06:35.827668 containerd[1542]: time="2025-08-13T02:06:35.826884004Z" level=info msg="connecting to shim 557b5e0c662024645ca962a532047f30386677862fc00453dcdce47105b368d8" address="unix:///run/containerd/s/62bb0903990bfebe54b1356f52a642ac655c7acfbf4291a624085b18bd1c0840" namespace=k8s.io protocol=ttrpc version=3 Aug 13 02:06:35.866823 systemd[1]: Started cri-containerd-e2205c91467123e706bf2d032faf1b746ba173fac74b989cb011b2ac1b42d4cb.scope - libcontainer container e2205c91467123e706bf2d032faf1b746ba173fac74b989cb011b2ac1b42d4cb. Aug 13 02:06:35.871899 systemd[1]: Started cri-containerd-557b5e0c662024645ca962a532047f30386677862fc00453dcdce47105b368d8.scope - libcontainer container 557b5e0c662024645ca962a532047f30386677862fc00453dcdce47105b368d8. Aug 13 02:06:35.894117 containerd[1542]: time="2025-08-13T02:06:35.893963710Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-236-122-171,Uid:503bc69c8c96522ebc10e0826e62ddfe,Namespace:kube-system,Attempt:0,} returns sandbox id \"137a58b372d61626b210f0cba11b764d0abfef60ec202176f36b2812433ed26d\"" Aug 13 02:06:35.897156 kubelet[2353]: E0813 02:06:35.897027 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:06:35.903848 containerd[1542]: time="2025-08-13T02:06:35.903016599Z" level=info msg="CreateContainer within sandbox \"137a58b372d61626b210f0cba11b764d0abfef60ec202176f36b2812433ed26d\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 02:06:35.911616 kubelet[2353]: E0813 02:06:35.910357 2353 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.236.122.171:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-236-122-171?timeout=10s\": dial tcp 172.236.122.171:6443: connect: connection refused" interval="800ms" Aug 13 02:06:35.915844 containerd[1542]: time="2025-08-13T02:06:35.915819482Z" level=info msg="Container a4c616ece71e63776f71028093692ce62128d02fc45f7b97f12f4f50f233a211: CDI devices from CRI Config.CDIDevices: []" Aug 13 02:06:35.928342 containerd[1542]: time="2025-08-13T02:06:35.928305228Z" level=info msg="CreateContainer within sandbox \"137a58b372d61626b210f0cba11b764d0abfef60ec202176f36b2812433ed26d\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a4c616ece71e63776f71028093692ce62128d02fc45f7b97f12f4f50f233a211\"" Aug 13 02:06:35.929745 containerd[1542]: time="2025-08-13T02:06:35.929725924Z" level=info msg="StartContainer for \"a4c616ece71e63776f71028093692ce62128d02fc45f7b97f12f4f50f233a211\"" Aug 13 02:06:35.936921 containerd[1542]: time="2025-08-13T02:06:35.936582597Z" level=info msg="connecting to shim 
a4c616ece71e63776f71028093692ce62128d02fc45f7b97f12f4f50f233a211" address="unix:///run/containerd/s/a9ae9ac2a0200877ff0840cfad8fb9abcf461289f880c05010a50dea3ed6c2f7" protocol=ttrpc version=3 Aug 13 02:06:35.943530 containerd[1542]: time="2025-08-13T02:06:35.943493488Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-236-122-171,Uid:6577143aeacdd825758eeac43577d224,Namespace:kube-system,Attempt:0,} returns sandbox id \"557b5e0c662024645ca962a532047f30386677862fc00453dcdce47105b368d8\"" Aug 13 02:06:35.944934 kubelet[2353]: E0813 02:06:35.944908 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:06:35.947754 containerd[1542]: time="2025-08-13T02:06:35.947723555Z" level=info msg="CreateContainer within sandbox \"557b5e0c662024645ca962a532047f30386677862fc00453dcdce47105b368d8\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 02:06:35.959697 containerd[1542]: time="2025-08-13T02:06:35.959664940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-236-122-171,Uid:253653dddecfabea52a5d44b8b9604cd,Namespace:kube-system,Attempt:0,} returns sandbox id \"e2205c91467123e706bf2d032faf1b746ba173fac74b989cb011b2ac1b42d4cb\"" Aug 13 02:06:35.960606 containerd[1542]: time="2025-08-13T02:06:35.960083241Z" level=info msg="Container 8581aadafe0537fa57df985018d153d8340ff3b2b943983a26b2e2d6513291d0: CDI devices from CRI Config.CDIDevices: []" Aug 13 02:06:35.960834 kubelet[2353]: E0813 02:06:35.960815 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:06:35.963678 containerd[1542]: time="2025-08-13T02:06:35.963026255Z" level=info msg="CreateContainer within sandbox \"e2205c91467123e706bf2d032faf1b746ba173fac74b989cb011b2ac1b42d4cb\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 02:06:35.963909 systemd[1]: Started cri-containerd-a4c616ece71e63776f71028093692ce62128d02fc45f7b97f12f4f50f233a211.scope - libcontainer container a4c616ece71e63776f71028093692ce62128d02fc45f7b97f12f4f50f233a211. 
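
While the API server at 172.236.122.171:6443 refuses connections, the "Failed to ensure lease exists, will retry" entries back off geometrically: 200ms, then 400ms, then 800ms. A minimal sketch of that doubling schedule; the 7s cap below is an assumption for illustration and is not shown in the log:

    # Doubling retry schedule matching the 200ms -> 400ms -> 800ms progression
    # in the lease-controller retries above. The cap is an assumed value.
    import itertools

    def backoff_intervals(base_ms: int = 200, factor: float = 2.0, cap_ms: int = 7000):
        interval = base_ms
        while True:
            yield min(interval, cap_ms)
            interval = int(interval * factor)

    print(list(itertools.islice(backoff_intervals(), 6)))
    # -> [200, 400, 800, 1600, 3200, 6400]

The connection-refused errors themselves are expected at this stage: the kubelet starts before its own static kube-apiserver pod is running, so the client-go reflectors and the event recorder keep failing until that pod comes up.
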
Aug 13 02:06:35.966609 containerd[1542]: time="2025-08-13T02:06:35.966113332Z" level=info msg="CreateContainer within sandbox \"557b5e0c662024645ca962a532047f30386677862fc00453dcdce47105b368d8\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8581aadafe0537fa57df985018d153d8340ff3b2b943983a26b2e2d6513291d0\"" Aug 13 02:06:35.967616 containerd[1542]: time="2025-08-13T02:06:35.967531580Z" level=info msg="StartContainer for \"8581aadafe0537fa57df985018d153d8340ff3b2b943983a26b2e2d6513291d0\"" Aug 13 02:06:35.971640 containerd[1542]: time="2025-08-13T02:06:35.971074212Z" level=info msg="connecting to shim 8581aadafe0537fa57df985018d153d8340ff3b2b943983a26b2e2d6513291d0" address="unix:///run/containerd/s/62bb0903990bfebe54b1356f52a642ac655c7acfbf4291a624085b18bd1c0840" protocol=ttrpc version=3 Aug 13 02:06:35.974201 containerd[1542]: time="2025-08-13T02:06:35.974132247Z" level=info msg="Container 078ce7f1dd0187be5124efdf74c604c53098e028cf2ed857a52c6580a3ec7adb: CDI devices from CRI Config.CDIDevices: []" Aug 13 02:06:35.982468 containerd[1542]: time="2025-08-13T02:06:35.982435151Z" level=info msg="CreateContainer within sandbox \"e2205c91467123e706bf2d032faf1b746ba173fac74b989cb011b2ac1b42d4cb\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"078ce7f1dd0187be5124efdf74c604c53098e028cf2ed857a52c6580a3ec7adb\"" Aug 13 02:06:35.983022 containerd[1542]: time="2025-08-13T02:06:35.982995297Z" level=info msg="StartContainer for \"078ce7f1dd0187be5124efdf74c604c53098e028cf2ed857a52c6580a3ec7adb\"" Aug 13 02:06:35.986933 containerd[1542]: time="2025-08-13T02:06:35.986905309Z" level=info msg="connecting to shim 078ce7f1dd0187be5124efdf74c604c53098e028cf2ed857a52c6580a3ec7adb" address="unix:///run/containerd/s/9e9c5b2d619dc774393eb3c0cd6ffd5454ecac7e792d5cc69a97406fe3406b66" protocol=ttrpc version=3 Aug 13 02:06:35.998914 systemd[1]: Started cri-containerd-8581aadafe0537fa57df985018d153d8340ff3b2b943983a26b2e2d6513291d0.scope - libcontainer container 8581aadafe0537fa57df985018d153d8340ff3b2b943983a26b2e2d6513291d0. Aug 13 02:06:36.016853 systemd[1]: Started cri-containerd-078ce7f1dd0187be5124efdf74c604c53098e028cf2ed857a52c6580a3ec7adb.scope - libcontainer container 078ce7f1dd0187be5124efdf74c604c53098e028cf2ed857a52c6580a3ec7adb. 
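
The containerd entries above trace the CRI lifecycle for each static pod: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox and returns a container id, and StartContainer launches it; the "connecting to shim ... protocol=ttrpc version=3" lines are containerd attaching to the per-pod shim over its local socket. The same three calls can be driven by hand with crictl; below is a sketch wrapping them from Python, with hypothetical config file names and crictl assumed to be pointed at the containerd CRI endpoint:

    # Hand-drive the RunPodSandbox -> CreateContainer -> StartContainer sequence
    # seen above via crictl. The JSON file names are hypothetical; crictl must be
    # configured for the containerd endpoint (e.g. in /etc/crictl.yaml).
    import subprocess

    def crictl(*args: str) -> str:
        result = subprocess.run(["crictl", *args], check=True,
                                capture_output=True, text=True)
        return result.stdout.strip()

    pod_id = crictl("runp", "pod-config.json")            # RunPodSandbox
    ctr_id = crictl("create", pod_id,
                    "container-config.json",
                    "pod-config.json")                    # CreateContainer
    crictl("start", ctr_id)                               # StartContainer
    print("sandbox:", pod_id, "container:", ctr_id)
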
Aug 13 02:06:36.047115 containerd[1542]: time="2025-08-13T02:06:36.047055184Z" level=info msg="StartContainer for \"a4c616ece71e63776f71028093692ce62128d02fc45f7b97f12f4f50f233a211\" returns successfully" Aug 13 02:06:36.066442 kubelet[2353]: I0813 02:06:36.066425 2353 kubelet_node_status.go:75] "Attempting to register node" node="172-236-122-171" Aug 13 02:06:36.067270 kubelet[2353]: E0813 02:06:36.067198 2353 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.236.122.171:6443/api/v1/nodes\": dial tcp 172.236.122.171:6443: connect: connection refused" node="172-236-122-171" Aug 13 02:06:36.076024 containerd[1542]: time="2025-08-13T02:06:36.076000689Z" level=info msg="StartContainer for \"8581aadafe0537fa57df985018d153d8340ff3b2b943983a26b2e2d6513291d0\" returns successfully" Aug 13 02:06:36.139648 containerd[1542]: time="2025-08-13T02:06:36.138049803Z" level=info msg="StartContainer for \"078ce7f1dd0187be5124efdf74c604c53098e028cf2ed857a52c6580a3ec7adb\" returns successfully" Aug 13 02:06:36.349839 kubelet[2353]: E0813 02:06:36.349813 2353 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-122-171\" not found" node="172-236-122-171" Aug 13 02:06:36.349930 kubelet[2353]: E0813 02:06:36.349923 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:06:36.350245 kubelet[2353]: E0813 02:06:36.350227 2353 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-122-171\" not found" node="172-236-122-171" Aug 13 02:06:36.350317 kubelet[2353]: E0813 02:06:36.350299 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:06:36.353573 kubelet[2353]: E0813 02:06:36.353554 2353 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-236-122-171\" not found" node="172-236-122-171" Aug 13 02:06:36.353687 kubelet[2353]: E0813 02:06:36.353670 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:06:36.870275 kubelet[2353]: I0813 02:06:36.870239 2353 kubelet_node_status.go:75] "Attempting to register node" node="172-236-122-171" Aug 13 02:06:37.292005 kubelet[2353]: I0813 02:06:37.291870 2353 apiserver.go:52] "Watching apiserver" Aug 13 02:06:37.305994 kubelet[2353]: E0813 02:06:37.304708 2353 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-236-122-171\" not found" node="172-236-122-171" Aug 13 02:06:37.340686 kubelet[2353]: I0813 02:06:37.340609 2353 kubelet_node_status.go:78] "Successfully registered node" node="172-236-122-171" Aug 13 02:06:37.354626 kubelet[2353]: I0813 02:06:37.354579 2353 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-236-122-171" Aug 13 02:06:37.355827 kubelet[2353]: I0813 02:06:37.355733 2353 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-236-122-171" Aug 13 02:06:37.369938 kubelet[2353]: E0813 02:06:37.369894 2353 kubelet.go:3196] "Failed creating a mirror pod" err="pods 
\"kube-scheduler-172-236-122-171\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-236-122-171" Aug 13 02:06:37.370395 kubelet[2353]: E0813 02:06:37.370020 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:06:37.370395 kubelet[2353]: E0813 02:06:37.370112 2353 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-236-122-171\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-236-122-171" Aug 13 02:06:37.370509 kubelet[2353]: E0813 02:06:37.370487 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:06:37.408223 kubelet[2353]: I0813 02:06:37.408180 2353 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-236-122-171" Aug 13 02:06:37.409437 kubelet[2353]: I0813 02:06:37.409318 2353 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 02:06:37.410370 kubelet[2353]: E0813 02:06:37.410337 2353 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-236-122-171\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-236-122-171" Aug 13 02:06:37.410370 kubelet[2353]: I0813 02:06:37.410362 2353 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-236-122-171" Aug 13 02:06:37.411730 kubelet[2353]: E0813 02:06:37.411688 2353 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-236-122-171\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-236-122-171" Aug 13 02:06:37.411730 kubelet[2353]: I0813 02:06:37.411709 2353 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-236-122-171" Aug 13 02:06:37.412955 kubelet[2353]: E0813 02:06:37.412922 2353 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-236-122-171\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-236-122-171" Aug 13 02:06:37.420465 kubelet[2353]: I0813 02:06:37.420450 2353 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-236-122-171" Aug 13 02:06:37.421793 kubelet[2353]: E0813 02:06:37.421776 2353 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-236-122-171\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-236-122-171" Aug 13 02:06:37.421983 kubelet[2353]: E0813 02:06:37.421969 2353 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:06:39.329820 systemd[1]: Reload requested from client PID 2621 ('systemctl') (unit session-7.scope)... Aug 13 02:06:39.329838 systemd[1]: Reloading... Aug 13 02:06:39.440631 zram_generator::config[2673]: No configuration found. 
Aug 13 02:06:39.511463 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 02:06:39.627620 systemd[1]: Reloading finished in 297 ms. Aug 13 02:06:39.652571 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 02:06:39.656940 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 02:06:39.657210 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 02:06:39.657247 systemd[1]: kubelet.service: Consumed 762ms CPU time, 131.4M memory peak. Aug 13 02:06:39.659525 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 02:06:39.833075 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 02:06:39.839870 (kubelet)[2718]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 02:06:39.887662 kubelet[2718]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 02:06:39.888092 kubelet[2718]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 02:06:39.888157 kubelet[2718]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 02:06:39.888263 kubelet[2718]: I0813 02:06:39.888239 2718 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 02:06:39.895119 kubelet[2718]: I0813 02:06:39.895095 2718 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 13 02:06:39.895119 kubelet[2718]: I0813 02:06:39.895115 2718 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 02:06:39.895411 kubelet[2718]: I0813 02:06:39.895395 2718 server.go:954] "Client rotation is on, will bootstrap in background" Aug 13 02:06:39.896721 kubelet[2718]: I0813 02:06:39.896706 2718 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 02:06:39.898479 kubelet[2718]: I0813 02:06:39.898285 2718 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 02:06:39.901235 kubelet[2718]: I0813 02:06:39.901212 2718 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 13 02:06:39.904970 kubelet[2718]: I0813 02:06:39.904939 2718 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 02:06:39.905224 kubelet[2718]: I0813 02:06:39.905190 2718 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 02:06:39.905349 kubelet[2718]: I0813 02:06:39.905216 2718 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-236-122-171","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 02:06:39.905349 kubelet[2718]: I0813 02:06:39.905345 2718 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 02:06:39.905450 kubelet[2718]: I0813 02:06:39.905354 2718 container_manager_linux.go:304] "Creating device plugin manager" Aug 13 02:06:39.905450 kubelet[2718]: I0813 02:06:39.905392 2718 state_mem.go:36] "Initialized new in-memory state store" Aug 13 02:06:39.905557 kubelet[2718]: I0813 02:06:39.905533 2718 kubelet.go:446] "Attempting to sync node with API server" Aug 13 02:06:39.905603 kubelet[2718]: I0813 02:06:39.905562 2718 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 02:06:39.905964 kubelet[2718]: I0813 02:06:39.905903 2718 kubelet.go:352] "Adding apiserver pod source" Aug 13 02:06:39.905964 kubelet[2718]: I0813 02:06:39.905918 2718 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 02:06:39.907477 kubelet[2718]: I0813 02:06:39.907455 2718 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Aug 13 02:06:39.908276 kubelet[2718]: I0813 02:06:39.908259 2718 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 02:06:39.909047 kubelet[2718]: I0813 02:06:39.909023 2718 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 02:06:39.909090 kubelet[2718]: I0813 02:06:39.909051 2718 server.go:1287] "Started kubelet" Aug 13 02:06:39.912640 kubelet[2718]: I0813 02:06:39.912072 2718 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 02:06:39.913430 kubelet[2718]: I0813 02:06:39.913408 2718 server.go:479] "Adding 
debug handlers to kubelet server" Aug 13 02:06:39.914469 kubelet[2718]: I0813 02:06:39.914413 2718 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 02:06:39.914790 kubelet[2718]: I0813 02:06:39.914735 2718 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 02:06:39.917445 kubelet[2718]: I0813 02:06:39.916046 2718 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 02:06:39.920064 kubelet[2718]: I0813 02:06:39.920049 2718 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 02:06:39.929788 kubelet[2718]: I0813 02:06:39.920625 2718 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 02:06:39.932856 kubelet[2718]: I0813 02:06:39.920634 2718 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 02:06:39.932856 kubelet[2718]: E0813 02:06:39.921728 2718 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-236-122-171\" not found" Aug 13 02:06:39.932856 kubelet[2718]: I0813 02:06:39.931862 2718 reconciler.go:26] "Reconciler: start to sync state" Aug 13 02:06:39.933412 kubelet[2718]: I0813 02:06:39.933387 2718 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 02:06:39.934442 kubelet[2718]: I0813 02:06:39.934419 2718 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 02:06:39.934442 kubelet[2718]: I0813 02:06:39.934444 2718 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 13 02:06:39.934516 kubelet[2718]: I0813 02:06:39.934460 2718 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 13 02:06:39.934516 kubelet[2718]: I0813 02:06:39.934466 2718 kubelet.go:2382] "Starting kubelet main sync loop" Aug 13 02:06:39.934516 kubelet[2718]: E0813 02:06:39.934505 2718 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 02:06:39.934989 kubelet[2718]: I0813 02:06:39.934861 2718 factory.go:221] Registration of the systemd container factory successfully Aug 13 02:06:39.934989 kubelet[2718]: I0813 02:06:39.934920 2718 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 02:06:39.941506 kubelet[2718]: E0813 02:06:39.941476 2718 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 02:06:39.941733 kubelet[2718]: I0813 02:06:39.941711 2718 factory.go:221] Registration of the containerd container factory successfully Aug 13 02:06:39.994034 kubelet[2718]: I0813 02:06:39.994011 2718 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 02:06:39.994246 kubelet[2718]: I0813 02:06:39.994172 2718 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 02:06:39.994371 kubelet[2718]: I0813 02:06:39.994361 2718 state_mem.go:36] "Initialized new in-memory state store" Aug 13 02:06:39.994533 kubelet[2718]: I0813 02:06:39.994520 2718 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 02:06:39.994626 kubelet[2718]: I0813 02:06:39.994583 2718 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 02:06:39.994669 kubelet[2718]: I0813 02:06:39.994661 2718 policy_none.go:49] "None policy: Start" Aug 13 02:06:39.994717 kubelet[2718]: I0813 02:06:39.994709 2718 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 02:06:39.994761 kubelet[2718]: I0813 02:06:39.994753 2718 state_mem.go:35] "Initializing new in-memory state store" Aug 13 02:06:39.994880 kubelet[2718]: I0813 02:06:39.994870 2718 state_mem.go:75] "Updated machine memory state" Aug 13 02:06:39.999407 kubelet[2718]: I0813 02:06:39.999392 2718 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 02:06:39.999797 kubelet[2718]: I0813 02:06:39.999785 2718 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 02:06:39.999876 kubelet[2718]: I0813 02:06:39.999854 2718 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 02:06:40.000119 kubelet[2718]: I0813 02:06:40.000106 2718 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 02:06:40.001021 kubelet[2718]: E0813 02:06:40.001006 2718 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Aug 13 02:06:40.035672 kubelet[2718]: I0813 02:06:40.035324 2718 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-236-122-171" Aug 13 02:06:40.035672 kubelet[2718]: I0813 02:06:40.035427 2718 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-236-122-171" Aug 13 02:06:40.035672 kubelet[2718]: I0813 02:06:40.035556 2718 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-236-122-171" Aug 13 02:06:40.104099 kubelet[2718]: I0813 02:06:40.103000 2718 kubelet_node_status.go:75] "Attempting to register node" node="172-236-122-171" Aug 13 02:06:40.108942 kubelet[2718]: I0813 02:06:40.108896 2718 kubelet_node_status.go:124] "Node was previously registered" node="172-236-122-171" Aug 13 02:06:40.109067 kubelet[2718]: I0813 02:06:40.109040 2718 kubelet_node_status.go:78] "Successfully registered node" node="172-236-122-171" Aug 13 02:06:40.133033 kubelet[2718]: I0813 02:06:40.132997 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/503bc69c8c96522ebc10e0826e62ddfe-k8s-certs\") pod \"kube-apiserver-172-236-122-171\" (UID: \"503bc69c8c96522ebc10e0826e62ddfe\") " pod="kube-system/kube-apiserver-172-236-122-171" Aug 13 02:06:40.133033 kubelet[2718]: I0813 02:06:40.133023 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/6577143aeacdd825758eeac43577d224-flexvolume-dir\") pod \"kube-controller-manager-172-236-122-171\" (UID: \"6577143aeacdd825758eeac43577d224\") " pod="kube-system/kube-controller-manager-172-236-122-171" Aug 13 02:06:40.133126 kubelet[2718]: I0813 02:06:40.133040 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/6577143aeacdd825758eeac43577d224-k8s-certs\") pod \"kube-controller-manager-172-236-122-171\" (UID: \"6577143aeacdd825758eeac43577d224\") " pod="kube-system/kube-controller-manager-172-236-122-171" Aug 13 02:06:40.133126 kubelet[2718]: I0813 02:06:40.133054 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/6577143aeacdd825758eeac43577d224-usr-share-ca-certificates\") pod \"kube-controller-manager-172-236-122-171\" (UID: \"6577143aeacdd825758eeac43577d224\") " pod="kube-system/kube-controller-manager-172-236-122-171" Aug 13 02:06:40.133126 kubelet[2718]: I0813 02:06:40.133070 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/503bc69c8c96522ebc10e0826e62ddfe-ca-certs\") pod \"kube-apiserver-172-236-122-171\" (UID: \"503bc69c8c96522ebc10e0826e62ddfe\") " pod="kube-system/kube-apiserver-172-236-122-171" Aug 13 02:06:40.133126 kubelet[2718]: I0813 02:06:40.133083 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/503bc69c8c96522ebc10e0826e62ddfe-usr-share-ca-certificates\") pod \"kube-apiserver-172-236-122-171\" (UID: \"503bc69c8c96522ebc10e0826e62ddfe\") " pod="kube-system/kube-apiserver-172-236-122-171" Aug 13 02:06:40.133126 kubelet[2718]: I0813 02:06:40.133097 2718 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/6577143aeacdd825758eeac43577d224-ca-certs\") pod \"kube-controller-manager-172-236-122-171\" (UID: \"6577143aeacdd825758eeac43577d224\") " pod="kube-system/kube-controller-manager-172-236-122-171" Aug 13 02:06:40.133230 kubelet[2718]: I0813 02:06:40.133111 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/6577143aeacdd825758eeac43577d224-kubeconfig\") pod \"kube-controller-manager-172-236-122-171\" (UID: \"6577143aeacdd825758eeac43577d224\") " pod="kube-system/kube-controller-manager-172-236-122-171" Aug 13 02:06:40.133230 kubelet[2718]: I0813 02:06:40.133124 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/253653dddecfabea52a5d44b8b9604cd-kubeconfig\") pod \"kube-scheduler-172-236-122-171\" (UID: \"253653dddecfabea52a5d44b8b9604cd\") " pod="kube-system/kube-scheduler-172-236-122-171" Aug 13 02:06:40.341997 kubelet[2718]: E0813 02:06:40.341709 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:06:40.342113 kubelet[2718]: E0813 02:06:40.342019 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:06:40.342113 kubelet[2718]: E0813 02:06:40.342108 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:06:40.907613 kubelet[2718]: I0813 02:06:40.907343 2718 apiserver.go:52] "Watching apiserver" Aug 13 02:06:40.932645 kubelet[2718]: I0813 02:06:40.931890 2718 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 02:06:40.975231 kubelet[2718]: E0813 02:06:40.975134 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:06:40.975530 kubelet[2718]: E0813 02:06:40.975513 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:06:40.975814 kubelet[2718]: E0813 02:06:40.975800 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:06:41.010904 kubelet[2718]: I0813 02:06:41.010836 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-236-122-171" podStartSLOduration=1.01082417 podStartE2EDuration="1.01082417s" podCreationTimestamp="2025-08-13 02:06:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 02:06:41.010515842 +0000 UTC m=+1.166406746" watchObservedRunningTime="2025-08-13 02:06:41.01082417 +0000 UTC m=+1.166715074" Aug 13 02:06:41.011181 kubelet[2718]: I0813 
02:06:41.011103 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-236-122-171" podStartSLOduration=1.011097044 podStartE2EDuration="1.011097044s" podCreationTimestamp="2025-08-13 02:06:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 02:06:41.001322214 +0000 UTC m=+1.157213118" watchObservedRunningTime="2025-08-13 02:06:41.011097044 +0000 UTC m=+1.166987948" Aug 13 02:06:41.345347 sshd[2219]: kex_exchange_identification: read: Connection reset by peer Aug 13 02:06:41.345347 sshd[2219]: Connection reset by 78.128.112.74 port 59048 Aug 13 02:06:41.346840 systemd[1]: sshd@8-172.236.122.171:22-78.128.112.74:59048.service: Deactivated successfully. Aug 13 02:06:41.976634 kubelet[2718]: E0813 02:06:41.976527 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:06:41.977235 kubelet[2718]: E0813 02:06:41.977201 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:06:43.005656 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Aug 13 02:06:46.313750 kubelet[2718]: I0813 02:06:46.313718 2718 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 02:06:46.314205 containerd[1542]: time="2025-08-13T02:06:46.314097701Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 02:06:46.314470 kubelet[2718]: I0813 02:06:46.314280 2718 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 02:06:47.169119 kubelet[2718]: I0813 02:06:47.168949 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-236-122-171" podStartSLOduration=7.168933742 podStartE2EDuration="7.168933742s" podCreationTimestamp="2025-08-13 02:06:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 02:06:41.018017774 +0000 UTC m=+1.173908678" watchObservedRunningTime="2025-08-13 02:06:47.168933742 +0000 UTC m=+7.324824646" Aug 13 02:06:47.181828 systemd[1]: Created slice kubepods-besteffort-pod2db0d64c_2389_4b0b_99fa_d2a4b73d0335.slice - libcontainer container kubepods-besteffort-pod2db0d64c_2389_4b0b_99fa_d2a4b73d0335.slice. 
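
With the node registered, the kubelet receives its pod CIDR (newPodCIDR="192.168.0.0/24" above) and pushes it to containerd through the CRI runtime-config update. A quick standard-library check of how many pod addresses that range provides:

    # Address budget of the pod CIDR reported in the log.
    import ipaddress

    pod_cidr = ipaddress.ip_network("192.168.0.0/24")  # newPodCIDR from the log
    total = pod_cidr.num_addresses                     # 256
    print(f"{pod_cidr}: {total} addresses, ~{total - 2} usable for pods")

The kubelet's default limit of 110 pods per node fits comfortably inside that range.
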
Aug 13 02:06:47.274917 kubelet[2718]: I0813 02:06:47.274849 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2db0d64c-2389-4b0b-99fa-d2a4b73d0335-xtables-lock\") pod \"kube-proxy-s4bl4\" (UID: \"2db0d64c-2389-4b0b-99fa-d2a4b73d0335\") " pod="kube-system/kube-proxy-s4bl4" Aug 13 02:06:47.274917 kubelet[2718]: I0813 02:06:47.274893 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldz6w\" (UniqueName: \"kubernetes.io/projected/2db0d64c-2389-4b0b-99fa-d2a4b73d0335-kube-api-access-ldz6w\") pod \"kube-proxy-s4bl4\" (UID: \"2db0d64c-2389-4b0b-99fa-d2a4b73d0335\") " pod="kube-system/kube-proxy-s4bl4" Aug 13 02:06:47.274917 kubelet[2718]: I0813 02:06:47.274918 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2db0d64c-2389-4b0b-99fa-d2a4b73d0335-kube-proxy\") pod \"kube-proxy-s4bl4\" (UID: \"2db0d64c-2389-4b0b-99fa-d2a4b73d0335\") " pod="kube-system/kube-proxy-s4bl4" Aug 13 02:06:47.275150 kubelet[2718]: I0813 02:06:47.274937 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2db0d64c-2389-4b0b-99fa-d2a4b73d0335-lib-modules\") pod \"kube-proxy-s4bl4\" (UID: \"2db0d64c-2389-4b0b-99fa-d2a4b73d0335\") " pod="kube-system/kube-proxy-s4bl4" Aug 13 02:06:47.411007 systemd[1]: Created slice kubepods-besteffort-podf34ef7fc_f010_4a73_ba04_0097b359cd72.slice - libcontainer container kubepods-besteffort-podf34ef7fc_f010_4a73_ba04_0097b359cd72.slice. Aug 13 02:06:47.477130 kubelet[2718]: I0813 02:06:47.477008 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrp42\" (UniqueName: \"kubernetes.io/projected/f34ef7fc-f010-4a73-ba04-0097b359cd72-kube-api-access-xrp42\") pod \"tigera-operator-747864d56d-nxhh9\" (UID: \"f34ef7fc-f010-4a73-ba04-0097b359cd72\") " pod="tigera-operator/tigera-operator-747864d56d-nxhh9" Aug 13 02:06:47.477130 kubelet[2718]: I0813 02:06:47.477049 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f34ef7fc-f010-4a73-ba04-0097b359cd72-var-lib-calico\") pod \"tigera-operator-747864d56d-nxhh9\" (UID: \"f34ef7fc-f010-4a73-ba04-0097b359cd72\") " pod="tigera-operator/tigera-operator-747864d56d-nxhh9" Aug 13 02:06:47.489241 kubelet[2718]: E0813 02:06:47.489200 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:06:47.489677 containerd[1542]: time="2025-08-13T02:06:47.489630334Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s4bl4,Uid:2db0d64c-2389-4b0b-99fa-d2a4b73d0335,Namespace:kube-system,Attempt:0,}" Aug 13 02:06:47.507245 containerd[1542]: time="2025-08-13T02:06:47.507175403Z" level=info msg="connecting to shim 9bfac0bddf29d19494b26e221103c4cb7df9e726dfb25491fdb4f1eb635bfc47" address="unix:///run/containerd/s/c313b29195411354bc8c1e4efd37401379e6da1d22fa78e6c5d53bb6e7680598" namespace=k8s.io protocol=ttrpc version=3 Aug 13 02:06:47.529721 systemd[1]: Started cri-containerd-9bfac0bddf29d19494b26e221103c4cb7df9e726dfb25491fdb4f1eb635bfc47.scope - libcontainer container 
9bfac0bddf29d19494b26e221103c4cb7df9e726dfb25491fdb4f1eb635bfc47. Aug 13 02:06:47.556294 containerd[1542]: time="2025-08-13T02:06:47.556267362Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-s4bl4,Uid:2db0d64c-2389-4b0b-99fa-d2a4b73d0335,Namespace:kube-system,Attempt:0,} returns sandbox id \"9bfac0bddf29d19494b26e221103c4cb7df9e726dfb25491fdb4f1eb635bfc47\"" Aug 13 02:06:47.556899 kubelet[2718]: E0813 02:06:47.556879 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:06:47.560975 containerd[1542]: time="2025-08-13T02:06:47.560898341Z" level=info msg="CreateContainer within sandbox \"9bfac0bddf29d19494b26e221103c4cb7df9e726dfb25491fdb4f1eb635bfc47\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 02:06:47.570613 containerd[1542]: time="2025-08-13T02:06:47.569867075Z" level=info msg="Container bf251838e1c2083246e6f389d829ca721cce59cb3b5daa5a784b931c75894f79: CDI devices from CRI Config.CDIDevices: []" Aug 13 02:06:47.575165 containerd[1542]: time="2025-08-13T02:06:47.575140876Z" level=info msg="CreateContainer within sandbox \"9bfac0bddf29d19494b26e221103c4cb7df9e726dfb25491fdb4f1eb635bfc47\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"bf251838e1c2083246e6f389d829ca721cce59cb3b5daa5a784b931c75894f79\"" Aug 13 02:06:47.575837 containerd[1542]: time="2025-08-13T02:06:47.575791169Z" level=info msg="StartContainer for \"bf251838e1c2083246e6f389d829ca721cce59cb3b5daa5a784b931c75894f79\"" Aug 13 02:06:47.578642 containerd[1542]: time="2025-08-13T02:06:47.577966602Z" level=info msg="connecting to shim bf251838e1c2083246e6f389d829ca721cce59cb3b5daa5a784b931c75894f79" address="unix:///run/containerd/s/c313b29195411354bc8c1e4efd37401379e6da1d22fa78e6c5d53bb6e7680598" protocol=ttrpc version=3 Aug 13 02:06:47.598939 systemd[1]: Started cri-containerd-bf251838e1c2083246e6f389d829ca721cce59cb3b5daa5a784b931c75894f79.scope - libcontainer container bf251838e1c2083246e6f389d829ca721cce59cb3b5daa5a784b931c75894f79. Aug 13 02:06:47.636697 containerd[1542]: time="2025-08-13T02:06:47.636620711Z" level=info msg="StartContainer for \"bf251838e1c2083246e6f389d829ca721cce59cb3b5daa5a784b931c75894f79\" returns successfully" Aug 13 02:06:47.714333 containerd[1542]: time="2025-08-13T02:06:47.714298299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-nxhh9,Uid:f34ef7fc-f010-4a73-ba04-0097b359cd72,Namespace:tigera-operator,Attempt:0,}" Aug 13 02:06:47.732166 containerd[1542]: time="2025-08-13T02:06:47.731791259Z" level=info msg="connecting to shim c8d4440f154b4962b783d6ce716a8d0fb14accf61d4280ad1fa7e9d8af99d14d" address="unix:///run/containerd/s/50099600a7675248fef2101e9d3f820fbcbfd1e6eb62d22a3cf63b638c6abc02" namespace=k8s.io protocol=ttrpc version=3 Aug 13 02:06:47.754737 systemd[1]: Started cri-containerd-c8d4440f154b4962b783d6ce716a8d0fb14accf61d4280ad1fa7e9d8af99d14d.scope - libcontainer container c8d4440f154b4962b783d6ce716a8d0fb14accf61d4280ad1fa7e9d8af99d14d. 
Aug 13 02:06:47.812067 containerd[1542]: time="2025-08-13T02:06:47.812026520Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-nxhh9,Uid:f34ef7fc-f010-4a73-ba04-0097b359cd72,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"c8d4440f154b4962b783d6ce716a8d0fb14accf61d4280ad1fa7e9d8af99d14d\"" Aug 13 02:06:47.814952 containerd[1542]: time="2025-08-13T02:06:47.813510661Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 02:06:47.991232 kubelet[2718]: E0813 02:06:47.990222 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:06:48.242427 kubelet[2718]: E0813 02:06:48.242090 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:06:48.256789 kubelet[2718]: I0813 02:06:48.256737 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-s4bl4" podStartSLOduration=1.256717814 podStartE2EDuration="1.256717814s" podCreationTimestamp="2025-08-13 02:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 02:06:48.007002855 +0000 UTC m=+8.162893759" watchObservedRunningTime="2025-08-13 02:06:48.256717814 +0000 UTC m=+8.412608718" Aug 13 02:06:48.289615 kubelet[2718]: E0813 02:06:48.288743 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:06:48.994402 kubelet[2718]: E0813 02:06:48.993967 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:06:48.995487 kubelet[2718]: E0813 02:06:48.995411 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:06:49.951725 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4021000734.mount: Deactivated successfully. 
Aug 13 02:06:49.957573 kubelet[2718]: E0813 02:06:49.957518 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:06:50.008090 kubelet[2718]: E0813 02:06:50.008054 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:06:50.010828 kubelet[2718]: E0813 02:06:50.010803 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:06:50.011323 kubelet[2718]: E0813 02:06:50.011308 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:06:50.405157 containerd[1542]: time="2025-08-13T02:06:50.405113114Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 02:06:50.405946 containerd[1542]: time="2025-08-13T02:06:50.405767899Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Aug 13 02:06:50.406471 containerd[1542]: time="2025-08-13T02:06:50.406440834Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 02:06:50.407873 containerd[1542]: time="2025-08-13T02:06:50.407843693Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 02:06:50.408497 containerd[1542]: time="2025-08-13T02:06:50.408468199Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.593670481s" Aug 13 02:06:50.408564 containerd[1542]: time="2025-08-13T02:06:50.408550307Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 02:06:50.411320 containerd[1542]: time="2025-08-13T02:06:50.411293876Z" level=info msg="CreateContainer within sandbox \"c8d4440f154b4962b783d6ce716a8d0fb14accf61d4280ad1fa7e9d8af99d14d\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 13 02:06:50.420271 containerd[1542]: time="2025-08-13T02:06:50.416542430Z" level=info msg="Container 889a71525439d331d4e5748d987e7898240f32388c37ef85698875c3cec4282a: CDI devices from CRI Config.CDIDevices: []" Aug 13 02:06:50.432656 containerd[1542]: time="2025-08-13T02:06:50.432553554Z" level=info msg="CreateContainer within sandbox \"c8d4440f154b4962b783d6ce716a8d0fb14accf61d4280ad1fa7e9d8af99d14d\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"889a71525439d331d4e5748d987e7898240f32388c37ef85698875c3cec4282a\"" Aug 13 02:06:50.433805 containerd[1542]: time="2025-08-13T02:06:50.433284768Z" level=info msg="StartContainer for 
\"889a71525439d331d4e5748d987e7898240f32388c37ef85698875c3cec4282a\"" Aug 13 02:06:50.434253 containerd[1542]: time="2025-08-13T02:06:50.434232287Z" level=info msg="connecting to shim 889a71525439d331d4e5748d987e7898240f32388c37ef85698875c3cec4282a" address="unix:///run/containerd/s/50099600a7675248fef2101e9d3f820fbcbfd1e6eb62d22a3cf63b638c6abc02" protocol=ttrpc version=3 Aug 13 02:06:50.455733 systemd[1]: Started cri-containerd-889a71525439d331d4e5748d987e7898240f32388c37ef85698875c3cec4282a.scope - libcontainer container 889a71525439d331d4e5748d987e7898240f32388c37ef85698875c3cec4282a. Aug 13 02:06:50.486194 containerd[1542]: time="2025-08-13T02:06:50.486157094Z" level=info msg="StartContainer for \"889a71525439d331d4e5748d987e7898240f32388c37ef85698875c3cec4282a\" returns successfully" Aug 13 02:06:51.018326 kubelet[2718]: I0813 02:06:51.018194 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-nxhh9" podStartSLOduration=1.422270169 podStartE2EDuration="4.018180274s" podCreationTimestamp="2025-08-13 02:06:47 +0000 UTC" firstStartedPulling="2025-08-13 02:06:47.81318808 +0000 UTC m=+7.969078984" lastFinishedPulling="2025-08-13 02:06:50.409098185 +0000 UTC m=+10.564989089" observedRunningTime="2025-08-13 02:06:51.017987428 +0000 UTC m=+11.173878332" watchObservedRunningTime="2025-08-13 02:06:51.018180274 +0000 UTC m=+11.174071178" Aug 13 02:06:56.079519 sudo[1807]: pam_unix(sudo:session): session closed for user root Aug 13 02:06:56.129692 sshd[1806]: Connection closed by 147.75.109.163 port 43830 Aug 13 02:06:56.130820 sshd-session[1804]: pam_unix(sshd:session): session closed for user core Aug 13 02:06:56.134829 systemd-logind[1527]: Session 7 logged out. Waiting for processes to exit. Aug 13 02:06:56.137206 systemd[1]: sshd@7-172.236.122.171:22-147.75.109.163:43830.service: Deactivated successfully. Aug 13 02:06:56.140176 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 02:06:56.140860 systemd[1]: session-7.scope: Consumed 3.218s CPU time, 224.9M memory peak. Aug 13 02:06:56.144104 systemd-logind[1527]: Removed session 7. Aug 13 02:06:56.924008 update_engine[1529]: I20250813 02:06:56.923105 1529 update_attempter.cc:509] Updating boot flags... Aug 13 02:06:59.758746 systemd[1]: Created slice kubepods-besteffort-pod4be159d8_84a9_41ca_8f38_41613e553491.slice - libcontainer container kubepods-besteffort-pod4be159d8_84a9_41ca_8f38_41613e553491.slice. 
Aug 13 02:06:59.859059 kubelet[2718]: I0813 02:06:59.858980 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/4be159d8-84a9-41ca-8f38-41613e553491-typha-certs\") pod \"calico-typha-67c8447dcf-wsn77\" (UID: \"4be159d8-84a9-41ca-8f38-41613e553491\") " pod="calico-system/calico-typha-67c8447dcf-wsn77" Aug 13 02:06:59.859059 kubelet[2718]: I0813 02:06:59.859054 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4be159d8-84a9-41ca-8f38-41613e553491-tigera-ca-bundle\") pod \"calico-typha-67c8447dcf-wsn77\" (UID: \"4be159d8-84a9-41ca-8f38-41613e553491\") " pod="calico-system/calico-typha-67c8447dcf-wsn77" Aug 13 02:06:59.859653 kubelet[2718]: I0813 02:06:59.859094 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nb7lf\" (UniqueName: \"kubernetes.io/projected/4be159d8-84a9-41ca-8f38-41613e553491-kube-api-access-nb7lf\") pod \"calico-typha-67c8447dcf-wsn77\" (UID: \"4be159d8-84a9-41ca-8f38-41613e553491\") " pod="calico-system/calico-typha-67c8447dcf-wsn77" Aug 13 02:07:00.066179 kubelet[2718]: E0813 02:07:00.065924 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:07:00.066887 containerd[1542]: time="2025-08-13T02:07:00.066848057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67c8447dcf-wsn77,Uid:4be159d8-84a9-41ca-8f38-41613e553491,Namespace:calico-system,Attempt:0,}" Aug 13 02:07:00.088636 containerd[1542]: time="2025-08-13T02:07:00.088324685Z" level=info msg="connecting to shim d455610107366cbcc93bc60280d9920d0c35dfd542d4aada989959d014fae7b4" address="unix:///run/containerd/s/3071e1d95758cfce50c9e00838e8d61929ecb986f72e85b4530d3e66bfbf088f" namespace=k8s.io protocol=ttrpc version=3 Aug 13 02:07:00.117732 systemd[1]: Started cri-containerd-d455610107366cbcc93bc60280d9920d0c35dfd542d4aada989959d014fae7b4.scope - libcontainer container d455610107366cbcc93bc60280d9920d0c35dfd542d4aada989959d014fae7b4. Aug 13 02:07:00.155176 systemd[1]: Created slice kubepods-besteffort-pode8f51745_7382_4ead_96df_a31572ad4e1f.slice - libcontainer container kubepods-besteffort-pode8f51745_7382_4ead_96df_a31572ad4e1f.slice. 
Aug 13 02:07:00.160453 kubelet[2718]: I0813 02:07:00.160415 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e8f51745-7382-4ead-96df-a31572ad4e1f-lib-modules\") pod \"calico-node-cdfxj\" (UID: \"e8f51745-7382-4ead-96df-a31572ad4e1f\") " pod="calico-system/calico-node-cdfxj" Aug 13 02:07:00.160453 kubelet[2718]: I0813 02:07:00.160453 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e8f51745-7382-4ead-96df-a31572ad4e1f-flexvol-driver-host\") pod \"calico-node-cdfxj\" (UID: \"e8f51745-7382-4ead-96df-a31572ad4e1f\") " pod="calico-system/calico-node-cdfxj" Aug 13 02:07:00.160553 kubelet[2718]: I0813 02:07:00.160469 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e8f51745-7382-4ead-96df-a31572ad4e1f-policysync\") pod \"calico-node-cdfxj\" (UID: \"e8f51745-7382-4ead-96df-a31572ad4e1f\") " pod="calico-system/calico-node-cdfxj" Aug 13 02:07:00.160553 kubelet[2718]: I0813 02:07:00.160485 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e8f51745-7382-4ead-96df-a31572ad4e1f-cni-log-dir\") pod \"calico-node-cdfxj\" (UID: \"e8f51745-7382-4ead-96df-a31572ad4e1f\") " pod="calico-system/calico-node-cdfxj" Aug 13 02:07:00.160553 kubelet[2718]: I0813 02:07:00.160498 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e8f51745-7382-4ead-96df-a31572ad4e1f-cni-net-dir\") pod \"calico-node-cdfxj\" (UID: \"e8f51745-7382-4ead-96df-a31572ad4e1f\") " pod="calico-system/calico-node-cdfxj" Aug 13 02:07:00.160553 kubelet[2718]: I0813 02:07:00.160512 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e8f51745-7382-4ead-96df-a31572ad4e1f-cni-bin-dir\") pod \"calico-node-cdfxj\" (UID: \"e8f51745-7382-4ead-96df-a31572ad4e1f\") " pod="calico-system/calico-node-cdfxj" Aug 13 02:07:00.160553 kubelet[2718]: I0813 02:07:00.160524 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e8f51745-7382-4ead-96df-a31572ad4e1f-xtables-lock\") pod \"calico-node-cdfxj\" (UID: \"e8f51745-7382-4ead-96df-a31572ad4e1f\") " pod="calico-system/calico-node-cdfxj" Aug 13 02:07:00.160709 kubelet[2718]: I0813 02:07:00.160541 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e8f51745-7382-4ead-96df-a31572ad4e1f-node-certs\") pod \"calico-node-cdfxj\" (UID: \"e8f51745-7382-4ead-96df-a31572ad4e1f\") " pod="calico-system/calico-node-cdfxj" Aug 13 02:07:00.160709 kubelet[2718]: I0813 02:07:00.160559 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e8f51745-7382-4ead-96df-a31572ad4e1f-var-lib-calico\") pod \"calico-node-cdfxj\" (UID: \"e8f51745-7382-4ead-96df-a31572ad4e1f\") " pod="calico-system/calico-node-cdfxj" Aug 13 02:07:00.160709 kubelet[2718]: I0813 02:07:00.160574 2718 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e8f51745-7382-4ead-96df-a31572ad4e1f-tigera-ca-bundle\") pod \"calico-node-cdfxj\" (UID: \"e8f51745-7382-4ead-96df-a31572ad4e1f\") " pod="calico-system/calico-node-cdfxj" Aug 13 02:07:00.161621 kubelet[2718]: I0813 02:07:00.161444 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e8f51745-7382-4ead-96df-a31572ad4e1f-var-run-calico\") pod \"calico-node-cdfxj\" (UID: \"e8f51745-7382-4ead-96df-a31572ad4e1f\") " pod="calico-system/calico-node-cdfxj" Aug 13 02:07:00.161621 kubelet[2718]: I0813 02:07:00.161474 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j884b\" (UniqueName: \"kubernetes.io/projected/e8f51745-7382-4ead-96df-a31572ad4e1f-kube-api-access-j884b\") pod \"calico-node-cdfxj\" (UID: \"e8f51745-7382-4ead-96df-a31572ad4e1f\") " pod="calico-system/calico-node-cdfxj" Aug 13 02:07:00.266006 kubelet[2718]: E0813 02:07:00.265970 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.266006 kubelet[2718]: W0813 02:07:00.265995 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.266313 kubelet[2718]: E0813 02:07:00.266288 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.273006 kubelet[2718]: E0813 02:07:00.272931 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.273006 kubelet[2718]: W0813 02:07:00.272950 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.273006 kubelet[2718]: E0813 02:07:00.272969 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.273549 kubelet[2718]: E0813 02:07:00.273523 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.275145 kubelet[2718]: W0813 02:07:00.274797 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.275145 kubelet[2718]: E0813 02:07:00.274822 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 02:07:00.297183 containerd[1542]: time="2025-08-13T02:07:00.297126667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-67c8447dcf-wsn77,Uid:4be159d8-84a9-41ca-8f38-41613e553491,Namespace:calico-system,Attempt:0,} returns sandbox id \"d455610107366cbcc93bc60280d9920d0c35dfd542d4aada989959d014fae7b4\"" Aug 13 02:07:00.298155 kubelet[2718]: E0813 02:07:00.297987 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:07:00.299473 containerd[1542]: time="2025-08-13T02:07:00.299435687Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 13 02:07:00.442498 kubelet[2718]: E0813 02:07:00.442372 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r6mhv" podUID="b671177d-3397-4938-853c-0cced3d0e9f5" Aug 13 02:07:00.452536 kubelet[2718]: E0813 02:07:00.452469 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.452536 kubelet[2718]: W0813 02:07:00.452486 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.452536 kubelet[2718]: E0813 02:07:00.452504 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.453083 kubelet[2718]: E0813 02:07:00.453001 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.453083 kubelet[2718]: W0813 02:07:00.453012 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.453083 kubelet[2718]: E0813 02:07:00.453023 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.453414 kubelet[2718]: E0813 02:07:00.453380 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.453500 kubelet[2718]: W0813 02:07:00.453463 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.453576 kubelet[2718]: E0813 02:07:00.453538 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 02:07:00.453978 kubelet[2718]: E0813 02:07:00.453967 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.453978 kubelet[2718]: W0813 02:07:00.454007 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.453978 kubelet[2718]: E0813 02:07:00.454016 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.454430 kubelet[2718]: E0813 02:07:00.454363 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.454430 kubelet[2718]: W0813 02:07:00.454373 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.454430 kubelet[2718]: E0813 02:07:00.454381 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.454757 kubelet[2718]: E0813 02:07:00.454746 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.454907 kubelet[2718]: W0813 02:07:00.454836 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.454907 kubelet[2718]: E0813 02:07:00.454851 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.455221 kubelet[2718]: E0813 02:07:00.455133 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.455221 kubelet[2718]: W0813 02:07:00.455176 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.455221 kubelet[2718]: E0813 02:07:00.455184 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.455625 kubelet[2718]: E0813 02:07:00.455510 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.455625 kubelet[2718]: W0813 02:07:00.455553 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.455625 kubelet[2718]: E0813 02:07:00.455563 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 02:07:00.455971 kubelet[2718]: E0813 02:07:00.455883 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.455971 kubelet[2718]: W0813 02:07:00.455892 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.455971 kubelet[2718]: E0813 02:07:00.455900 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.456275 kubelet[2718]: E0813 02:07:00.456200 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.456275 kubelet[2718]: W0813 02:07:00.456210 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.456275 kubelet[2718]: E0813 02:07:00.456217 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.456630 kubelet[2718]: E0813 02:07:00.456524 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.456630 kubelet[2718]: W0813 02:07:00.456554 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.456630 kubelet[2718]: E0813 02:07:00.456563 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.457013 kubelet[2718]: E0813 02:07:00.456935 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.457013 kubelet[2718]: W0813 02:07:00.456945 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.457013 kubelet[2718]: E0813 02:07:00.456953 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.457351 kubelet[2718]: E0813 02:07:00.457285 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.457351 kubelet[2718]: W0813 02:07:00.457295 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.457351 kubelet[2718]: E0813 02:07:00.457303 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 02:07:00.457722 kubelet[2718]: E0813 02:07:00.457650 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.457722 kubelet[2718]: W0813 02:07:00.457661 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.457722 kubelet[2718]: E0813 02:07:00.457669 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.458051 kubelet[2718]: E0813 02:07:00.457989 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.458051 kubelet[2718]: W0813 02:07:00.458000 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.458051 kubelet[2718]: E0813 02:07:00.458007 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.458406 kubelet[2718]: E0813 02:07:00.458327 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.458406 kubelet[2718]: W0813 02:07:00.458337 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.458406 kubelet[2718]: E0813 02:07:00.458344 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.458743 kubelet[2718]: E0813 02:07:00.458732 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.458853 kubelet[2718]: W0813 02:07:00.458806 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.458853 kubelet[2718]: E0813 02:07:00.458818 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.459176 kubelet[2718]: E0813 02:07:00.459073 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.459176 kubelet[2718]: W0813 02:07:00.459114 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.459176 kubelet[2718]: E0813 02:07:00.459122 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 02:07:00.459532 kubelet[2718]: E0813 02:07:00.459487 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.459532 kubelet[2718]: W0813 02:07:00.459499 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.459701 kubelet[2718]: E0813 02:07:00.459508 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.460100 kubelet[2718]: E0813 02:07:00.460034 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.460100 kubelet[2718]: W0813 02:07:00.460074 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.460100 kubelet[2718]: E0813 02:07:00.460083 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.462324 containerd[1542]: time="2025-08-13T02:07:00.462287051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cdfxj,Uid:e8f51745-7382-4ead-96df-a31572ad4e1f,Namespace:calico-system,Attempt:0,}" Aug 13 02:07:00.465187 kubelet[2718]: E0813 02:07:00.465148 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.465187 kubelet[2718]: W0813 02:07:00.465161 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.465187 kubelet[2718]: E0813 02:07:00.465173 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.466267 kubelet[2718]: I0813 02:07:00.465398 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/b671177d-3397-4938-853c-0cced3d0e9f5-kubelet-dir\") pod \"csi-node-driver-r6mhv\" (UID: \"b671177d-3397-4938-853c-0cced3d0e9f5\") " pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:07:00.466267 kubelet[2718]: E0813 02:07:00.465694 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.466267 kubelet[2718]: W0813 02:07:00.465717 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.466267 kubelet[2718]: E0813 02:07:00.465732 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 02:07:00.466267 kubelet[2718]: E0813 02:07:00.466262 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.466267 kubelet[2718]: W0813 02:07:00.466273 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.466693 kubelet[2718]: E0813 02:07:00.466294 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.467042 kubelet[2718]: E0813 02:07:00.467025 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.467042 kubelet[2718]: W0813 02:07:00.467039 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.467405 kubelet[2718]: E0813 02:07:00.467048 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.467405 kubelet[2718]: I0813 02:07:00.467070 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/b671177d-3397-4938-853c-0cced3d0e9f5-socket-dir\") pod \"csi-node-driver-r6mhv\" (UID: \"b671177d-3397-4938-853c-0cced3d0e9f5\") " pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:07:00.468012 kubelet[2718]: E0813 02:07:00.467930 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.468012 kubelet[2718]: W0813 02:07:00.467952 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.468140 kubelet[2718]: E0813 02:07:00.468094 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.469051 kubelet[2718]: E0813 02:07:00.469031 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.469051 kubelet[2718]: W0813 02:07:00.469048 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.469051 kubelet[2718]: E0813 02:07:00.469065 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 02:07:00.469306 kubelet[2718]: I0813 02:07:00.469084 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrkdm\" (UniqueName: \"kubernetes.io/projected/b671177d-3397-4938-853c-0cced3d0e9f5-kube-api-access-qrkdm\") pod \"csi-node-driver-r6mhv\" (UID: \"b671177d-3397-4938-853c-0cced3d0e9f5\") " pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:07:00.469681 kubelet[2718]: E0813 02:07:00.469651 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.469681 kubelet[2718]: W0813 02:07:00.469666 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.469681 kubelet[2718]: E0813 02:07:00.469675 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.470541 kubelet[2718]: E0813 02:07:00.470524 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.470541 kubelet[2718]: W0813 02:07:00.470538 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.470722 kubelet[2718]: E0813 02:07:00.470555 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.471307 kubelet[2718]: E0813 02:07:00.471289 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.471307 kubelet[2718]: W0813 02:07:00.471304 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.471478 kubelet[2718]: E0813 02:07:00.471322 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.472063 kubelet[2718]: E0813 02:07:00.472047 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.472063 kubelet[2718]: W0813 02:07:00.472061 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.472139 kubelet[2718]: E0813 02:07:00.472073 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 02:07:00.472360 kubelet[2718]: I0813 02:07:00.472329 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/b671177d-3397-4938-853c-0cced3d0e9f5-registration-dir\") pod \"csi-node-driver-r6mhv\" (UID: \"b671177d-3397-4938-853c-0cced3d0e9f5\") " pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:07:00.472762 kubelet[2718]: E0813 02:07:00.472640 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.472762 kubelet[2718]: W0813 02:07:00.472688 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.472762 kubelet[2718]: E0813 02:07:00.472708 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.473131 kubelet[2718]: E0813 02:07:00.473120 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.473368 kubelet[2718]: W0813 02:07:00.473203 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.473368 kubelet[2718]: E0813 02:07:00.473227 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.473804 kubelet[2718]: E0813 02:07:00.473793 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.473867 kubelet[2718]: W0813 02:07:00.473846 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.474179 kubelet[2718]: E0813 02:07:00.473952 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.474179 kubelet[2718]: I0813 02:07:00.473997 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/b671177d-3397-4938-853c-0cced3d0e9f5-varrun\") pod \"csi-node-driver-r6mhv\" (UID: \"b671177d-3397-4938-853c-0cced3d0e9f5\") " pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:07:00.474372 kubelet[2718]: E0813 02:07:00.474339 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.474462 kubelet[2718]: W0813 02:07:00.474451 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.474524 kubelet[2718]: E0813 02:07:00.474514 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 02:07:00.474921 kubelet[2718]: E0813 02:07:00.474873 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.474921 kubelet[2718]: W0813 02:07:00.474883 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.474921 kubelet[2718]: E0813 02:07:00.474892 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.482832 containerd[1542]: time="2025-08-13T02:07:00.482720823Z" level=info msg="connecting to shim b8fbca66d16757f0d8830ae86752b685e2ed86c07fd8628668ed375bbac8cad0" address="unix:///run/containerd/s/1134d6fedd26ca70851a85e307217bbf45d02fc285a9ed9dbeebafeb7ceefd25" namespace=k8s.io protocol=ttrpc version=3 Aug 13 02:07:00.515759 systemd[1]: Started cri-containerd-b8fbca66d16757f0d8830ae86752b685e2ed86c07fd8628668ed375bbac8cad0.scope - libcontainer container b8fbca66d16757f0d8830ae86752b685e2ed86c07fd8628668ed375bbac8cad0. Aug 13 02:07:00.560141 containerd[1542]: time="2025-08-13T02:07:00.560089999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-cdfxj,Uid:e8f51745-7382-4ead-96df-a31572ad4e1f,Namespace:calico-system,Attempt:0,} returns sandbox id \"b8fbca66d16757f0d8830ae86752b685e2ed86c07fd8628668ed375bbac8cad0\"" Aug 13 02:07:00.574749 kubelet[2718]: E0813 02:07:00.574648 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.574749 kubelet[2718]: W0813 02:07:00.574684 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.574749 kubelet[2718]: E0813 02:07:00.574703 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.575077 kubelet[2718]: E0813 02:07:00.575058 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.575121 kubelet[2718]: W0813 02:07:00.575078 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.575121 kubelet[2718]: E0813 02:07:00.575099 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 02:07:00.575308 kubelet[2718]: E0813 02:07:00.575295 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.575308 kubelet[2718]: W0813 02:07:00.575306 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.575368 kubelet[2718]: E0813 02:07:00.575326 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.575512 kubelet[2718]: E0813 02:07:00.575499 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.575512 kubelet[2718]: W0813 02:07:00.575510 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.575580 kubelet[2718]: E0813 02:07:00.575523 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.575895 kubelet[2718]: E0813 02:07:00.575871 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.575895 kubelet[2718]: W0813 02:07:00.575882 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.575946 kubelet[2718]: E0813 02:07:00.575903 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.576104 kubelet[2718]: E0813 02:07:00.576092 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.576104 kubelet[2718]: W0813 02:07:00.576103 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.576167 kubelet[2718]: E0813 02:07:00.576110 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.576334 kubelet[2718]: E0813 02:07:00.576320 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.576334 kubelet[2718]: W0813 02:07:00.576331 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.576397 kubelet[2718]: E0813 02:07:00.576351 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 02:07:00.576555 kubelet[2718]: E0813 02:07:00.576542 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.576555 kubelet[2718]: W0813 02:07:00.576553 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.576640 kubelet[2718]: E0813 02:07:00.576572 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.576823 kubelet[2718]: E0813 02:07:00.576810 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.576823 kubelet[2718]: W0813 02:07:00.576821 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.576886 kubelet[2718]: E0813 02:07:00.576844 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.577031 kubelet[2718]: E0813 02:07:00.577018 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.577031 kubelet[2718]: W0813 02:07:00.577029 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.577098 kubelet[2718]: E0813 02:07:00.577049 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.577228 kubelet[2718]: E0813 02:07:00.577216 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.577228 kubelet[2718]: W0813 02:07:00.577227 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.577281 kubelet[2718]: E0813 02:07:00.577239 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.577514 kubelet[2718]: E0813 02:07:00.577501 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.577514 kubelet[2718]: W0813 02:07:00.577513 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.577575 kubelet[2718]: E0813 02:07:00.577533 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 02:07:00.577728 kubelet[2718]: E0813 02:07:00.577716 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.577728 kubelet[2718]: W0813 02:07:00.577727 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.577794 kubelet[2718]: E0813 02:07:00.577746 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.577934 kubelet[2718]: E0813 02:07:00.577922 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.577934 kubelet[2718]: W0813 02:07:00.577933 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.578022 kubelet[2718]: E0813 02:07:00.578009 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.578138 kubelet[2718]: E0813 02:07:00.578126 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.578138 kubelet[2718]: W0813 02:07:00.578136 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.578225 kubelet[2718]: E0813 02:07:00.578213 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.578356 kubelet[2718]: E0813 02:07:00.578344 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.578356 kubelet[2718]: W0813 02:07:00.578354 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.578441 kubelet[2718]: E0813 02:07:00.578434 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.578562 kubelet[2718]: E0813 02:07:00.578550 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.578562 kubelet[2718]: W0813 02:07:00.578560 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.578630 kubelet[2718]: E0813 02:07:00.578576 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 02:07:00.578934 kubelet[2718]: E0813 02:07:00.578902 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.578934 kubelet[2718]: W0813 02:07:00.578913 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.578934 kubelet[2718]: E0813 02:07:00.578933 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.579143 kubelet[2718]: E0813 02:07:00.579130 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.579143 kubelet[2718]: W0813 02:07:00.579141 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.579206 kubelet[2718]: E0813 02:07:00.579149 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.579433 kubelet[2718]: E0813 02:07:00.579381 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.579433 kubelet[2718]: W0813 02:07:00.579429 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.579433 kubelet[2718]: E0813 02:07:00.579440 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.579708 kubelet[2718]: E0813 02:07:00.579691 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.579708 kubelet[2718]: W0813 02:07:00.579703 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.579797 kubelet[2718]: E0813 02:07:00.579763 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.580024 kubelet[2718]: E0813 02:07:00.579976 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.580024 kubelet[2718]: W0813 02:07:00.579998 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.580024 kubelet[2718]: E0813 02:07:00.580022 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 02:07:00.580274 kubelet[2718]: E0813 02:07:00.580230 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.580274 kubelet[2718]: W0813 02:07:00.580269 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.580327 kubelet[2718]: E0813 02:07:00.580280 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.580547 kubelet[2718]: E0813 02:07:00.580529 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.580547 kubelet[2718]: W0813 02:07:00.580541 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.580652 kubelet[2718]: E0813 02:07:00.580576 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.580876 kubelet[2718]: E0813 02:07:00.580853 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.580876 kubelet[2718]: W0813 02:07:00.580873 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.580936 kubelet[2718]: E0813 02:07:00.580881 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:00.587152 kubelet[2718]: E0813 02:07:00.587136 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:00.587152 kubelet[2718]: W0813 02:07:00.587149 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:00.587240 kubelet[2718]: E0813 02:07:00.587158 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 02:07:01.494175 containerd[1542]: time="2025-08-13T02:07:01.494134029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 02:07:01.494987 containerd[1542]: time="2025-08-13T02:07:01.494829351Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Aug 13 02:07:01.495475 containerd[1542]: time="2025-08-13T02:07:01.495451693Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 02:07:01.497042 containerd[1542]: time="2025-08-13T02:07:01.497011353Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 02:07:01.497653 containerd[1542]: time="2025-08-13T02:07:01.497622726Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 1.19815361s" Aug 13 02:07:01.497738 containerd[1542]: time="2025-08-13T02:07:01.497722664Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Aug 13 02:07:01.499323 containerd[1542]: time="2025-08-13T02:07:01.499256265Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 13 02:07:01.514563 containerd[1542]: time="2025-08-13T02:07:01.514147609Z" level=info msg="CreateContainer within sandbox \"d455610107366cbcc93bc60280d9920d0c35dfd542d4aada989959d014fae7b4\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 13 02:07:01.524067 containerd[1542]: time="2025-08-13T02:07:01.523687050Z" level=info msg="Container 8b02ab9bb15f144dcde9953640ebf41a5002c97efc812194a2fe4a3df71ea980: CDI devices from CRI Config.CDIDevices: []" Aug 13 02:07:01.534286 containerd[1542]: time="2025-08-13T02:07:01.534258468Z" level=info msg="CreateContainer within sandbox \"d455610107366cbcc93bc60280d9920d0c35dfd542d4aada989959d014fae7b4\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8b02ab9bb15f144dcde9953640ebf41a5002c97efc812194a2fe4a3df71ea980\"" Aug 13 02:07:01.535060 containerd[1542]: time="2025-08-13T02:07:01.535022459Z" level=info msg="StartContainer for \"8b02ab9bb15f144dcde9953640ebf41a5002c97efc812194a2fe4a3df71ea980\"" Aug 13 02:07:01.536728 containerd[1542]: time="2025-08-13T02:07:01.536677838Z" level=info msg="connecting to shim 8b02ab9bb15f144dcde9953640ebf41a5002c97efc812194a2fe4a3df71ea980" address="unix:///run/containerd/s/3071e1d95758cfce50c9e00838e8d61929ecb986f72e85b4530d3e66bfbf088f" protocol=ttrpc version=3 Aug 13 02:07:01.562734 systemd[1]: Started cri-containerd-8b02ab9bb15f144dcde9953640ebf41a5002c97efc812194a2fe4a3df71ea980.scope - libcontainer container 8b02ab9bb15f144dcde9953640ebf41a5002c97efc812194a2fe4a3df71ea980. 
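A note on the recurring driver-call.go / plugins.go entries above: the kubelet's FlexVolume prober execs /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with the argument "init" and decodes its stdout as JSON; while that binary has not yet been installed the call produces no output, and unmarshalling an empty string is exactly what yields "unexpected end of JSON input". A minimal Go sketch of that call pattern, with illustrative names rather than the kubelet's own types:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus approximates the JSON object a FlexVolume driver prints;
// the field names here are illustrative, not the kubelet's actual types.
type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func callDriver(driver string, args ...string) (*driverStatus, error) {
	out, execErr := exec.Command(driver, args...).CombinedOutput()
	if execErr != nil {
		// Mirrors the W "driver call failed" lines: the binary is missing,
		// so the exec fails and out stays empty.
		fmt.Printf("FlexVolume: driver call failed: %v, output: %q\n", execErr, out)
	}
	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// With empty output this is exactly "unexpected end of JSON input".
		return nil, fmt.Errorf("failed to unmarshal output for command %q: %v", args[0], err)
	}
	return &st, nil
}

func main() {
	_, err := callDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds", "init")
	if err != nil {
		fmt.Println(err)
	}
}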
Aug 13 02:07:01.621701 containerd[1542]: time="2025-08-13T02:07:01.621653357Z" level=info msg="StartContainer for \"8b02ab9bb15f144dcde9953640ebf41a5002c97efc812194a2fe4a3df71ea980\" returns successfully" Aug 13 02:07:01.935767 kubelet[2718]: E0813 02:07:01.935708 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r6mhv" podUID="b671177d-3397-4938-853c-0cced3d0e9f5" Aug 13 02:07:02.043917 kubelet[2718]: E0813 02:07:02.043877 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:07:02.057279 kubelet[2718]: I0813 02:07:02.056996 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-67c8447dcf-wsn77" podStartSLOduration=1.857142156 podStartE2EDuration="3.056980894s" podCreationTimestamp="2025-08-13 02:06:59 +0000 UTC" firstStartedPulling="2025-08-13 02:07:00.298867894 +0000 UTC m=+20.454758798" lastFinishedPulling="2025-08-13 02:07:01.498706632 +0000 UTC m=+21.654597536" observedRunningTime="2025-08-13 02:07:02.056638848 +0000 UTC m=+22.212529752" watchObservedRunningTime="2025-08-13 02:07:02.056980894 +0000 UTC m=+22.212871798" Aug 13 02:07:02.071173 kubelet[2718]: E0813 02:07:02.071141 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:02.071246 kubelet[2718]: W0813 02:07:02.071181 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:02.071246 kubelet[2718]: E0813 02:07:02.071199 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:02.071477 kubelet[2718]: E0813 02:07:02.071411 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:02.071477 kubelet[2718]: W0813 02:07:02.071448 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:02.071477 kubelet[2718]: E0813 02:07:02.071457 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:02.071832 kubelet[2718]: E0813 02:07:02.071688 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:02.071832 kubelet[2718]: W0813 02:07:02.071717 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:02.071832 kubelet[2718]: E0813 02:07:02.071726 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 02:07:02.072044 kubelet[2718]: E0813 02:07:02.072025 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:02.072044 kubelet[2718]: W0813 02:07:02.072038 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:02.072103 kubelet[2718]: E0813 02:07:02.072047 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:02.072295 kubelet[2718]: E0813 02:07:02.072273 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:02.072325 kubelet[2718]: W0813 02:07:02.072291 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:02.072325 kubelet[2718]: E0813 02:07:02.072305 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:02.072606 kubelet[2718]: E0813 02:07:02.072530 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:02.072606 kubelet[2718]: W0813 02:07:02.072540 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:02.072606 kubelet[2718]: E0813 02:07:02.072564 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:02.072772 kubelet[2718]: E0813 02:07:02.072758 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:02.072772 kubelet[2718]: W0813 02:07:02.072769 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:02.072811 kubelet[2718]: E0813 02:07:02.072777 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:02.072972 kubelet[2718]: E0813 02:07:02.072947 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:02.073010 kubelet[2718]: W0813 02:07:02.072958 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:02.073010 kubelet[2718]: E0813 02:07:02.072990 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 02:07:02.073256 kubelet[2718]: E0813 02:07:02.073200 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:02.073256 kubelet[2718]: W0813 02:07:02.073251 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:02.073315 kubelet[2718]: E0813 02:07:02.073259 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:02.073606 kubelet[2718]: E0813 02:07:02.073474 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:02.073606 kubelet[2718]: W0813 02:07:02.073485 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:02.073606 kubelet[2718]: E0813 02:07:02.073519 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:02.073731 kubelet[2718]: E0813 02:07:02.073708 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:02.073731 kubelet[2718]: W0813 02:07:02.073722 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:02.073731 kubelet[2718]: E0813 02:07:02.073730 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:02.073997 kubelet[2718]: E0813 02:07:02.073884 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:02.073997 kubelet[2718]: W0813 02:07:02.073920 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:02.073997 kubelet[2718]: E0813 02:07:02.073928 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:02.074141 kubelet[2718]: E0813 02:07:02.074122 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:02.074141 kubelet[2718]: W0813 02:07:02.074136 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:02.074264 kubelet[2718]: E0813 02:07:02.074143 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 02:07:02.074378 kubelet[2718]: E0813 02:07:02.074359 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:02.074378 kubelet[2718]: W0813 02:07:02.074373 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:02.074424 kubelet[2718]: E0813 02:07:02.074380 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:02.074573 kubelet[2718]: E0813 02:07:02.074553 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:02.074573 kubelet[2718]: W0813 02:07:02.074564 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:02.074573 kubelet[2718]: E0813 02:07:02.074572 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:02.087348 kubelet[2718]: E0813 02:07:02.087331 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:02.087618 kubelet[2718]: W0813 02:07:02.087430 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:02.087618 kubelet[2718]: E0813 02:07:02.087454 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:02.088001 kubelet[2718]: E0813 02:07:02.087943 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:02.088001 kubelet[2718]: W0813 02:07:02.087957 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:02.088097 kubelet[2718]: E0813 02:07:02.088072 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:02.088359 kubelet[2718]: E0813 02:07:02.088340 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:02.088359 kubelet[2718]: W0813 02:07:02.088354 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:02.088621 kubelet[2718]: E0813 02:07:02.088367 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 02:07:02.088621 kubelet[2718]: E0813 02:07:02.088580 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:02.088621 kubelet[2718]: W0813 02:07:02.088610 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:02.088621 kubelet[2718]: E0813 02:07:02.088619 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:02.089622 kubelet[2718]: E0813 02:07:02.088948 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:02.089622 kubelet[2718]: W0813 02:07:02.088975 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:02.089622 kubelet[2718]: E0813 02:07:02.089088 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:02.089622 kubelet[2718]: E0813 02:07:02.089550 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:02.089622 kubelet[2718]: W0813 02:07:02.089557 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:02.089622 kubelet[2718]: E0813 02:07:02.089569 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:02.090082 kubelet[2718]: E0813 02:07:02.090069 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:02.090248 kubelet[2718]: W0813 02:07:02.090235 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:02.090431 kubelet[2718]: E0813 02:07:02.090417 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:02.090766 kubelet[2718]: E0813 02:07:02.090755 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:02.090818 kubelet[2718]: W0813 02:07:02.090807 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:02.090921 kubelet[2718]: E0813 02:07:02.090910 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 02:07:02.091317 kubelet[2718]: E0813 02:07:02.091306 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:02.091378 kubelet[2718]: W0813 02:07:02.091367 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:02.091430 kubelet[2718]: E0813 02:07:02.091420 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:02.091771 kubelet[2718]: E0813 02:07:02.091760 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:02.091865 kubelet[2718]: W0813 02:07:02.091811 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:02.091865 kubelet[2718]: E0813 02:07:02.091826 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:02.092208 kubelet[2718]: E0813 02:07:02.092197 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:02.092349 kubelet[2718]: W0813 02:07:02.092261 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:02.092349 kubelet[2718]: E0813 02:07:02.092288 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:02.092680 kubelet[2718]: E0813 02:07:02.092634 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:02.092680 kubelet[2718]: W0813 02:07:02.092644 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:02.092680 kubelet[2718]: E0813 02:07:02.092656 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:02.093185 kubelet[2718]: E0813 02:07:02.093160 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:02.093185 kubelet[2718]: W0813 02:07:02.093171 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:02.093681 kubelet[2718]: E0813 02:07:02.093388 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 02:07:02.093986 kubelet[2718]: E0813 02:07:02.093975 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:02.094092 kubelet[2718]: W0813 02:07:02.094081 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:02.094140 kubelet[2718]: E0813 02:07:02.094130 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:02.094401 kubelet[2718]: E0813 02:07:02.094378 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:02.094508 kubelet[2718]: W0813 02:07:02.094496 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:02.094697 kubelet[2718]: E0813 02:07:02.094636 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:02.095196 kubelet[2718]: E0813 02:07:02.094896 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:02.095578 kubelet[2718]: W0813 02:07:02.095238 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:02.095578 kubelet[2718]: E0813 02:07:02.095325 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:02.095807 kubelet[2718]: E0813 02:07:02.095795 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:02.095872 kubelet[2718]: W0813 02:07:02.095861 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:02.095958 kubelet[2718]: E0813 02:07:02.095948 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 02:07:02.096306 kubelet[2718]: E0813 02:07:02.096296 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 02:07:02.096361 kubelet[2718]: W0813 02:07:02.096351 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 02:07:02.096421 kubelet[2718]: E0813 02:07:02.096412 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 02:07:02.126848 containerd[1542]: time="2025-08-13T02:07:02.126808443Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 02:07:02.127442 containerd[1542]: time="2025-08-13T02:07:02.127416956Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Aug 13 02:07:02.128827 containerd[1542]: time="2025-08-13T02:07:02.127988659Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 02:07:02.129446 containerd[1542]: time="2025-08-13T02:07:02.129417132Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 02:07:02.130063 containerd[1542]: time="2025-08-13T02:07:02.130035585Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 630.7145ms" Aug 13 02:07:02.130136 containerd[1542]: time="2025-08-13T02:07:02.130121964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Aug 13 02:07:02.132151 containerd[1542]: time="2025-08-13T02:07:02.132119870Z" level=info msg="CreateContainer within sandbox \"b8fbca66d16757f0d8830ae86752b685e2ed86c07fd8628668ed375bbac8cad0\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 02:07:02.140847 containerd[1542]: time="2025-08-13T02:07:02.139819328Z" level=info msg="Container e45ceaa457815629371fe359688809fb7cabeb078eb1372ffd2bd8e63d683f75: CDI devices from CRI Config.CDIDevices: []" Aug 13 02:07:02.142873 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2048733756.mount: Deactivated successfully. Aug 13 02:07:02.147175 containerd[1542]: time="2025-08-13T02:07:02.147143131Z" level=info msg="CreateContainer within sandbox \"b8fbca66d16757f0d8830ae86752b685e2ed86c07fd8628668ed375bbac8cad0\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"e45ceaa457815629371fe359688809fb7cabeb078eb1372ffd2bd8e63d683f75\"" Aug 13 02:07:02.147672 containerd[1542]: time="2025-08-13T02:07:02.147644245Z" level=info msg="StartContainer for \"e45ceaa457815629371fe359688809fb7cabeb078eb1372ffd2bd8e63d683f75\"" Aug 13 02:07:02.148931 containerd[1542]: time="2025-08-13T02:07:02.148906610Z" level=info msg="connecting to shim e45ceaa457815629371fe359688809fb7cabeb078eb1372ffd2bd8e63d683f75" address="unix:///run/containerd/s/1134d6fedd26ca70851a85e307217bbf45d02fc285a9ed9dbeebafeb7ceefd25" protocol=ttrpc version=3 Aug 13 02:07:02.180721 systemd[1]: Started cri-containerd-e45ceaa457815629371fe359688809fb7cabeb078eb1372ffd2bd8e63d683f75.scope - libcontainer container e45ceaa457815629371fe359688809fb7cabeb078eb1372ffd2bd8e63d683f75. 
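The flexvol-driver container (e45ceaa...) just started inside the calico-node sandbox comes from the pod2daemon-flexvol image pulled above; its role is to install the uds binary into the nodeagent~uds plugin directory the prober has been failing on, after which the init call can return the JSON status the kubelet expects and the unmarshal errors stop recurring. For illustration only, a driver answering "init" by the usual FlexVolume convention would print a JSON status object along these lines (the capabilities Calico's uds driver actually reports are not visible in this log):

package main

import (
	"encoding/json"
	"os"
)

// Invoked as "<driver> init", a FlexVolume driver is expected to write a
// JSON status object to stdout; this shape is a generic illustration.
func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		json.NewEncoder(os.Stdout).Encode(map[string]interface{}{
			"status":       "Success",
			"capabilities": map[string]bool{"attach": false},
		})
	}
}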
Aug 13 02:07:02.228865 containerd[1542]: time="2025-08-13T02:07:02.228656441Z" level=info msg="StartContainer for \"e45ceaa457815629371fe359688809fb7cabeb078eb1372ffd2bd8e63d683f75\" returns successfully" Aug 13 02:07:02.237625 systemd[1]: cri-containerd-e45ceaa457815629371fe359688809fb7cabeb078eb1372ffd2bd8e63d683f75.scope: Deactivated successfully. Aug 13 02:07:02.239815 containerd[1542]: time="2025-08-13T02:07:02.239781179Z" level=info msg="received exit event container_id:\"e45ceaa457815629371fe359688809fb7cabeb078eb1372ffd2bd8e63d683f75\" id:\"e45ceaa457815629371fe359688809fb7cabeb078eb1372ffd2bd8e63d683f75\" pid:3406 exited_at:{seconds:1755050822 nanos:239348784}" Aug 13 02:07:02.240326 containerd[1542]: time="2025-08-13T02:07:02.239863058Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e45ceaa457815629371fe359688809fb7cabeb078eb1372ffd2bd8e63d683f75\" id:\"e45ceaa457815629371fe359688809fb7cabeb078eb1372ffd2bd8e63d683f75\" pid:3406 exited_at:{seconds:1755050822 nanos:239348784}" Aug 13 02:07:02.269192 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e45ceaa457815629371fe359688809fb7cabeb078eb1372ffd2bd8e63d683f75-rootfs.mount: Deactivated successfully. Aug 13 02:07:03.046814 kubelet[2718]: I0813 02:07:03.046678 2718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 02:07:03.047283 kubelet[2718]: E0813 02:07:03.047177 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:07:03.049444 containerd[1542]: time="2025-08-13T02:07:03.049414151Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 02:07:03.936876 kubelet[2718]: E0813 02:07:03.936564 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-r6mhv" podUID="b671177d-3397-4938-853c-0cced3d0e9f5" Aug 13 02:07:04.961638 containerd[1542]: time="2025-08-13T02:07:04.961500496Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 02:07:04.962758 containerd[1542]: time="2025-08-13T02:07:04.962650453Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Aug 13 02:07:04.963320 containerd[1542]: time="2025-08-13T02:07:04.963221717Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 02:07:04.966079 containerd[1542]: time="2025-08-13T02:07:04.966047917Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 02:07:04.967213 containerd[1542]: time="2025-08-13T02:07:04.967093445Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 1.917643405s" Aug 13 02:07:04.967213 containerd[1542]: 
time="2025-08-13T02:07:04.967157174Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Aug 13 02:07:04.969635 containerd[1542]: time="2025-08-13T02:07:04.969555548Z" level=info msg="CreateContainer within sandbox \"b8fbca66d16757f0d8830ae86752b685e2ed86c07fd8628668ed375bbac8cad0\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 02:07:04.978762 containerd[1542]: time="2025-08-13T02:07:04.977779489Z" level=info msg="Container 0dc0d0a49b528a825e6bbdc2e02aa7bfff9f45bad017d991804d8a6486a852be: CDI devices from CRI Config.CDIDevices: []" Aug 13 02:07:04.984165 containerd[1542]: time="2025-08-13T02:07:04.984120391Z" level=info msg="CreateContainer within sandbox \"b8fbca66d16757f0d8830ae86752b685e2ed86c07fd8628668ed375bbac8cad0\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0dc0d0a49b528a825e6bbdc2e02aa7bfff9f45bad017d991804d8a6486a852be\"" Aug 13 02:07:04.984935 containerd[1542]: time="2025-08-13T02:07:04.984898902Z" level=info msg="StartContainer for \"0dc0d0a49b528a825e6bbdc2e02aa7bfff9f45bad017d991804d8a6486a852be\"" Aug 13 02:07:04.986470 containerd[1542]: time="2025-08-13T02:07:04.986411756Z" level=info msg="connecting to shim 0dc0d0a49b528a825e6bbdc2e02aa7bfff9f45bad017d991804d8a6486a852be" address="unix:///run/containerd/s/1134d6fedd26ca70851a85e307217bbf45d02fc285a9ed9dbeebafeb7ceefd25" protocol=ttrpc version=3 Aug 13 02:07:05.010723 systemd[1]: Started cri-containerd-0dc0d0a49b528a825e6bbdc2e02aa7bfff9f45bad017d991804d8a6486a852be.scope - libcontainer container 0dc0d0a49b528a825e6bbdc2e02aa7bfff9f45bad017d991804d8a6486a852be. Aug 13 02:07:05.053036 containerd[1542]: time="2025-08-13T02:07:05.052988210Z" level=info msg="StartContainer for \"0dc0d0a49b528a825e6bbdc2e02aa7bfff9f45bad017d991804d8a6486a852be\" returns successfully" Aug 13 02:07:05.581483 containerd[1542]: time="2025-08-13T02:07:05.581377431Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 02:07:05.584347 systemd[1]: cri-containerd-0dc0d0a49b528a825e6bbdc2e02aa7bfff9f45bad017d991804d8a6486a852be.scope: Deactivated successfully. Aug 13 02:07:05.586154 containerd[1542]: time="2025-08-13T02:07:05.586113112Z" level=info msg="received exit event container_id:\"0dc0d0a49b528a825e6bbdc2e02aa7bfff9f45bad017d991804d8a6486a852be\" id:\"0dc0d0a49b528a825e6bbdc2e02aa7bfff9f45bad017d991804d8a6486a852be\" pid:3464 exited_at:{seconds:1755050825 nanos:585757326}" Aug 13 02:07:05.586361 containerd[1542]: time="2025-08-13T02:07:05.586338750Z" level=info msg="TaskExit event in podsandbox handler container_id:\"0dc0d0a49b528a825e6bbdc2e02aa7bfff9f45bad017d991804d8a6486a852be\" id:\"0dc0d0a49b528a825e6bbdc2e02aa7bfff9f45bad017d991804d8a6486a852be\" pid:3464 exited_at:{seconds:1755050825 nanos:585757326}" Aug 13 02:07:05.586425 systemd[1]: cri-containerd-0dc0d0a49b528a825e6bbdc2e02aa7bfff9f45bad017d991804d8a6486a852be.scope: Consumed 547ms CPU time, 196.9M memory peak, 171.2M written to disk. 
Aug 13 02:07:05.596631 kubelet[2718]: I0813 02:07:05.596578 2718 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 13 02:07:05.616299 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0dc0d0a49b528a825e6bbdc2e02aa7bfff9f45bad017d991804d8a6486a852be-rootfs.mount: Deactivated successfully. Aug 13 02:07:05.650478 systemd[1]: Created slice kubepods-besteffort-pod4c710e16_eab0_49cd_a9c5_57a63929a2ce.slice - libcontainer container kubepods-besteffort-pod4c710e16_eab0_49cd_a9c5_57a63929a2ce.slice. Aug 13 02:07:05.670487 systemd[1]: Created slice kubepods-burstable-pod26fd4059_1e9c_49a2_9bd9_181be9ad7bcb.slice - libcontainer container kubepods-burstable-pod26fd4059_1e9c_49a2_9bd9_181be9ad7bcb.slice. Aug 13 02:07:05.692830 systemd[1]: Created slice kubepods-besteffort-pod7c1d6d84_89e5_4a8a_a62c_c7b5324f50d6.slice - libcontainer container kubepods-besteffort-pod7c1d6d84_89e5_4a8a_a62c_c7b5324f50d6.slice. Aug 13 02:07:05.701716 systemd[1]: Created slice kubepods-besteffort-pod218449a1_8522_470d_afd0_760d9b801a05.slice - libcontainer container kubepods-besteffort-pod218449a1_8522_470d_afd0_760d9b801a05.slice. Aug 13 02:07:05.711867 systemd[1]: Created slice kubepods-besteffort-pod8d84c12b_cfd9_49af_bb2e_a10173126a4c.slice - libcontainer container kubepods-besteffort-pod8d84c12b_cfd9_49af_bb2e_a10173126a4c.slice. Aug 13 02:07:05.718411 kubelet[2718]: I0813 02:07:05.717476 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e73a6876-bbb3-4e11-8a33-1945cf27a944-config-volume\") pod \"coredns-668d6bf9bc-pw6gg\" (UID: \"e73a6876-bbb3-4e11-8a33-1945cf27a944\") " pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:07:05.718411 kubelet[2718]: I0813 02:07:05.717644 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wxmjg\" (UniqueName: \"kubernetes.io/projected/26fd4059-1e9c-49a2-9bd9-181be9ad7bcb-kube-api-access-wxmjg\") pod \"coredns-668d6bf9bc-p5qmw\" (UID: \"26fd4059-1e9c-49a2-9bd9-181be9ad7bcb\") " pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:07:05.718411 kubelet[2718]: I0813 02:07:05.718329 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d84c12b-cfd9-49af-bb2e-a10173126a4c-tigera-ca-bundle\") pod \"calico-kube-controllers-7c47cf6bcb-c9c87\" (UID: \"8d84c12b-cfd9-49af-bb2e-a10173126a4c\") " pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:07:05.718411 kubelet[2718]: I0813 02:07:05.718352 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/218449a1-8522-470d-afd0-760d9b801a05-whisker-backend-key-pair\") pod \"whisker-65456dc94b-mmvb5\" (UID: \"218449a1-8522-470d-afd0-760d9b801a05\") " pod="calico-system/whisker-65456dc94b-mmvb5" Aug 13 02:07:05.718411 kubelet[2718]: I0813 02:07:05.718371 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/26fd4059-1e9c-49a2-9bd9-181be9ad7bcb-config-volume\") pod \"coredns-668d6bf9bc-p5qmw\" (UID: \"26fd4059-1e9c-49a2-9bd9-181be9ad7bcb\") " pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:07:05.718633 kubelet[2718]: I0813 02:07:05.718409 2718 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e779ed2-d5a9-40cb-95c0-450f4781223d-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-ftlj6\" (UID: \"4e779ed2-d5a9-40cb-95c0-450f4781223d\") " pod="calico-system/goldmane-768f4c5c69-ftlj6" Aug 13 02:07:05.718633 kubelet[2718]: I0813 02:07:05.718423 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7c1d6d84-89e5-4a8a-a62c-c7b5324f50d6-calico-apiserver-certs\") pod \"calico-apiserver-859c474dd6-b6nnn\" (UID: \"7c1d6d84-89e5-4a8a-a62c-c7b5324f50d6\") " pod="calico-apiserver/calico-apiserver-859c474dd6-b6nnn" Aug 13 02:07:05.718633 kubelet[2718]: I0813 02:07:05.718439 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e779ed2-d5a9-40cb-95c0-450f4781223d-config\") pod \"goldmane-768f4c5c69-ftlj6\" (UID: \"4e779ed2-d5a9-40cb-95c0-450f4781223d\") " pod="calico-system/goldmane-768f4c5c69-ftlj6" Aug 13 02:07:05.718633 kubelet[2718]: I0813 02:07:05.718497 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5bmxt\" (UniqueName: \"kubernetes.io/projected/8d84c12b-cfd9-49af-bb2e-a10173126a4c-kube-api-access-5bmxt\") pod \"calico-kube-controllers-7c47cf6bcb-c9c87\" (UID: \"8d84c12b-cfd9-49af-bb2e-a10173126a4c\") " pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:07:05.718633 kubelet[2718]: I0813 02:07:05.718514 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4xwrs\" (UniqueName: \"kubernetes.io/projected/7c1d6d84-89e5-4a8a-a62c-c7b5324f50d6-kube-api-access-4xwrs\") pod \"calico-apiserver-859c474dd6-b6nnn\" (UID: \"7c1d6d84-89e5-4a8a-a62c-c7b5324f50d6\") " pod="calico-apiserver/calico-apiserver-859c474dd6-b6nnn" Aug 13 02:07:05.718739 kubelet[2718]: I0813 02:07:05.718527 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/218449a1-8522-470d-afd0-760d9b801a05-whisker-ca-bundle\") pod \"whisker-65456dc94b-mmvb5\" (UID: \"218449a1-8522-470d-afd0-760d9b801a05\") " pod="calico-system/whisker-65456dc94b-mmvb5" Aug 13 02:07:05.718739 kubelet[2718]: I0813 02:07:05.718541 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/4e779ed2-d5a9-40cb-95c0-450f4781223d-goldmane-key-pair\") pod \"goldmane-768f4c5c69-ftlj6\" (UID: \"4e779ed2-d5a9-40cb-95c0-450f4781223d\") " pod="calico-system/goldmane-768f4c5c69-ftlj6" Aug 13 02:07:05.718739 kubelet[2718]: I0813 02:07:05.718574 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rkvm\" (UniqueName: \"kubernetes.io/projected/4c710e16-eab0-49cd-a9c5-57a63929a2ce-kube-api-access-9rkvm\") pod \"calico-apiserver-859c474dd6-gnh2j\" (UID: \"4c710e16-eab0-49cd-a9c5-57a63929a2ce\") " pod="calico-apiserver/calico-apiserver-859c474dd6-gnh2j" Aug 13 02:07:05.718739 kubelet[2718]: I0813 02:07:05.718616 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: 
\"kubernetes.io/secret/4c710e16-eab0-49cd-a9c5-57a63929a2ce-calico-apiserver-certs\") pod \"calico-apiserver-859c474dd6-gnh2j\" (UID: \"4c710e16-eab0-49cd-a9c5-57a63929a2ce\") " pod="calico-apiserver/calico-apiserver-859c474dd6-gnh2j" Aug 13 02:07:05.718739 kubelet[2718]: I0813 02:07:05.718636 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mjvwd\" (UniqueName: \"kubernetes.io/projected/e73a6876-bbb3-4e11-8a33-1945cf27a944-kube-api-access-mjvwd\") pod \"coredns-668d6bf9bc-pw6gg\" (UID: \"e73a6876-bbb3-4e11-8a33-1945cf27a944\") " pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:07:05.718843 kubelet[2718]: I0813 02:07:05.718648 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6bwzq\" (UniqueName: \"kubernetes.io/projected/4e779ed2-d5a9-40cb-95c0-450f4781223d-kube-api-access-6bwzq\") pod \"goldmane-768f4c5c69-ftlj6\" (UID: \"4e779ed2-d5a9-40cb-95c0-450f4781223d\") " pod="calico-system/goldmane-768f4c5c69-ftlj6" Aug 13 02:07:05.718843 kubelet[2718]: I0813 02:07:05.718662 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfmnq\" (UniqueName: \"kubernetes.io/projected/218449a1-8522-470d-afd0-760d9b801a05-kube-api-access-kfmnq\") pod \"whisker-65456dc94b-mmvb5\" (UID: \"218449a1-8522-470d-afd0-760d9b801a05\") " pod="calico-system/whisker-65456dc94b-mmvb5" Aug 13 02:07:05.720556 systemd[1]: Created slice kubepods-burstable-pode73a6876_bbb3_4e11_8a33_1945cf27a944.slice - libcontainer container kubepods-burstable-pode73a6876_bbb3_4e11_8a33_1945cf27a944.slice. Aug 13 02:07:05.733048 systemd[1]: Created slice kubepods-besteffort-pod4e779ed2_d5a9_40cb_95c0_450f4781223d.slice - libcontainer container kubepods-besteffort-pod4e779ed2_d5a9_40cb_95c0_450f4781223d.slice. Aug 13 02:07:05.942495 systemd[1]: Created slice kubepods-besteffort-podb671177d_3397_4938_853c_0cced3d0e9f5.slice - libcontainer container kubepods-besteffort-podb671177d_3397_4938_853c_0cced3d0e9f5.slice. 
Aug 13 02:07:05.945378 containerd[1542]: time="2025-08-13T02:07:05.945342475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r6mhv,Uid:b671177d-3397-4938-853c-0cced3d0e9f5,Namespace:calico-system,Attempt:0,}" Aug 13 02:07:05.967043 containerd[1542]: time="2025-08-13T02:07:05.966786843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-859c474dd6-gnh2j,Uid:4c710e16-eab0-49cd-a9c5-57a63929a2ce,Namespace:calico-apiserver,Attempt:0,}" Aug 13 02:07:05.986689 kubelet[2718]: E0813 02:07:05.985604 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:07:05.987526 containerd[1542]: time="2025-08-13T02:07:05.987503408Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p5qmw,Uid:26fd4059-1e9c-49a2-9bd9-181be9ad7bcb,Namespace:kube-system,Attempt:0,}" Aug 13 02:07:05.998793 containerd[1542]: time="2025-08-13T02:07:05.998767702Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-859c474dd6-b6nnn,Uid:7c1d6d84-89e5-4a8a-a62c-c7b5324f50d6,Namespace:calico-apiserver,Attempt:0,}" Aug 13 02:07:06.014141 containerd[1542]: time="2025-08-13T02:07:06.014117099Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65456dc94b-mmvb5,Uid:218449a1-8522-470d-afd0-760d9b801a05,Namespace:calico-system,Attempt:0,}" Aug 13 02:07:06.022495 containerd[1542]: time="2025-08-13T02:07:06.022195559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c47cf6bcb-c9c87,Uid:8d84c12b-cfd9-49af-bb2e-a10173126a4c,Namespace:calico-system,Attempt:0,}" Aug 13 02:07:06.026480 kubelet[2718]: E0813 02:07:06.025702 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:07:06.034889 containerd[1542]: time="2025-08-13T02:07:06.034865553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pw6gg,Uid:e73a6876-bbb3-4e11-8a33-1945cf27a944,Namespace:kube-system,Attempt:0,}" Aug 13 02:07:06.036828 containerd[1542]: time="2025-08-13T02:07:06.036808374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-ftlj6,Uid:4e779ed2-d5a9-40cb-95c0-450f4781223d,Namespace:calico-system,Attempt:0,}" Aug 13 02:07:06.110382 containerd[1542]: time="2025-08-13T02:07:06.110008020Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 02:07:06.195158 containerd[1542]: time="2025-08-13T02:07:06.195051078Z" level=error msg="Failed to destroy network for sandbox \"b7b5b090ab4e4a0e0ef982aec3923ed0f1de14be7c41424c17f0bbeb20ce06b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:06.198528 containerd[1542]: time="2025-08-13T02:07:06.198497024Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r6mhv,Uid:b671177d-3397-4938-853c-0cced3d0e9f5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7b5b090ab4e4a0e0ef982aec3923ed0f1de14be7c41424c17f0bbeb20ce06b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Aug 13 02:07:06.202191 kubelet[2718]: E0813 02:07:06.200158 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7b5b090ab4e4a0e0ef982aec3923ed0f1de14be7c41424c17f0bbeb20ce06b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:06.202191 kubelet[2718]: E0813 02:07:06.200429 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7b5b090ab4e4a0e0ef982aec3923ed0f1de14be7c41424c17f0bbeb20ce06b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:07:06.202191 kubelet[2718]: E0813 02:07:06.200525 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b7b5b090ab4e4a0e0ef982aec3923ed0f1de14be7c41424c17f0bbeb20ce06b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:07:06.202380 kubelet[2718]: E0813 02:07:06.202103 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r6mhv_calico-system(b671177d-3397-4938-853c-0cced3d0e9f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-r6mhv_calico-system(b671177d-3397-4938-853c-0cced3d0e9f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b7b5b090ab4e4a0e0ef982aec3923ed0f1de14be7c41424c17f0bbeb20ce06b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r6mhv" podUID="b671177d-3397-4938-853c-0cced3d0e9f5" Aug 13 02:07:06.248191 containerd[1542]: time="2025-08-13T02:07:06.248153553Z" level=error msg="Failed to destroy network for sandbox \"e1e4875e3dff54243ccf16488324615905da38936acdcaf138f87dbdf57b1f5e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:06.249954 containerd[1542]: time="2025-08-13T02:07:06.249923185Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pw6gg,Uid:e73a6876-bbb3-4e11-8a33-1945cf27a944,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1e4875e3dff54243ccf16488324615905da38936acdcaf138f87dbdf57b1f5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:06.253981 kubelet[2718]: E0813 02:07:06.253840 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1e4875e3dff54243ccf16488324615905da38936acdcaf138f87dbdf57b1f5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:06.253981 kubelet[2718]: E0813 02:07:06.253918 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1e4875e3dff54243ccf16488324615905da38936acdcaf138f87dbdf57b1f5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:07:06.253981 kubelet[2718]: E0813 02:07:06.253939 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e1e4875e3dff54243ccf16488324615905da38936acdcaf138f87dbdf57b1f5e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:07:06.254405 kubelet[2718]: E0813 02:07:06.254099 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-pw6gg_kube-system(e73a6876-bbb3-4e11-8a33-1945cf27a944)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-pw6gg_kube-system(e73a6876-bbb3-4e11-8a33-1945cf27a944)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e1e4875e3dff54243ccf16488324615905da38936acdcaf138f87dbdf57b1f5e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-pw6gg" podUID="e73a6876-bbb3-4e11-8a33-1945cf27a944" Aug 13 02:07:06.274839 containerd[1542]: time="2025-08-13T02:07:06.274670470Z" level=error msg="Failed to destroy network for sandbox \"13d722a3765572819fede2dc68567df2d2b65a524054bf5f5e402810542cce4a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:06.275946 containerd[1542]: time="2025-08-13T02:07:06.275916038Z" level=error msg="Failed to destroy network for sandbox \"76a37bc5759af89320366bb31e2896c26a46d26e45c3d96af5e485658c2067d6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:06.276465 containerd[1542]: time="2025-08-13T02:07:06.276401323Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-859c474dd6-gnh2j,Uid:4c710e16-eab0-49cd-a9c5-57a63929a2ce,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"13d722a3765572819fede2dc68567df2d2b65a524054bf5f5e402810542cce4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:06.277767 kubelet[2718]: E0813 02:07:06.276712 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13d722a3765572819fede2dc68567df2d2b65a524054bf5f5e402810542cce4a\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:06.277767 kubelet[2718]: E0813 02:07:06.276752 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13d722a3765572819fede2dc68567df2d2b65a524054bf5f5e402810542cce4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-859c474dd6-gnh2j" Aug 13 02:07:06.277767 kubelet[2718]: E0813 02:07:06.276770 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13d722a3765572819fede2dc68567df2d2b65a524054bf5f5e402810542cce4a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-859c474dd6-gnh2j" Aug 13 02:07:06.277872 kubelet[2718]: E0813 02:07:06.276802 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-859c474dd6-gnh2j_calico-apiserver(4c710e16-eab0-49cd-a9c5-57a63929a2ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-859c474dd6-gnh2j_calico-apiserver(4c710e16-eab0-49cd-a9c5-57a63929a2ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"13d722a3765572819fede2dc68567df2d2b65a524054bf5f5e402810542cce4a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-859c474dd6-gnh2j" podUID="4c710e16-eab0-49cd-a9c5-57a63929a2ce" Aug 13 02:07:06.277921 containerd[1542]: time="2025-08-13T02:07:06.277764060Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-ftlj6,Uid:4e779ed2-d5a9-40cb-95c0-450f4781223d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"76a37bc5759af89320366bb31e2896c26a46d26e45c3d96af5e485658c2067d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:06.278165 kubelet[2718]: E0813 02:07:06.278083 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76a37bc5759af89320366bb31e2896c26a46d26e45c3d96af5e485658c2067d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:06.278165 kubelet[2718]: E0813 02:07:06.278113 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76a37bc5759af89320366bb31e2896c26a46d26e45c3d96af5e485658c2067d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-ftlj6" Aug 13 02:07:06.278165 kubelet[2718]: E0813 
02:07:06.278129 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"76a37bc5759af89320366bb31e2896c26a46d26e45c3d96af5e485658c2067d6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-ftlj6" Aug 13 02:07:06.278979 kubelet[2718]: E0813 02:07:06.278262 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-ftlj6_calico-system(4e779ed2-d5a9-40cb-95c0-450f4781223d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-ftlj6_calico-system(4e779ed2-d5a9-40cb-95c0-450f4781223d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"76a37bc5759af89320366bb31e2896c26a46d26e45c3d96af5e485658c2067d6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-ftlj6" podUID="4e779ed2-d5a9-40cb-95c0-450f4781223d" Aug 13 02:07:06.292808 containerd[1542]: time="2025-08-13T02:07:06.292758021Z" level=error msg="Failed to destroy network for sandbox \"0773ce9b46e5d99019f542a3b792a47f367051b6e9ec6f5435aa0e136467208b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:06.294625 containerd[1542]: time="2025-08-13T02:07:06.294560954Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-859c474dd6-b6nnn,Uid:7c1d6d84-89e5-4a8a-a62c-c7b5324f50d6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0773ce9b46e5d99019f542a3b792a47f367051b6e9ec6f5435aa0e136467208b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:06.295289 kubelet[2718]: E0813 02:07:06.294756 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0773ce9b46e5d99019f542a3b792a47f367051b6e9ec6f5435aa0e136467208b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:06.295289 kubelet[2718]: E0813 02:07:06.294798 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0773ce9b46e5d99019f542a3b792a47f367051b6e9ec6f5435aa0e136467208b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-859c474dd6-b6nnn" Aug 13 02:07:06.295289 kubelet[2718]: E0813 02:07:06.294815 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0773ce9b46e5d99019f542a3b792a47f367051b6e9ec6f5435aa0e136467208b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-859c474dd6-b6nnn" Aug 13 02:07:06.295361 kubelet[2718]: E0813 02:07:06.294846 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-859c474dd6-b6nnn_calico-apiserver(7c1d6d84-89e5-4a8a-a62c-c7b5324f50d6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-859c474dd6-b6nnn_calico-apiserver(7c1d6d84-89e5-4a8a-a62c-c7b5324f50d6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0773ce9b46e5d99019f542a3b792a47f367051b6e9ec6f5435aa0e136467208b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-859c474dd6-b6nnn" podUID="7c1d6d84-89e5-4a8a-a62c-c7b5324f50d6" Aug 13 02:07:06.301547 containerd[1542]: time="2025-08-13T02:07:06.300913661Z" level=error msg="Failed to destroy network for sandbox \"e4b8775612971227f9c1512806e36ef3e6855ea57c664f991eba3adcab5084c6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:06.302604 containerd[1542]: time="2025-08-13T02:07:06.302550384Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p5qmw,Uid:26fd4059-1e9c-49a2-9bd9-181be9ad7bcb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4b8775612971227f9c1512806e36ef3e6855ea57c664f991eba3adcab5084c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:06.303105 kubelet[2718]: E0813 02:07:06.302976 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4b8775612971227f9c1512806e36ef3e6855ea57c664f991eba3adcab5084c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:06.303497 kubelet[2718]: E0813 02:07:06.303134 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4b8775612971227f9c1512806e36ef3e6855ea57c664f991eba3adcab5084c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:07:06.303497 kubelet[2718]: E0813 02:07:06.303156 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4b8775612971227f9c1512806e36ef3e6855ea57c664f991eba3adcab5084c6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:07:06.303497 kubelet[2718]: E0813 02:07:06.303218 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-668d6bf9bc-p5qmw_kube-system(26fd4059-1e9c-49a2-9bd9-181be9ad7bcb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-p5qmw_kube-system(26fd4059-1e9c-49a2-9bd9-181be9ad7bcb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4b8775612971227f9c1512806e36ef3e6855ea57c664f991eba3adcab5084c6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-p5qmw" podUID="26fd4059-1e9c-49a2-9bd9-181be9ad7bcb" Aug 13 02:07:06.306231 containerd[1542]: time="2025-08-13T02:07:06.305881761Z" level=error msg="Failed to destroy network for sandbox \"ef70a28e78ddfabe84c5c16eaeb01881d73916662c587fbca43d9214d09b4387\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:06.307337 containerd[1542]: time="2025-08-13T02:07:06.307171899Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65456dc94b-mmvb5,Uid:218449a1-8522-470d-afd0-760d9b801a05,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef70a28e78ddfabe84c5c16eaeb01881d73916662c587fbca43d9214d09b4387\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:06.308115 kubelet[2718]: E0813 02:07:06.308015 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef70a28e78ddfabe84c5c16eaeb01881d73916662c587fbca43d9214d09b4387\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:06.308468 kubelet[2718]: E0813 02:07:06.308439 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef70a28e78ddfabe84c5c16eaeb01881d73916662c587fbca43d9214d09b4387\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65456dc94b-mmvb5" Aug 13 02:07:06.308640 kubelet[2718]: E0813 02:07:06.308549 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef70a28e78ddfabe84c5c16eaeb01881d73916662c587fbca43d9214d09b4387\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65456dc94b-mmvb5" Aug 13 02:07:06.309242 kubelet[2718]: E0813 02:07:06.309157 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-65456dc94b-mmvb5_calico-system(218449a1-8522-470d-afd0-760d9b801a05)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-65456dc94b-mmvb5_calico-system(218449a1-8522-470d-afd0-760d9b801a05)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"ef70a28e78ddfabe84c5c16eaeb01881d73916662c587fbca43d9214d09b4387\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-65456dc94b-mmvb5" podUID="218449a1-8522-470d-afd0-760d9b801a05" Aug 13 02:07:06.313287 containerd[1542]: time="2025-08-13T02:07:06.312951991Z" level=error msg="Failed to destroy network for sandbox \"f026f8c12d4330063d1e8c738bc5a7d5854bb73c1e0d3e8292eade71d9fd0374\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:06.314000 containerd[1542]: time="2025-08-13T02:07:06.313970721Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c47cf6bcb-c9c87,Uid:8d84c12b-cfd9-49af-bb2e-a10173126a4c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f026f8c12d4330063d1e8c738bc5a7d5854bb73c1e0d3e8292eade71d9fd0374\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:06.314155 kubelet[2718]: E0813 02:07:06.314107 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f026f8c12d4330063d1e8c738bc5a7d5854bb73c1e0d3e8292eade71d9fd0374\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:06.314155 kubelet[2718]: E0813 02:07:06.314142 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f026f8c12d4330063d1e8c738bc5a7d5854bb73c1e0d3e8292eade71d9fd0374\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:07:06.314211 kubelet[2718]: E0813 02:07:06.314159 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f026f8c12d4330063d1e8c738bc5a7d5854bb73c1e0d3e8292eade71d9fd0374\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:07:06.314211 kubelet[2718]: E0813 02:07:06.314187 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7c47cf6bcb-c9c87_calico-system(8d84c12b-cfd9-49af-bb2e-a10173126a4c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7c47cf6bcb-c9c87_calico-system(8d84c12b-cfd9-49af-bb2e-a10173126a4c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f026f8c12d4330063d1e8c738bc5a7d5854bb73c1e0d3e8292eade71d9fd0374\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" podUID="8d84c12b-cfd9-49af-bb2e-a10173126a4c" Aug 13 02:07:06.980074 systemd[1]: run-netns-cni\x2d54fd8e35\x2d42ff\x2dcf32\x2d02d2\x2d1156559c80ec.mount: Deactivated successfully. Aug 13 02:07:06.980193 systemd[1]: run-netns-cni\x2d25457f53\x2ddb77\x2d5dc9\x2df42f\x2d831fb11e1abc.mount: Deactivated successfully. Aug 13 02:07:06.980262 systemd[1]: run-netns-cni\x2df1813043\x2d3f8d\x2d53c4\x2d53c3\x2d34fee02d7418.mount: Deactivated successfully. Aug 13 02:07:06.980325 systemd[1]: run-netns-cni\x2d3523d926\x2dd61d\x2d4897\x2da4ce\x2d41d5fb353c41.mount: Deactivated successfully. Aug 13 02:07:06.980385 systemd[1]: run-netns-cni\x2dc5f71d98\x2d194e\x2d35c0\x2dac0b\x2dd39f112d3a78.mount: Deactivated successfully. Aug 13 02:07:06.980443 systemd[1]: run-netns-cni\x2df088fb3a\x2dea1f\x2d7dea\x2decfe\x2db533b6f9c521.mount: Deactivated successfully. Aug 13 02:07:06.980503 systemd[1]: run-netns-cni\x2d27d93ff8\x2d92b4\x2dc447\x2d2d69\x2d3281d2c4e815.mount: Deactivated successfully. Aug 13 02:07:08.212468 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount788478230.mount: Deactivated successfully. Aug 13 02:07:08.214573 containerd[1542]: time="2025-08-13T02:07:08.214097669Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount788478230: write /var/lib/containerd/tmpmounts/containerd-mount788478230/usr/bin/calico-node: no space left on device" Aug 13 02:07:08.214573 containerd[1542]: time="2025-08-13T02:07:08.214198358Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 02:07:08.214939 kubelet[2718]: E0813 02:07:08.214358 2718 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount788478230: write /var/lib/containerd/tmpmounts/containerd-mount788478230/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 02:07:08.214939 kubelet[2718]: E0813 02:07:08.214407 2718 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount788478230: write /var/lib/containerd/tmpmounts/containerd-mount788478230/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 02:07:08.215250 kubelet[2718]: E0813 02:07:08.214662 2718 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSGOLDMANESERVER,Value:goldmane.calico-system.svc:7443,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSFLUSHINTERVAL,Value:15,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:first-found,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.l
ock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j884b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-cdfxj_calico-system(e8f51745-7382-4ead-96df-a31572ad4e1f): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount788478230: write /var/lib/containerd/tmpmounts/containerd-mount788478230/usr/bin/calico-node: no space left on device" logger="UnhandledError" Aug 13 02:07:08.216183 kubelet[2718]: E0813 02:07:08.216124 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount788478230: write /var/lib/containerd/tmpmounts/containerd-mount788478230/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-cdfxj" podUID="e8f51745-7382-4ead-96df-a31572ad4e1f" Aug 13 02:07:09.106697 kubelet[2718]: E0813 02:07:09.106074 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount788478230: write /var/lib/containerd/tmpmounts/containerd-mount788478230/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-cdfxj" podUID="e8f51745-7382-4ead-96df-a31572ad4e1f" Aug 13 02:07:10.082978 kubelet[2718]: I0813 02:07:10.082933 
2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 02:07:10.082978 kubelet[2718]: I0813 02:07:10.082982 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 02:07:10.085531 kubelet[2718]: I0813 02:07:10.085490 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 02:07:10.095919 kubelet[2718]: I0813 02:07:10.095879 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 02:07:10.095985 kubelet[2718]: I0813 02:07:10.095958 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/goldmane-768f4c5c69-ftlj6","calico-apiserver/calico-apiserver-859c474dd6-gnh2j","calico-apiserver/calico-apiserver-859c474dd6-b6nnn","calico-system/whisker-65456dc94b-mmvb5","calico-system/calico-kube-controllers-7c47cf6bcb-c9c87","kube-system/coredns-668d6bf9bc-p5qmw","kube-system/coredns-668d6bf9bc-pw6gg","calico-system/csi-node-driver-r6mhv","calico-system/calico-node-cdfxj","tigera-operator/tigera-operator-747864d56d-nxhh9","calico-system/calico-typha-67c8447dcf-wsn77","kube-system/kube-controller-manager-172-236-122-171","kube-system/kube-proxy-s4bl4","kube-system/kube-apiserver-172-236-122-171","kube-system/kube-scheduler-172-236-122-171"] Aug 13 02:07:10.101392 kubelet[2718]: I0813 02:07:10.101373 2718 eviction_manager.go:627] "Eviction manager: pod is evicted successfully" pod="calico-system/goldmane-768f4c5c69-ftlj6" Aug 13 02:07:10.101620 kubelet[2718]: I0813 02:07:10.101585 2718 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-system/goldmane-768f4c5c69-ftlj6"] Aug 13 02:07:10.127614 kubelet[2718]: I0813 02:07:10.127440 2718 kubelet.go:2351] "Pod admission denied" podUID="7e10f584-26cd-4e29-9cf8-f3f0b5c479eb" pod="calico-system/goldmane-768f4c5c69-lhm7r" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:07:10.149970 kubelet[2718]: I0813 02:07:10.149934 2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6bwzq\" (UniqueName: \"kubernetes.io/projected/4e779ed2-d5a9-40cb-95c0-450f4781223d-kube-api-access-6bwzq\") pod \"4e779ed2-d5a9-40cb-95c0-450f4781223d\" (UID: \"4e779ed2-d5a9-40cb-95c0-450f4781223d\") " Aug 13 02:07:10.150718 kubelet[2718]: I0813 02:07:10.150577 2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e779ed2-d5a9-40cb-95c0-450f4781223d-config\") pod \"4e779ed2-d5a9-40cb-95c0-450f4781223d\" (UID: \"4e779ed2-d5a9-40cb-95c0-450f4781223d\") " Aug 13 02:07:10.150718 kubelet[2718]: I0813 02:07:10.150662 2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e779ed2-d5a9-40cb-95c0-450f4781223d-goldmane-ca-bundle\") pod \"4e779ed2-d5a9-40cb-95c0-450f4781223d\" (UID: \"4e779ed2-d5a9-40cb-95c0-450f4781223d\") " Aug 13 02:07:10.150988 kubelet[2718]: I0813 02:07:10.150904 2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/4e779ed2-d5a9-40cb-95c0-450f4781223d-goldmane-key-pair\") pod \"4e779ed2-d5a9-40cb-95c0-450f4781223d\" (UID: \"4e779ed2-d5a9-40cb-95c0-450f4781223d\") " Aug 13 02:07:10.151683 kubelet[2718]: I0813 02:07:10.151560 2718 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e779ed2-d5a9-40cb-95c0-450f4781223d-config" (OuterVolumeSpecName: "config") pod "4e779ed2-d5a9-40cb-95c0-450f4781223d" (UID: "4e779ed2-d5a9-40cb-95c0-450f4781223d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 02:07:10.153034 kubelet[2718]: I0813 02:07:10.152920 2718 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e779ed2-d5a9-40cb-95c0-450f4781223d-goldmane-ca-bundle" (OuterVolumeSpecName: "goldmane-ca-bundle") pod "4e779ed2-d5a9-40cb-95c0-450f4781223d" (UID: "4e779ed2-d5a9-40cb-95c0-450f4781223d"). InnerVolumeSpecName "goldmane-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 02:07:10.158168 systemd[1]: var-lib-kubelet-pods-4e779ed2\x2dd5a9\x2d40cb\x2d95c0\x2d450f4781223d-volumes-kubernetes.io\x7esecret-goldmane\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 02:07:10.163264 kubelet[2718]: I0813 02:07:10.162813 2718 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e779ed2-d5a9-40cb-95c0-450f4781223d-kube-api-access-6bwzq" (OuterVolumeSpecName: "kube-api-access-6bwzq") pod "4e779ed2-d5a9-40cb-95c0-450f4781223d" (UID: "4e779ed2-d5a9-40cb-95c0-450f4781223d"). InnerVolumeSpecName "kube-api-access-6bwzq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 02:07:10.164132 systemd[1]: var-lib-kubelet-pods-4e779ed2\x2dd5a9\x2d40cb\x2d95c0\x2d450f4781223d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6bwzq.mount: Deactivated successfully. Aug 13 02:07:10.164330 kubelet[2718]: I0813 02:07:10.164276 2718 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e779ed2-d5a9-40cb-95c0-450f4781223d-goldmane-key-pair" (OuterVolumeSpecName: "goldmane-key-pair") pod "4e779ed2-d5a9-40cb-95c0-450f4781223d" (UID: "4e779ed2-d5a9-40cb-95c0-450f4781223d"). InnerVolumeSpecName "goldmane-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 02:07:10.169380 kubelet[2718]: I0813 02:07:10.169326 2718 kubelet.go:2351] "Pod admission denied" podUID="3364e39a-95c5-42e7-a0d5-848b3038547a" pod="calico-system/goldmane-768f4c5c69-8rj6p" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:10.211898 kubelet[2718]: I0813 02:07:10.211740 2718 kubelet.go:2351] "Pod admission denied" podUID="1461b174-52a2-4a09-aba3-e85229d56b07" pod="calico-system/goldmane-768f4c5c69-h2zr2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:10.245749 kubelet[2718]: I0813 02:07:10.245699 2718 kubelet.go:2351] "Pod admission denied" podUID="442ab17c-8332-4900-864e-69d6ac3b3389" pod="calico-system/goldmane-768f4c5c69-4zb2t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:10.252085 kubelet[2718]: I0813 02:07:10.252006 2718 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4e779ed2-d5a9-40cb-95c0-450f4781223d-config\") on node \"172-236-122-171\" DevicePath \"\"" Aug 13 02:07:10.252085 kubelet[2718]: I0813 02:07:10.252042 2718 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6bwzq\" (UniqueName: \"kubernetes.io/projected/4e779ed2-d5a9-40cb-95c0-450f4781223d-kube-api-access-6bwzq\") on node \"172-236-122-171\" DevicePath \"\"" Aug 13 02:07:10.252085 kubelet[2718]: I0813 02:07:10.252052 2718 reconciler_common.go:299] "Volume detached for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4e779ed2-d5a9-40cb-95c0-450f4781223d-goldmane-ca-bundle\") on node \"172-236-122-171\" DevicePath \"\"" Aug 13 02:07:10.252085 kubelet[2718]: I0813 02:07:10.252065 2718 reconciler_common.go:299] "Volume detached for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/4e779ed2-d5a9-40cb-95c0-450f4781223d-goldmane-key-pair\") on node \"172-236-122-171\" DevicePath \"\"" Aug 13 02:07:10.279970 kubelet[2718]: I0813 02:07:10.279281 2718 kubelet.go:2351] "Pod admission denied" podUID="5e13630b-fcd6-4c71-8c36-c513cfaef432" pod="calico-system/goldmane-768f4c5c69-dh4g7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:10.322546 kubelet[2718]: I0813 02:07:10.322402 2718 kubelet.go:2351] "Pod admission denied" podUID="8233d358-a32e-41ac-812a-d94a2b380bf8" pod="calico-system/goldmane-768f4c5c69-hjkcm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:10.363342 kubelet[2718]: I0813 02:07:10.362634 2718 kubelet.go:2351] "Pod admission denied" podUID="54977b06-7bf0-4b25-8a34-9b33ff767678" pod="calico-system/goldmane-768f4c5c69-pvm86" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:10.384510 kubelet[2718]: I0813 02:07:10.384463 2718 kubelet.go:2351] "Pod admission denied" podUID="aa62d6df-0e66-499d-afd2-387f4f2c9ebe" pod="calico-system/goldmane-768f4c5c69-swq64" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:10.404296 kubelet[2718]: I0813 02:07:10.404080 2718 kubelet.go:2351] "Pod admission denied" podUID="050d8c8f-401f-4c0c-90d1-3308c32f96b3" pod="calico-system/goldmane-768f4c5c69-dtnh5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:10.417428 systemd[1]: Removed slice kubepods-besteffort-pod4e779ed2_d5a9_40cb_95c0_450f4781223d.slice - libcontainer container kubepods-besteffort-pod4e779ed2_d5a9_40cb_95c0_450f4781223d.slice. 
Aug 13 02:07:10.470620 kubelet[2718]: I0813 02:07:10.470562 2718 kubelet.go:2351] "Pod admission denied" podUID="3f07b918-f960-4c06-8d4f-6eb37fb126b5" pod="calico-system/goldmane-768f4c5c69-jwxfc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:10.472030 kubelet[2718]: I0813 02:07:10.472001 2718 status_manager.go:890] "Failed to get status for pod" podUID="3f07b918-f960-4c06-8d4f-6eb37fb126b5" pod="calico-system/goldmane-768f4c5c69-jwxfc" err="pods \"goldmane-768f4c5c69-jwxfc\" is forbidden: User \"system:node:172-236-122-171\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node '172-236-122-171' and this object" Aug 13 02:07:11.102275 kubelet[2718]: I0813 02:07:11.102193 2718 eviction_manager.go:458] "Eviction manager: pods successfully cleaned up" pods=["calico-system/goldmane-768f4c5c69-ftlj6"] Aug 13 02:07:14.083661 kubelet[2718]: I0813 02:07:14.082964 2718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 02:07:14.083661 kubelet[2718]: E0813 02:07:14.083354 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:07:14.116802 kubelet[2718]: E0813 02:07:14.116731 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:07:17.935864 containerd[1542]: time="2025-08-13T02:07:17.935787874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65456dc94b-mmvb5,Uid:218449a1-8522-470d-afd0-760d9b801a05,Namespace:calico-system,Attempt:0,}" Aug 13 02:07:17.979586 containerd[1542]: time="2025-08-13T02:07:17.979532903Z" level=error msg="Failed to destroy network for sandbox \"2b9e832c69fd9affa4f56328f0cb6f024326b9f19480676cf22517910be79055\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:17.982229 systemd[1]: run-netns-cni\x2db90f0634\x2dd630\x2d484d\x2df6e1\x2d96bdc8d61c54.mount: Deactivated successfully. 
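The recurring "Nameserver limits exceeded" warnings mean the kubelet truncated the pod resolv.conf: only the first three nameservers are applied (172.232.0.16, 172.232.0.21, 172.232.0.13 here) and the rest are dropped. A small sketch of that truncation, assuming the three-entry cap implied by the applied line in the warnings above:

# Sketch of the truncation behind "Nameserver limits exceeded", assuming the
# three-nameserver cap implied by the applied line in the warnings above.
MAX_NAMESERVERS = 3

def applied_nameservers(resolv_conf_text: str) -> list[str]:
    servers = [line.split()[1]
               for line in resolv_conf_text.splitlines()
               if line.startswith("nameserver") and len(line.split()) > 1]
    return servers[:MAX_NAMESERVERS]  # entries beyond the cap are the ones "omitted"

# Hypothetical resolv.conf with one nameserver too many: the first three match
# the applied line in the log, the fourth is invented for illustration.
example = "\n".join("nameserver " + ip for ip in
                    ["172.232.0.16", "172.232.0.21", "172.232.0.13", "10.0.0.53"])
print(applied_nameservers(example))  # ['172.232.0.16', '172.232.0.21', '172.232.0.13']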
Aug 13 02:07:17.982794 containerd[1542]: time="2025-08-13T02:07:17.982553393Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-65456dc94b-mmvb5,Uid:218449a1-8522-470d-afd0-760d9b801a05,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b9e832c69fd9affa4f56328f0cb6f024326b9f19480676cf22517910be79055\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:17.983657 kubelet[2718]: E0813 02:07:17.983021 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b9e832c69fd9affa4f56328f0cb6f024326b9f19480676cf22517910be79055\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:17.983657 kubelet[2718]: E0813 02:07:17.983082 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b9e832c69fd9affa4f56328f0cb6f024326b9f19480676cf22517910be79055\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65456dc94b-mmvb5" Aug 13 02:07:17.983657 kubelet[2718]: E0813 02:07:17.983105 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2b9e832c69fd9affa4f56328f0cb6f024326b9f19480676cf22517910be79055\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-65456dc94b-mmvb5" Aug 13 02:07:17.983657 kubelet[2718]: E0813 02:07:17.983153 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-65456dc94b-mmvb5_calico-system(218449a1-8522-470d-afd0-760d9b801a05)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-65456dc94b-mmvb5_calico-system(218449a1-8522-470d-afd0-760d9b801a05)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2b9e832c69fd9affa4f56328f0cb6f024326b9f19480676cf22517910be79055\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-65456dc94b-mmvb5" podUID="218449a1-8522-470d-afd0-760d9b801a05" Aug 13 02:07:18.936515 kubelet[2718]: E0813 02:07:18.936158 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:07:18.938179 containerd[1542]: time="2025-08-13T02:07:18.937748921Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r6mhv,Uid:b671177d-3397-4938-853c-0cced3d0e9f5,Namespace:calico-system,Attempt:0,}" Aug 13 02:07:18.938179 containerd[1542]: time="2025-08-13T02:07:18.937752181Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pw6gg,Uid:e73a6876-bbb3-4e11-8a33-1945cf27a944,Namespace:kube-system,Attempt:0,}" Aug 13 
02:07:19.019258 containerd[1542]: time="2025-08-13T02:07:19.019078808Z" level=error msg="Failed to destroy network for sandbox \"e4bdf123bb08af23f9cbe0cd3a5a0d8c82e57d4d9ec5da465383a60898fd6e02\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:19.022877 systemd[1]: run-netns-cni\x2df65c21e4\x2d0f53\x2dfd2a\x2d3681\x2d93802bd3b34b.mount: Deactivated successfully. Aug 13 02:07:19.024362 containerd[1542]: time="2025-08-13T02:07:19.024275716Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pw6gg,Uid:e73a6876-bbb3-4e11-8a33-1945cf27a944,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4bdf123bb08af23f9cbe0cd3a5a0d8c82e57d4d9ec5da465383a60898fd6e02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:19.027130 kubelet[2718]: E0813 02:07:19.024988 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4bdf123bb08af23f9cbe0cd3a5a0d8c82e57d4d9ec5da465383a60898fd6e02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:19.027130 kubelet[2718]: E0813 02:07:19.025670 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4bdf123bb08af23f9cbe0cd3a5a0d8c82e57d4d9ec5da465383a60898fd6e02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:07:19.027130 kubelet[2718]: E0813 02:07:19.025698 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4bdf123bb08af23f9cbe0cd3a5a0d8c82e57d4d9ec5da465383a60898fd6e02\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:07:19.028842 kubelet[2718]: E0813 02:07:19.028646 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-pw6gg_kube-system(e73a6876-bbb3-4e11-8a33-1945cf27a944)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-pw6gg_kube-system(e73a6876-bbb3-4e11-8a33-1945cf27a944)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4bdf123bb08af23f9cbe0cd3a5a0d8c82e57d4d9ec5da465383a60898fd6e02\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-pw6gg" podUID="e73a6876-bbb3-4e11-8a33-1945cf27a944" Aug 13 02:07:19.033348 containerd[1542]: time="2025-08-13T02:07:19.033294172Z" level=error msg="Failed to destroy network for sandbox \"52620b0d68a1dddfc72b8475f803fe5f30409151ac3dc8a3a0ada812087b1620\"" error="plugin type=\"calico\" 
failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:19.034679 containerd[1542]: time="2025-08-13T02:07:19.034651604Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r6mhv,Uid:b671177d-3397-4938-853c-0cced3d0e9f5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"52620b0d68a1dddfc72b8475f803fe5f30409151ac3dc8a3a0ada812087b1620\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:19.034982 kubelet[2718]: E0813 02:07:19.034962 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52620b0d68a1dddfc72b8475f803fe5f30409151ac3dc8a3a0ada812087b1620\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:19.035093 kubelet[2718]: E0813 02:07:19.035075 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52620b0d68a1dddfc72b8475f803fe5f30409151ac3dc8a3a0ada812087b1620\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:07:19.035436 kubelet[2718]: E0813 02:07:19.035171 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"52620b0d68a1dddfc72b8475f803fe5f30409151ac3dc8a3a0ada812087b1620\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:07:19.035556 kubelet[2718]: E0813 02:07:19.035535 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r6mhv_calico-system(b671177d-3397-4938-853c-0cced3d0e9f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-r6mhv_calico-system(b671177d-3397-4938-853c-0cced3d0e9f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"52620b0d68a1dddfc72b8475f803fe5f30409151ac3dc8a3a0ada812087b1620\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r6mhv" podUID="b671177d-3397-4938-853c-0cced3d0e9f5" Aug 13 02:07:19.037197 systemd[1]: run-netns-cni\x2d57c7c96d\x2d431f\x2df8e2\x2d069a\x2d696701969eb9.mount: Deactivated successfully. 
Aug 13 02:07:19.936788 kubelet[2718]: E0813 02:07:19.936221 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:07:19.937378 containerd[1542]: time="2025-08-13T02:07:19.937336949Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p5qmw,Uid:26fd4059-1e9c-49a2-9bd9-181be9ad7bcb,Namespace:kube-system,Attempt:0,}" Aug 13 02:07:20.001092 containerd[1542]: time="2025-08-13T02:07:20.001031215Z" level=error msg="Failed to destroy network for sandbox \"682c4af3bff382e3d3df7cb7067dd185b34b2601f05f67a62c010292148f2dc0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:20.005149 containerd[1542]: time="2025-08-13T02:07:20.004997922Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p5qmw,Uid:26fd4059-1e9c-49a2-9bd9-181be9ad7bcb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"682c4af3bff382e3d3df7cb7067dd185b34b2601f05f67a62c010292148f2dc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:20.005303 kubelet[2718]: E0813 02:07:20.005257 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"682c4af3bff382e3d3df7cb7067dd185b34b2601f05f67a62c010292148f2dc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:20.005361 kubelet[2718]: E0813 02:07:20.005324 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"682c4af3bff382e3d3df7cb7067dd185b34b2601f05f67a62c010292148f2dc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:07:20.005361 kubelet[2718]: E0813 02:07:20.005345 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"682c4af3bff382e3d3df7cb7067dd185b34b2601f05f67a62c010292148f2dc0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:07:20.005828 kubelet[2718]: E0813 02:07:20.005785 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-p5qmw_kube-system(26fd4059-1e9c-49a2-9bd9-181be9ad7bcb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-p5qmw_kube-system(26fd4059-1e9c-49a2-9bd9-181be9ad7bcb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"682c4af3bff382e3d3df7cb7067dd185b34b2601f05f67a62c010292148f2dc0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-p5qmw" podUID="26fd4059-1e9c-49a2-9bd9-181be9ad7bcb" Aug 13 02:07:20.006221 systemd[1]: run-netns-cni\x2d8320511b\x2dc6ac\x2d05fd\x2de739\x2d2cd20b26e6ec.mount: Deactivated successfully. Aug 13 02:07:20.936624 containerd[1542]: time="2025-08-13T02:07:20.936529353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-859c474dd6-b6nnn,Uid:7c1d6d84-89e5-4a8a-a62c-c7b5324f50d6,Namespace:calico-apiserver,Attempt:0,}" Aug 13 02:07:20.938153 containerd[1542]: time="2025-08-13T02:07:20.937084250Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-859c474dd6-gnh2j,Uid:4c710e16-eab0-49cd-a9c5-57a63929a2ce,Namespace:calico-apiserver,Attempt:0,}" Aug 13 02:07:20.938579 containerd[1542]: time="2025-08-13T02:07:20.938419952Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 02:07:21.025976 containerd[1542]: time="2025-08-13T02:07:21.025841556Z" level=error msg="Failed to destroy network for sandbox \"aab9ea6ec0face9b59617e8bb2bfc0f27c882fec462f4c0013bdbe22d7143ceb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:21.028854 containerd[1542]: time="2025-08-13T02:07:21.028793349Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-859c474dd6-gnh2j,Uid:4c710e16-eab0-49cd-a9c5-57a63929a2ce,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aab9ea6ec0face9b59617e8bb2bfc0f27c882fec462f4c0013bdbe22d7143ceb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:21.029265 kubelet[2718]: E0813 02:07:21.029217 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aab9ea6ec0face9b59617e8bb2bfc0f27c882fec462f4c0013bdbe22d7143ceb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:21.031277 kubelet[2718]: E0813 02:07:21.029266 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aab9ea6ec0face9b59617e8bb2bfc0f27c882fec462f4c0013bdbe22d7143ceb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-859c474dd6-gnh2j" Aug 13 02:07:21.031277 kubelet[2718]: E0813 02:07:21.029287 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aab9ea6ec0face9b59617e8bb2bfc0f27c882fec462f4c0013bdbe22d7143ceb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-859c474dd6-gnh2j" Aug 13 02:07:21.031277 kubelet[2718]: E0813 02:07:21.029334 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-apiserver-859c474dd6-gnh2j_calico-apiserver(4c710e16-eab0-49cd-a9c5-57a63929a2ce)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-859c474dd6-gnh2j_calico-apiserver(4c710e16-eab0-49cd-a9c5-57a63929a2ce)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aab9ea6ec0face9b59617e8bb2bfc0f27c882fec462f4c0013bdbe22d7143ceb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-859c474dd6-gnh2j" podUID="4c710e16-eab0-49cd-a9c5-57a63929a2ce" Aug 13 02:07:21.029744 systemd[1]: run-netns-cni\x2d407c770f\x2dfb3f\x2de24b\x2d2bc5\x2df83c2138c6df.mount: Deactivated successfully. Aug 13 02:07:21.035614 containerd[1542]: time="2025-08-13T02:07:21.035553000Z" level=error msg="Failed to destroy network for sandbox \"92fd2925377059f26b1e6254013ece051738648444e0f87c38e4ff2019e8bd09\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:21.038577 containerd[1542]: time="2025-08-13T02:07:21.038535444Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-859c474dd6-b6nnn,Uid:7c1d6d84-89e5-4a8a-a62c-c7b5324f50d6,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"92fd2925377059f26b1e6254013ece051738648444e0f87c38e4ff2019e8bd09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:21.038771 systemd[1]: run-netns-cni\x2d607fd879\x2d2619\x2deb05\x2d0c22\x2d0b3f87fd75c6.mount: Deactivated successfully. 
Aug 13 02:07:21.039493 kubelet[2718]: E0813 02:07:21.039469 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92fd2925377059f26b1e6254013ece051738648444e0f87c38e4ff2019e8bd09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:21.039556 kubelet[2718]: E0813 02:07:21.039504 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92fd2925377059f26b1e6254013ece051738648444e0f87c38e4ff2019e8bd09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-859c474dd6-b6nnn" Aug 13 02:07:21.039556 kubelet[2718]: E0813 02:07:21.039524 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92fd2925377059f26b1e6254013ece051738648444e0f87c38e4ff2019e8bd09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-859c474dd6-b6nnn" Aug 13 02:07:21.039640 kubelet[2718]: E0813 02:07:21.039570 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-859c474dd6-b6nnn_calico-apiserver(7c1d6d84-89e5-4a8a-a62c-c7b5324f50d6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-859c474dd6-b6nnn_calico-apiserver(7c1d6d84-89e5-4a8a-a62c-c7b5324f50d6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"92fd2925377059f26b1e6254013ece051738648444e0f87c38e4ff2019e8bd09\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-859c474dd6-b6nnn" podUID="7c1d6d84-89e5-4a8a-a62c-c7b5324f50d6" Aug 13 02:07:21.129900 kubelet[2718]: I0813 02:07:21.129870 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 02:07:21.129900 kubelet[2718]: I0813 02:07:21.129908 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 02:07:21.132806 kubelet[2718]: I0813 02:07:21.132791 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 02:07:21.151647 kubelet[2718]: I0813 02:07:21.150826 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 02:07:21.151647 kubelet[2718]: I0813 02:07:21.150923 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" 
pods=["calico-apiserver/calico-apiserver-859c474dd6-gnh2j","calico-system/whisker-65456dc94b-mmvb5","calico-apiserver/calico-apiserver-859c474dd6-b6nnn","kube-system/coredns-668d6bf9bc-pw6gg","calico-system/calico-kube-controllers-7c47cf6bcb-c9c87","kube-system/coredns-668d6bf9bc-p5qmw","calico-system/calico-node-cdfxj","calico-system/csi-node-driver-r6mhv","tigera-operator/tigera-operator-747864d56d-nxhh9","calico-system/calico-typha-67c8447dcf-wsn77","kube-system/kube-controller-manager-172-236-122-171","kube-system/kube-proxy-s4bl4","kube-system/kube-apiserver-172-236-122-171","kube-system/kube-scheduler-172-236-122-171"] Aug 13 02:07:21.162360 kubelet[2718]: I0813 02:07:21.162335 2718 eviction_manager.go:627] "Eviction manager: pod is evicted successfully" pod="calico-apiserver/calico-apiserver-859c474dd6-gnh2j" Aug 13 02:07:21.162360 kubelet[2718]: I0813 02:07:21.162355 2718 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-apiserver/calico-apiserver-859c474dd6-gnh2j"] Aug 13 02:07:21.223565 kubelet[2718]: I0813 02:07:21.222914 2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9rkvm\" (UniqueName: \"kubernetes.io/projected/4c710e16-eab0-49cd-a9c5-57a63929a2ce-kube-api-access-9rkvm\") pod \"4c710e16-eab0-49cd-a9c5-57a63929a2ce\" (UID: \"4c710e16-eab0-49cd-a9c5-57a63929a2ce\") " Aug 13 02:07:21.223565 kubelet[2718]: I0813 02:07:21.222958 2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4c710e16-eab0-49cd-a9c5-57a63929a2ce-calico-apiserver-certs\") pod \"4c710e16-eab0-49cd-a9c5-57a63929a2ce\" (UID: \"4c710e16-eab0-49cd-a9c5-57a63929a2ce\") " Aug 13 02:07:21.228755 kubelet[2718]: I0813 02:07:21.228721 2718 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4c710e16-eab0-49cd-a9c5-57a63929a2ce-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "4c710e16-eab0-49cd-a9c5-57a63929a2ce" (UID: "4c710e16-eab0-49cd-a9c5-57a63929a2ce"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 02:07:21.229954 systemd[1]: var-lib-kubelet-pods-4c710e16\x2deab0\x2d49cd\x2da9c5\x2d57a63929a2ce-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Aug 13 02:07:21.233180 kubelet[2718]: I0813 02:07:21.233155 2718 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c710e16-eab0-49cd-a9c5-57a63929a2ce-kube-api-access-9rkvm" (OuterVolumeSpecName: "kube-api-access-9rkvm") pod "4c710e16-eab0-49cd-a9c5-57a63929a2ce" (UID: "4c710e16-eab0-49cd-a9c5-57a63929a2ce"). InnerVolumeSpecName "kube-api-access-9rkvm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 02:07:21.233517 systemd[1]: var-lib-kubelet-pods-4c710e16\x2deab0\x2d49cd\x2da9c5\x2d57a63929a2ce-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9rkvm.mount: Deactivated successfully. 
Aug 13 02:07:21.324126 kubelet[2718]: I0813 02:07:21.324088 2718 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9rkvm\" (UniqueName: \"kubernetes.io/projected/4c710e16-eab0-49cd-a9c5-57a63929a2ce-kube-api-access-9rkvm\") on node \"172-236-122-171\" DevicePath \"\"" Aug 13 02:07:21.324126 kubelet[2718]: I0813 02:07:21.324110 2718 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/4c710e16-eab0-49cd-a9c5-57a63929a2ce-calico-apiserver-certs\") on node \"172-236-122-171\" DevicePath \"\"" Aug 13 02:07:21.937639 containerd[1542]: time="2025-08-13T02:07:21.937075773Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c47cf6bcb-c9c87,Uid:8d84c12b-cfd9-49af-bb2e-a10173126a4c,Namespace:calico-system,Attempt:0,}" Aug 13 02:07:21.957444 systemd[1]: Removed slice kubepods-besteffort-pod4c710e16_eab0_49cd_a9c5_57a63929a2ce.slice - libcontainer container kubepods-besteffort-pod4c710e16_eab0_49cd_a9c5_57a63929a2ce.slice. Aug 13 02:07:22.042426 containerd[1542]: time="2025-08-13T02:07:22.042377482Z" level=error msg="Failed to destroy network for sandbox \"d884c61fea05171e424127a195f074adf282326fd4328aebde6d2682c2438523\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:22.047112 containerd[1542]: time="2025-08-13T02:07:22.043543206Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c47cf6bcb-c9c87,Uid:8d84c12b-cfd9-49af-bb2e-a10173126a4c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d884c61fea05171e424127a195f074adf282326fd4328aebde6d2682c2438523\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:22.047179 kubelet[2718]: E0813 02:07:22.046731 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d884c61fea05171e424127a195f074adf282326fd4328aebde6d2682c2438523\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:22.047179 kubelet[2718]: E0813 02:07:22.046774 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d884c61fea05171e424127a195f074adf282326fd4328aebde6d2682c2438523\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:07:22.047179 kubelet[2718]: E0813 02:07:22.046794 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d884c61fea05171e424127a195f074adf282326fd4328aebde6d2682c2438523\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:07:22.046465 systemd[1]: 
run-netns-cni\x2d5a6282e1\x2df85c\x2d8742\x2dec88\x2d42b2306af118.mount: Deactivated successfully. Aug 13 02:07:22.049782 kubelet[2718]: E0813 02:07:22.048265 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7c47cf6bcb-c9c87_calico-system(8d84c12b-cfd9-49af-bb2e-a10173126a4c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7c47cf6bcb-c9c87_calico-system(8d84c12b-cfd9-49af-bb2e-a10173126a4c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d884c61fea05171e424127a195f074adf282326fd4328aebde6d2682c2438523\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" podUID="8d84c12b-cfd9-49af-bb2e-a10173126a4c" Aug 13 02:07:22.163126 kubelet[2718]: I0813 02:07:22.163088 2718 eviction_manager.go:458] "Eviction manager: pods successfully cleaned up" pods=["calico-apiserver/calico-apiserver-859c474dd6-gnh2j"] Aug 13 02:07:22.176621 kubelet[2718]: I0813 02:07:22.176563 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 02:07:22.176757 kubelet[2718]: I0813 02:07:22.176627 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 02:07:22.180371 kubelet[2718]: I0813 02:07:22.180353 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 02:07:22.199068 kubelet[2718]: I0813 02:07:22.198898 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 02:07:22.199754 kubelet[2718]: I0813 02:07:22.199713 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/whisker-65456dc94b-mmvb5","calico-apiserver/calico-apiserver-859c474dd6-b6nnn","kube-system/coredns-668d6bf9bc-pw6gg","kube-system/coredns-668d6bf9bc-p5qmw","calico-system/calico-kube-controllers-7c47cf6bcb-c9c87","calico-system/csi-node-driver-r6mhv","calico-system/calico-node-cdfxj","tigera-operator/tigera-operator-747864d56d-nxhh9","calico-system/calico-typha-67c8447dcf-wsn77","kube-system/kube-controller-manager-172-236-122-171","kube-system/kube-proxy-s4bl4","kube-system/kube-apiserver-172-236-122-171","kube-system/kube-scheduler-172-236-122-171"] Aug 13 02:07:22.208141 kubelet[2718]: I0813 02:07:22.208120 2718 eviction_manager.go:627] "Eviction manager: pod is evicted successfully" pod="calico-system/whisker-65456dc94b-mmvb5" Aug 13 02:07:22.208141 kubelet[2718]: I0813 02:07:22.208139 2718 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-system/whisker-65456dc94b-mmvb5"] Aug 13 02:07:22.330049 kubelet[2718]: I0813 02:07:22.329987 2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfmnq\" (UniqueName: \"kubernetes.io/projected/218449a1-8522-470d-afd0-760d9b801a05-kube-api-access-kfmnq\") pod \"218449a1-8522-470d-afd0-760d9b801a05\" (UID: \"218449a1-8522-470d-afd0-760d9b801a05\") " Aug 13 02:07:22.330049 kubelet[2718]: I0813 02:07:22.330035 2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/218449a1-8522-470d-afd0-760d9b801a05-whisker-backend-key-pair\") pod \"218449a1-8522-470d-afd0-760d9b801a05\" (UID: 
\"218449a1-8522-470d-afd0-760d9b801a05\") " Aug 13 02:07:22.330049 kubelet[2718]: I0813 02:07:22.330053 2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/218449a1-8522-470d-afd0-760d9b801a05-whisker-ca-bundle\") pod \"218449a1-8522-470d-afd0-760d9b801a05\" (UID: \"218449a1-8522-470d-afd0-760d9b801a05\") " Aug 13 02:07:22.330710 kubelet[2718]: I0813 02:07:22.330658 2718 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/218449a1-8522-470d-afd0-760d9b801a05-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "218449a1-8522-470d-afd0-760d9b801a05" (UID: "218449a1-8522-470d-afd0-760d9b801a05"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 02:07:22.335208 kubelet[2718]: I0813 02:07:22.335175 2718 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/218449a1-8522-470d-afd0-760d9b801a05-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "218449a1-8522-470d-afd0-760d9b801a05" (UID: "218449a1-8522-470d-afd0-760d9b801a05"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 02:07:22.339078 systemd[1]: var-lib-kubelet-pods-218449a1\x2d8522\x2d470d\x2dafd0\x2d760d9b801a05-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 02:07:22.339770 kubelet[2718]: I0813 02:07:22.339741 2718 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/218449a1-8522-470d-afd0-760d9b801a05-kube-api-access-kfmnq" (OuterVolumeSpecName: "kube-api-access-kfmnq") pod "218449a1-8522-470d-afd0-760d9b801a05" (UID: "218449a1-8522-470d-afd0-760d9b801a05"). InnerVolumeSpecName "kube-api-access-kfmnq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 02:07:22.344220 systemd[1]: var-lib-kubelet-pods-218449a1\x2d8522\x2d470d\x2dafd0\x2d760d9b801a05-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkfmnq.mount: Deactivated successfully. 
Aug 13 02:07:22.431106 kubelet[2718]: I0813 02:07:22.431069 2718 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/218449a1-8522-470d-afd0-760d9b801a05-whisker-ca-bundle\") on node \"172-236-122-171\" DevicePath \"\"" Aug 13 02:07:22.431106 kubelet[2718]: I0813 02:07:22.431092 2718 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kfmnq\" (UniqueName: \"kubernetes.io/projected/218449a1-8522-470d-afd0-760d9b801a05-kube-api-access-kfmnq\") on node \"172-236-122-171\" DevicePath \"\"" Aug 13 02:07:22.431106 kubelet[2718]: I0813 02:07:22.431101 2718 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/218449a1-8522-470d-afd0-760d9b801a05-whisker-backend-key-pair\") on node \"172-236-122-171\" DevicePath \"\"" Aug 13 02:07:23.035297 containerd[1542]: time="2025-08-13T02:07:23.033224803Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1545972657: write /var/lib/containerd/tmpmounts/containerd-mount1545972657/usr/bin/calico-node: no space left on device" Aug 13 02:07:23.035297 containerd[1542]: time="2025-08-13T02:07:23.033267222Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 02:07:23.036135 kubelet[2718]: E0813 02:07:23.035659 2718 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1545972657: write /var/lib/containerd/tmpmounts/containerd-mount1545972657/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 02:07:23.036135 kubelet[2718]: E0813 02:07:23.035895 2718 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1545972657: write /var/lib/containerd/tmpmounts/containerd-mount1545972657/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 02:07:23.036267 kubelet[2718]: E0813 02:07:23.036062 2718 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSGOLDMANESERVER,Value:goldmane.calico-system.svc:7443,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSFLUSHINTERVAL,Value:15,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:first-found,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.l
ock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j884b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-cdfxj_calico-system(e8f51745-7382-4ead-96df-a31572ad4e1f): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1545972657: write /var/lib/containerd/tmpmounts/containerd-mount1545972657/usr/bin/calico-node: no space left on device" logger="UnhandledError" Aug 13 02:07:23.037689 kubelet[2718]: E0813 02:07:23.037448 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1545972657: write /var/lib/containerd/tmpmounts/containerd-mount1545972657/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-cdfxj" podUID="e8f51745-7382-4ead-96df-a31572ad4e1f" Aug 13 02:07:23.037836 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1545972657.mount: Deactivated successfully. Aug 13 02:07:23.137141 systemd[1]: Removed slice kubepods-besteffort-pod218449a1_8522_470d_afd0_760d9b801a05.slice - libcontainer container kubepods-besteffort-pod218449a1_8522_470d_afd0_760d9b801a05.slice. 
Aug 13 02:07:23.208562 kubelet[2718]: I0813 02:07:23.208510 2718 eviction_manager.go:458] "Eviction manager: pods successfully cleaned up" pods=["calico-system/whisker-65456dc94b-mmvb5"] Aug 13 02:07:23.218573 kubelet[2718]: I0813 02:07:23.218533 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 02:07:23.218573 kubelet[2718]: I0813 02:07:23.218577 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 02:07:23.221509 kubelet[2718]: I0813 02:07:23.221477 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 02:07:23.232246 kubelet[2718]: I0813 02:07:23.232223 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 02:07:23.232352 kubelet[2718]: I0813 02:07:23.232303 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-apiserver/calico-apiserver-859c474dd6-b6nnn","kube-system/coredns-668d6bf9bc-pw6gg","kube-system/coredns-668d6bf9bc-p5qmw","calico-system/calico-kube-controllers-7c47cf6bcb-c9c87","calico-system/csi-node-driver-r6mhv","calico-system/calico-node-cdfxj","tigera-operator/tigera-operator-747864d56d-nxhh9","calico-system/calico-typha-67c8447dcf-wsn77","kube-system/kube-controller-manager-172-236-122-171","kube-system/kube-proxy-s4bl4","kube-system/kube-apiserver-172-236-122-171","kube-system/kube-scheduler-172-236-122-171"] Aug 13 02:07:23.237701 kubelet[2718]: I0813 02:07:23.237670 2718 eviction_manager.go:627] "Eviction manager: pod is evicted successfully" pod="calico-apiserver/calico-apiserver-859c474dd6-b6nnn" Aug 13 02:07:23.237701 kubelet[2718]: I0813 02:07:23.237689 2718 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-apiserver/calico-apiserver-859c474dd6-b6nnn"] Aug 13 02:07:23.338883 kubelet[2718]: I0813 02:07:23.337978 2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4xwrs\" (UniqueName: \"kubernetes.io/projected/7c1d6d84-89e5-4a8a-a62c-c7b5324f50d6-kube-api-access-4xwrs\") pod \"7c1d6d84-89e5-4a8a-a62c-c7b5324f50d6\" (UID: \"7c1d6d84-89e5-4a8a-a62c-c7b5324f50d6\") " Aug 13 02:07:23.338883 kubelet[2718]: I0813 02:07:23.338025 2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7c1d6d84-89e5-4a8a-a62c-c7b5324f50d6-calico-apiserver-certs\") pod \"7c1d6d84-89e5-4a8a-a62c-c7b5324f50d6\" (UID: \"7c1d6d84-89e5-4a8a-a62c-c7b5324f50d6\") " Aug 13 02:07:23.343118 kubelet[2718]: I0813 02:07:23.343086 2718 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7c1d6d84-89e5-4a8a-a62c-c7b5324f50d6-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "7c1d6d84-89e5-4a8a-a62c-c7b5324f50d6" (UID: "7c1d6d84-89e5-4a8a-a62c-c7b5324f50d6"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 02:07:23.343996 systemd[1]: var-lib-kubelet-pods-7c1d6d84\x2d89e5\x2d4a8a\x2da62c\x2dc7b5324f50d6-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. 
Aug 13 02:07:23.346208 kubelet[2718]: I0813 02:07:23.346177 2718 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7c1d6d84-89e5-4a8a-a62c-c7b5324f50d6-kube-api-access-4xwrs" (OuterVolumeSpecName: "kube-api-access-4xwrs") pod "7c1d6d84-89e5-4a8a-a62c-c7b5324f50d6" (UID: "7c1d6d84-89e5-4a8a-a62c-c7b5324f50d6"). InnerVolumeSpecName "kube-api-access-4xwrs". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 02:07:23.347313 systemd[1]: var-lib-kubelet-pods-7c1d6d84\x2d89e5\x2d4a8a\x2da62c\x2dc7b5324f50d6-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d4xwrs.mount: Deactivated successfully. Aug 13 02:07:23.439121 kubelet[2718]: I0813 02:07:23.439090 2718 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7c1d6d84-89e5-4a8a-a62c-c7b5324f50d6-calico-apiserver-certs\") on node \"172-236-122-171\" DevicePath \"\"" Aug 13 02:07:23.439121 kubelet[2718]: I0813 02:07:23.439112 2718 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4xwrs\" (UniqueName: \"kubernetes.io/projected/7c1d6d84-89e5-4a8a-a62c-c7b5324f50d6-kube-api-access-4xwrs\") on node \"172-236-122-171\" DevicePath \"\"" Aug 13 02:07:23.943514 systemd[1]: Removed slice kubepods-besteffort-pod7c1d6d84_89e5_4a8a_a62c_c7b5324f50d6.slice - libcontainer container kubepods-besteffort-pod7c1d6d84_89e5_4a8a_a62c_c7b5324f50d6.slice. Aug 13 02:07:24.238788 kubelet[2718]: I0813 02:07:24.238635 2718 eviction_manager.go:458] "Eviction manager: pods successfully cleaned up" pods=["calico-apiserver/calico-apiserver-859c474dd6-b6nnn"] Aug 13 02:07:33.936883 kubelet[2718]: E0813 02:07:33.936805 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:07:33.937293 containerd[1542]: time="2025-08-13T02:07:33.936829406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c47cf6bcb-c9c87,Uid:8d84c12b-cfd9-49af-bb2e-a10173126a4c,Namespace:calico-system,Attempt:0,}" Aug 13 02:07:33.938522 containerd[1542]: time="2025-08-13T02:07:33.938288620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r6mhv,Uid:b671177d-3397-4938-853c-0cced3d0e9f5,Namespace:calico-system,Attempt:0,}" Aug 13 02:07:33.938811 kubelet[2718]: E0813 02:07:33.938790 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:07:33.939088 containerd[1542]: time="2025-08-13T02:07:33.939054926Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p5qmw,Uid:26fd4059-1e9c-49a2-9bd9-181be9ad7bcb,Namespace:kube-system,Attempt:0,}" Aug 13 02:07:33.939225 containerd[1542]: time="2025-08-13T02:07:33.939182606Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pw6gg,Uid:e73a6876-bbb3-4e11-8a33-1945cf27a944,Namespace:kube-system,Attempt:0,}" Aug 13 02:07:34.009831 containerd[1542]: time="2025-08-13T02:07:34.009776954Z" level=error msg="Failed to destroy network for sandbox \"8ac97dddd9157837cf1dddef5df9cd8da78e5898d207c6d9b255ecff68b64454\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:34.014441 
containerd[1542]: time="2025-08-13T02:07:34.014371735Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pw6gg,Uid:e73a6876-bbb3-4e11-8a33-1945cf27a944,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ac97dddd9157837cf1dddef5df9cd8da78e5898d207c6d9b255ecff68b64454\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:34.014843 systemd[1]: run-netns-cni\x2d987b2eec\x2d2dfb\x2da172\x2dfed1\x2d49182ed9ca2e.mount: Deactivated successfully. Aug 13 02:07:34.016482 kubelet[2718]: E0813 02:07:34.016340 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ac97dddd9157837cf1dddef5df9cd8da78e5898d207c6d9b255ecff68b64454\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:34.016482 kubelet[2718]: E0813 02:07:34.016432 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ac97dddd9157837cf1dddef5df9cd8da78e5898d207c6d9b255ecff68b64454\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:07:34.016482 kubelet[2718]: E0813 02:07:34.016455 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ac97dddd9157837cf1dddef5df9cd8da78e5898d207c6d9b255ecff68b64454\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:07:34.017027 kubelet[2718]: E0813 02:07:34.016685 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-pw6gg_kube-system(e73a6876-bbb3-4e11-8a33-1945cf27a944)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-pw6gg_kube-system(e73a6876-bbb3-4e11-8a33-1945cf27a944)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ac97dddd9157837cf1dddef5df9cd8da78e5898d207c6d9b255ecff68b64454\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-pw6gg" podUID="e73a6876-bbb3-4e11-8a33-1945cf27a944" Aug 13 02:07:34.056358 containerd[1542]: time="2025-08-13T02:07:34.056306248Z" level=error msg="Failed to destroy network for sandbox \"f5ff5c002260ba421d07178c680b4f43265d1e5b93c74b25867ba1ab41f104ce\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:34.063701 systemd[1]: run-netns-cni\x2d31031054\x2d5d0b\x2d8e01\x2dc585\x2d42fbf495d2f1.mount: Deactivated successfully. 
Aug 13 02:07:34.064959 containerd[1542]: time="2025-08-13T02:07:34.064256815Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r6mhv,Uid:b671177d-3397-4938-853c-0cced3d0e9f5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5ff5c002260ba421d07178c680b4f43265d1e5b93c74b25867ba1ab41f104ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:34.065081 kubelet[2718]: E0813 02:07:34.064514 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5ff5c002260ba421d07178c680b4f43265d1e5b93c74b25867ba1ab41f104ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:34.065081 kubelet[2718]: E0813 02:07:34.064573 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5ff5c002260ba421d07178c680b4f43265d1e5b93c74b25867ba1ab41f104ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:07:34.065081 kubelet[2718]: E0813 02:07:34.064625 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5ff5c002260ba421d07178c680b4f43265d1e5b93c74b25867ba1ab41f104ce\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:07:34.065081 kubelet[2718]: E0813 02:07:34.064691 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r6mhv_calico-system(b671177d-3397-4938-853c-0cced3d0e9f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-r6mhv_calico-system(b671177d-3397-4938-853c-0cced3d0e9f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5ff5c002260ba421d07178c680b4f43265d1e5b93c74b25867ba1ab41f104ce\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r6mhv" podUID="b671177d-3397-4938-853c-0cced3d0e9f5" Aug 13 02:07:34.066226 containerd[1542]: time="2025-08-13T02:07:34.066179997Z" level=error msg="Failed to destroy network for sandbox \"f71f3fc8d5ed1bd56bd876b48344c6f8afed6138ea84473d9ebe2735e87fb25f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:34.068286 containerd[1542]: time="2025-08-13T02:07:34.068156888Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c47cf6bcb-c9c87,Uid:8d84c12b-cfd9-49af-bb2e-a10173126a4c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f71f3fc8d5ed1bd56bd876b48344c6f8afed6138ea84473d9ebe2735e87fb25f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:34.068815 kubelet[2718]: E0813 02:07:34.068692 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f71f3fc8d5ed1bd56bd876b48344c6f8afed6138ea84473d9ebe2735e87fb25f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:34.068983 kubelet[2718]: E0813 02:07:34.068946 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f71f3fc8d5ed1bd56bd876b48344c6f8afed6138ea84473d9ebe2735e87fb25f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:07:34.069024 systemd[1]: run-netns-cni\x2d2a86744e\x2d7a36\x2d7639\x2d7a89\x2d6fe8c773892b.mount: Deactivated successfully. Aug 13 02:07:34.069192 kubelet[2718]: E0813 02:07:34.069078 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f71f3fc8d5ed1bd56bd876b48344c6f8afed6138ea84473d9ebe2735e87fb25f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:07:34.069409 kubelet[2718]: E0813 02:07:34.069124 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7c47cf6bcb-c9c87_calico-system(8d84c12b-cfd9-49af-bb2e-a10173126a4c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7c47cf6bcb-c9c87_calico-system(8d84c12b-cfd9-49af-bb2e-a10173126a4c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f71f3fc8d5ed1bd56bd876b48344c6f8afed6138ea84473d9ebe2735e87fb25f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" podUID="8d84c12b-cfd9-49af-bb2e-a10173126a4c" Aug 13 02:07:34.075582 containerd[1542]: time="2025-08-13T02:07:34.075553567Z" level=error msg="Failed to destroy network for sandbox \"ef38e39c1a02e12885ed63af22abd4197f528e5035a5c3eb8baa250136dcda38\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:34.076309 containerd[1542]: time="2025-08-13T02:07:34.076287414Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p5qmw,Uid:26fd4059-1e9c-49a2-9bd9-181be9ad7bcb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef38e39c1a02e12885ed63af22abd4197f528e5035a5c3eb8baa250136dcda38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:34.076426 kubelet[2718]: E0813 02:07:34.076403 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef38e39c1a02e12885ed63af22abd4197f528e5035a5c3eb8baa250136dcda38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:34.076482 kubelet[2718]: E0813 02:07:34.076437 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef38e39c1a02e12885ed63af22abd4197f528e5035a5c3eb8baa250136dcda38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:07:34.076482 kubelet[2718]: E0813 02:07:34.076453 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef38e39c1a02e12885ed63af22abd4197f528e5035a5c3eb8baa250136dcda38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:07:34.076563 kubelet[2718]: E0813 02:07:34.076510 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-p5qmw_kube-system(26fd4059-1e9c-49a2-9bd9-181be9ad7bcb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-p5qmw_kube-system(26fd4059-1e9c-49a2-9bd9-181be9ad7bcb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ef38e39c1a02e12885ed63af22abd4197f528e5035a5c3eb8baa250136dcda38\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-p5qmw" podUID="26fd4059-1e9c-49a2-9bd9-181be9ad7bcb" Aug 13 02:07:34.265125 kubelet[2718]: I0813 02:07:34.263776 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 02:07:34.265125 kubelet[2718]: I0813 02:07:34.263808 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 02:07:34.269324 kubelet[2718]: I0813 02:07:34.269304 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 02:07:34.280563 kubelet[2718]: I0813 02:07:34.280541 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 02:07:34.280668 kubelet[2718]: I0813 02:07:34.280638 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-668d6bf9bc-p5qmw","calico-system/calico-kube-controllers-7c47cf6bcb-c9c87","kube-system/coredns-668d6bf9bc-pw6gg","calico-system/calico-node-cdfxj","calico-system/csi-node-driver-r6mhv","tigera-operator/tigera-operator-747864d56d-nxhh9","calico-system/calico-typha-67c8447dcf-wsn77","kube-system/kube-controller-manager-172-236-122-171","kube-system/kube-proxy-s4bl4","kube-system/kube-apiserver-172-236-122-171","kube-system/kube-scheduler-172-236-122-171"] Aug 13 
02:07:34.280668 kubelet[2718]: E0813 02:07:34.280663 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:07:34.280913 kubelet[2718]: E0813 02:07:34.280672 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:07:34.280913 kubelet[2718]: E0813 02:07:34.280679 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:07:34.280913 kubelet[2718]: E0813 02:07:34.280685 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-cdfxj" Aug 13 02:07:34.280913 kubelet[2718]: E0813 02:07:34.280691 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:07:34.281232 containerd[1542]: time="2025-08-13T02:07:34.281163822Z" level=info msg="StopContainer for \"889a71525439d331d4e5748d987e7898240f32388c37ef85698875c3cec4282a\" with timeout 2 (s)" Aug 13 02:07:34.281474 containerd[1542]: time="2025-08-13T02:07:34.281435150Z" level=info msg="Stop container \"889a71525439d331d4e5748d987e7898240f32388c37ef85698875c3cec4282a\" with signal terminated" Aug 13 02:07:34.299407 systemd[1]: cri-containerd-889a71525439d331d4e5748d987e7898240f32388c37ef85698875c3cec4282a.scope: Deactivated successfully. Aug 13 02:07:34.300315 systemd[1]: cri-containerd-889a71525439d331d4e5748d987e7898240f32388c37ef85698875c3cec4282a.scope: Consumed 3.957s CPU time, 80.8M memory peak. Aug 13 02:07:34.303021 containerd[1542]: time="2025-08-13T02:07:34.302935700Z" level=info msg="received exit event container_id:\"889a71525439d331d4e5748d987e7898240f32388c37ef85698875c3cec4282a\" id:\"889a71525439d331d4e5748d987e7898240f32388c37ef85698875c3cec4282a\" pid:3039 exited_at:{seconds:1755050854 nanos:302580612}" Aug 13 02:07:34.303153 containerd[1542]: time="2025-08-13T02:07:34.303128919Z" level=info msg="TaskExit event in podsandbox handler container_id:\"889a71525439d331d4e5748d987e7898240f32388c37ef85698875c3cec4282a\" id:\"889a71525439d331d4e5748d987e7898240f32388c37ef85698875c3cec4282a\" pid:3039 exited_at:{seconds:1755050854 nanos:302580612}" Aug 13 02:07:34.330488 containerd[1542]: time="2025-08-13T02:07:34.329654197Z" level=info msg="StopContainer for \"889a71525439d331d4e5748d987e7898240f32388c37ef85698875c3cec4282a\" returns successfully" Aug 13 02:07:34.331146 containerd[1542]: time="2025-08-13T02:07:34.331048602Z" level=info msg="StopPodSandbox for \"c8d4440f154b4962b783d6ce716a8d0fb14accf61d4280ad1fa7e9d8af99d14d\"" Aug 13 02:07:34.331249 containerd[1542]: time="2025-08-13T02:07:34.331228721Z" level=info msg="Container to stop \"889a71525439d331d4e5748d987e7898240f32388c37ef85698875c3cec4282a\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 02:07:34.338800 systemd[1]: cri-containerd-c8d4440f154b4962b783d6ce716a8d0fb14accf61d4280ad1fa7e9d8af99d14d.scope: Deactivated successfully. 
Aug 13 02:07:34.340643 containerd[1542]: time="2025-08-13T02:07:34.340586912Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c8d4440f154b4962b783d6ce716a8d0fb14accf61d4280ad1fa7e9d8af99d14d\" id:\"c8d4440f154b4962b783d6ce716a8d0fb14accf61d4280ad1fa7e9d8af99d14d\" pid:2908 exit_status:137 exited_at:{seconds:1755050854 nanos:340091184}" Aug 13 02:07:34.369055 containerd[1542]: time="2025-08-13T02:07:34.369012962Z" level=info msg="shim disconnected" id=c8d4440f154b4962b783d6ce716a8d0fb14accf61d4280ad1fa7e9d8af99d14d namespace=k8s.io Aug 13 02:07:34.369209 containerd[1542]: time="2025-08-13T02:07:34.369040882Z" level=warning msg="cleaning up after shim disconnected" id=c8d4440f154b4962b783d6ce716a8d0fb14accf61d4280ad1fa7e9d8af99d14d namespace=k8s.io Aug 13 02:07:34.369209 containerd[1542]: time="2025-08-13T02:07:34.369068642Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 02:07:34.382863 containerd[1542]: time="2025-08-13T02:07:34.382832764Z" level=info msg="received exit event sandbox_id:\"c8d4440f154b4962b783d6ce716a8d0fb14accf61d4280ad1fa7e9d8af99d14d\" exit_status:137 exited_at:{seconds:1755050854 nanos:340091184}" Aug 13 02:07:34.383020 containerd[1542]: time="2025-08-13T02:07:34.383001243Z" level=info msg="TearDown network for sandbox \"c8d4440f154b4962b783d6ce716a8d0fb14accf61d4280ad1fa7e9d8af99d14d\" successfully" Aug 13 02:07:34.383087 containerd[1542]: time="2025-08-13T02:07:34.383073653Z" level=info msg="StopPodSandbox for \"c8d4440f154b4962b783d6ce716a8d0fb14accf61d4280ad1fa7e9d8af99d14d\" returns successfully" Aug 13 02:07:34.387880 kubelet[2718]: I0813 02:07:34.387862 2718 eviction_manager.go:627] "Eviction manager: pod is evicted successfully" pod="tigera-operator/tigera-operator-747864d56d-nxhh9" Aug 13 02:07:34.388026 kubelet[2718]: I0813 02:07:34.388015 2718 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["tigera-operator/tigera-operator-747864d56d-nxhh9"] Aug 13 02:07:34.410098 kubelet[2718]: I0813 02:07:34.410069 2718 kubelet.go:2351] "Pod admission denied" podUID="d51473eb-59ac-4fcd-b79e-55b99860466b" pod="tigera-operator/tigera-operator-747864d56d-jmcqf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:34.431563 kubelet[2718]: I0813 02:07:34.431426 2718 kubelet.go:2351] "Pod admission denied" podUID="7486cb92-b813-4c04-bb83-0ca54731464c" pod="tigera-operator/tigera-operator-747864d56d-nl6c8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:34.448484 kubelet[2718]: I0813 02:07:34.448430 2718 kubelet.go:2351] "Pod admission denied" podUID="848fafe2-7459-4507-b23f-d9a9e9bbb314" pod="tigera-operator/tigera-operator-747864d56d-ll2mg" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:07:34.452807 kubelet[2718]: I0813 02:07:34.452775 2718 status_manager.go:890] "Failed to get status for pod" podUID="848fafe2-7459-4507-b23f-d9a9e9bbb314" pod="tigera-operator/tigera-operator-747864d56d-ll2mg" err="pods \"tigera-operator-747864d56d-ll2mg\" is forbidden: User \"system:node:172-236-122-171\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node '172-236-122-171' and this object" Aug 13 02:07:34.508550 kubelet[2718]: I0813 02:07:34.507973 2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrp42\" (UniqueName: \"kubernetes.io/projected/f34ef7fc-f010-4a73-ba04-0097b359cd72-kube-api-access-xrp42\") pod \"f34ef7fc-f010-4a73-ba04-0097b359cd72\" (UID: \"f34ef7fc-f010-4a73-ba04-0097b359cd72\") " Aug 13 02:07:34.508550 kubelet[2718]: I0813 02:07:34.508018 2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f34ef7fc-f010-4a73-ba04-0097b359cd72-var-lib-calico\") pod \"f34ef7fc-f010-4a73-ba04-0097b359cd72\" (UID: \"f34ef7fc-f010-4a73-ba04-0097b359cd72\") " Aug 13 02:07:34.508550 kubelet[2718]: I0813 02:07:34.508089 2718 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f34ef7fc-f010-4a73-ba04-0097b359cd72-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "f34ef7fc-f010-4a73-ba04-0097b359cd72" (UID: "f34ef7fc-f010-4a73-ba04-0097b359cd72"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 02:07:34.510857 kubelet[2718]: I0813 02:07:34.510815 2718 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f34ef7fc-f010-4a73-ba04-0097b359cd72-kube-api-access-xrp42" (OuterVolumeSpecName: "kube-api-access-xrp42") pod "f34ef7fc-f010-4a73-ba04-0097b359cd72" (UID: "f34ef7fc-f010-4a73-ba04-0097b359cd72"). InnerVolumeSpecName "kube-api-access-xrp42". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 02:07:34.609181 kubelet[2718]: I0813 02:07:34.609142 2718 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xrp42\" (UniqueName: \"kubernetes.io/projected/f34ef7fc-f010-4a73-ba04-0097b359cd72-kube-api-access-xrp42\") on node \"172-236-122-171\" DevicePath \"\"" Aug 13 02:07:34.609181 kubelet[2718]: I0813 02:07:34.609167 2718 reconciler_common.go:299] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f34ef7fc-f010-4a73-ba04-0097b359cd72-var-lib-calico\") on node \"172-236-122-171\" DevicePath \"\"" Aug 13 02:07:34.942560 systemd[1]: run-netns-cni\x2dbf3a9834\x2dc7dd\x2d8b41\x2d61e0\x2df3e36874a769.mount: Deactivated successfully. Aug 13 02:07:34.943038 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-889a71525439d331d4e5748d987e7898240f32388c37ef85698875c3cec4282a-rootfs.mount: Deactivated successfully. Aug 13 02:07:34.943182 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c8d4440f154b4962b783d6ce716a8d0fb14accf61d4280ad1fa7e9d8af99d14d-rootfs.mount: Deactivated successfully. Aug 13 02:07:34.943266 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-c8d4440f154b4962b783d6ce716a8d0fb14accf61d4280ad1fa7e9d8af99d14d-shm.mount: Deactivated successfully. 
Aug 13 02:07:34.943332 systemd[1]: var-lib-kubelet-pods-f34ef7fc\x2df010\x2d4a73\x2dba04\x2d0097b359cd72-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dxrp42.mount: Deactivated successfully. Aug 13 02:07:35.153134 kubelet[2718]: I0813 02:07:35.153093 2718 scope.go:117] "RemoveContainer" containerID="889a71525439d331d4e5748d987e7898240f32388c37ef85698875c3cec4282a" Aug 13 02:07:35.154973 containerd[1542]: time="2025-08-13T02:07:35.154914615Z" level=info msg="RemoveContainer for \"889a71525439d331d4e5748d987e7898240f32388c37ef85698875c3cec4282a\"" Aug 13 02:07:35.158263 containerd[1542]: time="2025-08-13T02:07:35.158224761Z" level=info msg="RemoveContainer for \"889a71525439d331d4e5748d987e7898240f32388c37ef85698875c3cec4282a\" returns successfully" Aug 13 02:07:35.158628 kubelet[2718]: I0813 02:07:35.158550 2718 scope.go:117] "RemoveContainer" containerID="889a71525439d331d4e5748d987e7898240f32388c37ef85698875c3cec4282a" Aug 13 02:07:35.158778 containerd[1542]: time="2025-08-13T02:07:35.158753529Z" level=error msg="ContainerStatus for \"889a71525439d331d4e5748d987e7898240f32388c37ef85698875c3cec4282a\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"889a71525439d331d4e5748d987e7898240f32388c37ef85698875c3cec4282a\": not found" Aug 13 02:07:35.159232 kubelet[2718]: E0813 02:07:35.159039 2718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"889a71525439d331d4e5748d987e7898240f32388c37ef85698875c3cec4282a\": not found" containerID="889a71525439d331d4e5748d987e7898240f32388c37ef85698875c3cec4282a" Aug 13 02:07:35.159232 kubelet[2718]: I0813 02:07:35.159077 2718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"889a71525439d331d4e5748d987e7898240f32388c37ef85698875c3cec4282a"} err="failed to get container status \"889a71525439d331d4e5748d987e7898240f32388c37ef85698875c3cec4282a\": rpc error: code = NotFound desc = an error occurred when try to find container \"889a71525439d331d4e5748d987e7898240f32388c37ef85698875c3cec4282a\": not found" Aug 13 02:07:35.159237 systemd[1]: Removed slice kubepods-besteffort-podf34ef7fc_f010_4a73_ba04_0097b359cd72.slice - libcontainer container kubepods-besteffort-podf34ef7fc_f010_4a73_ba04_0097b359cd72.slice. Aug 13 02:07:35.159349 systemd[1]: kubepods-besteffort-podf34ef7fc_f010_4a73_ba04_0097b359cd72.slice: Consumed 3.985s CPU time, 81M memory peak. Aug 13 02:07:35.180523 kubelet[2718]: I0813 02:07:35.180106 2718 kubelet.go:2351] "Pod admission denied" podUID="eab3ea65-3164-4b51-bcb7-464465e61555" pod="tigera-operator/tigera-operator-747864d56d-d24mn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:35.205555 kubelet[2718]: I0813 02:07:35.205201 2718 kubelet.go:2351] "Pod admission denied" podUID="9dd73a6a-adb7-415e-a633-bae43715e5db" pod="tigera-operator/tigera-operator-747864d56d-hbz6z" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:35.227700 kubelet[2718]: I0813 02:07:35.227654 2718 kubelet.go:2351] "Pod admission denied" podUID="cb5dbcf6-2645-42b6-a4a6-200718af2ecf" pod="tigera-operator/tigera-operator-747864d56d-4w5q8" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:07:35.249624 kubelet[2718]: I0813 02:07:35.248606 2718 kubelet.go:2351] "Pod admission denied" podUID="aeb7a653-8562-4fd1-b427-36fdff53dfe4" pod="tigera-operator/tigera-operator-747864d56d-nkrdm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:35.269968 kubelet[2718]: I0813 02:07:35.269322 2718 kubelet.go:2351] "Pod admission denied" podUID="ebf60379-83e5-46e7-82db-643f00a15787" pod="tigera-operator/tigera-operator-747864d56d-vk4hj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:35.289706 kubelet[2718]: I0813 02:07:35.289655 2718 kubelet.go:2351] "Pod admission denied" podUID="f5fb47f0-8c00-434a-8f55-d5c5595e4101" pod="tigera-operator/tigera-operator-747864d56d-qp4c8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:35.307087 kubelet[2718]: I0813 02:07:35.307053 2718 kubelet.go:2351] "Pod admission denied" podUID="ce791603-c960-434c-9c8f-5b67d682f5a9" pod="tigera-operator/tigera-operator-747864d56d-2krvt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:35.325653 kubelet[2718]: I0813 02:07:35.325618 2718 kubelet.go:2351] "Pod admission denied" podUID="47d45d5e-28c7-4672-8137-37506a266458" pod="tigera-operator/tigera-operator-747864d56d-fs4c9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:35.344372 kubelet[2718]: I0813 02:07:35.344324 2718 kubelet.go:2351] "Pod admission denied" podUID="386b6ece-5912-48f4-b8af-85742836cf92" pod="tigera-operator/tigera-operator-747864d56d-dwdzv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:35.388665 kubelet[2718]: I0813 02:07:35.388639 2718 eviction_manager.go:458] "Eviction manager: pods successfully cleaned up" pods=["tigera-operator/tigera-operator-747864d56d-nxhh9"] Aug 13 02:07:35.479918 kubelet[2718]: I0813 02:07:35.479748 2718 kubelet.go:2351] "Pod admission denied" podUID="f0040ec5-f1cb-47d3-a392-1387df22bb4b" pod="tigera-operator/tigera-operator-747864d56d-h6q49" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:35.628997 kubelet[2718]: I0813 02:07:35.628954 2718 kubelet.go:2351] "Pod admission denied" podUID="b730a886-8833-49e3-8f7d-57a251ffa655" pod="tigera-operator/tigera-operator-747864d56d-k6jjs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:35.780287 kubelet[2718]: I0813 02:07:35.780232 2718 kubelet.go:2351] "Pod admission denied" podUID="947633ee-a7dc-4b14-9082-fb3bbf6f47d2" pod="tigera-operator/tigera-operator-747864d56d-rvp2w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:35.929437 kubelet[2718]: I0813 02:07:35.928875 2718 kubelet.go:2351] "Pod admission denied" podUID="74ebe94a-06e9-4af4-a424-882a1a7d61fc" pod="tigera-operator/tigera-operator-747864d56d-t82nl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:36.080427 kubelet[2718]: I0813 02:07:36.080251 2718 kubelet.go:2351] "Pod admission denied" podUID="fa423632-1af3-456c-8a06-2b881b611647" pod="tigera-operator/tigera-operator-747864d56d-z6kpw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:36.335832 kubelet[2718]: I0813 02:07:36.335702 2718 kubelet.go:2351] "Pod admission denied" podUID="e7d3d24c-ee13-4053-823e-0a627957a9a7" pod="tigera-operator/tigera-operator-747864d56d-8ngzg" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:07:36.478277 kubelet[2718]: I0813 02:07:36.478019 2718 kubelet.go:2351] "Pod admission denied" podUID="bef0545c-9b37-4c2e-a045-028dfc6376ba" pod="tigera-operator/tigera-operator-747864d56d-f52kd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:36.629706 kubelet[2718]: I0813 02:07:36.629481 2718 kubelet.go:2351] "Pod admission denied" podUID="43fd2e9c-b4b0-46aa-9a3b-b99c83e6a3d3" pod="tigera-operator/tigera-operator-747864d56d-z44q4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:36.784842 kubelet[2718]: I0813 02:07:36.784757 2718 kubelet.go:2351] "Pod admission denied" podUID="663ded10-10f7-473d-9468-4927ca1a0457" pod="tigera-operator/tigera-operator-747864d56d-pbtbx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:36.929290 kubelet[2718]: I0813 02:07:36.929046 2718 kubelet.go:2351] "Pod admission denied" podUID="67680edb-c557-435e-bce7-788ddcb99fa3" pod="tigera-operator/tigera-operator-747864d56d-xpt9t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:37.081459 kubelet[2718]: I0813 02:07:37.081411 2718 kubelet.go:2351] "Pod admission denied" podUID="c420a188-7738-44cd-afff-ec8cb8a50c6b" pod="tigera-operator/tigera-operator-747864d56d-rjsv6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:37.230615 kubelet[2718]: I0813 02:07:37.229961 2718 kubelet.go:2351] "Pod admission denied" podUID="1c6bab06-0eee-4b48-b32f-89328a400ca2" pod="tigera-operator/tigera-operator-747864d56d-5drgx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:37.482194 kubelet[2718]: I0813 02:07:37.481854 2718 kubelet.go:2351] "Pod admission denied" podUID="e4efe2e0-947d-46f4-88f1-3a475efed82c" pod="tigera-operator/tigera-operator-747864d56d-kf67k" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:37.580417 kubelet[2718]: I0813 02:07:37.580350 2718 kubelet.go:2351] "Pod admission denied" podUID="7309fbdd-8542-48c9-bbc4-489ee40c1fd8" pod="tigera-operator/tigera-operator-747864d56d-z7tk8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:37.681212 kubelet[2718]: I0813 02:07:37.681133 2718 kubelet.go:2351] "Pod admission denied" podUID="86bb6e88-a7ae-43ed-accb-abc2bce3246f" pod="tigera-operator/tigera-operator-747864d56d-4ln7j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:37.885180 kubelet[2718]: I0813 02:07:37.883953 2718 kubelet.go:2351] "Pod admission denied" podUID="cc88f2da-3c77-4bea-b2d2-0743c71cbc6e" pod="tigera-operator/tigera-operator-747864d56d-lnf2z" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:07:37.938475 kubelet[2718]: E0813 02:07:37.938432 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1545972657: write /var/lib/containerd/tmpmounts/containerd-mount1545972657/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-cdfxj" podUID="e8f51745-7382-4ead-96df-a31572ad4e1f" Aug 13 02:07:37.979524 kubelet[2718]: I0813 02:07:37.979487 2718 kubelet.go:2351] "Pod admission denied" podUID="ae7d0a8d-bbbf-4209-9d30-baa8a85a997a" pod="tigera-operator/tigera-operator-747864d56d-vqgfl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:38.082366 kubelet[2718]: I0813 02:07:38.082296 2718 kubelet.go:2351] "Pod admission denied" podUID="2c7c64cb-07f6-47f6-a1ff-2bd3d2e2344b" pod="tigera-operator/tigera-operator-747864d56d-h9ws6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:38.181973 kubelet[2718]: I0813 02:07:38.181627 2718 kubelet.go:2351] "Pod admission denied" podUID="6865b821-6ea1-4f86-aa10-a6b4b5b36f7d" pod="tigera-operator/tigera-operator-747864d56d-gkh47" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:38.280532 kubelet[2718]: I0813 02:07:38.280494 2718 kubelet.go:2351] "Pod admission denied" podUID="f52cff0a-8b0d-4c66-9d8a-2c209b8c5bac" pod="tigera-operator/tigera-operator-747864d56d-2j96m" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:38.381833 kubelet[2718]: I0813 02:07:38.381764 2718 kubelet.go:2351] "Pod admission denied" podUID="493aa32d-160d-488e-8c73-06103c0c41a4" pod="tigera-operator/tigera-operator-747864d56d-hbvnx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:38.431825 kubelet[2718]: I0813 02:07:38.431732 2718 kubelet.go:2351] "Pod admission denied" podUID="7ca6668f-fdb2-4298-94d6-8ad25e10a44e" pod="tigera-operator/tigera-operator-747864d56d-hkdsc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:38.531630 kubelet[2718]: I0813 02:07:38.531089 2718 kubelet.go:2351] "Pod admission denied" podUID="d5e62717-6b5e-4927-a579-6f63529a9288" pod="tigera-operator/tigera-operator-747864d56d-d9znh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:38.632959 kubelet[2718]: I0813 02:07:38.632788 2718 kubelet.go:2351] "Pod admission denied" podUID="27673154-0e70-406e-ace3-69d4a3d4a1c3" pod="tigera-operator/tigera-operator-747864d56d-mrlrf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:38.732605 kubelet[2718]: I0813 02:07:38.732533 2718 kubelet.go:2351] "Pod admission denied" podUID="2ff5b4aa-9272-481b-aea1-c9e001225358" pod="tigera-operator/tigera-operator-747864d56d-67qjl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:38.834809 kubelet[2718]: I0813 02:07:38.834691 2718 kubelet.go:2351] "Pod admission denied" podUID="314727b7-2968-4ffa-856b-9e6af56bc07c" pod="tigera-operator/tigera-operator-747864d56d-8pvxn" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:07:38.929333 kubelet[2718]: I0813 02:07:38.929294 2718 kubelet.go:2351] "Pod admission denied" podUID="aaa12974-0784-4180-bb96-9339204424be" pod="tigera-operator/tigera-operator-747864d56d-8hjs9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:39.134088 kubelet[2718]: I0813 02:07:39.133912 2718 kubelet.go:2351] "Pod admission denied" podUID="1afac7fc-35a0-4ab3-91fb-4595d04c1f00" pod="tigera-operator/tigera-operator-747864d56d-krq6n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:39.228869 kubelet[2718]: I0813 02:07:39.228809 2718 kubelet.go:2351] "Pod admission denied" podUID="dce710e0-1876-4e74-b85f-13890ddd824d" pod="tigera-operator/tigera-operator-747864d56d-jq2rn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:39.329672 kubelet[2718]: I0813 02:07:39.329629 2718 kubelet.go:2351] "Pod admission denied" podUID="e631a680-591e-419a-ba9f-8832f8c60122" pod="tigera-operator/tigera-operator-747864d56d-klnx7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:39.429963 kubelet[2718]: I0813 02:07:39.429582 2718 kubelet.go:2351] "Pod admission denied" podUID="99b4664a-4c11-457b-9096-0f35a3266667" pod="tigera-operator/tigera-operator-747864d56d-kdhjg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:39.532091 kubelet[2718]: I0813 02:07:39.532042 2718 kubelet.go:2351] "Pod admission denied" podUID="4d18fa2b-2a06-4025-9b43-651668191dac" pod="tigera-operator/tigera-operator-747864d56d-z6wq6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:39.632031 kubelet[2718]: I0813 02:07:39.631978 2718 kubelet.go:2351] "Pod admission denied" podUID="c79cf1b8-54e7-41f6-8d31-8937eafe37ae" pod="tigera-operator/tigera-operator-747864d56d-2mqcp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:39.682341 kubelet[2718]: I0813 02:07:39.682207 2718 kubelet.go:2351] "Pod admission denied" podUID="d52188d0-ec73-44c4-b4f1-5822e405a3a9" pod="tigera-operator/tigera-operator-747864d56d-r7vz2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:39.778497 kubelet[2718]: I0813 02:07:39.778466 2718 kubelet.go:2351] "Pod admission denied" podUID="5644b97f-7036-4d17-a8a9-7515ee2693b9" pod="tigera-operator/tigera-operator-747864d56d-gpwbz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:39.879316 kubelet[2718]: I0813 02:07:39.879280 2718 kubelet.go:2351] "Pod admission denied" podUID="a56afda3-2fb7-46fe-83ff-b97fb3b25ed4" pod="tigera-operator/tigera-operator-747864d56d-plkdl" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:07:39.943046 containerd[1542]: time="2025-08-13T02:07:39.942887037Z" level=info msg="StopPodSandbox for \"c8d4440f154b4962b783d6ce716a8d0fb14accf61d4280ad1fa7e9d8af99d14d\"" Aug 13 02:07:39.943046 containerd[1542]: time="2025-08-13T02:07:39.943007987Z" level=info msg="TearDown network for sandbox \"c8d4440f154b4962b783d6ce716a8d0fb14accf61d4280ad1fa7e9d8af99d14d\" successfully" Aug 13 02:07:39.943046 containerd[1542]: time="2025-08-13T02:07:39.943018567Z" level=info msg="StopPodSandbox for \"c8d4440f154b4962b783d6ce716a8d0fb14accf61d4280ad1fa7e9d8af99d14d\" returns successfully" Aug 13 02:07:39.944162 containerd[1542]: time="2025-08-13T02:07:39.944137702Z" level=info msg="RemovePodSandbox for \"c8d4440f154b4962b783d6ce716a8d0fb14accf61d4280ad1fa7e9d8af99d14d\"" Aug 13 02:07:39.944162 containerd[1542]: time="2025-08-13T02:07:39.944162452Z" level=info msg="Forcibly stopping sandbox \"c8d4440f154b4962b783d6ce716a8d0fb14accf61d4280ad1fa7e9d8af99d14d\"" Aug 13 02:07:39.944249 containerd[1542]: time="2025-08-13T02:07:39.944231322Z" level=info msg="TearDown network for sandbox \"c8d4440f154b4962b783d6ce716a8d0fb14accf61d4280ad1fa7e9d8af99d14d\" successfully" Aug 13 02:07:39.946417 containerd[1542]: time="2025-08-13T02:07:39.946385804Z" level=info msg="Ensure that sandbox c8d4440f154b4962b783d6ce716a8d0fb14accf61d4280ad1fa7e9d8af99d14d in task-service has been cleanup successfully" Aug 13 02:07:39.948501 containerd[1542]: time="2025-08-13T02:07:39.948481105Z" level=info msg="RemovePodSandbox \"c8d4440f154b4962b783d6ce716a8d0fb14accf61d4280ad1fa7e9d8af99d14d\" returns successfully" Aug 13 02:07:39.980867 kubelet[2718]: I0813 02:07:39.980841 2718 kubelet.go:2351] "Pod admission denied" podUID="11fdbdf4-2e69-4712-9ac7-ba268bc24e2f" pod="tigera-operator/tigera-operator-747864d56d-hfrbv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:40.079466 kubelet[2718]: I0813 02:07:40.079431 2718 kubelet.go:2351] "Pod admission denied" podUID="06ac8040-af52-4f94-aae6-d56dd324618a" pod="tigera-operator/tigera-operator-747864d56d-fqns6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:40.179987 kubelet[2718]: I0813 02:07:40.179909 2718 kubelet.go:2351] "Pod admission denied" podUID="00c10a2b-1b47-4cb5-b51f-53ec4f8e40b0" pod="tigera-operator/tigera-operator-747864d56d-4l95m" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:40.283899 kubelet[2718]: I0813 02:07:40.283853 2718 kubelet.go:2351] "Pod admission denied" podUID="0ebceec8-3931-4456-83fb-59357a4d0687" pod="tigera-operator/tigera-operator-747864d56d-nkrsn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:40.382493 kubelet[2718]: I0813 02:07:40.382443 2718 kubelet.go:2351] "Pod admission denied" podUID="9dc35af5-b75e-4ab6-b393-01b4c34595ab" pod="tigera-operator/tigera-operator-747864d56d-87j4w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:40.484419 kubelet[2718]: I0813 02:07:40.484124 2718 kubelet.go:2351] "Pod admission denied" podUID="982fd6fe-c4aa-4eeb-86c9-64edc5e24ec1" pod="tigera-operator/tigera-operator-747864d56d-qmq27" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:40.530320 kubelet[2718]: I0813 02:07:40.530015 2718 kubelet.go:2351] "Pod admission denied" podUID="f84b3ffc-2596-491d-ba64-b03444caa143" pod="tigera-operator/tigera-operator-747864d56d-fc9gm" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:07:40.633544 kubelet[2718]: I0813 02:07:40.633202 2718 kubelet.go:2351] "Pod admission denied" podUID="2832244b-b0c9-43bd-ab0d-3ac8e2cd660b" pod="tigera-operator/tigera-operator-747864d56d-szsmw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:40.733343 kubelet[2718]: I0813 02:07:40.733278 2718 kubelet.go:2351] "Pod admission denied" podUID="ba39cc30-dcac-4e95-bc91-8a873d4d08b7" pod="tigera-operator/tigera-operator-747864d56d-d7c6b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:40.830798 kubelet[2718]: I0813 02:07:40.830703 2718 kubelet.go:2351] "Pod admission denied" podUID="e5cbb9bf-3755-445e-be4d-ff483c847bc9" pod="tigera-operator/tigera-operator-747864d56d-hb88w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:40.930675 kubelet[2718]: I0813 02:07:40.930523 2718 kubelet.go:2351] "Pod admission denied" podUID="02b62829-3e66-45af-b664-62bedebcaa86" pod="tigera-operator/tigera-operator-747864d56d-gcxzj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:41.029289 kubelet[2718]: I0813 02:07:41.029226 2718 kubelet.go:2351] "Pod admission denied" podUID="7ee4b784-8a83-4f80-ba75-701e9eaca446" pod="tigera-operator/tigera-operator-747864d56d-zhhtk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:41.129774 kubelet[2718]: I0813 02:07:41.129721 2718 kubelet.go:2351] "Pod admission denied" podUID="ff60e636-32e7-4591-a0bb-d606a5f30fee" pod="tigera-operator/tigera-operator-747864d56d-t9k44" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:41.229345 kubelet[2718]: I0813 02:07:41.229046 2718 kubelet.go:2351] "Pod admission denied" podUID="1ab6fa34-d5a7-42ed-b409-d5a62ce520b0" pod="tigera-operator/tigera-operator-747864d56d-7stxl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:41.329947 kubelet[2718]: I0813 02:07:41.329891 2718 kubelet.go:2351] "Pod admission denied" podUID="b3112f6c-1306-4750-92a7-c41b09c9b44d" pod="tigera-operator/tigera-operator-747864d56d-6x7k8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:41.382199 kubelet[2718]: I0813 02:07:41.382117 2718 kubelet.go:2351] "Pod admission denied" podUID="4ea82644-e9be-4efe-9df8-33d41e82cf01" pod="tigera-operator/tigera-operator-747864d56d-j2dbh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:41.482409 kubelet[2718]: I0813 02:07:41.482320 2718 kubelet.go:2351] "Pod admission denied" podUID="58837166-5ffb-42f5-834c-3e6813e0092b" pod="tigera-operator/tigera-operator-747864d56d-8qlk9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:41.578723 kubelet[2718]: I0813 02:07:41.578668 2718 kubelet.go:2351] "Pod admission denied" podUID="05f95aab-5206-47ce-b45a-f93e17efd15a" pod="tigera-operator/tigera-operator-747864d56d-27wks" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:41.680737 kubelet[2718]: I0813 02:07:41.680703 2718 kubelet.go:2351] "Pod admission denied" podUID="e8cdcd64-d57b-4123-b7a2-844c8a1a0b8b" pod="tigera-operator/tigera-operator-747864d56d-8x577" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:07:41.882612 kubelet[2718]: I0813 02:07:41.882554 2718 kubelet.go:2351] "Pod admission denied" podUID="1105128f-1eae-4abe-a19b-fcd45ecdb4e1" pod="tigera-operator/tigera-operator-747864d56d-qxd7b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:41.981059 kubelet[2718]: I0813 02:07:41.981007 2718 kubelet.go:2351] "Pod admission denied" podUID="34ec9236-edf6-44f4-82c5-cb3d451777fe" pod="tigera-operator/tigera-operator-747864d56d-xcgwl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:42.083545 kubelet[2718]: I0813 02:07:42.083476 2718 kubelet.go:2351] "Pod admission denied" podUID="56111795-66ca-4a0e-a4bb-7d92eb25305c" pod="tigera-operator/tigera-operator-747864d56d-pcpkv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:42.181553 kubelet[2718]: I0813 02:07:42.181449 2718 kubelet.go:2351] "Pod admission denied" podUID="ad8f8fb7-911a-4ec5-9072-9dc22e8894e9" pod="tigera-operator/tigera-operator-747864d56d-kw8kg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:42.228855 kubelet[2718]: I0813 02:07:42.228827 2718 kubelet.go:2351] "Pod admission denied" podUID="6a47dc9f-75f1-403e-b6cc-be6cdde3431a" pod="tigera-operator/tigera-operator-747864d56d-jf7qb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:42.324698 systemd[1]: Started sshd@9-172.236.122.171:22-165.154.201.122:37282.service - OpenSSH per-connection server daemon (165.154.201.122:37282). Aug 13 02:07:42.337943 kubelet[2718]: I0813 02:07:42.337906 2718 kubelet.go:2351] "Pod admission denied" podUID="42984ecb-1dc4-413c-8ef2-517b620bd0a7" pod="tigera-operator/tigera-operator-747864d56d-x2678" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:42.432457 kubelet[2718]: I0813 02:07:42.432334 2718 kubelet.go:2351] "Pod admission denied" podUID="d28fc38f-77eb-4953-a269-9fa7ee2c2b93" pod="tigera-operator/tigera-operator-747864d56d-rp7zx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:42.479118 kubelet[2718]: I0813 02:07:42.479087 2718 kubelet.go:2351] "Pod admission denied" podUID="8c35c7e1-03e7-42ab-9be6-684a383c5628" pod="tigera-operator/tigera-operator-747864d56d-6625p" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:42.584770 kubelet[2718]: I0813 02:07:42.584727 2718 kubelet.go:2351] "Pod admission denied" podUID="3533c884-9fd6-45a3-aebc-b518813aefcb" pod="tigera-operator/tigera-operator-747864d56d-fh82c" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:42.685585 kubelet[2718]: I0813 02:07:42.685465 2718 kubelet.go:2351] "Pod admission denied" podUID="8d1cadf6-8df6-42d4-8051-241c3caf90ad" pod="tigera-operator/tigera-operator-747864d56d-gkmns" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:42.781098 kubelet[2718]: I0813 02:07:42.781056 2718 kubelet.go:2351] "Pod admission denied" podUID="5c9ccd3e-2c8d-4fa4-969a-2559c93eda1b" pod="tigera-operator/tigera-operator-747864d56d-hrt67" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:42.889399 kubelet[2718]: I0813 02:07:42.889358 2718 kubelet.go:2351] "Pod admission denied" podUID="8fd6c31c-c468-41a3-a48a-ac03391a80f7" pod="tigera-operator/tigera-operator-747864d56d-jhgfr" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:07:42.982170 kubelet[2718]: I0813 02:07:42.981435 2718 kubelet.go:2351] "Pod admission denied" podUID="e393585c-eeff-40ec-aab6-417c84ae2e5f" pod="tigera-operator/tigera-operator-747864d56d-8fv92" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:43.185336 kubelet[2718]: I0813 02:07:43.185268 2718 kubelet.go:2351] "Pod admission denied" podUID="5860b434-da58-4d34-adc8-9524397e3f3b" pod="tigera-operator/tigera-operator-747864d56d-ht4jb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:43.283226 kubelet[2718]: I0813 02:07:43.283138 2718 kubelet.go:2351] "Pod admission denied" podUID="db66e250-2a76-43b0-a43b-b85e10fe6450" pod="tigera-operator/tigera-operator-747864d56d-swjsz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:43.331705 kubelet[2718]: I0813 02:07:43.331642 2718 kubelet.go:2351] "Pod admission denied" podUID="2b080776-a1fa-42b8-9478-84064d1676f1" pod="tigera-operator/tigera-operator-747864d56d-n9wdg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:43.429796 kubelet[2718]: I0813 02:07:43.429761 2718 kubelet.go:2351] "Pod admission denied" podUID="8f55a79c-ded9-4245-846c-a80a1c4c48a8" pod="tigera-operator/tigera-operator-747864d56d-ff22j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:43.532489 kubelet[2718]: I0813 02:07:43.532416 2718 kubelet.go:2351] "Pod admission denied" podUID="e65fe675-21ac-48bf-828d-f83323d8dc4c" pod="tigera-operator/tigera-operator-747864d56d-v45pn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:43.639736 kubelet[2718]: I0813 02:07:43.639493 2718 kubelet.go:2351] "Pod admission denied" podUID="805ef61e-532c-46e6-b7b1-f9c269cc6cc8" pod="tigera-operator/tigera-operator-747864d56d-cdq9d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:43.835515 kubelet[2718]: I0813 02:07:43.835409 2718 kubelet.go:2351] "Pod admission denied" podUID="a9d364fe-091b-4644-83ba-42ac15eee41b" pod="tigera-operator/tigera-operator-747864d56d-46m7t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:43.883840 sshd[4092]: Received disconnect from 165.154.201.122 port 37282:11: Bye Bye [preauth] Aug 13 02:07:43.883840 sshd[4092]: Disconnected from authenticating user root 165.154.201.122 port 37282 [preauth] Aug 13 02:07:43.887218 systemd[1]: sshd@9-172.236.122.171:22-165.154.201.122:37282.service: Deactivated successfully. Aug 13 02:07:43.938134 kubelet[2718]: I0813 02:07:43.937928 2718 kubelet.go:2351] "Pod admission denied" podUID="676d89ff-7971-4cab-8132-ae3e36a6d914" pod="tigera-operator/tigera-operator-747864d56d-b468g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:44.033456 kubelet[2718]: I0813 02:07:44.033406 2718 kubelet.go:2351] "Pod admission denied" podUID="e16886f4-f60e-4a23-8846-0b36780919a0" pod="tigera-operator/tigera-operator-747864d56d-7fv8h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:44.132638 kubelet[2718]: I0813 02:07:44.132578 2718 kubelet.go:2351] "Pod admission denied" podUID="15736653-44cb-41bc-adb2-362f48a36d40" pod="tigera-operator/tigera-operator-747864d56d-99rbc" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:07:44.180152 kubelet[2718]: I0813 02:07:44.180116 2718 kubelet.go:2351] "Pod admission denied" podUID="59333a7a-98e0-4edb-b4ed-4da6e69c1301" pod="tigera-operator/tigera-operator-747864d56d-7dz47" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:44.279815 kubelet[2718]: I0813 02:07:44.279781 2718 kubelet.go:2351] "Pod admission denied" podUID="7ca1a485-b42c-442d-95a8-5ccdbaa84117" pod="tigera-operator/tigera-operator-747864d56d-wlgrj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:44.380413 kubelet[2718]: I0813 02:07:44.380351 2718 kubelet.go:2351] "Pod admission denied" podUID="d55b9a4c-2d4e-4276-8d34-3c80eb1eae5e" pod="tigera-operator/tigera-operator-747864d56d-fhtwr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:44.485304 kubelet[2718]: I0813 02:07:44.485234 2718 kubelet.go:2351] "Pod admission denied" podUID="b52df6fc-9691-46c1-95a6-5bd7e23db428" pod="tigera-operator/tigera-operator-747864d56d-fd8bt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:44.583671 kubelet[2718]: I0813 02:07:44.583527 2718 kubelet.go:2351] "Pod admission denied" podUID="4f5b2aa0-c63b-496c-8169-fef1d8dbcf4f" pod="tigera-operator/tigera-operator-747864d56d-zfjhh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:44.682372 kubelet[2718]: I0813 02:07:44.682329 2718 kubelet.go:2351] "Pod admission denied" podUID="d0a1e2d4-0670-4e5d-9ddb-a1b4d2ac92e3" pod="tigera-operator/tigera-operator-747864d56d-vp828" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:44.781956 kubelet[2718]: I0813 02:07:44.781908 2718 kubelet.go:2351] "Pod admission denied" podUID="b6815fde-75a8-4722-87b2-9a0b60dc74e9" pod="tigera-operator/tigera-operator-747864d56d-jqqtq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:44.880621 kubelet[2718]: I0813 02:07:44.880450 2718 kubelet.go:2351] "Pod admission denied" podUID="9826e1b1-a47f-4a33-aebd-99bd31b87ebb" pod="tigera-operator/tigera-operator-747864d56d-z6g6w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:44.935723 kubelet[2718]: E0813 02:07:44.935695 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:07:44.936659 containerd[1542]: time="2025-08-13T02:07:44.936443233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r6mhv,Uid:b671177d-3397-4938-853c-0cced3d0e9f5,Namespace:calico-system,Attempt:0,}" Aug 13 02:07:44.937190 containerd[1542]: time="2025-08-13T02:07:44.936534493Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pw6gg,Uid:e73a6876-bbb3-4e11-8a33-1945cf27a944,Namespace:kube-system,Attempt:0,}" Aug 13 02:07:44.999119 kubelet[2718]: I0813 02:07:44.999012 2718 kubelet.go:2351] "Pod admission denied" podUID="2fa3ba84-a481-42e5-9b41-dde329f3508b" pod="tigera-operator/tigera-operator-747864d56d-n9bfs" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:07:45.010710 containerd[1542]: time="2025-08-13T02:07:45.010646970Z" level=error msg="Failed to destroy network for sandbox \"1fdd9b33f0d69debd62b8c960c743e7e04ae0eecd33b784b63799559ac729fca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:45.014416 systemd[1]: run-netns-cni\x2d4b9ff065\x2db982\x2d24c6\x2d63f8\x2de6a809b2a98b.mount: Deactivated successfully. Aug 13 02:07:45.017780 containerd[1542]: time="2025-08-13T02:07:45.017642655Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pw6gg,Uid:e73a6876-bbb3-4e11-8a33-1945cf27a944,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fdd9b33f0d69debd62b8c960c743e7e04ae0eecd33b784b63799559ac729fca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:45.020156 kubelet[2718]: E0813 02:07:45.020098 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fdd9b33f0d69debd62b8c960c743e7e04ae0eecd33b784b63799559ac729fca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:45.020278 kubelet[2718]: E0813 02:07:45.020256 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fdd9b33f0d69debd62b8c960c743e7e04ae0eecd33b784b63799559ac729fca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:07:45.020350 kubelet[2718]: E0813 02:07:45.020282 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1fdd9b33f0d69debd62b8c960c743e7e04ae0eecd33b784b63799559ac729fca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:07:45.020711 kubelet[2718]: E0813 02:07:45.020666 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-pw6gg_kube-system(e73a6876-bbb3-4e11-8a33-1945cf27a944)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-pw6gg_kube-system(e73a6876-bbb3-4e11-8a33-1945cf27a944)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1fdd9b33f0d69debd62b8c960c743e7e04ae0eecd33b784b63799559ac729fca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-pw6gg" podUID="e73a6876-bbb3-4e11-8a33-1945cf27a944" Aug 13 02:07:45.043133 containerd[1542]: time="2025-08-13T02:07:45.043078952Z" level=error msg="Failed to destroy network for sandbox \"26ab1cef2a744cd403da8066d54dee52b489b264de3188068b559eda41d89549\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:45.045447 systemd[1]: run-netns-cni\x2d87d2bc1c\x2d456a\x2db9f8\x2d2ca9\x2d6fe1497080a8.mount: Deactivated successfully. Aug 13 02:07:45.046143 containerd[1542]: time="2025-08-13T02:07:45.045871762Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r6mhv,Uid:b671177d-3397-4938-853c-0cced3d0e9f5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"26ab1cef2a744cd403da8066d54dee52b489b264de3188068b559eda41d89549\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:45.047822 kubelet[2718]: E0813 02:07:45.047768 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26ab1cef2a744cd403da8066d54dee52b489b264de3188068b559eda41d89549\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:45.047984 kubelet[2718]: E0813 02:07:45.047849 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26ab1cef2a744cd403da8066d54dee52b489b264de3188068b559eda41d89549\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:07:45.047984 kubelet[2718]: E0813 02:07:45.047879 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"26ab1cef2a744cd403da8066d54dee52b489b264de3188068b559eda41d89549\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:07:45.047984 kubelet[2718]: E0813 02:07:45.047938 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r6mhv_calico-system(b671177d-3397-4938-853c-0cced3d0e9f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-r6mhv_calico-system(b671177d-3397-4938-853c-0cced3d0e9f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"26ab1cef2a744cd403da8066d54dee52b489b264de3188068b559eda41d89549\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r6mhv" podUID="b671177d-3397-4938-853c-0cced3d0e9f5" Aug 13 02:07:45.082337 kubelet[2718]: I0813 02:07:45.082001 2718 kubelet.go:2351] "Pod admission denied" podUID="9cc9a305-2f79-4ef4-b78d-9c7fc905ef46" pod="tigera-operator/tigera-operator-747864d56d-hchmx" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:07:45.186283 kubelet[2718]: I0813 02:07:45.185786 2718 kubelet.go:2351] "Pod admission denied" podUID="0de22de8-8b4e-446f-8ba0-6279a84a361f" pod="tigera-operator/tigera-operator-747864d56d-54mfn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:45.280356 kubelet[2718]: I0813 02:07:45.280307 2718 kubelet.go:2351] "Pod admission denied" podUID="3ab3290e-8a31-4fb3-8def-c353becdfa71" pod="tigera-operator/tigera-operator-747864d56d-xqsms" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:45.412689 kubelet[2718]: I0813 02:07:45.412662 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 02:07:45.412689 kubelet[2718]: I0813 02:07:45.412698 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 02:07:45.414529 kubelet[2718]: I0813 02:07:45.414496 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 02:07:45.425015 kubelet[2718]: I0813 02:07:45.424993 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 02:07:45.425101 kubelet[2718]: I0813 02:07:45.425060 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-668d6bf9bc-pw6gg","kube-system/coredns-668d6bf9bc-p5qmw","calico-system/calico-kube-controllers-7c47cf6bcb-c9c87","calico-system/csi-node-driver-r6mhv","calico-system/calico-node-cdfxj","calico-system/calico-typha-67c8447dcf-wsn77","kube-system/kube-controller-manager-172-236-122-171","kube-system/kube-proxy-s4bl4","kube-system/kube-apiserver-172-236-122-171","kube-system/kube-scheduler-172-236-122-171"] Aug 13 02:07:45.425101 kubelet[2718]: E0813 02:07:45.425084 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:07:45.425101 kubelet[2718]: E0813 02:07:45.425094 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:07:45.425101 kubelet[2718]: E0813 02:07:45.425100 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:07:45.425101 kubelet[2718]: E0813 02:07:45.425106 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:07:45.425289 kubelet[2718]: E0813 02:07:45.425112 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-cdfxj" Aug 13 02:07:45.425289 kubelet[2718]: E0813 02:07:45.425136 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-67c8447dcf-wsn77" Aug 13 02:07:45.425289 kubelet[2718]: E0813 02:07:45.425145 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-122-171" Aug 13 02:07:45.425289 kubelet[2718]: E0813 02:07:45.425153 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-s4bl4" Aug 13 02:07:45.425289 kubelet[2718]: E0813 02:07:45.425161 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-122-171" Aug 13 02:07:45.425289 kubelet[2718]: E0813 02:07:45.425168 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" 
pod="kube-system/kube-scheduler-172-236-122-171" Aug 13 02:07:45.425289 kubelet[2718]: I0813 02:07:45.425177 2718 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 02:07:45.480683 kubelet[2718]: I0813 02:07:45.480340 2718 kubelet.go:2351] "Pod admission denied" podUID="709d4bf9-40d5-488c-8cd1-52807d30ba2e" pod="tigera-operator/tigera-operator-747864d56d-59whr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:45.584785 kubelet[2718]: I0813 02:07:45.584711 2718 kubelet.go:2351] "Pod admission denied" podUID="24cae640-1e10-4c62-8e7f-01796984155b" pod="tigera-operator/tigera-operator-747864d56d-cnvqw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:45.682567 kubelet[2718]: I0813 02:07:45.682531 2718 kubelet.go:2351] "Pod admission denied" podUID="83742fd5-6c5e-482c-8dd2-e5a5527b16a8" pod="tigera-operator/tigera-operator-747864d56d-nd9x9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:45.784317 kubelet[2718]: I0813 02:07:45.784251 2718 kubelet.go:2351] "Pod admission denied" podUID="d8c39208-a711-486a-a986-7a3682f3eeae" pod="tigera-operator/tigera-operator-747864d56d-n52s7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:45.884987 kubelet[2718]: I0813 02:07:45.884912 2718 kubelet.go:2351] "Pod admission denied" podUID="923aac55-2e0c-4993-922a-56bf8860206b" pod="tigera-operator/tigera-operator-747864d56d-wbrjv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:45.937166 kubelet[2718]: E0813 02:07:45.937110 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:07:45.942747 containerd[1542]: time="2025-08-13T02:07:45.941284772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p5qmw,Uid:26fd4059-1e9c-49a2-9bd9-181be9ad7bcb,Namespace:kube-system,Attempt:0,}" Aug 13 02:07:45.987202 kubelet[2718]: I0813 02:07:45.987157 2718 kubelet.go:2351] "Pod admission denied" podUID="0e5ea763-26a3-41af-9fee-1059daae70f1" pod="tigera-operator/tigera-operator-747864d56d-hj4hx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:46.010301 containerd[1542]: time="2025-08-13T02:07:46.010234001Z" level=error msg="Failed to destroy network for sandbox \"66d43e691696958b76b525cb66d189802d172bcf43a8ad0456353511af6c74e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:46.012537 systemd[1]: run-netns-cni\x2db7617314\x2da37d\x2dab0d\x2de0a5\x2d6a8e18f54fec.mount: Deactivated successfully. 
Aug 13 02:07:46.014788 containerd[1542]: time="2025-08-13T02:07:46.014719575Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p5qmw,Uid:26fd4059-1e9c-49a2-9bd9-181be9ad7bcb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"66d43e691696958b76b525cb66d189802d172bcf43a8ad0456353511af6c74e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:46.015053 kubelet[2718]: E0813 02:07:46.014987 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66d43e691696958b76b525cb66d189802d172bcf43a8ad0456353511af6c74e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:46.015140 kubelet[2718]: E0813 02:07:46.015051 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66d43e691696958b76b525cb66d189802d172bcf43a8ad0456353511af6c74e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:07:46.015140 kubelet[2718]: E0813 02:07:46.015077 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"66d43e691696958b76b525cb66d189802d172bcf43a8ad0456353511af6c74e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:07:46.015140 kubelet[2718]: E0813 02:07:46.015120 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-p5qmw_kube-system(26fd4059-1e9c-49a2-9bd9-181be9ad7bcb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-p5qmw_kube-system(26fd4059-1e9c-49a2-9bd9-181be9ad7bcb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"66d43e691696958b76b525cb66d189802d172bcf43a8ad0456353511af6c74e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-p5qmw" podUID="26fd4059-1e9c-49a2-9bd9-181be9ad7bcb" Aug 13 02:07:46.084357 kubelet[2718]: I0813 02:07:46.084223 2718 kubelet.go:2351] "Pod admission denied" podUID="f56a1a23-26f7-4596-8e1c-e6e8b331fc0c" pod="tigera-operator/tigera-operator-747864d56d-4rwmj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:46.285837 kubelet[2718]: I0813 02:07:46.285767 2718 kubelet.go:2351] "Pod admission denied" podUID="dfbbccba-ebf2-4889-bf93-4a64ccc917fd" pod="tigera-operator/tigera-operator-747864d56d-2f89m" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:07:46.380753 kubelet[2718]: I0813 02:07:46.380612 2718 kubelet.go:2351] "Pod admission denied" podUID="d7b1f16c-d2f9-474d-a446-411273366b19" pod="tigera-operator/tigera-operator-747864d56d-8tlr9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:46.537404 kubelet[2718]: I0813 02:07:46.537332 2718 kubelet.go:2351] "Pod admission denied" podUID="fe7b19e5-ce6e-452e-bdb7-772b66a24e8c" pod="tigera-operator/tigera-operator-747864d56d-s55cb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:46.635242 kubelet[2718]: I0813 02:07:46.634858 2718 kubelet.go:2351] "Pod admission denied" podUID="5ab257a2-ad8f-44e1-a298-8175bad7993a" pod="tigera-operator/tigera-operator-747864d56d-kjj4k" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:46.788779 kubelet[2718]: I0813 02:07:46.788726 2718 kubelet.go:2351] "Pod admission denied" podUID="037d22f7-89c7-48b0-b8c8-9de5ccd459d0" pod="tigera-operator/tigera-operator-747864d56d-6kfww" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:46.810710 kubelet[2718]: I0813 02:07:46.810667 2718 kubelet.go:2351] "Pod admission denied" podUID="06c5632f-3998-4429-b26c-02a0da39acc2" pod="tigera-operator/tigera-operator-747864d56d-vwrsc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:46.880609 kubelet[2718]: I0813 02:07:46.880549 2718 kubelet.go:2351] "Pod admission denied" podUID="6df8e7f2-9bca-4a4e-a3ac-b726d140ea6b" pod="tigera-operator/tigera-operator-747864d56d-7zbx7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:46.982304 kubelet[2718]: I0813 02:07:46.981841 2718 kubelet.go:2351] "Pod admission denied" podUID="18c939eb-043a-4f09-a724-2e799c23e88a" pod="tigera-operator/tigera-operator-747864d56d-w6hm6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:47.084005 kubelet[2718]: I0813 02:07:47.083696 2718 kubelet.go:2351] "Pod admission denied" podUID="5fc05473-7c0e-487c-92aa-ca21662fbc1e" pod="tigera-operator/tigera-operator-747864d56d-qxzgp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:47.183673 kubelet[2718]: I0813 02:07:47.183639 2718 kubelet.go:2351] "Pod admission denied" podUID="2aa15e51-8fc7-415f-8608-020190d45f8f" pod="tigera-operator/tigera-operator-747864d56d-7jgkm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:47.281757 kubelet[2718]: I0813 02:07:47.281720 2718 kubelet.go:2351] "Pod admission denied" podUID="f904222c-3d73-430a-a544-12bcc7d76201" pod="tigera-operator/tigera-operator-747864d56d-pm6jk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:47.380539 kubelet[2718]: I0813 02:07:47.380490 2718 kubelet.go:2351] "Pod admission denied" podUID="02fd95bd-22d5-4e14-a369-11f7f48c2f05" pod="tigera-operator/tigera-operator-747864d56d-xs4rn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:47.480096 kubelet[2718]: I0813 02:07:47.480056 2718 kubelet.go:2351] "Pod admission denied" podUID="15b61b97-b0bb-40dc-96e3-b3d09df79d21" pod="tigera-operator/tigera-operator-747864d56d-vwpks" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:07:47.582019 kubelet[2718]: I0813 02:07:47.581878 2718 kubelet.go:2351] "Pod admission denied" podUID="fe98fb0c-4c09-4c11-836b-648ee52a3d60" pod="tigera-operator/tigera-operator-747864d56d-kl62m" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:47.683645 kubelet[2718]: I0813 02:07:47.683573 2718 kubelet.go:2351] "Pod admission denied" podUID="52a533b8-0963-4ed5-be9f-d995119fa2ab" pod="tigera-operator/tigera-operator-747864d56d-fzgp5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:47.782782 kubelet[2718]: I0813 02:07:47.782619 2718 kubelet.go:2351] "Pod admission denied" podUID="ddac625d-18ad-428a-832e-4b9ea0e6d70b" pod="tigera-operator/tigera-operator-747864d56d-cmdwv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:47.887450 kubelet[2718]: I0813 02:07:47.887289 2718 kubelet.go:2351] "Pod admission denied" podUID="4a205049-8422-420d-afef-cc71a000f33c" pod="tigera-operator/tigera-operator-747864d56d-vth7t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:47.936399 containerd[1542]: time="2025-08-13T02:07:47.936333344Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c47cf6bcb-c9c87,Uid:8d84c12b-cfd9-49af-bb2e-a10173126a4c,Namespace:calico-system,Attempt:0,}" Aug 13 02:07:47.986645 containerd[1542]: time="2025-08-13T02:07:47.985640768Z" level=error msg="Failed to destroy network for sandbox \"3db9b296a381803f270830d084c38ba4bd5b675cf9298ca1b33dd90f4ff9fbe1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:47.988331 containerd[1542]: time="2025-08-13T02:07:47.988196489Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c47cf6bcb-c9c87,Uid:8d84c12b-cfd9-49af-bb2e-a10173126a4c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3db9b296a381803f270830d084c38ba4bd5b675cf9298ca1b33dd90f4ff9fbe1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:47.989348 systemd[1]: run-netns-cni\x2d260cd9b0\x2dc5ed\x2d60a8\x2de441\x2d45d941303b13.mount: Deactivated successfully. 
Aug 13 02:07:47.989978 kubelet[2718]: E0813 02:07:47.989812 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3db9b296a381803f270830d084c38ba4bd5b675cf9298ca1b33dd90f4ff9fbe1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:47.989978 kubelet[2718]: E0813 02:07:47.989866 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3db9b296a381803f270830d084c38ba4bd5b675cf9298ca1b33dd90f4ff9fbe1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:07:47.989978 kubelet[2718]: E0813 02:07:47.989886 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3db9b296a381803f270830d084c38ba4bd5b675cf9298ca1b33dd90f4ff9fbe1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:07:47.989978 kubelet[2718]: E0813 02:07:47.989923 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7c47cf6bcb-c9c87_calico-system(8d84c12b-cfd9-49af-bb2e-a10173126a4c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7c47cf6bcb-c9c87_calico-system(8d84c12b-cfd9-49af-bb2e-a10173126a4c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3db9b296a381803f270830d084c38ba4bd5b675cf9298ca1b33dd90f4ff9fbe1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" podUID="8d84c12b-cfd9-49af-bb2e-a10173126a4c" Aug 13 02:07:47.991739 kubelet[2718]: I0813 02:07:47.991716 2718 kubelet.go:2351] "Pod admission denied" podUID="0706c5ad-47a4-4b20-8a1c-18ce02d4993e" pod="tigera-operator/tigera-operator-747864d56d-9gxps" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:48.036614 kubelet[2718]: I0813 02:07:48.035958 2718 kubelet.go:2351] "Pod admission denied" podUID="654abb0e-39f2-4c98-b809-f9ad13bfd9df" pod="tigera-operator/tigera-operator-747864d56d-6l679" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:48.161131 kubelet[2718]: I0813 02:07:48.160932 2718 kubelet.go:2351] "Pod admission denied" podUID="71fd7cc1-f505-4b92-801e-c1d48d9a4b9f" pod="tigera-operator/tigera-operator-747864d56d-47k6t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:48.235723 kubelet[2718]: I0813 02:07:48.235662 2718 kubelet.go:2351] "Pod admission denied" podUID="1f1104b0-e5a8-417c-bb7c-254b81fb362f" pod="tigera-operator/tigera-operator-747864d56d-pkbk2" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:07:48.281062 kubelet[2718]: I0813 02:07:48.281026 2718 kubelet.go:2351] "Pod admission denied" podUID="d63ec4bf-bd2b-40fd-bd28-01ec8d261ae6" pod="tigera-operator/tigera-operator-747864d56d-hm2qw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:48.382241 kubelet[2718]: I0813 02:07:48.382193 2718 kubelet.go:2351] "Pod admission denied" podUID="72f8014d-da7c-4d1a-be67-5f037535acae" pod="tigera-operator/tigera-operator-747864d56d-gt7n2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:48.482016 kubelet[2718]: I0813 02:07:48.481914 2718 kubelet.go:2351] "Pod admission denied" podUID="c58387c4-dbb8-4aaa-864b-c54aca211aa1" pod="tigera-operator/tigera-operator-747864d56d-5dgwv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:48.533179 kubelet[2718]: I0813 02:07:48.533110 2718 kubelet.go:2351] "Pod admission denied" podUID="4c23caac-3db2-4198-a127-2855cb43d9e0" pod="tigera-operator/tigera-operator-747864d56d-qljj7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:48.633017 kubelet[2718]: I0813 02:07:48.632960 2718 kubelet.go:2351] "Pod admission denied" podUID="e14e3e9c-3c0e-449b-ad73-9cf189be26f0" pod="tigera-operator/tigera-operator-747864d56d-thkjk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:48.733696 kubelet[2718]: I0813 02:07:48.733568 2718 kubelet.go:2351] "Pod admission denied" podUID="b2254c8f-00d8-46b1-9f5e-df4bb8b394e3" pod="tigera-operator/tigera-operator-747864d56d-td5hk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:48.832614 kubelet[2718]: I0813 02:07:48.832548 2718 kubelet.go:2351] "Pod admission denied" podUID="bc205e4b-e163-44a5-9f93-816799759b67" pod="tigera-operator/tigera-operator-747864d56d-gp8jl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:48.937157 containerd[1542]: time="2025-08-13T02:07:48.936756156Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 02:07:49.032695 kubelet[2718]: I0813 02:07:49.032656 2718 kubelet.go:2351] "Pod admission denied" podUID="88d2c75c-a102-4215-9537-c59521cfcca0" pod="tigera-operator/tigera-operator-747864d56d-wq968" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:49.132421 kubelet[2718]: I0813 02:07:49.132383 2718 kubelet.go:2351] "Pod admission denied" podUID="0f22500e-275e-429c-8e2b-c61f02c514ed" pod="tigera-operator/tigera-operator-747864d56d-ppm4s" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:49.181320 kubelet[2718]: I0813 02:07:49.180736 2718 kubelet.go:2351] "Pod admission denied" podUID="b52db83d-8cc3-4ca3-a496-f2e18762c9f7" pod="tigera-operator/tigera-operator-747864d56d-sx26c" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:49.280298 kubelet[2718]: I0813 02:07:49.280256 2718 kubelet.go:2351] "Pod admission denied" podUID="773a9c94-47d3-4edc-adbd-0c69500801e6" pod="tigera-operator/tigera-operator-747864d56d-z8kh6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:49.487036 kubelet[2718]: I0813 02:07:49.486671 2718 kubelet.go:2351] "Pod admission denied" podUID="ed1515f0-2f9c-46b8-9def-da44a0073acc" pod="tigera-operator/tigera-operator-747864d56d-c4p7k" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:07:49.607161 kubelet[2718]: I0813 02:07:49.607114 2718 kubelet.go:2351] "Pod admission denied" podUID="8712d341-de0d-4a85-b91d-0c4f3540caad" pod="tigera-operator/tigera-operator-747864d56d-qjmqh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:49.686093 kubelet[2718]: I0813 02:07:49.685476 2718 kubelet.go:2351] "Pod admission denied" podUID="ccb1fcf8-1ce0-44d9-a754-d2577418a101" pod="tigera-operator/tigera-operator-747864d56d-wt5qc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:49.788565 kubelet[2718]: I0813 02:07:49.788529 2718 kubelet.go:2351] "Pod admission denied" podUID="3d47c251-790b-4508-9d58-d14494e1c01c" pod="tigera-operator/tigera-operator-747864d56d-6qzsb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:49.836286 kubelet[2718]: I0813 02:07:49.836248 2718 kubelet.go:2351] "Pod admission denied" podUID="115b981d-2637-4762-8439-9dd0b3909e26" pod="tigera-operator/tigera-operator-747864d56d-ff7mx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:49.936270 kubelet[2718]: I0813 02:07:49.936235 2718 kubelet.go:2351] "Pod admission denied" podUID="1fd108cd-8bc0-42e6-b6a0-0223a83031b7" pod="tigera-operator/tigera-operator-747864d56d-4s72p" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:49.939392 kubelet[2718]: E0813 02:07:49.939296 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:07:50.035681 kubelet[2718]: I0813 02:07:50.035638 2718 kubelet.go:2351] "Pod admission denied" podUID="bf0d3f1a-e217-48a4-9746-45a18bc9d885" pod="tigera-operator/tigera-operator-747864d56d-jbr5x" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:50.137306 kubelet[2718]: I0813 02:07:50.137198 2718 kubelet.go:2351] "Pod admission denied" podUID="2cb80a31-d27b-46be-b958-b4eb153d72e9" pod="tigera-operator/tigera-operator-747864d56d-c4rqf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:50.238847 kubelet[2718]: I0813 02:07:50.238800 2718 kubelet.go:2351] "Pod admission denied" podUID="d2000eed-037d-44fb-bbf5-ca531c2428a9" pod="tigera-operator/tigera-operator-747864d56d-5rsl5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:50.345918 kubelet[2718]: I0813 02:07:50.345597 2718 kubelet.go:2351] "Pod admission denied" podUID="ab825c31-fcf9-424d-84f0-200dbe5e1c1f" pod="tigera-operator/tigera-operator-747864d56d-p9x4f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:50.440004 kubelet[2718]: I0813 02:07:50.439921 2718 kubelet.go:2351] "Pod admission denied" podUID="a2000248-8f1f-4e8c-87b4-ca17057cf41e" pod="tigera-operator/tigera-operator-747864d56d-xdjsw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:50.542392 kubelet[2718]: I0813 02:07:50.542364 2718 kubelet.go:2351] "Pod admission denied" podUID="c1e8c870-12e0-45de-ba49-df6dd2523e85" pod="tigera-operator/tigera-operator-747864d56d-8hsfs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:50.640630 kubelet[2718]: I0813 02:07:50.640290 2718 kubelet.go:2351] "Pod admission denied" podUID="4f6eb7eb-dd32-478a-a29e-c77ac112e55a" pod="tigera-operator/tigera-operator-747864d56d-v9bzk" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:07:50.745645 kubelet[2718]: I0813 02:07:50.745217 2718 kubelet.go:2351] "Pod admission denied" podUID="4c1dd8b1-ab18-479b-b25f-35efb71abae6" pod="tigera-operator/tigera-operator-747864d56d-77tfn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:50.836070 kubelet[2718]: I0813 02:07:50.835935 2718 kubelet.go:2351] "Pod admission denied" podUID="8772e9e1-075d-49f8-9c9b-576b5c0e9346" pod="tigera-operator/tigera-operator-747864d56d-bcvwr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:50.935132 kubelet[2718]: E0813 02:07:50.935107 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:07:50.945953 kubelet[2718]: I0813 02:07:50.945910 2718 kubelet.go:2351] "Pod admission denied" podUID="4a844363-bd03-4e9b-b983-344b83ef60ad" pod="tigera-operator/tigera-operator-747864d56d-bz6xv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:51.039950 kubelet[2718]: I0813 02:07:51.039903 2718 kubelet.go:2351] "Pod admission denied" podUID="d99e60bd-9889-48b7-a518-ac3bbd1fce0c" pod="tigera-operator/tigera-operator-747864d56d-mxjk9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:51.139404 kubelet[2718]: I0813 02:07:51.139157 2718 kubelet.go:2351] "Pod admission denied" podUID="bd84bc62-fca4-4cd0-accc-999cabb421a5" pod="tigera-operator/tigera-operator-747864d56d-v79hp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:51.203283 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3051784.mount: Deactivated successfully. Aug 13 02:07:51.205242 containerd[1542]: time="2025-08-13T02:07:51.204984979Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3051784: write /var/lib/containerd/tmpmounts/containerd-mount3051784/usr/bin/calico-node: no space left on device" Aug 13 02:07:51.205242 containerd[1542]: time="2025-08-13T02:07:51.205110107Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 02:07:51.206702 kubelet[2718]: E0813 02:07:51.206032 2718 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3051784: write /var/lib/containerd/tmpmounts/containerd-mount3051784/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 02:07:51.206702 kubelet[2718]: E0813 02:07:51.206090 2718 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3051784: write /var/lib/containerd/tmpmounts/containerd-mount3051784/usr/bin/calico-node: no space left on device" 
image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 02:07:51.206842 kubelet[2718]: E0813 02:07:51.206243 2718 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSGOLDMANESERVER,Value:goldmane.calico-system.svc:7443,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSFLUSHINTERVAL,Value:15,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:first-found,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOn
ly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j884b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-cdfxj_calico-system(e8f51745-7382-4ead-96df-a31572ad4e1f): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3051784: write /var/lib/containerd/tmpmounts/containerd-mount3051784/usr/bin/calico-node: no space left on device" logger="UnhandledError" Aug 13 02:07:51.207530 kubelet[2718]: E0813 02:07:51.207497 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3051784: write /var/lib/containerd/tmpmounts/containerd-mount3051784/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-cdfxj" podUID="e8f51745-7382-4ead-96df-a31572ad4e1f" Aug 13 02:07:51.233125 kubelet[2718]: I0813 02:07:51.233090 2718 kubelet.go:2351] "Pod admission denied" podUID="14947054-5cd7-40cc-8408-6e158251ec40" pod="tigera-operator/tigera-operator-747864d56d-xs9t6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:51.332052 kubelet[2718]: I0813 02:07:51.331958 2718 kubelet.go:2351] "Pod admission denied" podUID="5e1f57ef-78c6-4248-bd1f-c884378884cc" pod="tigera-operator/tigera-operator-747864d56d-b4pjb" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:07:51.532556 kubelet[2718]: I0813 02:07:51.532508 2718 kubelet.go:2351] "Pod admission denied" podUID="9a597249-838f-4b6b-b19a-a51da85f81fa" pod="tigera-operator/tigera-operator-747864d56d-hb8fh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:51.630446 kubelet[2718]: I0813 02:07:51.630336 2718 kubelet.go:2351] "Pod admission denied" podUID="5b4f5122-0790-4dda-9924-fb366ff7687b" pod="tigera-operator/tigera-operator-747864d56d-7bkc2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:51.735112 kubelet[2718]: I0813 02:07:51.735076 2718 kubelet.go:2351] "Pod admission denied" podUID="8db2a535-5f53-41a9-bf9e-340f7f85ea93" pod="tigera-operator/tigera-operator-747864d56d-x9zsw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:51.835676 kubelet[2718]: I0813 02:07:51.835640 2718 kubelet.go:2351] "Pod admission denied" podUID="2480b4d1-e3fb-490b-9e81-aec9a363e4fe" pod="tigera-operator/tigera-operator-747864d56d-jzxgg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:51.880882 kubelet[2718]: I0813 02:07:51.880764 2718 kubelet.go:2351] "Pod admission denied" podUID="d5aa545f-641c-47cd-a12f-b9a8c3703216" pod="tigera-operator/tigera-operator-747864d56d-58hnx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:51.935947 kubelet[2718]: E0813 02:07:51.934898 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:07:51.981390 kubelet[2718]: I0813 02:07:51.981363 2718 kubelet.go:2351] "Pod admission denied" podUID="942f910f-c7ac-4057-9076-9f852a535e04" pod="tigera-operator/tigera-operator-747864d56d-kx87p" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:52.196067 kubelet[2718]: I0813 02:07:52.195470 2718 kubelet.go:2351] "Pod admission denied" podUID="1b02f5b0-069a-4a05-bd2c-2eb187b8620e" pod="tigera-operator/tigera-operator-747864d56d-88lnz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:52.283399 kubelet[2718]: I0813 02:07:52.283355 2718 kubelet.go:2351] "Pod admission denied" podUID="6e1e74b7-ccf6-4b62-ad39-b288eb9a1966" pod="tigera-operator/tigera-operator-747864d56d-q9zws" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:52.331155 kubelet[2718]: I0813 02:07:52.331118 2718 kubelet.go:2351] "Pod admission denied" podUID="e8a5dd75-9d5f-459e-a660-98689b2efbd8" pod="tigera-operator/tigera-operator-747864d56d-whrrb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:52.432680 kubelet[2718]: I0813 02:07:52.432633 2718 kubelet.go:2351] "Pod admission denied" podUID="e3d968aa-4b42-4ac6-a362-128009d694fe" pod="tigera-operator/tigera-operator-747864d56d-hl5mj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:52.637925 kubelet[2718]: I0813 02:07:52.637869 2718 kubelet.go:2351] "Pod admission denied" podUID="8276c19a-4f42-4157-b623-77a617ba94e7" pod="tigera-operator/tigera-operator-747864d56d-zr946" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:52.733341 kubelet[2718]: I0813 02:07:52.733290 2718 kubelet.go:2351] "Pod admission denied" podUID="d9d482a0-08f5-4ff3-beb9-24b2f63321d7" pod="tigera-operator/tigera-operator-747864d56d-6jdcq" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:07:52.781668 kubelet[2718]: I0813 02:07:52.781632 2718 kubelet.go:2351] "Pod admission denied" podUID="4f7b7e2a-94c1-47a4-a016-156c1c457b90" pod="tigera-operator/tigera-operator-747864d56d-xm6ns" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:52.881800 kubelet[2718]: I0813 02:07:52.881755 2718 kubelet.go:2351] "Pod admission denied" podUID="2799e559-36a1-4a10-8d49-ced9c12f7ecd" pod="tigera-operator/tigera-operator-747864d56d-jwg74" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:52.982086 kubelet[2718]: I0813 02:07:52.981652 2718 kubelet.go:2351] "Pod admission denied" podUID="5f0cd484-8ff4-45c9-bf6f-dcfdbc477424" pod="tigera-operator/tigera-operator-747864d56d-bb9zz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:53.088245 kubelet[2718]: I0813 02:07:53.088179 2718 kubelet.go:2351] "Pod admission denied" podUID="0444d142-6f90-42f5-b239-8b0013ad3474" pod="tigera-operator/tigera-operator-747864d56d-vx4gw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:53.183286 kubelet[2718]: I0813 02:07:53.183248 2718 kubelet.go:2351] "Pod admission denied" podUID="b4509e41-a7e5-459d-ae41-4048e38ffa86" pod="tigera-operator/tigera-operator-747864d56d-c7xhv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:53.283793 kubelet[2718]: I0813 02:07:53.283756 2718 kubelet.go:2351] "Pod admission denied" podUID="2d0a19fb-2bbb-4a9a-b8ab-4064dfca382f" pod="tigera-operator/tigera-operator-747864d56d-n8qj8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:53.381526 kubelet[2718]: I0813 02:07:53.381485 2718 kubelet.go:2351] "Pod admission denied" podUID="ba183cab-2735-4642-aa85-34a67e7fbcb1" pod="tigera-operator/tigera-operator-747864d56d-v8czc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:53.480438 kubelet[2718]: I0813 02:07:53.480401 2718 kubelet.go:2351] "Pod admission denied" podUID="482ad014-c400-4a49-89f4-d4dda0ac3f02" pod="tigera-operator/tigera-operator-747864d56d-p5jz4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:53.583552 kubelet[2718]: I0813 02:07:53.583198 2718 kubelet.go:2351] "Pod admission denied" podUID="baa7226f-c08c-4407-9a48-a376dea6183a" pod="tigera-operator/tigera-operator-747864d56d-wtwft" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:53.684110 kubelet[2718]: I0813 02:07:53.684060 2718 kubelet.go:2351] "Pod admission denied" podUID="a62bb622-eeb8-4ae9-890a-d17de7a5a9eb" pod="tigera-operator/tigera-operator-747864d56d-dz865" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:53.887800 kubelet[2718]: I0813 02:07:53.887358 2718 kubelet.go:2351] "Pod admission denied" podUID="8328ee8e-2aab-4372-8e2a-528709989cf1" pod="tigera-operator/tigera-operator-747864d56d-ffsks" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:53.984415 kubelet[2718]: I0813 02:07:53.984375 2718 kubelet.go:2351] "Pod admission denied" podUID="f62d7e1c-ff90-4099-af6b-58a433b9dfef" pod="tigera-operator/tigera-operator-747864d56d-hhkbm" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:07:54.081713 kubelet[2718]: I0813 02:07:54.081646 2718 kubelet.go:2351] "Pod admission denied" podUID="dd23e779-2e71-47ac-b2d9-d6bcfc9725d8" pod="tigera-operator/tigera-operator-747864d56d-zp7zv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:54.185454 kubelet[2718]: I0813 02:07:54.185099 2718 kubelet.go:2351] "Pod admission denied" podUID="76161f32-aa18-450b-8657-0062ad4bc0bb" pod="tigera-operator/tigera-operator-747864d56d-4x96v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:54.239618 kubelet[2718]: I0813 02:07:54.238613 2718 kubelet.go:2351] "Pod admission denied" podUID="327ad84d-02b7-47d5-b000-446c70fb5950" pod="tigera-operator/tigera-operator-747864d56d-bp9kd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:54.334328 kubelet[2718]: I0813 02:07:54.334278 2718 kubelet.go:2351] "Pod admission denied" podUID="1ee88633-aa75-4cc0-860a-212cc0684ec8" pod="tigera-operator/tigera-operator-747864d56d-ltlsc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:54.436834 kubelet[2718]: I0813 02:07:54.436710 2718 kubelet.go:2351] "Pod admission denied" podUID="6508cc1b-0de7-4bd2-9138-4deb5f6b2184" pod="tigera-operator/tigera-operator-747864d56d-9wmwq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:54.536723 kubelet[2718]: I0813 02:07:54.536675 2718 kubelet.go:2351] "Pod admission denied" podUID="a051ca6b-4cd8-444c-847b-5816f21a934a" pod="tigera-operator/tigera-operator-747864d56d-v5bvk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:54.634617 kubelet[2718]: I0813 02:07:54.634486 2718 kubelet.go:2351] "Pod admission denied" podUID="85181580-456a-4ad7-9e12-db594c82f816" pod="tigera-operator/tigera-operator-747864d56d-kq7kr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:54.736881 kubelet[2718]: I0813 02:07:54.736237 2718 kubelet.go:2351] "Pod admission denied" podUID="8650f16b-13b8-41de-a894-2ebc0c621fa4" pod="tigera-operator/tigera-operator-747864d56d-nqgp2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:54.831786 kubelet[2718]: I0813 02:07:54.831738 2718 kubelet.go:2351] "Pod admission denied" podUID="67c8d62a-0aa5-43f6-932b-018409bb85e8" pod="tigera-operator/tigera-operator-747864d56d-zsxdg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:54.933847 kubelet[2718]: I0813 02:07:54.933803 2718 kubelet.go:2351] "Pod admission denied" podUID="cdcd40da-e412-4e06-b2f5-a4d33d2ae87c" pod="tigera-operator/tigera-operator-747864d56d-lrls9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:55.032574 kubelet[2718]: I0813 02:07:55.032544 2718 kubelet.go:2351] "Pod admission denied" podUID="5192653a-f25d-40dc-b0fd-149518cadb78" pod="tigera-operator/tigera-operator-747864d56d-cj6d6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:55.131337 kubelet[2718]: I0813 02:07:55.131290 2718 kubelet.go:2351] "Pod admission denied" podUID="eec9b9ff-5e65-4b06-9962-9c736ce88342" pod="tigera-operator/tigera-operator-747864d56d-8cmzc" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:07:55.338723 kubelet[2718]: I0813 02:07:55.337951 2718 kubelet.go:2351] "Pod admission denied" podUID="1be74568-1d09-4a9b-9112-3d8f0490c56a" pod="tigera-operator/tigera-operator-747864d56d-ncthk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:55.446552 kubelet[2718]: I0813 02:07:55.446259 2718 kubelet.go:2351] "Pod admission denied" podUID="dfbbcd60-6888-4b13-81d9-f283209f354c" pod="tigera-operator/tigera-operator-747864d56d-78b5q" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:55.446552 kubelet[2718]: I0813 02:07:55.446360 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 02:07:55.446552 kubelet[2718]: I0813 02:07:55.446382 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 02:07:55.449223 kubelet[2718]: I0813 02:07:55.449208 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 02:07:55.460127 kubelet[2718]: I0813 02:07:55.460099 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 02:07:55.460187 kubelet[2718]: I0813 02:07:55.460170 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-668d6bf9bc-p5qmw","calico-system/calico-kube-controllers-7c47cf6bcb-c9c87","kube-system/coredns-668d6bf9bc-pw6gg","calico-system/csi-node-driver-r6mhv","calico-system/calico-node-cdfxj","calico-system/calico-typha-67c8447dcf-wsn77","kube-system/kube-controller-manager-172-236-122-171","kube-system/kube-proxy-s4bl4","kube-system/kube-apiserver-172-236-122-171","kube-system/kube-scheduler-172-236-122-171"] Aug 13 02:07:55.460262 kubelet[2718]: E0813 02:07:55.460195 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:07:55.460262 kubelet[2718]: E0813 02:07:55.460205 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:07:55.460262 kubelet[2718]: E0813 02:07:55.460211 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:07:55.460262 kubelet[2718]: E0813 02:07:55.460217 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:07:55.460262 kubelet[2718]: E0813 02:07:55.460222 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-cdfxj" Aug 13 02:07:55.460262 kubelet[2718]: E0813 02:07:55.460230 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-67c8447dcf-wsn77" Aug 13 02:07:55.460262 kubelet[2718]: E0813 02:07:55.460237 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-122-171" Aug 13 02:07:55.460262 kubelet[2718]: E0813 02:07:55.460243 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-s4bl4" Aug 13 02:07:55.460262 kubelet[2718]: E0813 02:07:55.460251 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-122-171" Aug 13 02:07:55.460262 kubelet[2718]: E0813 02:07:55.460258 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" 
pod="kube-system/kube-scheduler-172-236-122-171" Aug 13 02:07:55.460262 kubelet[2718]: I0813 02:07:55.460265 2718 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 02:07:55.482547 kubelet[2718]: I0813 02:07:55.482516 2718 kubelet.go:2351] "Pod admission denied" podUID="45371452-c54b-4f02-823b-180274cbb260" pod="tigera-operator/tigera-operator-747864d56d-fjr4r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:55.583775 kubelet[2718]: I0813 02:07:55.583729 2718 kubelet.go:2351] "Pod admission denied" podUID="413bbf88-9b5c-4db1-91e2-4b1867431300" pod="tigera-operator/tigera-operator-747864d56d-2csdb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:55.697272 kubelet[2718]: I0813 02:07:55.696048 2718 kubelet.go:2351] "Pod admission denied" podUID="196c1d03-f95c-48b9-858f-3866cbc86e12" pod="tigera-operator/tigera-operator-747864d56d-g5p9b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:55.783139 kubelet[2718]: I0813 02:07:55.783091 2718 kubelet.go:2351] "Pod admission denied" podUID="5041316a-4f27-484f-b388-6d45b145cdf7" pod="tigera-operator/tigera-operator-747864d56d-fc7pb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:55.881515 kubelet[2718]: I0813 02:07:55.881474 2718 kubelet.go:2351] "Pod admission denied" podUID="90b5f99c-1e46-48ba-a4e7-e190aa3f3b87" pod="tigera-operator/tigera-operator-747864d56d-xc227" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:55.936441 containerd[1542]: time="2025-08-13T02:07:55.936362628Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r6mhv,Uid:b671177d-3397-4938-853c-0cced3d0e9f5,Namespace:calico-system,Attempt:0,}" Aug 13 02:07:55.981152 containerd[1542]: time="2025-08-13T02:07:55.980395065Z" level=error msg="Failed to destroy network for sandbox \"0fb421aea1996fb374f768e07285e651b24d54c0fd2bc74e7ed7d4acc9b06a52\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:55.982230 systemd[1]: run-netns-cni\x2df3eebbec\x2d4269\x2d2bb7\x2d695b\x2d055f6b69bfdf.mount: Deactivated successfully. 
Aug 13 02:07:55.984637 containerd[1542]: time="2025-08-13T02:07:55.984444913Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r6mhv,Uid:b671177d-3397-4938-853c-0cced3d0e9f5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fb421aea1996fb374f768e07285e651b24d54c0fd2bc74e7ed7d4acc9b06a52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:55.985389 kubelet[2718]: E0813 02:07:55.985289 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fb421aea1996fb374f768e07285e651b24d54c0fd2bc74e7ed7d4acc9b06a52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:55.985389 kubelet[2718]: E0813 02:07:55.985350 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fb421aea1996fb374f768e07285e651b24d54c0fd2bc74e7ed7d4acc9b06a52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:07:55.985389 kubelet[2718]: E0813 02:07:55.985369 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0fb421aea1996fb374f768e07285e651b24d54c0fd2bc74e7ed7d4acc9b06a52\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:07:55.985634 kubelet[2718]: E0813 02:07:55.985403 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r6mhv_calico-system(b671177d-3397-4938-853c-0cced3d0e9f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-r6mhv_calico-system(b671177d-3397-4938-853c-0cced3d0e9f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0fb421aea1996fb374f768e07285e651b24d54c0fd2bc74e7ed7d4acc9b06a52\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r6mhv" podUID="b671177d-3397-4938-853c-0cced3d0e9f5" Aug 13 02:07:55.991527 kubelet[2718]: I0813 02:07:55.991502 2718 kubelet.go:2351] "Pod admission denied" podUID="91afe2d1-5f71-4d33-87a9-bf474b10b4db" pod="tigera-operator/tigera-operator-747864d56d-4cj5r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:56.186542 kubelet[2718]: I0813 02:07:56.186485 2718 kubelet.go:2351] "Pod admission denied" podUID="1838fd43-a189-4125-94a7-70f029a5f2b1" pod="tigera-operator/tigera-operator-747864d56d-nxqwd" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:07:56.283025 kubelet[2718]: I0813 02:07:56.282981 2718 kubelet.go:2351] "Pod admission denied" podUID="c598b0b1-c605-457d-bddf-a82e86616ea0" pod="tigera-operator/tigera-operator-747864d56d-d7brs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:56.382581 kubelet[2718]: I0813 02:07:56.382531 2718 kubelet.go:2351] "Pod admission denied" podUID="08f0c886-09dd-41b2-8584-930b5f790ef1" pod="tigera-operator/tigera-operator-747864d56d-rplcs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:56.589066 kubelet[2718]: I0813 02:07:56.588872 2718 kubelet.go:2351] "Pod admission denied" podUID="26ee809b-ace3-4840-8489-897a9be99b5e" pod="tigera-operator/tigera-operator-747864d56d-685mp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:56.689878 kubelet[2718]: I0813 02:07:56.689810 2718 kubelet.go:2351] "Pod admission denied" podUID="3c765281-98a8-48e6-a707-ae0ba7584486" pod="tigera-operator/tigera-operator-747864d56d-r2thm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:56.786043 kubelet[2718]: I0813 02:07:56.785981 2718 kubelet.go:2351] "Pod admission denied" podUID="3a1ed2e0-2c96-45f4-98fa-a12557d3d080" pod="tigera-operator/tigera-operator-747864d56d-nl5gb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:56.884885 kubelet[2718]: I0813 02:07:56.884731 2718 kubelet.go:2351] "Pod admission denied" podUID="5e058db0-01f2-4d3b-94fc-ef3b11014687" pod="tigera-operator/tigera-operator-747864d56d-bcs9f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:56.985886 kubelet[2718]: I0813 02:07:56.985463 2718 kubelet.go:2351] "Pod admission denied" podUID="f7d11262-779f-4a64-a384-8e8e6697a4f6" pod="tigera-operator/tigera-operator-747864d56d-8gw9x" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:57.085046 kubelet[2718]: I0813 02:07:57.085000 2718 kubelet.go:2351] "Pod admission denied" podUID="66561653-a0db-4ec7-a137-728197da6521" pod="tigera-operator/tigera-operator-747864d56d-btmlc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:57.184579 kubelet[2718]: I0813 02:07:57.184308 2718 kubelet.go:2351] "Pod admission denied" podUID="a7e55833-68a0-43f9-b873-d35c546b461d" pod="tigera-operator/tigera-operator-747864d56d-49xtk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:57.284681 kubelet[2718]: I0813 02:07:57.284617 2718 kubelet.go:2351] "Pod admission denied" podUID="5016456d-981f-4875-b95a-3fb3de412687" pod="tigera-operator/tigera-operator-747864d56d-qrtwr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:57.389747 kubelet[2718]: I0813 02:07:57.389666 2718 kubelet.go:2351] "Pod admission denied" podUID="613af2ed-3dc2-47b6-a2a8-4b7cc196a844" pod="tigera-operator/tigera-operator-747864d56d-lnp8f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:57.591566 kubelet[2718]: I0813 02:07:57.591295 2718 kubelet.go:2351] "Pod admission denied" podUID="2e34357c-8010-44cc-9b55-d64b664fe0b8" pod="tigera-operator/tigera-operator-747864d56d-km74q" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:07:57.688677 kubelet[2718]: I0813 02:07:57.688606 2718 kubelet.go:2351] "Pod admission denied" podUID="433c8f07-61c9-411c-a7a8-7a7b883895d3" pod="tigera-operator/tigera-operator-747864d56d-skkvf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:57.787343 kubelet[2718]: I0813 02:07:57.787280 2718 kubelet.go:2351] "Pod admission denied" podUID="92740095-d35c-4203-ab43-a0a2ccac434d" pod="tigera-operator/tigera-operator-747864d56d-vfdl5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:57.886414 kubelet[2718]: I0813 02:07:57.886014 2718 kubelet.go:2351] "Pod admission denied" podUID="3c4907e4-ce9c-44e7-a93b-055dae99fd6b" pod="tigera-operator/tigera-operator-747864d56d-4flgn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:58.026620 kubelet[2718]: I0813 02:07:58.026454 2718 kubelet.go:2351] "Pod admission denied" podUID="e5a37e3b-e08e-4c82-834f-dfd6e70876a2" pod="tigera-operator/tigera-operator-747864d56d-ks55r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:58.084755 kubelet[2718]: I0813 02:07:58.084710 2718 kubelet.go:2351] "Pod admission denied" podUID="4f888c92-e99b-42e3-8cc9-3d6ee3df2a62" pod="tigera-operator/tigera-operator-747864d56d-6fhvw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:58.186169 kubelet[2718]: I0813 02:07:58.184969 2718 kubelet.go:2351] "Pod admission denied" podUID="ddc4ae8f-32e3-484b-908d-8fbf9e32f4fb" pod="tigera-operator/tigera-operator-747864d56d-s25k9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:58.285575 kubelet[2718]: I0813 02:07:58.285539 2718 kubelet.go:2351] "Pod admission denied" podUID="adda294b-68c1-413d-911c-f3dec23dbebf" pod="tigera-operator/tigera-operator-747864d56d-gpgjv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:58.395616 kubelet[2718]: I0813 02:07:58.393867 2718 kubelet.go:2351] "Pod admission denied" podUID="d3f0cf90-4b4c-4eb2-aaf8-2729c48e859d" pod="tigera-operator/tigera-operator-747864d56d-jg962" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:58.484799 kubelet[2718]: I0813 02:07:58.484537 2718 kubelet.go:2351] "Pod admission denied" podUID="9e66b5a7-f250-44d9-a178-96407ffa93a2" pod="tigera-operator/tigera-operator-747864d56d-csm5w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:58.584259 kubelet[2718]: I0813 02:07:58.584201 2718 kubelet.go:2351] "Pod admission denied" podUID="3ad8f1cd-c335-4ccd-9420-9694dd1c7035" pod="tigera-operator/tigera-operator-747864d56d-jkcvn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:58.784533 kubelet[2718]: I0813 02:07:58.784458 2718 kubelet.go:2351] "Pod admission denied" podUID="1e6ada6c-8849-44f3-b8a4-85dcb961c338" pod="tigera-operator/tigera-operator-747864d56d-tzmz9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:58.885405 kubelet[2718]: I0813 02:07:58.885355 2718 kubelet.go:2351] "Pod admission denied" podUID="d76195e1-0cdb-4622-8075-ebdefe7380aa" pod="tigera-operator/tigera-operator-747864d56d-kjtpx" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:07:58.936146 kubelet[2718]: E0813 02:07:58.936075 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:07:58.937124 containerd[1542]: time="2025-08-13T02:07:58.937062424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p5qmw,Uid:26fd4059-1e9c-49a2-9bd9-181be9ad7bcb,Namespace:kube-system,Attempt:0,}" Aug 13 02:07:58.947064 kubelet[2718]: I0813 02:07:58.947028 2718 kubelet.go:2351] "Pod admission denied" podUID="db4ffe26-4c52-48c3-a0c3-98e865bd7ee9" pod="tigera-operator/tigera-operator-747864d56d-8ckkw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:59.013413 containerd[1542]: time="2025-08-13T02:07:59.013363409Z" level=error msg="Failed to destroy network for sandbox \"2f7d7536212faee51002dacbf47d9a2ba4a078f9ef4fc42c6b651e3f86862863\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:59.016097 systemd[1]: run-netns-cni\x2db65dd642\x2dff19\x2dec11\x2dac48\x2d231677706574.mount: Deactivated successfully. Aug 13 02:07:59.017075 containerd[1542]: time="2025-08-13T02:07:59.017035546Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p5qmw,Uid:26fd4059-1e9c-49a2-9bd9-181be9ad7bcb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f7d7536212faee51002dacbf47d9a2ba4a078f9ef4fc42c6b651e3f86862863\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:59.017748 kubelet[2718]: E0813 02:07:59.017711 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f7d7536212faee51002dacbf47d9a2ba4a078f9ef4fc42c6b651e3f86862863\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:07:59.017821 kubelet[2718]: E0813 02:07:59.017764 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f7d7536212faee51002dacbf47d9a2ba4a078f9ef4fc42c6b651e3f86862863\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:07:59.017821 kubelet[2718]: E0813 02:07:59.017785 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f7d7536212faee51002dacbf47d9a2ba4a078f9ef4fc42c6b651e3f86862863\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:07:59.017878 kubelet[2718]: E0813 02:07:59.017855 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-p5qmw_kube-system(26fd4059-1e9c-49a2-9bd9-181be9ad7bcb)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-p5qmw_kube-system(26fd4059-1e9c-49a2-9bd9-181be9ad7bcb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2f7d7536212faee51002dacbf47d9a2ba4a078f9ef4fc42c6b651e3f86862863\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-p5qmw" podUID="26fd4059-1e9c-49a2-9bd9-181be9ad7bcb" Aug 13 02:07:59.035306 kubelet[2718]: I0813 02:07:59.034788 2718 kubelet.go:2351] "Pod admission denied" podUID="c23e032a-8f1c-4df4-8ce7-da8965ae50e3" pod="tigera-operator/tigera-operator-747864d56d-h5wbh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:59.137538 kubelet[2718]: I0813 02:07:59.137470 2718 kubelet.go:2351] "Pod admission denied" podUID="e40b2f6c-5452-41f6-91cf-52742e732260" pod="tigera-operator/tigera-operator-747864d56d-b9f4g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:59.233411 kubelet[2718]: I0813 02:07:59.233360 2718 kubelet.go:2351] "Pod admission denied" podUID="ccb9af21-d38b-41d0-8fb8-c3ffc947772b" pod="tigera-operator/tigera-operator-747864d56d-8l9gp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:59.333660 kubelet[2718]: I0813 02:07:59.333531 2718 kubelet.go:2351] "Pod admission denied" podUID="5d1755e0-fe85-4beb-82d8-ca30da7aa238" pod="tigera-operator/tigera-operator-747864d56d-j22lh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:59.433552 kubelet[2718]: I0813 02:07:59.433502 2718 kubelet.go:2351] "Pod admission denied" podUID="cacb5705-1a60-409d-9178-32671ea7e19a" pod="tigera-operator/tigera-operator-747864d56d-n98kz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:59.633468 kubelet[2718]: I0813 02:07:59.633337 2718 kubelet.go:2351] "Pod admission denied" podUID="c9786802-d692-478b-aac7-bb8da3995d98" pod="tigera-operator/tigera-operator-747864d56d-nhxns" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:59.734927 kubelet[2718]: I0813 02:07:59.734868 2718 kubelet.go:2351] "Pod admission denied" podUID="c2dc1159-3381-4196-9cdc-375d06317e61" pod="tigera-operator/tigera-operator-747864d56d-zbflg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:59.792104 kubelet[2718]: I0813 02:07:59.789760 2718 kubelet.go:2351] "Pod admission denied" podUID="29dd96d3-77ae-4315-8c0c-5b1bfda6c993" pod="tigera-operator/tigera-operator-747864d56d-m76zd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:07:59.882535 kubelet[2718]: I0813 02:07:59.882486 2718 kubelet.go:2351] "Pod admission denied" podUID="ca3b5526-805f-4909-91bb-5b1ef1b3250d" pod="tigera-operator/tigera-operator-747864d56d-rzlwv" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:07:59.938944 containerd[1542]: time="2025-08-13T02:07:59.937849379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c47cf6bcb-c9c87,Uid:8d84c12b-cfd9-49af-bb2e-a10173126a4c,Namespace:calico-system,Attempt:0,}" Aug 13 02:07:59.938944 containerd[1542]: time="2025-08-13T02:07:59.938373683Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pw6gg,Uid:e73a6876-bbb3-4e11-8a33-1945cf27a944,Namespace:kube-system,Attempt:0,}" Aug 13 02:07:59.939354 kubelet[2718]: E0813 02:07:59.938052 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:08:00.003621 containerd[1542]: time="2025-08-13T02:08:00.003513286Z" level=error msg="Failed to destroy network for sandbox \"8d81cd82917dbcd30b4a4a4f1603a11d156e8ea4689343d5b4ee9fca75a93cb5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:00.006456 systemd[1]: run-netns-cni\x2dadf7fe6c\x2d7b94\x2d2593\x2d3687\x2d9a596d839a2a.mount: Deactivated successfully. Aug 13 02:08:00.006741 containerd[1542]: time="2025-08-13T02:08:00.006670259Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pw6gg,Uid:e73a6876-bbb3-4e11-8a33-1945cf27a944,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d81cd82917dbcd30b4a4a4f1603a11d156e8ea4689343d5b4ee9fca75a93cb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:00.007308 kubelet[2718]: E0813 02:08:00.007092 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d81cd82917dbcd30b4a4a4f1603a11d156e8ea4689343d5b4ee9fca75a93cb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:00.007308 kubelet[2718]: E0813 02:08:00.007137 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d81cd82917dbcd30b4a4a4f1603a11d156e8ea4689343d5b4ee9fca75a93cb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:08:00.007308 kubelet[2718]: E0813 02:08:00.007156 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d81cd82917dbcd30b4a4a4f1603a11d156e8ea4689343d5b4ee9fca75a93cb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:08:00.007308 kubelet[2718]: E0813 02:08:00.007190 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-pw6gg_kube-system(e73a6876-bbb3-4e11-8a33-1945cf27a944)\" with CreatePodSandboxError: \"Failed to 
create sandbox for pod \\\"coredns-668d6bf9bc-pw6gg_kube-system(e73a6876-bbb3-4e11-8a33-1945cf27a944)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d81cd82917dbcd30b4a4a4f1603a11d156e8ea4689343d5b4ee9fca75a93cb5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-pw6gg" podUID="e73a6876-bbb3-4e11-8a33-1945cf27a944" Aug 13 02:08:00.014130 containerd[1542]: time="2025-08-13T02:08:00.014018584Z" level=error msg="Failed to destroy network for sandbox \"85d8aecb464feb95e0f17367f0e55033eeca1caa26c731c8733c53c689ca3cf6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:00.014995 containerd[1542]: time="2025-08-13T02:08:00.014967293Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c47cf6bcb-c9c87,Uid:8d84c12b-cfd9-49af-bb2e-a10173126a4c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"85d8aecb464feb95e0f17367f0e55033eeca1caa26c731c8733c53c689ca3cf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:00.015808 kubelet[2718]: E0813 02:08:00.015376 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85d8aecb464feb95e0f17367f0e55033eeca1caa26c731c8733c53c689ca3cf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:00.015808 kubelet[2718]: E0813 02:08:00.015460 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85d8aecb464feb95e0f17367f0e55033eeca1caa26c731c8733c53c689ca3cf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:08:00.015808 kubelet[2718]: E0813 02:08:00.015491 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"85d8aecb464feb95e0f17367f0e55033eeca1caa26c731c8733c53c689ca3cf6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:08:00.015808 kubelet[2718]: E0813 02:08:00.015535 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7c47cf6bcb-c9c87_calico-system(8d84c12b-cfd9-49af-bb2e-a10173126a4c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7c47cf6bcb-c9c87_calico-system(8d84c12b-cfd9-49af-bb2e-a10173126a4c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"85d8aecb464feb95e0f17367f0e55033eeca1caa26c731c8733c53c689ca3cf6\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" podUID="8d84c12b-cfd9-49af-bb2e-a10173126a4c" Aug 13 02:08:00.017032 systemd[1]: run-netns-cni\x2df3b3f484\x2de9f9\x2dce84\x2d6d04\x2dd5d6b332111f.mount: Deactivated successfully. Aug 13 02:08:00.085032 kubelet[2718]: I0813 02:08:00.084981 2718 kubelet.go:2351] "Pod admission denied" podUID="8dc152c2-020c-4d88-aa9a-5f3063217c0e" pod="tigera-operator/tigera-operator-747864d56d-f8nwp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:00.186461 kubelet[2718]: I0813 02:08:00.186422 2718 kubelet.go:2351] "Pod admission denied" podUID="3d0aeddb-9512-441b-854f-c51cd6504980" pod="tigera-operator/tigera-operator-747864d56d-98r79" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:00.243973 kubelet[2718]: I0813 02:08:00.243123 2718 kubelet.go:2351] "Pod admission denied" podUID="14d25fd7-11bd-4792-a37d-887d7d3e5690" pod="tigera-operator/tigera-operator-747864d56d-5prnk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:00.334613 kubelet[2718]: I0813 02:08:00.334562 2718 kubelet.go:2351] "Pod admission denied" podUID="4e25d326-6a10-43a1-b0cb-6811c8d287fa" pod="tigera-operator/tigera-operator-747864d56d-89dkx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:00.433538 kubelet[2718]: I0813 02:08:00.433508 2718 kubelet.go:2351] "Pod admission denied" podUID="59d998d2-f171-4d19-853b-05c03d219f31" pod="tigera-operator/tigera-operator-747864d56d-85hkq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:00.483602 kubelet[2718]: I0813 02:08:00.483556 2718 kubelet.go:2351] "Pod admission denied" podUID="42dd0190-bc6e-4d16-9c72-fc4cd06d61a7" pod="tigera-operator/tigera-operator-747864d56d-vzcxj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:00.584308 kubelet[2718]: I0813 02:08:00.584239 2718 kubelet.go:2351] "Pod admission denied" podUID="d738f31f-7711-4324-99ce-a6c3450173d3" pod="tigera-operator/tigera-operator-747864d56d-x2brq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:00.691058 kubelet[2718]: I0813 02:08:00.690995 2718 kubelet.go:2351] "Pod admission denied" podUID="f3dc1df5-0994-41a5-8e86-dba7cd73fa60" pod="tigera-operator/tigera-operator-747864d56d-wqlsz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:00.739518 kubelet[2718]: I0813 02:08:00.739450 2718 kubelet.go:2351] "Pod admission denied" podUID="63c42c9c-e6e8-420e-943d-69d46a46f55b" pod="tigera-operator/tigera-operator-747864d56d-t7qp8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:00.840232 kubelet[2718]: I0813 02:08:00.840042 2718 kubelet.go:2351] "Pod admission denied" podUID="94a54384-8584-4a34-a926-76d54b900d42" pod="tigera-operator/tigera-operator-747864d56d-h94kw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:00.939182 kubelet[2718]: I0813 02:08:00.939106 2718 kubelet.go:2351] "Pod admission denied" podUID="21e5554d-d8d4-4dde-8fe2-e37d3430735b" pod="tigera-operator/tigera-operator-747864d56d-rb4gc" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:08:00.992644 kubelet[2718]: I0813 02:08:00.991852 2718 kubelet.go:2351] "Pod admission denied" podUID="67d1a1f8-14cc-409b-a68e-478aa503caa5" pod="tigera-operator/tigera-operator-747864d56d-zbptx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:01.089346 kubelet[2718]: I0813 02:08:01.089265 2718 kubelet.go:2351] "Pod admission denied" podUID="665a3b72-351b-4422-b003-04330d05cf3a" pod="tigera-operator/tigera-operator-747864d56d-wdpsf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:01.188803 kubelet[2718]: I0813 02:08:01.188499 2718 kubelet.go:2351] "Pod admission denied" podUID="71f63eca-4fc2-45f4-91fc-55d30afea386" pod="tigera-operator/tigera-operator-747864d56d-5rvlc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:01.290654 kubelet[2718]: I0813 02:08:01.289618 2718 kubelet.go:2351] "Pod admission denied" podUID="de63be27-0918-4a7f-aaca-f498f427264e" pod="tigera-operator/tigera-operator-747864d56d-vzwt8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:01.387456 kubelet[2718]: I0813 02:08:01.387382 2718 kubelet.go:2351] "Pod admission denied" podUID="8233faa9-9ae3-4b48-80fa-03757293ea48" pod="tigera-operator/tigera-operator-747864d56d-9v8tr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:01.488812 kubelet[2718]: I0813 02:08:01.488207 2718 kubelet.go:2351] "Pod admission denied" podUID="7b2ac96c-7c98-48dd-90d0-43dffbcadf10" pod="tigera-operator/tigera-operator-747864d56d-nm7qj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:01.586886 kubelet[2718]: I0813 02:08:01.586816 2718 kubelet.go:2351] "Pod admission denied" podUID="af466665-71ad-480a-b289-4797039bb4ce" pod="tigera-operator/tigera-operator-747864d56d-zrzgt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:01.704617 kubelet[2718]: I0813 02:08:01.704206 2718 kubelet.go:2351] "Pod admission denied" podUID="52a93edf-daab-4eb9-9a1e-fbb89aadd76f" pod="tigera-operator/tigera-operator-747864d56d-4qclx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:01.790100 kubelet[2718]: I0813 02:08:01.790023 2718 kubelet.go:2351] "Pod admission denied" podUID="47f4b2e3-a084-4640-9615-82c0ffa04eba" pod="tigera-operator/tigera-operator-747864d56d-zz5vp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:01.844330 kubelet[2718]: I0813 02:08:01.844268 2718 kubelet.go:2351] "Pod admission denied" podUID="ef8baf60-f45f-49ce-9664-4c8b0b8a5137" pod="tigera-operator/tigera-operator-747864d56d-w5mcb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:01.939002 kubelet[2718]: I0813 02:08:01.938947 2718 kubelet.go:2351] "Pod admission denied" podUID="10b20ffc-f71e-4234-9a0a-6bd64cd04acc" pod="tigera-operator/tigera-operator-747864d56d-6z7kg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:02.051258 kubelet[2718]: I0813 02:08:02.051117 2718 kubelet.go:2351] "Pod admission denied" podUID="a02a32cc-b1fa-4e69-b1cb-a773733934f8" pod="tigera-operator/tigera-operator-747864d56d-bqkgp" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:08:02.140029 kubelet[2718]: I0813 02:08:02.139963 2718 kubelet.go:2351] "Pod admission denied" podUID="ce9ceaeb-f193-45d8-92ff-e5a7dfc74dc3" pod="tigera-operator/tigera-operator-747864d56d-78xpc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:02.244783 kubelet[2718]: I0813 02:08:02.244717 2718 kubelet.go:2351] "Pod admission denied" podUID="d2173787-cc32-47af-87c1-5e648f108389" pod="tigera-operator/tigera-operator-747864d56d-xn259" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:02.338708 kubelet[2718]: I0813 02:08:02.337969 2718 kubelet.go:2351] "Pod admission denied" podUID="259896f8-5965-4f73-91ba-b64446d0d7e7" pod="tigera-operator/tigera-operator-747864d56d-2hhl2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:02.548708 kubelet[2718]: I0813 02:08:02.548568 2718 kubelet.go:2351] "Pod admission denied" podUID="b587c0f5-0f03-4305-ada8-331fe0526624" pod="tigera-operator/tigera-operator-747864d56d-p9gcr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:02.638240 kubelet[2718]: I0813 02:08:02.637785 2718 kubelet.go:2351] "Pod admission denied" podUID="6269d298-6cc1-4d68-99f0-d8eb7b473893" pod="tigera-operator/tigera-operator-747864d56d-7dlcw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:02.741255 kubelet[2718]: I0813 02:08:02.741173 2718 kubelet.go:2351] "Pod admission denied" podUID="89b8efa5-1363-4c23-9afa-c02770c50554" pod="tigera-operator/tigera-operator-747864d56d-28cgn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:02.839370 kubelet[2718]: I0813 02:08:02.839106 2718 kubelet.go:2351] "Pod admission denied" podUID="1f87055d-3d15-4924-9d2f-fe11fe37282a" pod="tigera-operator/tigera-operator-747864d56d-45blv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:02.958776 kubelet[2718]: I0813 02:08:02.956710 2718 kubelet.go:2351] "Pod admission denied" podUID="b26dd5a1-2b6a-41ff-8c3e-7cbf4848f588" pod="tigera-operator/tigera-operator-747864d56d-jclfj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:03.140146 kubelet[2718]: I0813 02:08:03.140065 2718 kubelet.go:2351] "Pod admission denied" podUID="a3bfd75e-1288-4f8e-93fe-2351e8ef73e9" pod="tigera-operator/tigera-operator-747864d56d-6r7nm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:03.240409 kubelet[2718]: I0813 02:08:03.240247 2718 kubelet.go:2351] "Pod admission denied" podUID="541aaf8b-0d43-46d1-9e36-f2054e5202e9" pod="tigera-operator/tigera-operator-747864d56d-wrxlm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:03.338753 kubelet[2718]: I0813 02:08:03.338690 2718 kubelet.go:2351] "Pod admission denied" podUID="b314d7eb-d265-4fb9-93b0-556399e0db1d" pod="tigera-operator/tigera-operator-747864d56d-txb7d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:03.552622 kubelet[2718]: I0813 02:08:03.552117 2718 kubelet.go:2351] "Pod admission denied" podUID="29304918-e2a2-49a4-a3e1-aa16f7d1b4ed" pod="tigera-operator/tigera-operator-747864d56d-6djs6" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:08:03.643501 kubelet[2718]: I0813 02:08:03.643431 2718 kubelet.go:2351] "Pod admission denied" podUID="5592dd2e-4bb6-4514-92a4-c8f90184dc69" pod="tigera-operator/tigera-operator-747864d56d-qblzq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:03.750495 kubelet[2718]: I0813 02:08:03.750036 2718 kubelet.go:2351] "Pod admission denied" podUID="67954bc0-ddb0-41c7-be25-4904c29511b4" pod="tigera-operator/tigera-operator-747864d56d-w5ffp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:03.936150 kubelet[2718]: E0813 02:08:03.936013 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:08:03.942611 kubelet[2718]: I0813 02:08:03.942360 2718 kubelet.go:2351] "Pod admission denied" podUID="44d6f3b4-6854-4bfb-84ea-157cc468cb21" pod="tigera-operator/tigera-operator-747864d56d-8ghh5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:04.040499 kubelet[2718]: I0813 02:08:04.040425 2718 kubelet.go:2351] "Pod admission denied" podUID="726ef72a-9780-4d12-845e-e01d06f85b93" pod="tigera-operator/tigera-operator-747864d56d-2pv5f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:04.138525 kubelet[2718]: I0813 02:08:04.138470 2718 kubelet.go:2351] "Pod admission denied" podUID="8c3d750a-5729-4b26-ba06-c995992c0231" pod="tigera-operator/tigera-operator-747864d56d-qg7t5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:04.348044 kubelet[2718]: I0813 02:08:04.347964 2718 kubelet.go:2351] "Pod admission denied" podUID="31c13e70-ad59-4d7e-912d-ace5cb772b62" pod="tigera-operator/tigera-operator-747864d56d-89fsr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:04.443287 kubelet[2718]: I0813 02:08:04.443209 2718 kubelet.go:2351] "Pod admission denied" podUID="1cac28cb-56b7-47d1-8b03-cc90ad916443" pod="tigera-operator/tigera-operator-747864d56d-ctm44" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:04.540340 kubelet[2718]: I0813 02:08:04.540268 2718 kubelet.go:2351] "Pod admission denied" podUID="e39a92b3-32f3-4fcb-88fa-953c951d96d2" pod="tigera-operator/tigera-operator-747864d56d-m7xdr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:04.638636 kubelet[2718]: I0813 02:08:04.637469 2718 kubelet.go:2351] "Pod admission denied" podUID="13710fc6-1b1b-41d1-a45f-8186df51d05b" pod="tigera-operator/tigera-operator-747864d56d-ltnrm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:04.744964 kubelet[2718]: I0813 02:08:04.744894 2718 kubelet.go:2351] "Pod admission denied" podUID="54c2ade3-a542-4522-a305-1718d4bc5c43" pod="tigera-operator/tigera-operator-747864d56d-lwh7t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:04.840879 kubelet[2718]: I0813 02:08:04.840799 2718 kubelet.go:2351] "Pod admission denied" podUID="b0ad8e27-0d25-4239-89bd-80c7b8a96ffb" pod="tigera-operator/tigera-operator-747864d56d-gn49x" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:08:04.937736 kubelet[2718]: E0813 02:08:04.937351 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3051784: write /var/lib/containerd/tmpmounts/containerd-mount3051784/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-cdfxj" podUID="e8f51745-7382-4ead-96df-a31572ad4e1f" Aug 13 02:08:04.939610 kubelet[2718]: I0813 02:08:04.939232 2718 kubelet.go:2351] "Pod admission denied" podUID="1803f888-71c8-465d-a61d-749b16530303" pod="tigera-operator/tigera-operator-747864d56d-rfj2n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:04.961060 kubelet[2718]: I0813 02:08:04.960988 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-rfj2n" podStartSLOduration=0.960968854 podStartE2EDuration="960.968854ms" podCreationTimestamp="2025-08-13 02:08:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 02:08:04.959170774 +0000 UTC m=+85.115061678" watchObservedRunningTime="2025-08-13 02:08:04.960968854 +0000 UTC m=+85.116859758" Aug 13 02:08:05.140469 kubelet[2718]: I0813 02:08:05.140388 2718 kubelet.go:2351] "Pod admission denied" podUID="5aafd073-2ad3-404e-891a-f6456c255891" pod="tigera-operator/tigera-operator-747864d56d-8ln2f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:05.246774 kubelet[2718]: I0813 02:08:05.246142 2718 kubelet.go:2351] "Pod admission denied" podUID="bbf286f6-624e-4783-945f-ccd55123ceb5" pod="tigera-operator/tigera-operator-747864d56d-pzp9z" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:05.339040 kubelet[2718]: I0813 02:08:05.338976 2718 kubelet.go:2351] "Pod admission denied" podUID="0357a5d9-581f-47ee-9ab5-740f1027dc6a" pod="tigera-operator/tigera-operator-747864d56d-n2ctd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:05.439820 kubelet[2718]: I0813 02:08:05.439749 2718 kubelet.go:2351] "Pod admission denied" podUID="ec8374d0-eb46-4ab1-bde6-c32909fe3070" pod="tigera-operator/tigera-operator-747864d56d-wsjfh" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:08:05.476971 kubelet[2718]: I0813 02:08:05.476926 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 02:08:05.476971 kubelet[2718]: I0813 02:08:05.476969 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 02:08:05.480659 kubelet[2718]: I0813 02:08:05.480622 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 02:08:05.491833 kubelet[2718]: I0813 02:08:05.491772 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 02:08:05.492114 kubelet[2718]: I0813 02:08:05.491851 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-7c47cf6bcb-c9c87","kube-system/coredns-668d6bf9bc-pw6gg","kube-system/coredns-668d6bf9bc-p5qmw","calico-system/csi-node-driver-r6mhv","calico-system/calico-node-cdfxj","calico-system/calico-typha-67c8447dcf-wsn77","kube-system/kube-controller-manager-172-236-122-171","kube-system/kube-proxy-s4bl4","kube-system/kube-apiserver-172-236-122-171","kube-system/kube-scheduler-172-236-122-171"] Aug 13 02:08:05.492114 kubelet[2718]: E0813 02:08:05.491882 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:08:05.492114 kubelet[2718]: E0813 02:08:05.491892 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:08:05.492114 kubelet[2718]: E0813 02:08:05.491899 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:08:05.492114 kubelet[2718]: E0813 02:08:05.491906 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:08:05.492114 kubelet[2718]: E0813 02:08:05.491912 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-cdfxj" Aug 13 02:08:05.492114 kubelet[2718]: E0813 02:08:05.491923 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-67c8447dcf-wsn77" Aug 13 02:08:05.492114 kubelet[2718]: E0813 02:08:05.491932 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-122-171" Aug 13 02:08:05.492114 kubelet[2718]: E0813 02:08:05.491940 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-s4bl4" Aug 13 02:08:05.492114 kubelet[2718]: E0813 02:08:05.491948 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-122-171" Aug 13 02:08:05.492114 kubelet[2718]: E0813 02:08:05.491956 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-122-171" Aug 13 02:08:05.492114 kubelet[2718]: I0813 02:08:05.491965 2718 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 02:08:05.538946 kubelet[2718]: I0813 02:08:05.538893 2718 kubelet.go:2351] "Pod admission denied" podUID="1b00d32f-7450-4674-b84d-bfdfd6adc0ce" pod="tigera-operator/tigera-operator-747864d56d-x6c7x" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:08:05.639020 kubelet[2718]: I0813 02:08:05.638963 2718 kubelet.go:2351] "Pod admission denied" podUID="987d3045-2e83-44bb-a049-88c4f01eeec1" pod="tigera-operator/tigera-operator-747864d56d-bl2tj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:05.740063 kubelet[2718]: I0813 02:08:05.739995 2718 kubelet.go:2351] "Pod admission denied" podUID="a4572018-4eff-4efb-90bb-c304fe91148c" pod="tigera-operator/tigera-operator-747864d56d-7mgfv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:05.842615 kubelet[2718]: I0813 02:08:05.842446 2718 kubelet.go:2351] "Pod admission denied" podUID="628db78f-426f-49bb-bee9-11134953198c" pod="tigera-operator/tigera-operator-747864d56d-65zvd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:05.886485 kubelet[2718]: I0813 02:08:05.886427 2718 kubelet.go:2351] "Pod admission denied" podUID="7a1a69bc-1da8-4c10-a68c-118d8d64fd98" pod="tigera-operator/tigera-operator-747864d56d-bhmws" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:05.999615 kubelet[2718]: I0813 02:08:05.997744 2718 kubelet.go:2351] "Pod admission denied" podUID="ba7022d8-def9-4e09-a8ea-1a2560ec8e58" pod="tigera-operator/tigera-operator-747864d56d-4jq5b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:06.189886 kubelet[2718]: I0813 02:08:06.188501 2718 kubelet.go:2351] "Pod admission denied" podUID="443d341f-6505-42e3-bcdd-4eaa1a37a82d" pod="tigera-operator/tigera-operator-747864d56d-lhjx4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:06.290035 kubelet[2718]: I0813 02:08:06.289972 2718 kubelet.go:2351] "Pod admission denied" podUID="ad1d586c-1042-4a88-9835-217fb988b010" pod="tigera-operator/tigera-operator-747864d56d-74sfs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:06.337164 kubelet[2718]: I0813 02:08:06.337099 2718 kubelet.go:2351] "Pod admission denied" podUID="47d810ad-3a67-465d-b0b7-4c5f01bab0a4" pod="tigera-operator/tigera-operator-747864d56d-ndfgm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:06.446553 kubelet[2718]: I0813 02:08:06.445153 2718 kubelet.go:2351] "Pod admission denied" podUID="2c7f697a-f343-4ce2-be49-d6473b988410" pod="tigera-operator/tigera-operator-747864d56d-vw6pw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:06.645611 kubelet[2718]: I0813 02:08:06.644222 2718 kubelet.go:2351] "Pod admission denied" podUID="bb5b8c6e-1232-4105-a84b-55d65df55752" pod="tigera-operator/tigera-operator-747864d56d-rqpp7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:06.739368 kubelet[2718]: I0813 02:08:06.738945 2718 kubelet.go:2351] "Pod admission denied" podUID="da2475b5-af68-4881-bde5-7e9952bb4a3b" pod="tigera-operator/tigera-operator-747864d56d-xzlkn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:06.837611 kubelet[2718]: I0813 02:08:06.837541 2718 kubelet.go:2351] "Pod admission denied" podUID="0f7992af-fda1-4180-b50c-783c7186dc28" pod="tigera-operator/tigera-operator-747864d56d-4dfjr" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:08:07.052643 kubelet[2718]: I0813 02:08:07.051705 2718 kubelet.go:2351] "Pod admission denied" podUID="ab3075d8-4bd6-4490-bc95-637d8a8c60a0" pod="tigera-operator/tigera-operator-747864d56d-kr7nw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:07.139540 kubelet[2718]: I0813 02:08:07.139467 2718 kubelet.go:2351] "Pod admission denied" podUID="6ed6b759-34b4-4208-99b0-582a4587ecfa" pod="tigera-operator/tigera-operator-747864d56d-hfjrj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:07.189439 kubelet[2718]: I0813 02:08:07.189376 2718 kubelet.go:2351] "Pod admission denied" podUID="6e395656-b4aa-4f95-b4f8-1e45dfa11a6d" pod="tigera-operator/tigera-operator-747864d56d-kc2ww" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:07.290319 kubelet[2718]: I0813 02:08:07.290254 2718 kubelet.go:2351] "Pod admission denied" podUID="d77f49ac-a314-4a99-8f0b-964bc738c027" pod="tigera-operator/tigera-operator-747864d56d-dwv7g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:07.398837 kubelet[2718]: I0813 02:08:07.397557 2718 kubelet.go:2351] "Pod admission denied" podUID="5171c8b6-1136-4597-8f0c-214b8207ee7c" pod="tigera-operator/tigera-operator-747864d56d-rc7rf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:07.488991 kubelet[2718]: I0813 02:08:07.488924 2718 kubelet.go:2351] "Pod admission denied" podUID="d661626e-3201-484e-80bd-d80d4573ce42" pod="tigera-operator/tigera-operator-747864d56d-pmvph" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:07.686984 kubelet[2718]: I0813 02:08:07.686613 2718 kubelet.go:2351] "Pod admission denied" podUID="b8d328b8-0b67-431d-a06f-53520debbf24" pod="tigera-operator/tigera-operator-747864d56d-sl2c7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:07.788512 kubelet[2718]: I0813 02:08:07.788434 2718 kubelet.go:2351] "Pod admission denied" podUID="218d2251-2f54-463c-98cc-c919932d22f9" pod="tigera-operator/tigera-operator-747864d56d-gf6w2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:07.849116 kubelet[2718]: I0813 02:08:07.848771 2718 kubelet.go:2351] "Pod admission denied" podUID="04f20278-16b4-47a1-b4bb-ffadfe9735f4" pod="tigera-operator/tigera-operator-747864d56d-mj292" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:07.936760 containerd[1542]: time="2025-08-13T02:08:07.936707697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r6mhv,Uid:b671177d-3397-4938-853c-0cced3d0e9f5,Namespace:calico-system,Attempt:0,}" Aug 13 02:08:07.944615 kubelet[2718]: I0813 02:08:07.943484 2718 kubelet.go:2351] "Pod admission denied" podUID="8d72e006-9b46-46cd-9a12-a62333a8eda6" pod="tigera-operator/tigera-operator-747864d56d-rhh9f" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:08:08.003285 containerd[1542]: time="2025-08-13T02:08:08.003205685Z" level=error msg="Failed to destroy network for sandbox \"3607556780632227aca1e2b3e9f7557b5163a0ec2bd2dc98867ccf513d754fb5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:08.006331 containerd[1542]: time="2025-08-13T02:08:08.006195965Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r6mhv,Uid:b671177d-3397-4938-853c-0cced3d0e9f5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3607556780632227aca1e2b3e9f7557b5163a0ec2bd2dc98867ccf513d754fb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:08.006635 kubelet[2718]: E0813 02:08:08.006446 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3607556780632227aca1e2b3e9f7557b5163a0ec2bd2dc98867ccf513d754fb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:08.006635 kubelet[2718]: E0813 02:08:08.006497 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3607556780632227aca1e2b3e9f7557b5163a0ec2bd2dc98867ccf513d754fb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:08:08.006635 kubelet[2718]: E0813 02:08:08.006518 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3607556780632227aca1e2b3e9f7557b5163a0ec2bd2dc98867ccf513d754fb5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:08:08.006635 kubelet[2718]: E0813 02:08:08.006554 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r6mhv_calico-system(b671177d-3397-4938-853c-0cced3d0e9f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-r6mhv_calico-system(b671177d-3397-4938-853c-0cced3d0e9f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3607556780632227aca1e2b3e9f7557b5163a0ec2bd2dc98867ccf513d754fb5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r6mhv" podUID="b671177d-3397-4938-853c-0cced3d0e9f5" Aug 13 02:08:08.007292 systemd[1]: run-netns-cni\x2de05a62de\x2d61cf\x2d57ba\x2d0632\x2d7f9c039dcf76.mount: Deactivated successfully. 
Aug 13 02:08:08.039970 kubelet[2718]: I0813 02:08:08.039868 2718 kubelet.go:2351] "Pod admission denied" podUID="72fcf816-3063-4ecb-a1b7-c702ab14b7a4" pod="tigera-operator/tigera-operator-747864d56d-gs872" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:08.138957 kubelet[2718]: I0813 02:08:08.138892 2718 kubelet.go:2351] "Pod admission denied" podUID="9c9c4088-6eb3-41be-a549-52eac6f662f3" pod="tigera-operator/tigera-operator-747864d56d-chz9k" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:08.242946 kubelet[2718]: I0813 02:08:08.241469 2718 kubelet.go:2351] "Pod admission denied" podUID="e30365ef-79c1-409b-8a06-0f5a054b3843" pod="tigera-operator/tigera-operator-747864d56d-fshd5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:08.338815 kubelet[2718]: I0813 02:08:08.338746 2718 kubelet.go:2351] "Pod admission denied" podUID="d56d1256-43e8-4fc2-a43b-38c5eaae0c72" pod="tigera-operator/tigera-operator-747864d56d-q589r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:08.440434 kubelet[2718]: I0813 02:08:08.440363 2718 kubelet.go:2351] "Pod admission denied" podUID="cb357431-b4f8-4384-ac50-ee190d3a8be5" pod="tigera-operator/tigera-operator-747864d56d-ntwr2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:08.543132 kubelet[2718]: I0813 02:08:08.543100 2718 kubelet.go:2351] "Pod admission denied" podUID="0671e425-0a39-4653-9acd-5febcf0b2650" pod="tigera-operator/tigera-operator-747864d56d-zcmcr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:08.739629 kubelet[2718]: I0813 02:08:08.739557 2718 kubelet.go:2351] "Pod admission denied" podUID="790b598e-8a14-4ec0-af3a-976fd6500723" pod="tigera-operator/tigera-operator-747864d56d-99vnv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:08.838937 kubelet[2718]: I0813 02:08:08.838771 2718 kubelet.go:2351] "Pod admission denied" podUID="cac6a3b5-fbc7-425a-9822-8be87c5fb6bb" pod="tigera-operator/tigera-operator-747864d56d-xmfwm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:08.937792 kubelet[2718]: I0813 02:08:08.937728 2718 kubelet.go:2351] "Pod admission denied" podUID="3f90e15a-6286-4989-b106-eb240bf0e38f" pod="tigera-operator/tigera-operator-747864d56d-49t8n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:09.040302 kubelet[2718]: I0813 02:08:09.040230 2718 kubelet.go:2351] "Pod admission denied" podUID="c8f02b1a-59e4-45b8-b152-fc20fc692c02" pod="tigera-operator/tigera-operator-747864d56d-k4nlm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:09.145829 kubelet[2718]: I0813 02:08:09.145662 2718 kubelet.go:2351] "Pod admission denied" podUID="821c1d46-8fc6-4d44-bca3-c4ca385f9f43" pod="tigera-operator/tigera-operator-747864d56d-78qd9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:09.341634 kubelet[2718]: I0813 02:08:09.341522 2718 kubelet.go:2351] "Pod admission denied" podUID="e3383aa8-ed81-4eda-9c32-642984f6c6b1" pod="tigera-operator/tigera-operator-747864d56d-tc5gr" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:08:09.449887 kubelet[2718]: I0813 02:08:09.447711 2718 kubelet.go:2351] "Pod admission denied" podUID="efa9bc49-f7b8-4441-a4f3-ea83511c5488" pod="tigera-operator/tigera-operator-747864d56d-s5vc9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:09.542958 kubelet[2718]: I0813 02:08:09.542896 2718 kubelet.go:2351] "Pod admission denied" podUID="6d27d530-87b8-4f76-acb6-4fe8e699aaed" pod="tigera-operator/tigera-operator-747864d56d-dtfgd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:09.745417 kubelet[2718]: I0813 02:08:09.744575 2718 kubelet.go:2351] "Pod admission denied" podUID="c2683e9a-2686-4e99-9caf-8207d9afa7d3" pod="tigera-operator/tigera-operator-747864d56d-t29v5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:09.841812 kubelet[2718]: I0813 02:08:09.841749 2718 kubelet.go:2351] "Pod admission denied" podUID="a74c46e9-bb10-4b11-87e4-9f7f8ac0a0c4" pod="tigera-operator/tigera-operator-747864d56d-4xqh8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:09.946327 kubelet[2718]: I0813 02:08:09.946239 2718 kubelet.go:2351] "Pod admission denied" podUID="49c345b6-2c1f-47dd-8d50-bdfd559d427c" pod="tigera-operator/tigera-operator-747864d56d-tdgv9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:10.044628 kubelet[2718]: I0813 02:08:10.044536 2718 kubelet.go:2351] "Pod admission denied" podUID="63c1babe-16e8-4f6a-b7aa-eec45a9b8d63" pod="tigera-operator/tigera-operator-747864d56d-mm9tx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:10.144842 kubelet[2718]: I0813 02:08:10.144773 2718 kubelet.go:2351] "Pod admission denied" podUID="1bdd2107-04ac-473a-a44a-8b6c561af788" pod="tigera-operator/tigera-operator-747864d56d-86vkl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:10.249611 kubelet[2718]: I0813 02:08:10.249135 2718 kubelet.go:2351] "Pod admission denied" podUID="95d05c95-2fb1-40de-9447-a39c0dd5bf15" pod="tigera-operator/tigera-operator-747864d56d-n8v8c" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:10.341837 kubelet[2718]: I0813 02:08:10.341553 2718 kubelet.go:2351] "Pod admission denied" podUID="1db5bc8d-1538-463d-8d32-342d143a0e1b" pod="tigera-operator/tigera-operator-747864d56d-qtqgg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:10.443017 kubelet[2718]: I0813 02:08:10.442942 2718 kubelet.go:2351] "Pod admission denied" podUID="3c3bb191-742e-43a5-90f6-c69a63556192" pod="tigera-operator/tigera-operator-747864d56d-6xqxj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:10.492986 kubelet[2718]: I0813 02:08:10.492918 2718 kubelet.go:2351] "Pod admission denied" podUID="f21c01c5-702a-462a-b803-c1193a9e8ea9" pod="tigera-operator/tigera-operator-747864d56d-jrk7f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:10.606900 kubelet[2718]: I0813 02:08:10.605375 2718 kubelet.go:2351] "Pod admission denied" podUID="37e662f2-910c-4bec-88af-3a3f81e76a89" pod="tigera-operator/tigera-operator-747864d56d-hph76" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:08:10.793968 kubelet[2718]: I0813 02:08:10.793892 2718 kubelet.go:2351] "Pod admission denied" podUID="a9fa5a2f-adba-4764-85ef-d6e7ce7657eb" pod="tigera-operator/tigera-operator-747864d56d-sh7nx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:10.893618 kubelet[2718]: I0813 02:08:10.893121 2718 kubelet.go:2351] "Pod admission denied" podUID="c54265b2-82a4-4f6b-884f-7bca3cb2b97f" pod="tigera-operator/tigera-operator-747864d56d-w2hs8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:10.936042 kubelet[2718]: E0813 02:08:10.936000 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:08:10.937466 containerd[1542]: time="2025-08-13T02:08:10.937395896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p5qmw,Uid:26fd4059-1e9c-49a2-9bd9-181be9ad7bcb,Namespace:kube-system,Attempt:0,}" Aug 13 02:08:10.950097 kubelet[2718]: I0813 02:08:10.949549 2718 kubelet.go:2351] "Pod admission denied" podUID="72edc3c0-46eb-4066-9240-6f0f3be10add" pod="tigera-operator/tigera-operator-747864d56d-l4gzc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:11.019236 containerd[1542]: time="2025-08-13T02:08:11.019161075Z" level=error msg="Failed to destroy network for sandbox \"738e17fba064756eaa9f87630f86b5a83bd51c20db71e2aea0cae4f105bfdb82\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:11.022670 containerd[1542]: time="2025-08-13T02:08:11.020891158Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p5qmw,Uid:26fd4059-1e9c-49a2-9bd9-181be9ad7bcb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"738e17fba064756eaa9f87630f86b5a83bd51c20db71e2aea0cae4f105bfdb82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:11.022768 kubelet[2718]: E0813 02:08:11.021727 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"738e17fba064756eaa9f87630f86b5a83bd51c20db71e2aea0cae4f105bfdb82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:11.022768 kubelet[2718]: E0813 02:08:11.021802 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"738e17fba064756eaa9f87630f86b5a83bd51c20db71e2aea0cae4f105bfdb82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:08:11.022768 kubelet[2718]: E0813 02:08:11.021843 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"738e17fba064756eaa9f87630f86b5a83bd51c20db71e2aea0cae4f105bfdb82\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:08:11.022768 kubelet[2718]: E0813 02:08:11.021885 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-p5qmw_kube-system(26fd4059-1e9c-49a2-9bd9-181be9ad7bcb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-p5qmw_kube-system(26fd4059-1e9c-49a2-9bd9-181be9ad7bcb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"738e17fba064756eaa9f87630f86b5a83bd51c20db71e2aea0cae4f105bfdb82\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-p5qmw" podUID="26fd4059-1e9c-49a2-9bd9-181be9ad7bcb" Aug 13 02:08:11.024701 systemd[1]: run-netns-cni\x2d62c2c1a1\x2d3746\x2d6f7f\x2df023\x2d3dd3aacdf2ff.mount: Deactivated successfully. Aug 13 02:08:11.059763 kubelet[2718]: I0813 02:08:11.059698 2718 kubelet.go:2351] "Pod admission denied" podUID="4e498c54-7b48-4965-98c3-26a3e67c0172" pod="tigera-operator/tigera-operator-747864d56d-b5jg2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:11.141456 kubelet[2718]: I0813 02:08:11.141391 2718 kubelet.go:2351] "Pod admission denied" podUID="a72b6fa3-bfd7-4e1d-bf73-450662f57441" pod="tigera-operator/tigera-operator-747864d56d-hmw57" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:11.236115 kubelet[2718]: I0813 02:08:11.235962 2718 kubelet.go:2351] "Pod admission denied" podUID="3581d542-f0d9-47e6-88cc-4337c22693c8" pod="tigera-operator/tigera-operator-747864d56d-r6x9m" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:11.438986 kubelet[2718]: I0813 02:08:11.438900 2718 kubelet.go:2351] "Pod admission denied" podUID="ee333830-fc33-4d37-b579-70f835b1e248" pod="tigera-operator/tigera-operator-747864d56d-k2jv8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:11.538107 kubelet[2718]: I0813 02:08:11.538059 2718 kubelet.go:2351] "Pod admission denied" podUID="1b2a973b-73dc-439d-9b81-34fa497f2711" pod="tigera-operator/tigera-operator-747864d56d-b4gbh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:11.637861 kubelet[2718]: I0813 02:08:11.637786 2718 kubelet.go:2351] "Pod admission denied" podUID="61380109-6793-4821-a8c1-22ba3461371a" pod="tigera-operator/tigera-operator-747864d56d-2wqqb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:11.743335 kubelet[2718]: I0813 02:08:11.743256 2718 kubelet.go:2351] "Pod admission denied" podUID="3ec8637f-5659-4ab3-8147-429080117294" pod="tigera-operator/tigera-operator-747864d56d-tk82t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:11.851136 kubelet[2718]: I0813 02:08:11.850425 2718 kubelet.go:2351] "Pod admission denied" podUID="ceeb3c1b-e11a-4c68-bb58-ae428f79bc33" pod="tigera-operator/tigera-operator-747864d56d-qsqxs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:12.043876 kubelet[2718]: I0813 02:08:12.043811 2718 kubelet.go:2351] "Pod admission denied" podUID="f7449cc3-457a-4037-87b8-2e91cb32d8ee" pod="tigera-operator/tigera-operator-747864d56d-npdkw" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:08:12.140574 kubelet[2718]: I0813 02:08:12.140279 2718 kubelet.go:2351] "Pod admission denied" podUID="30b5413d-ce9a-4189-8e0d-b288078119b0" pod="tigera-operator/tigera-operator-747864d56d-c2ncn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:12.191814 kubelet[2718]: I0813 02:08:12.191730 2718 kubelet.go:2351] "Pod admission denied" podUID="f5fa946a-7838-4207-ae84-dd3eb7c85004" pod="tigera-operator/tigera-operator-747864d56d-hkh5d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:12.297618 kubelet[2718]: I0813 02:08:12.297128 2718 kubelet.go:2351] "Pod admission denied" podUID="cbcc6239-af1c-4309-8055-8bc5993156b4" pod="tigera-operator/tigera-operator-747864d56d-92527" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:12.390772 kubelet[2718]: I0813 02:08:12.389965 2718 kubelet.go:2351] "Pod admission denied" podUID="a9bdc4b4-8171-4fe0-ae96-32cb8bbb74e2" pod="tigera-operator/tigera-operator-747864d56d-8gd85" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:12.441189 kubelet[2718]: I0813 02:08:12.441127 2718 kubelet.go:2351] "Pod admission denied" podUID="79f2cf6e-f708-4276-a163-0be5364d4c76" pod="tigera-operator/tigera-operator-747864d56d-r97s8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:12.548609 kubelet[2718]: I0813 02:08:12.548534 2718 kubelet.go:2351] "Pod admission denied" podUID="4cff3bc0-ec9d-491b-ae5a-747dcfaf29c1" pod="tigera-operator/tigera-operator-747864d56d-qx5bt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:12.638472 kubelet[2718]: I0813 02:08:12.638416 2718 kubelet.go:2351] "Pod admission denied" podUID="240991b9-053c-46ca-a907-25cecdda61b2" pod="tigera-operator/tigera-operator-747864d56d-qb6tx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:12.743026 kubelet[2718]: I0813 02:08:12.742668 2718 kubelet.go:2351] "Pod admission denied" podUID="3acfba4f-e258-4ee1-951a-d080dcc6bde6" pod="tigera-operator/tigera-operator-747864d56d-bplzw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:12.840905 kubelet[2718]: I0813 02:08:12.840841 2718 kubelet.go:2351] "Pod admission denied" podUID="54a43abd-e676-4f15-afc7-a8f92beb648b" pod="tigera-operator/tigera-operator-747864d56d-42gzq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:12.936233 kubelet[2718]: E0813 02:08:12.936178 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:08:12.938088 containerd[1542]: time="2025-08-13T02:08:12.938040044Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pw6gg,Uid:e73a6876-bbb3-4e11-8a33-1945cf27a944,Namespace:kube-system,Attempt:0,}" Aug 13 02:08:12.959218 kubelet[2718]: I0813 02:08:12.958939 2718 kubelet.go:2351] "Pod admission denied" podUID="7040a07e-efaa-4076-b00b-273522934653" pod="tigera-operator/tigera-operator-747864d56d-p2s65" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:13.045413 kubelet[2718]: I0813 02:08:13.044645 2718 kubelet.go:2351] "Pod admission denied" podUID="bf7c27ac-8bab-41a9-9187-7bdff37ce390" pod="tigera-operator/tigera-operator-747864d56d-79cpz" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:08:13.045677 containerd[1542]: time="2025-08-13T02:08:13.045579794Z" level=error msg="Failed to destroy network for sandbox \"ac2edcfdf6f2ddff6ee4ce699b99794f53626a47c9761b76c6db4646055e517d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:13.048862 systemd[1]: run-netns-cni\x2d9caaa9c5\x2df4fd\x2d6c9c\x2da0ca\x2d9e7a4afe30a9.mount: Deactivated successfully. Aug 13 02:08:13.049187 containerd[1542]: time="2025-08-13T02:08:13.049127062Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pw6gg,Uid:e73a6876-bbb3-4e11-8a33-1945cf27a944,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac2edcfdf6f2ddff6ee4ce699b99794f53626a47c9761b76c6db4646055e517d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:13.051354 kubelet[2718]: E0813 02:08:13.050258 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac2edcfdf6f2ddff6ee4ce699b99794f53626a47c9761b76c6db4646055e517d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:13.052114 kubelet[2718]: E0813 02:08:13.051666 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac2edcfdf6f2ddff6ee4ce699b99794f53626a47c9761b76c6db4646055e517d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:08:13.052114 kubelet[2718]: E0813 02:08:13.051699 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac2edcfdf6f2ddff6ee4ce699b99794f53626a47c9761b76c6db4646055e517d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:08:13.052114 kubelet[2718]: E0813 02:08:13.051738 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-pw6gg_kube-system(e73a6876-bbb3-4e11-8a33-1945cf27a944)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-pw6gg_kube-system(e73a6876-bbb3-4e11-8a33-1945cf27a944)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac2edcfdf6f2ddff6ee4ce699b99794f53626a47c9761b76c6db4646055e517d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-pw6gg" podUID="e73a6876-bbb3-4e11-8a33-1945cf27a944" Aug 13 02:08:13.089152 kubelet[2718]: I0813 02:08:13.089095 2718 kubelet.go:2351] "Pod admission denied" podUID="2bb8bfe3-7cd6-4a32-b20b-22e67f8ce250" pod="tigera-operator/tigera-operator-747864d56d-tzpmm" reason="Evicted" 
message="The node had condition: [DiskPressure]. " Aug 13 02:08:13.196147 kubelet[2718]: I0813 02:08:13.196092 2718 kubelet.go:2351] "Pod admission denied" podUID="a4d40b5f-5e9c-42ac-b0df-7ecee0451c01" pod="tigera-operator/tigera-operator-747864d56d-vq7xp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:13.390347 kubelet[2718]: I0813 02:08:13.390190 2718 kubelet.go:2351] "Pod admission denied" podUID="bfd36d7c-9e56-4bed-87fc-9b4b50393cf1" pod="tigera-operator/tigera-operator-747864d56d-pnxns" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:13.491112 kubelet[2718]: I0813 02:08:13.491045 2718 kubelet.go:2351] "Pod admission denied" podUID="67ca3dcf-3212-460b-8359-10251d51fedf" pod="tigera-operator/tigera-operator-747864d56d-6xcnn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:13.594719 kubelet[2718]: I0813 02:08:13.594638 2718 kubelet.go:2351] "Pod admission denied" podUID="11bc952c-426a-4ee1-bd40-3e0b91def7e9" pod="tigera-operator/tigera-operator-747864d56d-6d6fg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:13.700584 kubelet[2718]: I0813 02:08:13.700424 2718 kubelet.go:2351] "Pod admission denied" podUID="ae99f1d8-4271-46dc-bf79-346ebd6c74f3" pod="tigera-operator/tigera-operator-747864d56d-jl95k" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:13.741052 kubelet[2718]: I0813 02:08:13.740989 2718 kubelet.go:2351] "Pod admission denied" podUID="abfe9130-fd1a-40e9-8cc6-f26ef0af146e" pod="tigera-operator/tigera-operator-747864d56d-9bg7p" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:13.839404 kubelet[2718]: I0813 02:08:13.839332 2718 kubelet.go:2351] "Pod admission denied" podUID="02fbf23a-4ac0-4bfe-806b-f8a4ba078c17" pod="tigera-operator/tigera-operator-747864d56d-jhq6c" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:13.938871 containerd[1542]: time="2025-08-13T02:08:13.938238358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c47cf6bcb-c9c87,Uid:8d84c12b-cfd9-49af-bb2e-a10173126a4c,Namespace:calico-system,Attempt:0,}" Aug 13 02:08:13.972831 kubelet[2718]: I0813 02:08:13.972131 2718 kubelet.go:2351] "Pod admission denied" podUID="967dbec0-781d-4c8b-8296-0bc0dc0cfdb9" pod="tigera-operator/tigera-operator-747864d56d-7vc2p" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:08:14.027290 containerd[1542]: time="2025-08-13T02:08:14.027247211Z" level=error msg="Failed to destroy network for sandbox \"b6b519dd002008702b053b410a48b13ee4bd2a755272321eb0bcdb931b2adb7c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:14.030808 containerd[1542]: time="2025-08-13T02:08:14.030496382Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c47cf6bcb-c9c87,Uid:8d84c12b-cfd9-49af-bb2e-a10173126a4c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6b519dd002008702b053b410a48b13ee4bd2a755272321eb0bcdb931b2adb7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:14.031871 systemd[1]: run-netns-cni\x2d92d4ee60\x2da551\x2d8a07\x2d9fd9\x2dc8d755badd61.mount: Deactivated successfully. Aug 13 02:08:14.034068 kubelet[2718]: E0813 02:08:14.033890 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6b519dd002008702b053b410a48b13ee4bd2a755272321eb0bcdb931b2adb7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:14.034068 kubelet[2718]: E0813 02:08:14.033957 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6b519dd002008702b053b410a48b13ee4bd2a755272321eb0bcdb931b2adb7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:08:14.034068 kubelet[2718]: E0813 02:08:14.033984 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6b519dd002008702b053b410a48b13ee4bd2a755272321eb0bcdb931b2adb7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:08:14.034192 kubelet[2718]: E0813 02:08:14.034032 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7c47cf6bcb-c9c87_calico-system(8d84c12b-cfd9-49af-bb2e-a10173126a4c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7c47cf6bcb-c9c87_calico-system(8d84c12b-cfd9-49af-bb2e-a10173126a4c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6b519dd002008702b053b410a48b13ee4bd2a755272321eb0bcdb931b2adb7c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" podUID="8d84c12b-cfd9-49af-bb2e-a10173126a4c" Aug 13 02:08:14.096431 kubelet[2718]: I0813 02:08:14.096376 2718 kubelet.go:2351] "Pod admission denied" 
podUID="b28273ea-7771-4d27-bc3e-74fddb54b58d" pod="tigera-operator/tigera-operator-747864d56d-8smqf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:14.146897 kubelet[2718]: I0813 02:08:14.146839 2718 kubelet.go:2351] "Pod admission denied" podUID="f6b6fcc3-979c-48db-b3e1-eeacf21a0131" pod="tigera-operator/tigera-operator-747864d56d-pw9kd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:14.242157 kubelet[2718]: I0813 02:08:14.241984 2718 kubelet.go:2351] "Pod admission denied" podUID="99737c9b-3927-414a-87e0-8aa1b44fd7a8" pod="tigera-operator/tigera-operator-747864d56d-qtzd6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:14.444097 kubelet[2718]: I0813 02:08:14.444018 2718 kubelet.go:2351] "Pod admission denied" podUID="831d4ff8-7f92-459c-8bb1-dd501e6e0a45" pod="tigera-operator/tigera-operator-747864d56d-f97bb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:14.543389 kubelet[2718]: I0813 02:08:14.543309 2718 kubelet.go:2351] "Pod admission denied" podUID="fd6b8d92-1884-46e9-b34f-bc0440b4317d" pod="tigera-operator/tigera-operator-747864d56d-7nsr2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:14.641874 kubelet[2718]: I0813 02:08:14.641607 2718 kubelet.go:2351] "Pod admission denied" podUID="85c3e84c-be7c-40bb-87a3-e66763c33709" pod="tigera-operator/tigera-operator-747864d56d-s44qq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:14.743877 kubelet[2718]: I0813 02:08:14.742736 2718 kubelet.go:2351] "Pod admission denied" podUID="4047f200-4fcd-4184-aa83-f6bb4fe70c61" pod="tigera-operator/tigera-operator-747864d56d-79n4h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:14.840839 kubelet[2718]: I0813 02:08:14.840678 2718 kubelet.go:2351] "Pod admission denied" podUID="576fc193-82da-4ce7-abaa-239ea9540304" pod="tigera-operator/tigera-operator-747864d56d-tx2vl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:14.940850 kubelet[2718]: I0813 02:08:14.940795 2718 kubelet.go:2351] "Pod admission denied" podUID="2210c4fa-2647-453f-a0c4-c42ce0c8779c" pod="tigera-operator/tigera-operator-747864d56d-2dd7w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:14.988975 kubelet[2718]: I0813 02:08:14.988930 2718 kubelet.go:2351] "Pod admission denied" podUID="eef69b2f-3d20-43cb-b04e-0059fbcae68d" pod="tigera-operator/tigera-operator-747864d56d-m4q5f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:15.088884 kubelet[2718]: I0813 02:08:15.088825 2718 kubelet.go:2351] "Pod admission denied" podUID="ea54f63a-65f8-44fe-ad8d-7ca8af99324c" pod="tigera-operator/tigera-operator-747864d56d-vmqv5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:15.192467 kubelet[2718]: I0813 02:08:15.192109 2718 kubelet.go:2351] "Pod admission denied" podUID="d4d8233e-228f-4132-bfcd-33a541d06f51" pod="tigera-operator/tigera-operator-747864d56d-whmq9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:15.238530 kubelet[2718]: I0813 02:08:15.238450 2718 kubelet.go:2351] "Pod admission denied" podUID="f50f7fb6-3427-44a7-bae7-0b975ae21621" pod="tigera-operator/tigera-operator-747864d56d-2lxdk" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:08:15.346619 kubelet[2718]: I0813 02:08:15.346454 2718 kubelet.go:2351] "Pod admission denied" podUID="c401be2c-ea19-47e7-8f1a-110ff417cf2c" pod="tigera-operator/tigera-operator-747864d56d-kbw7t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:15.439104 kubelet[2718]: I0813 02:08:15.439050 2718 kubelet.go:2351] "Pod admission denied" podUID="d1ccc976-01cd-4491-beb1-f248493ebefc" pod="tigera-operator/tigera-operator-747864d56d-967kk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:15.513227 kubelet[2718]: I0813 02:08:15.512459 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 02:08:15.513227 kubelet[2718]: I0813 02:08:15.512515 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 02:08:15.515571 kubelet[2718]: I0813 02:08:15.515544 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 02:08:15.532334 kubelet[2718]: I0813 02:08:15.532285 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 02:08:15.533448 kubelet[2718]: I0813 02:08:15.533367 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-668d6bf9bc-p5qmw","calico-system/calico-kube-controllers-7c47cf6bcb-c9c87","kube-system/coredns-668d6bf9bc-pw6gg","calico-system/csi-node-driver-r6mhv","calico-system/calico-node-cdfxj","calico-system/calico-typha-67c8447dcf-wsn77","kube-system/kube-controller-manager-172-236-122-171","kube-system/kube-proxy-s4bl4","kube-system/kube-apiserver-172-236-122-171","kube-system/kube-scheduler-172-236-122-171"] Aug 13 02:08:15.533899 kubelet[2718]: E0813 02:08:15.533687 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:08:15.533899 kubelet[2718]: E0813 02:08:15.533706 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:08:15.533899 kubelet[2718]: E0813 02:08:15.533716 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:08:15.533899 kubelet[2718]: E0813 02:08:15.533725 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:08:15.533899 kubelet[2718]: E0813 02:08:15.533733 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-cdfxj" Aug 13 02:08:15.533899 kubelet[2718]: E0813 02:08:15.533745 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-67c8447dcf-wsn77" Aug 13 02:08:15.533899 kubelet[2718]: E0813 02:08:15.533755 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-122-171" Aug 13 02:08:15.533899 kubelet[2718]: E0813 02:08:15.533764 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-s4bl4" Aug 13 02:08:15.533899 kubelet[2718]: E0813 02:08:15.533773 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-122-171" Aug 13 02:08:15.533899 kubelet[2718]: E0813 02:08:15.533782 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" 
pod="kube-system/kube-scheduler-172-236-122-171" Aug 13 02:08:15.533899 kubelet[2718]: I0813 02:08:15.533793 2718 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 02:08:15.548130 kubelet[2718]: I0813 02:08:15.548077 2718 kubelet.go:2351] "Pod admission denied" podUID="9435540f-9cbd-48b9-8b48-547d87179cc7" pod="tigera-operator/tigera-operator-747864d56d-76wwg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:15.644307 kubelet[2718]: I0813 02:08:15.644241 2718 kubelet.go:2351] "Pod admission denied" podUID="33abb641-0173-40d3-8786-002ac4dc5cca" pod="tigera-operator/tigera-operator-747864d56d-8bzkw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:15.736606 kubelet[2718]: I0813 02:08:15.736533 2718 kubelet.go:2351] "Pod admission denied" podUID="2c5f75f1-0690-4d23-bb31-d00784749650" pod="tigera-operator/tigera-operator-747864d56d-wbcnq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:15.848125 kubelet[2718]: I0813 02:08:15.848064 2718 kubelet.go:2351] "Pod admission denied" podUID="1e50e7c8-8509-4776-9497-d9a65f270f73" pod="tigera-operator/tigera-operator-747864d56d-zdk5h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:15.891486 kubelet[2718]: I0813 02:08:15.891423 2718 kubelet.go:2351] "Pod admission denied" podUID="dde852b8-b4dd-44df-87b9-b6b50cd6b8cf" pod="tigera-operator/tigera-operator-747864d56d-mqhtb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:16.006244 kubelet[2718]: I0813 02:08:16.002194 2718 kubelet.go:2351] "Pod admission denied" podUID="98f26a70-88bf-4a20-99a4-d2d97a5f006a" pod="tigera-operator/tigera-operator-747864d56d-kl5n7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:16.098675 kubelet[2718]: I0813 02:08:16.098075 2718 kubelet.go:2351] "Pod admission denied" podUID="e605166d-ffa3-45c9-9bc0-834bdc116b65" pod="tigera-operator/tigera-operator-747864d56d-pltws" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:16.194744 kubelet[2718]: I0813 02:08:16.194678 2718 kubelet.go:2351] "Pod admission denied" podUID="aed2123b-81c2-4408-8077-2c4742b54402" pod="tigera-operator/tigera-operator-747864d56d-4qjbn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:16.303969 kubelet[2718]: I0813 02:08:16.303814 2718 kubelet.go:2351] "Pod admission denied" podUID="658cad38-7556-4c02-b1ea-056f046cea68" pod="tigera-operator/tigera-operator-747864d56d-qkt7x" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:16.396871 kubelet[2718]: I0813 02:08:16.396543 2718 kubelet.go:2351] "Pod admission denied" podUID="aeea4d84-9631-432f-8454-af5f50336967" pod="tigera-operator/tigera-operator-747864d56d-fm6fn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:16.505476 kubelet[2718]: I0813 02:08:16.505404 2718 kubelet.go:2351] "Pod admission denied" podUID="682ee1ed-907e-42b2-8eef-4cbe0b53a17f" pod="tigera-operator/tigera-operator-747864d56d-rmf6q" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:16.574209 kubelet[2718]: I0813 02:08:16.574134 2718 kubelet.go:2351] "Pod admission denied" podUID="243691f3-15f2-43fd-bb42-21041e8b1b2d" pod="tigera-operator/tigera-operator-747864d56d-r6hpg" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:08:16.703706 kubelet[2718]: I0813 02:08:16.702654 2718 kubelet.go:2351] "Pod admission denied" podUID="f0f9191b-4987-4277-9cfc-25340b9bc67d" pod="tigera-operator/tigera-operator-747864d56d-p6fxh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:16.805932 kubelet[2718]: I0813 02:08:16.805855 2718 kubelet.go:2351] "Pod admission denied" podUID="940b5780-f2de-4c29-a9ae-9974d5ce973a" pod="tigera-operator/tigera-operator-747864d56d-475f9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:16.907617 kubelet[2718]: I0813 02:08:16.907205 2718 kubelet.go:2351] "Pod admission denied" podUID="6ee63d41-b93d-4430-b0b7-de8f2b81fbd7" pod="tigera-operator/tigera-operator-747864d56d-fvtzt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:16.972933 kubelet[2718]: I0813 02:08:16.972759 2718 kubelet.go:2351] "Pod admission denied" podUID="9e89e10f-18e6-49e1-b6c3-aec24cc2beb7" pod="tigera-operator/tigera-operator-747864d56d-4hbqt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:17.102302 kubelet[2718]: I0813 02:08:17.102230 2718 kubelet.go:2351] "Pod admission denied" podUID="607d1d55-8816-448e-a0fc-c298af3b8f8f" pod="tigera-operator/tigera-operator-747864d56d-d4mvj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:17.250661 kubelet[2718]: I0813 02:08:17.249517 2718 kubelet.go:2351] "Pod admission denied" podUID="be9fc9db-6f4f-4874-98b8-b0f960aa9ad0" pod="tigera-operator/tigera-operator-747864d56d-vpsxs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:17.409789 kubelet[2718]: I0813 02:08:17.409729 2718 kubelet.go:2351] "Pod admission denied" podUID="d2960504-6562-47fb-8c81-9439c6b678cd" pod="tigera-operator/tigera-operator-747864d56d-rr6br" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:17.535492 kubelet[2718]: I0813 02:08:17.535424 2718 kubelet.go:2351] "Pod admission denied" podUID="d76519e8-51e1-46fa-9fc7-90f461432ffd" pod="tigera-operator/tigera-operator-747864d56d-zjbpf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:17.651889 kubelet[2718]: I0813 02:08:17.651818 2718 kubelet.go:2351] "Pod admission denied" podUID="bda262a3-97c7-49af-bae0-b9a89b37bb01" pod="tigera-operator/tigera-operator-747864d56d-z9dfg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:17.772255 kubelet[2718]: I0813 02:08:17.772177 2718 kubelet.go:2351] "Pod admission denied" podUID="db1908fe-78a0-444c-9b4d-34ebaf6118c2" pod="tigera-operator/tigera-operator-747864d56d-nn244" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:17.871835 kubelet[2718]: I0813 02:08:17.871225 2718 kubelet.go:2351] "Pod admission denied" podUID="1406538d-fe3f-4ba6-a7e4-da5bb6bd2a86" pod="tigera-operator/tigera-operator-747864d56d-ncvbh" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:08:17.941722 kubelet[2718]: E0813 02:08:17.939721 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3051784: write /var/lib/containerd/tmpmounts/containerd-mount3051784/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-cdfxj" podUID="e8f51745-7382-4ead-96df-a31572ad4e1f" Aug 13 02:08:18.015887 kubelet[2718]: I0813 02:08:18.015840 2718 kubelet.go:2351] "Pod admission denied" podUID="afbcd05f-1bf8-4c65-8d3f-6028c4b6993a" pod="tigera-operator/tigera-operator-747864d56d-zqwp8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:18.155641 kubelet[2718]: I0813 02:08:18.155089 2718 kubelet.go:2351] "Pod admission denied" podUID="5ea4bf7f-a1a0-44fd-a50f-0dda8c3a44dc" pod="tigera-operator/tigera-operator-747864d56d-gq7vf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:18.294550 kubelet[2718]: I0813 02:08:18.293746 2718 kubelet.go:2351] "Pod admission denied" podUID="361744ab-1663-4b83-b61d-1692624fb20e" pod="tigera-operator/tigera-operator-747864d56d-r9klw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:18.361612 kubelet[2718]: I0813 02:08:18.361316 2718 kubelet.go:2351] "Pod admission denied" podUID="e3a83924-5f3c-40f5-97df-f5c355250058" pod="tigera-operator/tigera-operator-747864d56d-fb8lg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:18.524354 kubelet[2718]: I0813 02:08:18.524289 2718 kubelet.go:2351] "Pod admission denied" podUID="2e7d1078-a1b8-4825-bde3-bafbefd1e424" pod="tigera-operator/tigera-operator-747864d56d-vgchv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:18.664163 kubelet[2718]: I0813 02:08:18.664089 2718 kubelet.go:2351] "Pod admission denied" podUID="1451eca5-0223-42ec-8b6a-b5b9d30b5230" pod="tigera-operator/tigera-operator-747864d56d-7srtl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:18.800704 kubelet[2718]: I0813 02:08:18.800321 2718 kubelet.go:2351] "Pod admission denied" podUID="ce64aded-b6f4-4c70-9a0b-2acf5608078c" pod="tigera-operator/tigera-operator-747864d56d-65rsh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:18.879569 kubelet[2718]: I0813 02:08:18.879315 2718 kubelet.go:2351] "Pod admission denied" podUID="2a2119e9-2fa4-4edf-86b0-051342bf27f2" pod="tigera-operator/tigera-operator-747864d56d-stcv8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:19.012556 kubelet[2718]: I0813 02:08:19.012489 2718 kubelet.go:2351] "Pod admission denied" podUID="d6c8a682-278e-4334-872e-23c00d6cd268" pod="tigera-operator/tigera-operator-747864d56d-999j9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:19.149875 kubelet[2718]: I0813 02:08:19.149697 2718 kubelet.go:2351] "Pod admission denied" podUID="a6730ddb-0261-419f-b310-9e34879856b2" pod="tigera-operator/tigera-operator-747864d56d-j2782" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:08:19.312437 kubelet[2718]: I0813 02:08:19.312256 2718 kubelet.go:2351] "Pod admission denied" podUID="71241a94-b388-4c16-bedc-20288ad4f4ef" pod="tigera-operator/tigera-operator-747864d56d-fgx9l" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:19.392942 kubelet[2718]: I0813 02:08:19.392827 2718 kubelet.go:2351] "Pod admission denied" podUID="5007cfbf-54e4-4ee2-b923-8afa480ebfb6" pod="tigera-operator/tigera-operator-747864d56d-gchgz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:19.500812 kubelet[2718]: I0813 02:08:19.500632 2718 kubelet.go:2351] "Pod admission denied" podUID="47c4ae40-0fad-4fc5-9400-2bcc73fb1af8" pod="tigera-operator/tigera-operator-747864d56d-m2z7n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:19.613415 kubelet[2718]: I0813 02:08:19.613347 2718 kubelet.go:2351] "Pod admission denied" podUID="fd5b14e6-ec0d-4f0c-a828-0d66c16751ac" pod="tigera-operator/tigera-operator-747864d56d-8k96j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:19.705917 kubelet[2718]: I0813 02:08:19.705841 2718 kubelet.go:2351] "Pod admission denied" podUID="c30084a2-54ee-4292-abf2-2bcb3cd6054a" pod="tigera-operator/tigera-operator-747864d56d-qcwcw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:19.812166 kubelet[2718]: I0813 02:08:19.812089 2718 kubelet.go:2351] "Pod admission denied" podUID="8a629c06-b23c-499c-9955-e2fbb45cbd42" pod="tigera-operator/tigera-operator-747864d56d-n6cp4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:19.922630 kubelet[2718]: I0813 02:08:19.922513 2718 kubelet.go:2351] "Pod admission denied" podUID="56547421-d70d-4d7f-acec-4749b447c7aa" pod="tigera-operator/tigera-operator-747864d56d-72vkq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:20.110066 kubelet[2718]: I0813 02:08:20.109890 2718 kubelet.go:2351] "Pod admission denied" podUID="f7734736-98b3-4156-8ec8-83b3291de822" pod="tigera-operator/tigera-operator-747864d56d-hbj67" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:20.197389 kubelet[2718]: I0813 02:08:20.197317 2718 kubelet.go:2351] "Pod admission denied" podUID="dc48d79e-9da1-4b84-8685-7117e3991aec" pod="tigera-operator/tigera-operator-747864d56d-wd5hw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:20.323364 kubelet[2718]: I0813 02:08:20.323293 2718 kubelet.go:2351] "Pod admission denied" podUID="1b58ec7b-f1b0-4fcd-b2c0-e8d305c5bb4d" pod="tigera-operator/tigera-operator-747864d56d-bs4c5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:20.394143 kubelet[2718]: I0813 02:08:20.393970 2718 kubelet.go:2351] "Pod admission denied" podUID="0c15bb2a-a8f0-446d-acf4-c776aaacd91c" pod="tigera-operator/tigera-operator-747864d56d-jf5th" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:20.527722 kubelet[2718]: I0813 02:08:20.527649 2718 kubelet.go:2351] "Pod admission denied" podUID="bffe2eaa-77a3-4b82-94d9-3b98d9541699" pod="tigera-operator/tigera-operator-747864d56d-zwhkk" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:08:20.709581 kubelet[2718]: I0813 02:08:20.709388 2718 kubelet.go:2351] "Pod admission denied" podUID="5d6c200e-58d0-4947-9cae-b38d0ecdec93" pod="tigera-operator/tigera-operator-747864d56d-b4j52" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:20.813225 kubelet[2718]: I0813 02:08:20.813154 2718 kubelet.go:2351] "Pod admission denied" podUID="458158ff-973b-4dfd-800c-3c9fbab36503" pod="tigera-operator/tigera-operator-747864d56d-wwjcf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:20.936400 containerd[1542]: time="2025-08-13T02:08:20.936325667Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r6mhv,Uid:b671177d-3397-4938-853c-0cced3d0e9f5,Namespace:calico-system,Attempt:0,}" Aug 13 02:08:20.967262 kubelet[2718]: I0813 02:08:20.967137 2718 kubelet.go:2351] "Pod admission denied" podUID="9d49a0a4-d50b-4122-b369-55bd530aee29" pod="tigera-operator/tigera-operator-747864d56d-v6xjd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:21.011942 containerd[1542]: time="2025-08-13T02:08:21.011840202Z" level=error msg="Failed to destroy network for sandbox \"3727ede65b5b7c81b6d7f0ec79bcf2cea4396a6944dd97b5c35bfe653d9d9964\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:21.014446 systemd[1]: run-netns-cni\x2df4d85e96\x2d92db\x2dc307\x2dd755\x2d624dc7267ad7.mount: Deactivated successfully. Aug 13 02:08:21.017916 containerd[1542]: time="2025-08-13T02:08:21.017828724Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r6mhv,Uid:b671177d-3397-4938-853c-0cced3d0e9f5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3727ede65b5b7c81b6d7f0ec79bcf2cea4396a6944dd97b5c35bfe653d9d9964\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:21.018962 kubelet[2718]: E0813 02:08:21.018912 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3727ede65b5b7c81b6d7f0ec79bcf2cea4396a6944dd97b5c35bfe653d9d9964\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:21.019022 kubelet[2718]: E0813 02:08:21.018973 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3727ede65b5b7c81b6d7f0ec79bcf2cea4396a6944dd97b5c35bfe653d9d9964\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:08:21.019022 kubelet[2718]: E0813 02:08:21.018995 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3727ede65b5b7c81b6d7f0ec79bcf2cea4396a6944dd97b5c35bfe653d9d9964\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:08:21.019068 kubelet[2718]: E0813 02:08:21.019038 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r6mhv_calico-system(b671177d-3397-4938-853c-0cced3d0e9f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-r6mhv_calico-system(b671177d-3397-4938-853c-0cced3d0e9f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3727ede65b5b7c81b6d7f0ec79bcf2cea4396a6944dd97b5c35bfe653d9d9964\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r6mhv" podUID="b671177d-3397-4938-853c-0cced3d0e9f5" Aug 13 02:08:21.111866 kubelet[2718]: I0813 02:08:21.110729 2718 kubelet.go:2351] "Pod admission denied" podUID="e350a23b-272e-4c2c-9c9d-2d6007acd906" pod="tigera-operator/tigera-operator-747864d56d-dv2ff" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:21.193031 kubelet[2718]: I0813 02:08:21.192974 2718 kubelet.go:2351] "Pod admission denied" podUID="398ca26a-13c7-4e3e-8ba6-30901b283d97" pod="tigera-operator/tigera-operator-747864d56d-mbb5x" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:21.293627 kubelet[2718]: I0813 02:08:21.293564 2718 kubelet.go:2351] "Pod admission denied" podUID="2e644711-0c8b-433a-98cb-4eec01d6d5f3" pod="tigera-operator/tigera-operator-747864d56d-lmgfv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:21.390790 kubelet[2718]: I0813 02:08:21.390666 2718 kubelet.go:2351] "Pod admission denied" podUID="12fb9fc4-fd6b-4f81-8d50-bd0f760390c8" pod="tigera-operator/tigera-operator-747864d56d-cvdjh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:21.491883 kubelet[2718]: I0813 02:08:21.491815 2718 kubelet.go:2351] "Pod admission denied" podUID="092b151a-998f-4de6-a5a3-8a46749dc353" pod="tigera-operator/tigera-operator-747864d56d-9qqdv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:21.549320 kubelet[2718]: I0813 02:08:21.549155 2718 kubelet.go:2351] "Pod admission denied" podUID="09b8c3ef-ff8e-4736-98ea-0ba0fdc36252" pod="tigera-operator/tigera-operator-747864d56d-mm2gd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:21.644297 kubelet[2718]: I0813 02:08:21.644234 2718 kubelet.go:2351] "Pod admission denied" podUID="86519e35-e80c-49c4-b8a9-6f0a6a3a079d" pod="tigera-operator/tigera-operator-747864d56d-647hj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:21.741932 kubelet[2718]: I0813 02:08:21.741872 2718 kubelet.go:2351] "Pod admission denied" podUID="bc5987f1-eff8-462f-9706-f4d68e4f1722" pod="tigera-operator/tigera-operator-747864d56d-r8lrz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:21.792801 kubelet[2718]: I0813 02:08:21.792731 2718 kubelet.go:2351] "Pod admission denied" podUID="ec398f5b-c040-4329-b67c-a79f02addfa6" pod="tigera-operator/tigera-operator-747864d56d-8h772" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:08:21.903629 kubelet[2718]: I0813 02:08:21.903011 2718 kubelet.go:2351] "Pod admission denied" podUID="3b532896-13f9-4300-8363-1c218ee2bd86" pod="tigera-operator/tigera-operator-747864d56d-247z9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:21.993141 kubelet[2718]: I0813 02:08:21.993062 2718 kubelet.go:2351] "Pod admission denied" podUID="28cac515-3a87-4878-8a81-b16a3fc1332f" pod="tigera-operator/tigera-operator-747864d56d-jnhkp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:22.041546 kubelet[2718]: I0813 02:08:22.041488 2718 kubelet.go:2351] "Pod admission denied" podUID="6562a936-0636-4e4f-a016-6201f75d2ea1" pod="tigera-operator/tigera-operator-747864d56d-4qnh6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:22.145686 kubelet[2718]: I0813 02:08:22.145615 2718 kubelet.go:2351] "Pod admission denied" podUID="4b70121d-db8c-4fe9-a99e-aec3d22d1786" pod="tigera-operator/tigera-operator-747864d56d-bpl4r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:22.245733 kubelet[2718]: I0813 02:08:22.245173 2718 kubelet.go:2351] "Pod admission denied" podUID="5521ebb0-1e4a-4862-a80c-02910acddcc4" pod="tigera-operator/tigera-operator-747864d56d-l2gjb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:22.342092 kubelet[2718]: I0813 02:08:22.341946 2718 kubelet.go:2351] "Pod admission denied" podUID="c7380a87-04b2-44dd-ac0f-444f479c7ace" pod="tigera-operator/tigera-operator-747864d56d-dpfd4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:22.566132 kubelet[2718]: I0813 02:08:22.564894 2718 kubelet.go:2351] "Pod admission denied" podUID="9a81df93-7e19-41d5-b2c5-8cfa33a6abe6" pod="tigera-operator/tigera-operator-747864d56d-6nhz5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:22.646918 kubelet[2718]: I0813 02:08:22.646857 2718 kubelet.go:2351] "Pod admission denied" podUID="9973955d-5eb0-4b59-bf8b-44fcf6a68c28" pod="tigera-operator/tigera-operator-747864d56d-df9f9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:22.697082 kubelet[2718]: I0813 02:08:22.697007 2718 kubelet.go:2351] "Pod admission denied" podUID="06973ee0-3aaf-4109-ba8f-ca8cc1e2a2c4" pod="tigera-operator/tigera-operator-747864d56d-mjn5z" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:22.792650 kubelet[2718]: I0813 02:08:22.792580 2718 kubelet.go:2351] "Pod admission denied" podUID="5890da30-992a-4080-9d01-220e11e76f34" pod="tigera-operator/tigera-operator-747864d56d-bzl58" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:22.891806 kubelet[2718]: I0813 02:08:22.891628 2718 kubelet.go:2351] "Pod admission denied" podUID="9f5c5b97-4023-4edf-a364-5bed1f5f1b10" pod="tigera-operator/tigera-operator-747864d56d-qzw6h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:22.991080 kubelet[2718]: I0813 02:08:22.991012 2718 kubelet.go:2351] "Pod admission denied" podUID="690e60a9-5582-4d7e-bdbe-39313548906d" pod="tigera-operator/tigera-operator-747864d56d-bg748" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:08:23.200968 kubelet[2718]: I0813 02:08:23.198912 2718 kubelet.go:2351] "Pod admission denied" podUID="d5bc6093-dcbc-4a51-bf7c-18ee88df2427" pod="tigera-operator/tigera-operator-747864d56d-c2vzf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:23.305954 kubelet[2718]: I0813 02:08:23.305913 2718 kubelet.go:2351] "Pod admission denied" podUID="75ebe476-b634-4404-95d7-c3c315536451" pod="tigera-operator/tigera-operator-747864d56d-c4lrt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:23.341330 kubelet[2718]: I0813 02:08:23.340906 2718 kubelet.go:2351] "Pod admission denied" podUID="9b5b54f6-467a-47e1-96cc-5e663b4f0ea3" pod="tigera-operator/tigera-operator-747864d56d-lfc92" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:23.442512 kubelet[2718]: I0813 02:08:23.442259 2718 kubelet.go:2351] "Pod admission denied" podUID="bc6bb8ee-f18b-4a5b-a809-de239857e9f7" pod="tigera-operator/tigera-operator-747864d56d-8xdb6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:23.657798 kubelet[2718]: I0813 02:08:23.655735 2718 kubelet.go:2351] "Pod admission denied" podUID="9830681f-341d-4f54-ab58-f0b63d165a56" pod="tigera-operator/tigera-operator-747864d56d-9gzsz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:23.742774 kubelet[2718]: I0813 02:08:23.742707 2718 kubelet.go:2351] "Pod admission denied" podUID="18830d23-e22a-4046-bfa8-a38eb2715798" pod="tigera-operator/tigera-operator-747864d56d-whsdr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:23.841896 kubelet[2718]: I0813 02:08:23.841832 2718 kubelet.go:2351] "Pod admission denied" podUID="17f6ecc1-2a3b-43aa-8d06-37679394f040" pod="tigera-operator/tigera-operator-747864d56d-kczv8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:23.961406 kubelet[2718]: I0813 02:08:23.961097 2718 kubelet.go:2351] "Pod admission denied" podUID="b7c1aa00-6d8e-49a2-b813-03c1a0dea1f0" pod="tigera-operator/tigera-operator-747864d56d-bfksb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:24.042112 kubelet[2718]: I0813 02:08:24.042049 2718 kubelet.go:2351] "Pod admission denied" podUID="35a3f05f-20ce-47ff-b2fa-2324a50a0219" pod="tigera-operator/tigera-operator-747864d56d-6r78z" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:24.245394 kubelet[2718]: I0813 02:08:24.244620 2718 kubelet.go:2351] "Pod admission denied" podUID="f788d53e-41db-4ab4-a496-ba564476d343" pod="tigera-operator/tigera-operator-747864d56d-t8lpc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:24.358812 kubelet[2718]: I0813 02:08:24.358744 2718 kubelet.go:2351] "Pod admission denied" podUID="405dd2fd-a694-48ca-b585-90710a22a8d4" pod="tigera-operator/tigera-operator-747864d56d-gmw22" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:24.439916 kubelet[2718]: I0813 02:08:24.439845 2718 kubelet.go:2351] "Pod admission denied" podUID="6de92052-7f96-4881-8b18-39ce5bfe0a9d" pod="tigera-operator/tigera-operator-747864d56d-x22rm" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:08:24.543377 kubelet[2718]: I0813 02:08:24.543326 2718 kubelet.go:2351] "Pod admission denied" podUID="717f023b-9cdc-4576-84af-250a69249862" pod="tigera-operator/tigera-operator-747864d56d-j8wzr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:24.649506 kubelet[2718]: I0813 02:08:24.649400 2718 kubelet.go:2351] "Pod admission denied" podUID="3dc0f15e-3a83-4d8a-ac79-0a492a5e6366" pod="tigera-operator/tigera-operator-747864d56d-7f2xc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:24.842940 kubelet[2718]: I0813 02:08:24.842767 2718 kubelet.go:2351] "Pod admission denied" podUID="6dcca442-73a5-4ed3-a62e-cd246c3de180" pod="tigera-operator/tigera-operator-747864d56d-mqcjd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:24.936118 kubelet[2718]: E0813 02:08:24.935751 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:08:24.937042 containerd[1542]: time="2025-08-13T02:08:24.936643189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p5qmw,Uid:26fd4059-1e9c-49a2-9bd9-181be9ad7bcb,Namespace:kube-system,Attempt:0,}" Aug 13 02:08:24.952451 kubelet[2718]: I0813 02:08:24.952388 2718 kubelet.go:2351] "Pod admission denied" podUID="cb507a10-f520-4082-bd37-a3c83d881f70" pod="tigera-operator/tigera-operator-747864d56d-j88fb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:25.009484 containerd[1542]: time="2025-08-13T02:08:25.009429990Z" level=error msg="Failed to destroy network for sandbox \"397243f7c8f2c207933792516da43979c27df0f87b7dcdff77323c601356d92a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:25.012865 containerd[1542]: time="2025-08-13T02:08:25.012634725Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p5qmw,Uid:26fd4059-1e9c-49a2-9bd9-181be9ad7bcb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"397243f7c8f2c207933792516da43979c27df0f87b7dcdff77323c601356d92a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:25.012988 kubelet[2718]: E0813 02:08:25.012892 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"397243f7c8f2c207933792516da43979c27df0f87b7dcdff77323c601356d92a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:25.012988 kubelet[2718]: E0813 02:08:25.012970 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"397243f7c8f2c207933792516da43979c27df0f87b7dcdff77323c601356d92a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:08:25.013066 kubelet[2718]: E0813 02:08:25.012995 2718 
kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"397243f7c8f2c207933792516da43979c27df0f87b7dcdff77323c601356d92a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:08:25.013090 kubelet[2718]: E0813 02:08:25.013055 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-p5qmw_kube-system(26fd4059-1e9c-49a2-9bd9-181be9ad7bcb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-p5qmw_kube-system(26fd4059-1e9c-49a2-9bd9-181be9ad7bcb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"397243f7c8f2c207933792516da43979c27df0f87b7dcdff77323c601356d92a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-p5qmw" podUID="26fd4059-1e9c-49a2-9bd9-181be9ad7bcb" Aug 13 02:08:25.013689 systemd[1]: run-netns-cni\x2da0b1432d\x2d2c3e\x2daa83\x2dc5c7\x2d0bfebcfae24e.mount: Deactivated successfully. Aug 13 02:08:25.046349 kubelet[2718]: I0813 02:08:25.046118 2718 kubelet.go:2351] "Pod admission denied" podUID="e6417b96-141a-4c3c-8b37-c4194e82dd81" pod="tigera-operator/tigera-operator-747864d56d-jnhbz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:25.244642 kubelet[2718]: I0813 02:08:25.244444 2718 kubelet.go:2351] "Pod admission denied" podUID="a981dc42-6b61-47d1-b49e-7bbbe9d3a58f" pod="tigera-operator/tigera-operator-747864d56d-5g4pw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:25.344579 kubelet[2718]: I0813 02:08:25.344508 2718 kubelet.go:2351] "Pod admission denied" podUID="5a097d12-e545-4291-a02c-8ccb462042c4" pod="tigera-operator/tigera-operator-747864d56d-l4hkc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:25.397613 kubelet[2718]: I0813 02:08:25.397337 2718 kubelet.go:2351] "Pod admission denied" podUID="07aeb3a0-2012-4405-aae1-618997ab07e6" pod="tigera-operator/tigera-operator-747864d56d-m5nbr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:25.492355 kubelet[2718]: I0813 02:08:25.492289 2718 kubelet.go:2351] "Pod admission denied" podUID="b8df1f27-78f4-489c-a7a7-e8eee5471e35" pod="tigera-operator/tigera-operator-747864d56d-z7fbf" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:08:25.547793 kubelet[2718]: I0813 02:08:25.547766 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 02:08:25.547793 kubelet[2718]: I0813 02:08:25.547801 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 02:08:25.550320 kubelet[2718]: I0813 02:08:25.550289 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 02:08:25.559282 kubelet[2718]: I0813 02:08:25.559260 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 02:08:25.559356 kubelet[2718]: I0813 02:08:25.559316 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-668d6bf9bc-p5qmw","calico-system/calico-kube-controllers-7c47cf6bcb-c9c87","kube-system/coredns-668d6bf9bc-pw6gg","calico-system/csi-node-driver-r6mhv","calico-system/calico-node-cdfxj","calico-system/calico-typha-67c8447dcf-wsn77","kube-system/kube-controller-manager-172-236-122-171","kube-system/kube-proxy-s4bl4","kube-system/kube-apiserver-172-236-122-171","kube-system/kube-scheduler-172-236-122-171"] Aug 13 02:08:25.559356 kubelet[2718]: E0813 02:08:25.559346 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:08:25.559356 kubelet[2718]: E0813 02:08:25.559355 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:08:25.559483 kubelet[2718]: E0813 02:08:25.559362 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:08:25.559483 kubelet[2718]: E0813 02:08:25.559369 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:08:25.559483 kubelet[2718]: E0813 02:08:25.559376 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-cdfxj" Aug 13 02:08:25.559483 kubelet[2718]: E0813 02:08:25.559388 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-67c8447dcf-wsn77" Aug 13 02:08:25.559483 kubelet[2718]: E0813 02:08:25.559397 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-122-171" Aug 13 02:08:25.559483 kubelet[2718]: E0813 02:08:25.559405 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-s4bl4" Aug 13 02:08:25.559483 kubelet[2718]: E0813 02:08:25.559414 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-122-171" Aug 13 02:08:25.559483 kubelet[2718]: E0813 02:08:25.559423 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-122-171" Aug 13 02:08:25.559483 kubelet[2718]: I0813 02:08:25.559432 2718 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 02:08:25.592230 kubelet[2718]: I0813 02:08:25.592188 2718 kubelet.go:2351] "Pod admission denied" podUID="8218a7ed-9f31-4149-b247-d15b152190aa" pod="tigera-operator/tigera-operator-747864d56d-jglp4" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:08:25.695145 kubelet[2718]: I0813 02:08:25.694023 2718 kubelet.go:2351] "Pod admission denied" podUID="170d6bd6-db0b-41be-966f-5270e691f63e" pod="tigera-operator/tigera-operator-747864d56d-h7c8g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:25.791373 kubelet[2718]: I0813 02:08:25.791306 2718 kubelet.go:2351] "Pod admission denied" podUID="3b38116f-12a4-4ceb-affe-275c75e20c36" pod="tigera-operator/tigera-operator-747864d56d-6xjjr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:25.895062 kubelet[2718]: I0813 02:08:25.894884 2718 kubelet.go:2351] "Pod admission denied" podUID="7e95133b-2af2-4cf7-8076-81a06a3c87fb" pod="tigera-operator/tigera-operator-747864d56d-vfnc8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:25.991422 kubelet[2718]: I0813 02:08:25.991363 2718 kubelet.go:2351] "Pod admission denied" podUID="a54a601d-c5e1-4a27-8cc0-00885cac5f02" pod="tigera-operator/tigera-operator-747864d56d-vp86g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:26.090830 kubelet[2718]: I0813 02:08:26.090771 2718 kubelet.go:2351] "Pod admission denied" podUID="48883562-54bd-4f03-b95c-5c5ad7a96f41" pod="tigera-operator/tigera-operator-747864d56d-vpz87" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:26.303249 kubelet[2718]: I0813 02:08:26.303188 2718 kubelet.go:2351] "Pod admission denied" podUID="7086af72-8a8a-4c11-91eb-48548401510b" pod="tigera-operator/tigera-operator-747864d56d-lvv8t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:26.547345 kubelet[2718]: I0813 02:08:26.547270 2718 kubelet.go:2351] "Pod admission denied" podUID="bac64815-f12b-4ee8-8d78-26acfd298084" pod="tigera-operator/tigera-operator-747864d56d-vl7mp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:26.642805 kubelet[2718]: I0813 02:08:26.642668 2718 kubelet.go:2351] "Pod admission denied" podUID="dfb19bd6-89b7-4326-b323-77e287f53b0b" pod="tigera-operator/tigera-operator-747864d56d-f6sk8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:26.742950 kubelet[2718]: I0813 02:08:26.742872 2718 kubelet.go:2351] "Pod admission denied" podUID="0e07b4de-5bde-4539-ab42-564f85a5bb88" pod="tigera-operator/tigera-operator-747864d56d-thlcp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:26.842459 kubelet[2718]: I0813 02:08:26.842357 2718 kubelet.go:2351] "Pod admission denied" podUID="a6cfea4d-0a1b-440a-aa07-5ec0e7425e1f" pod="tigera-operator/tigera-operator-747864d56d-xmzm9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:27.051680 kubelet[2718]: I0813 02:08:27.050900 2718 kubelet.go:2351] "Pod admission denied" podUID="655046f7-4f98-481b-8f2d-4f70fc0cd1f3" pod="tigera-operator/tigera-operator-747864d56d-m94tg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:27.148133 kubelet[2718]: I0813 02:08:27.148063 2718 kubelet.go:2351] "Pod admission denied" podUID="fe91d3ae-2854-49f6-8cd4-044e65cbcec4" pod="tigera-operator/tigera-operator-747864d56d-zkznf" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:08:27.246471 kubelet[2718]: I0813 02:08:27.246385 2718 kubelet.go:2351] "Pod admission denied" podUID="6547995b-e051-4f2d-8ce0-b439ddb292a6" pod="tigera-operator/tigera-operator-747864d56d-wbmg9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:27.347367 kubelet[2718]: I0813 02:08:27.347194 2718 kubelet.go:2351] "Pod admission denied" podUID="84ba10a8-be24-4bc7-a9ae-12c14900e8e7" pod="tigera-operator/tigera-operator-747864d56d-s2486" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:27.443849 kubelet[2718]: I0813 02:08:27.443773 2718 kubelet.go:2351] "Pod admission denied" podUID="26762037-452f-4fbb-8837-f41aee406f8e" pod="tigera-operator/tigera-operator-747864d56d-kpmhb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:27.646553 kubelet[2718]: I0813 02:08:27.646365 2718 kubelet.go:2351] "Pod admission denied" podUID="a81d956d-3b8f-4cb0-bc67-f7f9cec86dd6" pod="tigera-operator/tigera-operator-747864d56d-gx9vq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:27.744523 kubelet[2718]: I0813 02:08:27.744443 2718 kubelet.go:2351] "Pod admission denied" podUID="00cf8580-685b-4fb5-a4ae-ea1aba6fe053" pod="tigera-operator/tigera-operator-747864d56d-tvh9j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:27.838990 kubelet[2718]: I0813 02:08:27.838928 2718 kubelet.go:2351] "Pod admission denied" podUID="b422267e-ca4c-47b8-a4c3-cbae1c79e596" pod="tigera-operator/tigera-operator-747864d56d-hltvc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:27.936657 kubelet[2718]: E0813 02:08:27.936503 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:08:27.939179 containerd[1542]: time="2025-08-13T02:08:27.938709229Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pw6gg,Uid:e73a6876-bbb3-4e11-8a33-1945cf27a944,Namespace:kube-system,Attempt:0,}" Aug 13 02:08:27.952096 kubelet[2718]: I0813 02:08:27.951983 2718 kubelet.go:2351] "Pod admission denied" podUID="2982dbc5-f224-4c8a-abaa-89268d15c574" pod="tigera-operator/tigera-operator-747864d56d-v5qcp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:28.018194 containerd[1542]: time="2025-08-13T02:08:28.018121903Z" level=error msg="Failed to destroy network for sandbox \"ebc1a504279b64b5e2452d78c95e325f115aed4fe328ad1efdd55d95e4b40252\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:28.021678 systemd[1]: run-netns-cni\x2dced06abe\x2d441c\x2da3c0\x2de1d9\x2db5db04ceed1d.mount: Deactivated successfully. 
Aug 13 02:08:28.022204 containerd[1542]: time="2025-08-13T02:08:28.021936886Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pw6gg,Uid:e73a6876-bbb3-4e11-8a33-1945cf27a944,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebc1a504279b64b5e2452d78c95e325f115aed4fe328ad1efdd55d95e4b40252\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:28.022469 kubelet[2718]: E0813 02:08:28.022150 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebc1a504279b64b5e2452d78c95e325f115aed4fe328ad1efdd55d95e4b40252\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:28.022469 kubelet[2718]: E0813 02:08:28.022204 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebc1a504279b64b5e2452d78c95e325f115aed4fe328ad1efdd55d95e4b40252\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:08:28.022469 kubelet[2718]: E0813 02:08:28.022229 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ebc1a504279b64b5e2452d78c95e325f115aed4fe328ad1efdd55d95e4b40252\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:08:28.022469 kubelet[2718]: E0813 02:08:28.022284 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-pw6gg_kube-system(e73a6876-bbb3-4e11-8a33-1945cf27a944)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-pw6gg_kube-system(e73a6876-bbb3-4e11-8a33-1945cf27a944)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ebc1a504279b64b5e2452d78c95e325f115aed4fe328ad1efdd55d95e4b40252\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-pw6gg" podUID="e73a6876-bbb3-4e11-8a33-1945cf27a944" Aug 13 02:08:28.050673 kubelet[2718]: I0813 02:08:28.050609 2718 kubelet.go:2351] "Pod admission denied" podUID="a69f433e-10c9-4439-99b9-471b8addf85c" pod="tigera-operator/tigera-operator-747864d56d-nwbdn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:28.144153 kubelet[2718]: I0813 02:08:28.143507 2718 kubelet.go:2351] "Pod admission denied" podUID="14e53fda-a52f-4912-ac1b-a385fb5b3a32" pod="tigera-operator/tigera-operator-747864d56d-xzg29" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:08:28.242408 kubelet[2718]: I0813 02:08:28.241705 2718 kubelet.go:2351] "Pod admission denied" podUID="e7c500d2-8700-40f3-9da6-c9ceb8988217" pod="tigera-operator/tigera-operator-747864d56d-dj7xr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:28.345316 kubelet[2718]: I0813 02:08:28.345261 2718 kubelet.go:2351] "Pod admission denied" podUID="e093279c-dad6-4edd-9710-f399421fe282" pod="tigera-operator/tigera-operator-747864d56d-hmg9p" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:28.398891 kubelet[2718]: I0813 02:08:28.398840 2718 kubelet.go:2351] "Pod admission denied" podUID="0ddb0017-2775-4137-b9c3-26c0220fdba2" pod="tigera-operator/tigera-operator-747864d56d-xpkf9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:28.498761 kubelet[2718]: I0813 02:08:28.497942 2718 kubelet.go:2351] "Pod admission denied" podUID="653b6f4d-a300-401e-a576-6d9a2a3aedc2" pod="tigera-operator/tigera-operator-747864d56d-nnxdl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:28.607643 kubelet[2718]: I0813 02:08:28.607568 2718 kubelet.go:2351] "Pod admission denied" podUID="28b65eed-76ce-45af-ace7-2009a3f95088" pod="tigera-operator/tigera-operator-747864d56d-z4tmj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:28.700284 kubelet[2718]: I0813 02:08:28.700217 2718 kubelet.go:2351] "Pod admission denied" podUID="8a59ff45-85b2-4928-9aa2-e1c39aa39751" pod="tigera-operator/tigera-operator-747864d56d-58rxx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:28.792425 kubelet[2718]: I0813 02:08:28.792365 2718 kubelet.go:2351] "Pod admission denied" podUID="e72225e5-6deb-415d-bd1a-520f6a881e5d" pod="tigera-operator/tigera-operator-747864d56d-5nkpf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:28.893832 kubelet[2718]: I0813 02:08:28.893767 2718 kubelet.go:2351] "Pod admission denied" podUID="6db0f336-bebe-4cd9-aca6-2e329e21e1a6" pod="tigera-operator/tigera-operator-747864d56d-rmjgm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:28.936654 kubelet[2718]: E0813 02:08:28.936444 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3051784: write /var/lib/containerd/tmpmounts/containerd-mount3051784/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-cdfxj" podUID="e8f51745-7382-4ead-96df-a31572ad4e1f" Aug 13 02:08:28.936847 containerd[1542]: time="2025-08-13T02:08:28.936670636Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c47cf6bcb-c9c87,Uid:8d84c12b-cfd9-49af-bb2e-a10173126a4c,Namespace:calico-system,Attempt:0,}" Aug 13 02:08:28.997085 kubelet[2718]: I0813 02:08:28.996945 2718 kubelet.go:2351] "Pod admission denied" podUID="667c69c6-e571-4945-8c79-5f1121629518" pod="tigera-operator/tigera-operator-747864d56d-vvfkk" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:08:28.998825 containerd[1542]: time="2025-08-13T02:08:28.998704803Z" level=error msg="Failed to destroy network for sandbox \"fb4a497ee0b58b9733601c77ffa479861e5f9810b230a2b80a0e4013e895e387\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:29.003133 systemd[1]: run-netns-cni\x2d5a66c035\x2d8708\x2d525b\x2d1010\x2d743c80261f77.mount: Deactivated successfully. Aug 13 02:08:29.006504 containerd[1542]: time="2025-08-13T02:08:29.006345278Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c47cf6bcb-c9c87,Uid:8d84c12b-cfd9-49af-bb2e-a10173126a4c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb4a497ee0b58b9733601c77ffa479861e5f9810b230a2b80a0e4013e895e387\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:29.007108 kubelet[2718]: E0813 02:08:29.006918 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb4a497ee0b58b9733601c77ffa479861e5f9810b230a2b80a0e4013e895e387\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:29.007403 kubelet[2718]: E0813 02:08:29.007299 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb4a497ee0b58b9733601c77ffa479861e5f9810b230a2b80a0e4013e895e387\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:08:29.007488 kubelet[2718]: E0813 02:08:29.007462 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb4a497ee0b58b9733601c77ffa479861e5f9810b230a2b80a0e4013e895e387\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:08:29.007924 kubelet[2718]: E0813 02:08:29.007704 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7c47cf6bcb-c9c87_calico-system(8d84c12b-cfd9-49af-bb2e-a10173126a4c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7c47cf6bcb-c9c87_calico-system(8d84c12b-cfd9-49af-bb2e-a10173126a4c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fb4a497ee0b58b9733601c77ffa479861e5f9810b230a2b80a0e4013e895e387\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" podUID="8d84c12b-cfd9-49af-bb2e-a10173126a4c" Aug 13 02:08:29.041611 kubelet[2718]: I0813 02:08:29.041555 2718 kubelet.go:2351] "Pod admission denied" 
podUID="005f0a53-7a50-41c7-82e8-406e2eb8f907" pod="tigera-operator/tigera-operator-747864d56d-7wgpv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:29.139201 kubelet[2718]: I0813 02:08:29.139035 2718 kubelet.go:2351] "Pod admission denied" podUID="664ff23a-35a3-4276-a38d-00145c449f8e" pod="tigera-operator/tigera-operator-747864d56d-b974b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:29.239910 kubelet[2718]: I0813 02:08:29.239856 2718 kubelet.go:2351] "Pod admission denied" podUID="66f5f1cf-ae0c-4f42-bc5a-ee476c1f663f" pod="tigera-operator/tigera-operator-747864d56d-9c6vg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:29.338197 kubelet[2718]: I0813 02:08:29.338150 2718 kubelet.go:2351] "Pod admission denied" podUID="4855c055-a13c-40f5-86fd-1b4a44878adc" pod="tigera-operator/tigera-operator-747864d56d-j9zlz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:29.549267 kubelet[2718]: I0813 02:08:29.549177 2718 kubelet.go:2351] "Pod admission denied" podUID="0b8c687e-3a47-4375-b2a4-40e5137b1149" pod="tigera-operator/tigera-operator-747864d56d-5tqxb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:29.646548 kubelet[2718]: I0813 02:08:29.646480 2718 kubelet.go:2351] "Pod admission denied" podUID="4c37155c-18a7-402e-8eec-3ef53a726549" pod="tigera-operator/tigera-operator-747864d56d-lltkr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:29.692546 kubelet[2718]: I0813 02:08:29.692499 2718 kubelet.go:2351] "Pod admission denied" podUID="6e866603-407a-48df-9ed6-7a8649116d75" pod="tigera-operator/tigera-operator-747864d56d-jmtmp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:29.796665 kubelet[2718]: I0813 02:08:29.796609 2718 kubelet.go:2351] "Pod admission denied" podUID="37707d1b-8b0a-44f6-9a5f-8ca300667a16" pod="tigera-operator/tigera-operator-747864d56d-n2w2d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:29.896077 kubelet[2718]: I0813 02:08:29.895903 2718 kubelet.go:2351] "Pod admission denied" podUID="2b1446da-4e5d-4ef0-bfda-37edd82020bd" pod="tigera-operator/tigera-operator-747864d56d-pjvqd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:29.999046 kubelet[2718]: I0813 02:08:29.997368 2718 kubelet.go:2351] "Pod admission denied" podUID="ec93595d-fa88-4b2d-9115-2646bb0df97d" pod="tigera-operator/tigera-operator-747864d56d-j7t4s" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:30.105825 kubelet[2718]: I0813 02:08:30.105735 2718 kubelet.go:2351] "Pod admission denied" podUID="adfe5976-d8f6-448e-a0b1-238755fea9fc" pod="tigera-operator/tigera-operator-747864d56d-qnwsd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:30.194502 kubelet[2718]: I0813 02:08:30.194351 2718 kubelet.go:2351] "Pod admission denied" podUID="0f2e0036-a2fd-40b7-861c-fb7ad0c72f24" pod="tigera-operator/tigera-operator-747864d56d-2sb67" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:30.293544 kubelet[2718]: I0813 02:08:30.293479 2718 kubelet.go:2351] "Pod admission denied" podUID="1bb384b9-a9b2-417e-873b-6645a2f3e15d" pod="tigera-operator/tigera-operator-747864d56d-8qm4p" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:08:30.411614 kubelet[2718]: I0813 02:08:30.410507 2718 kubelet.go:2351] "Pod admission denied" podUID="947ce2b1-52ae-491f-a382-82c255758ad0" pod="tigera-operator/tigera-operator-747864d56d-9t2vk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:30.499290 kubelet[2718]: I0813 02:08:30.499111 2718 kubelet.go:2351] "Pod admission denied" podUID="3fe0ad9b-4149-42ab-ad81-2ed9cc880948" pod="tigera-operator/tigera-operator-747864d56d-5lgws" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:30.595611 kubelet[2718]: I0813 02:08:30.595553 2718 kubelet.go:2351] "Pod admission denied" podUID="f2e710b6-4d35-4d4b-8410-115202e83864" pod="tigera-operator/tigera-operator-747864d56d-cljvr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:30.817514 kubelet[2718]: I0813 02:08:30.817452 2718 kubelet.go:2351] "Pod admission denied" podUID="94e5b55c-858b-4783-aeb1-a455ac02b8d3" pod="tigera-operator/tigera-operator-747864d56d-gcps5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:30.896944 kubelet[2718]: I0813 02:08:30.896882 2718 kubelet.go:2351] "Pod admission denied" podUID="837176ce-6217-4181-83b3-e8da2f9bf9d6" pod="tigera-operator/tigera-operator-747864d56d-khgcn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:30.993271 kubelet[2718]: I0813 02:08:30.993211 2718 kubelet.go:2351] "Pod admission denied" podUID="1bcb0c4f-871c-431e-bb31-8a684d21bb78" pod="tigera-operator/tigera-operator-747864d56d-8scq2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:31.092850 kubelet[2718]: I0813 02:08:31.092702 2718 kubelet.go:2351] "Pod admission denied" podUID="41c14bd4-c69f-40aa-99a7-3dea21c96d98" pod="tigera-operator/tigera-operator-747864d56d-vbmwf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:31.191606 kubelet[2718]: I0813 02:08:31.191534 2718 kubelet.go:2351] "Pod admission denied" podUID="159ac980-89db-4545-996b-a9d4b387098c" pod="tigera-operator/tigera-operator-747864d56d-77h87" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:31.294632 kubelet[2718]: I0813 02:08:31.294467 2718 kubelet.go:2351] "Pod admission denied" podUID="bd039f66-760a-499c-bc7f-8767043791c9" pod="tigera-operator/tigera-operator-747864d56d-cwqvm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:31.396771 kubelet[2718]: I0813 02:08:31.394973 2718 kubelet.go:2351] "Pod admission denied" podUID="e5a44dfa-572f-4a2a-92ba-ae7d798b53a3" pod="tigera-operator/tigera-operator-747864d56d-98n59" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:31.494707 kubelet[2718]: I0813 02:08:31.494638 2718 kubelet.go:2351] "Pod admission denied" podUID="cbc10e81-c2dc-408e-b782-4c86e6674d44" pod="tigera-operator/tigera-operator-747864d56d-pnszt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:31.599565 kubelet[2718]: I0813 02:08:31.599452 2718 kubelet.go:2351] "Pod admission denied" podUID="b007ccae-9e01-43bd-8953-7c4e63fbbaf5" pod="tigera-operator/tigera-operator-747864d56d-wgw6b" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:08:31.702153 kubelet[2718]: I0813 02:08:31.701692 2718 kubelet.go:2351] "Pod admission denied" podUID="35f7f1d6-c20a-4ab7-a6ec-a689c3b73586" pod="tigera-operator/tigera-operator-747864d56d-dbx79" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:31.752794 kubelet[2718]: I0813 02:08:31.752656 2718 kubelet.go:2351] "Pod admission denied" podUID="5f5a207c-08a1-4c50-b508-0d92bc1fa4d6" pod="tigera-operator/tigera-operator-747864d56d-tz28d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:31.846615 kubelet[2718]: I0813 02:08:31.846531 2718 kubelet.go:2351] "Pod admission denied" podUID="5ea29db3-1648-46a3-8ccf-dba69cca33ce" pod="tigera-operator/tigera-operator-747864d56d-j82l2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:31.938264 containerd[1542]: time="2025-08-13T02:08:31.937953288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r6mhv,Uid:b671177d-3397-4938-853c-0cced3d0e9f5,Namespace:calico-system,Attempt:0,}" Aug 13 02:08:31.953884 kubelet[2718]: I0813 02:08:31.953765 2718 kubelet.go:2351] "Pod admission denied" podUID="3ad1cd55-51b1-4841-ac5d-0e05fc47f162" pod="tigera-operator/tigera-operator-747864d56d-77c7f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:32.019644 containerd[1542]: time="2025-08-13T02:08:32.019452766Z" level=error msg="Failed to destroy network for sandbox \"ef616061ffb69adec30a9e1bf0154efe13c4e5ea12eb134b9f53a362052936de\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:32.024510 systemd[1]: run-netns-cni\x2dc2cd2302\x2d01c4\x2dc8d8\x2d5775\x2db9936f46241e.mount: Deactivated successfully. Aug 13 02:08:32.028804 kubelet[2718]: I0813 02:08:32.025221 2718 kubelet.go:2351] "Pod admission denied" podUID="e2382887-a5c2-4533-b079-5c37d2541e08" pod="tigera-operator/tigera-operator-747864d56d-vtws2" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:08:32.029349 containerd[1542]: time="2025-08-13T02:08:32.029145869Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r6mhv,Uid:b671177d-3397-4938-853c-0cced3d0e9f5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef616061ffb69adec30a9e1bf0154efe13c4e5ea12eb134b9f53a362052936de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:32.031179 kubelet[2718]: E0813 02:08:32.030883 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef616061ffb69adec30a9e1bf0154efe13c4e5ea12eb134b9f53a362052936de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:32.031179 kubelet[2718]: E0813 02:08:32.030935 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef616061ffb69adec30a9e1bf0154efe13c4e5ea12eb134b9f53a362052936de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:08:32.031179 kubelet[2718]: E0813 02:08:32.030964 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ef616061ffb69adec30a9e1bf0154efe13c4e5ea12eb134b9f53a362052936de\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:08:32.031179 kubelet[2718]: E0813 02:08:32.031003 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r6mhv_calico-system(b671177d-3397-4938-853c-0cced3d0e9f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-r6mhv_calico-system(b671177d-3397-4938-853c-0cced3d0e9f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ef616061ffb69adec30a9e1bf0154efe13c4e5ea12eb134b9f53a362052936de\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r6mhv" podUID="b671177d-3397-4938-853c-0cced3d0e9f5" Aug 13 02:08:32.142799 kubelet[2718]: I0813 02:08:32.142745 2718 kubelet.go:2351] "Pod admission denied" podUID="829b446f-6f98-4fb3-aff6-c24353749f75" pod="tigera-operator/tigera-operator-747864d56d-5gjgg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:32.241717 kubelet[2718]: I0813 02:08:32.240923 2718 kubelet.go:2351] "Pod admission denied" podUID="7374c9e2-f204-49d3-8c24-b543a3950f3f" pod="tigera-operator/tigera-operator-747864d56d-nmwpb" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:08:32.461697 kubelet[2718]: I0813 02:08:32.461634 2718 kubelet.go:2351] "Pod admission denied" podUID="2f40fcd4-be4b-4e09-a251-02c40a92b628" pod="tigera-operator/tigera-operator-747864d56d-9b5kv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:32.549616 kubelet[2718]: I0813 02:08:32.549531 2718 kubelet.go:2351] "Pod admission denied" podUID="d1370561-d591-4646-b33f-7b1dce5cd917" pod="tigera-operator/tigera-operator-747864d56d-mwxdh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:32.642445 kubelet[2718]: I0813 02:08:32.642384 2718 kubelet.go:2351] "Pod admission denied" podUID="51da3502-928f-4228-a7a9-1923d140d5ab" pod="tigera-operator/tigera-operator-747864d56d-z2wcr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:32.859729 kubelet[2718]: I0813 02:08:32.859179 2718 kubelet.go:2351] "Pod admission denied" podUID="30c2d3a3-ca5e-4266-b0e7-afbb88cf360a" pod="tigera-operator/tigera-operator-747864d56d-ls7p2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:32.942487 kubelet[2718]: I0813 02:08:32.942429 2718 kubelet.go:2351] "Pod admission denied" podUID="5521a0bc-573c-488b-8382-1bdc93fbf625" pod="tigera-operator/tigera-operator-747864d56d-vdllc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:33.045998 kubelet[2718]: I0813 02:08:33.045935 2718 kubelet.go:2351] "Pod admission denied" podUID="d0f78a11-f211-4b78-b707-6b37dfe93e8e" pod="tigera-operator/tigera-operator-747864d56d-2cwhv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:33.151729 kubelet[2718]: I0813 02:08:33.149602 2718 kubelet.go:2351] "Pod admission denied" podUID="f8f4a635-2cce-440e-ae05-04e8cbb581f2" pod="tigera-operator/tigera-operator-747864d56d-r8pz2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:33.241800 kubelet[2718]: I0813 02:08:33.241740 2718 kubelet.go:2351] "Pod admission denied" podUID="88988462-97aa-4c6f-81e6-e41804c1021b" pod="tigera-operator/tigera-operator-747864d56d-tq2pg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:33.341932 kubelet[2718]: I0813 02:08:33.341870 2718 kubelet.go:2351] "Pod admission denied" podUID="4b37e3ed-2b06-44e9-b7b1-01ec59777ad8" pod="tigera-operator/tigera-operator-747864d56d-x47zn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:33.444696 kubelet[2718]: I0813 02:08:33.444551 2718 kubelet.go:2351] "Pod admission denied" podUID="a4ee4a8b-a49d-4fcf-8ecd-0c55e4b9b4f0" pod="tigera-operator/tigera-operator-747864d56d-khjw7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:33.649535 kubelet[2718]: I0813 02:08:33.649472 2718 kubelet.go:2351] "Pod admission denied" podUID="0eaa9e92-32c0-4c9b-ba02-ed2e90ecf03c" pod="tigera-operator/tigera-operator-747864d56d-tn6nq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:33.744106 kubelet[2718]: I0813 02:08:33.743961 2718 kubelet.go:2351] "Pod admission denied" podUID="3d73cc6c-4035-4aaf-9ee6-57f78df3c065" pod="tigera-operator/tigera-operator-747864d56d-rt7lq" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:08:33.797079 kubelet[2718]: I0813 02:08:33.797029 2718 kubelet.go:2351] "Pod admission denied" podUID="d7b29dcf-4694-46c4-ab5c-26c1924d5410" pod="tigera-operator/tigera-operator-747864d56d-5bhdx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:33.895680 kubelet[2718]: I0813 02:08:33.895616 2718 kubelet.go:2351] "Pod admission denied" podUID="836cf614-668b-4abd-a2e9-530a809ab67b" pod="tigera-operator/tigera-operator-747864d56d-ct4qh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:34.094794 kubelet[2718]: I0813 02:08:34.094729 2718 kubelet.go:2351] "Pod admission denied" podUID="b0749145-bdf7-4c04-8c59-337650613447" pod="tigera-operator/tigera-operator-747864d56d-cbtqm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:34.201648 kubelet[2718]: I0813 02:08:34.201433 2718 kubelet.go:2351] "Pod admission denied" podUID="c9ce05df-8e69-4a4a-a791-e3a075bc078d" pod="tigera-operator/tigera-operator-747864d56d-tdqgx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:34.244611 kubelet[2718]: I0813 02:08:34.244534 2718 kubelet.go:2351] "Pod admission denied" podUID="a7e81b15-dd04-4759-99a0-f2586475d847" pod="tigera-operator/tigera-operator-747864d56d-l76cn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:34.347006 kubelet[2718]: I0813 02:08:34.346836 2718 kubelet.go:2351] "Pod admission denied" podUID="b9180684-42a1-475b-8888-e49ce9872a49" pod="tigera-operator/tigera-operator-747864d56d-z2kb2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:34.459887 kubelet[2718]: I0813 02:08:34.459210 2718 kubelet.go:2351] "Pod admission denied" podUID="b98830d4-ba29-4edb-9a96-09376e5e1308" pod="tigera-operator/tigera-operator-747864d56d-hwsqg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:34.495158 kubelet[2718]: I0813 02:08:34.495111 2718 kubelet.go:2351] "Pod admission denied" podUID="149aaa02-eec7-406b-b783-a8824331e456" pod="tigera-operator/tigera-operator-747864d56d-px5fb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:34.596349 kubelet[2718]: I0813 02:08:34.595516 2718 kubelet.go:2351] "Pod admission denied" podUID="a6f358cd-301a-4a9e-965a-d6bfaf56f925" pod="tigera-operator/tigera-operator-747864d56d-gv8dw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:34.705126 kubelet[2718]: I0813 02:08:34.704623 2718 kubelet.go:2351] "Pod admission denied" podUID="d5e6733e-c129-4bd1-8aa5-3db41723939b" pod="tigera-operator/tigera-operator-747864d56d-dngll" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:34.739014 kubelet[2718]: I0813 02:08:34.738969 2718 kubelet.go:2351] "Pod admission denied" podUID="f6bc2816-510d-45be-a340-9eb6fd248b70" pod="tigera-operator/tigera-operator-747864d56d-ghb2g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:34.843831 kubelet[2718]: I0813 02:08:34.843570 2718 kubelet.go:2351] "Pod admission denied" podUID="5d089714-315b-4d40-884d-609205236512" pod="tigera-operator/tigera-operator-747864d56d-5gq8f" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:08:35.061287 kubelet[2718]: I0813 02:08:35.060693 2718 kubelet.go:2351] "Pod admission denied" podUID="dd7f99ad-3cfe-4067-8c8f-ef0752902bef" pod="tigera-operator/tigera-operator-747864d56d-59ndr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:35.143952 kubelet[2718]: I0813 02:08:35.143886 2718 kubelet.go:2351] "Pod admission denied" podUID="50cffb0e-34d0-42c0-979a-4e75e2ccc601" pod="tigera-operator/tigera-operator-747864d56d-q7hlc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:35.191289 kubelet[2718]: I0813 02:08:35.191244 2718 kubelet.go:2351] "Pod admission denied" podUID="9da97e00-6d39-4901-8be0-8d8de3e9575a" pod="tigera-operator/tigera-operator-747864d56d-cx9sx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:35.305801 kubelet[2718]: I0813 02:08:35.304621 2718 kubelet.go:2351] "Pod admission denied" podUID="c3315050-1430-416d-a9ba-12622de9f09b" pod="tigera-operator/tigera-operator-747864d56d-8kchx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:35.499076 kubelet[2718]: I0813 02:08:35.498576 2718 kubelet.go:2351] "Pod admission denied" podUID="b9baa96e-3c60-4703-81f5-6e1b54fa3a9c" pod="tigera-operator/tigera-operator-747864d56d-7jphb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:35.574174 kubelet[2718]: I0813 02:08:35.574138 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 02:08:35.574174 kubelet[2718]: I0813 02:08:35.574184 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 02:08:35.575563 kubelet[2718]: I0813 02:08:35.575547 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 02:08:35.590127 kubelet[2718]: I0813 02:08:35.589917 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 02:08:35.590127 kubelet[2718]: I0813 02:08:35.589997 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-668d6bf9bc-p5qmw","calico-system/calico-kube-controllers-7c47cf6bcb-c9c87","kube-system/coredns-668d6bf9bc-pw6gg","calico-system/calico-node-cdfxj","calico-system/csi-node-driver-r6mhv","calico-system/calico-typha-67c8447dcf-wsn77","kube-system/kube-controller-manager-172-236-122-171","kube-system/kube-proxy-s4bl4","kube-system/kube-apiserver-172-236-122-171","kube-system/kube-scheduler-172-236-122-171"] Aug 13 02:08:35.590127 kubelet[2718]: E0813 02:08:35.590023 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:08:35.590127 kubelet[2718]: E0813 02:08:35.590035 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:08:35.590127 kubelet[2718]: E0813 02:08:35.590041 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:08:35.590127 kubelet[2718]: E0813 02:08:35.590048 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-cdfxj" Aug 13 02:08:35.590127 kubelet[2718]: E0813 02:08:35.590054 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:08:35.590127 kubelet[2718]: E0813 02:08:35.590064 2718 eviction_manager.go:609] "Eviction 
manager: cannot evict a critical pod" pod="calico-system/calico-typha-67c8447dcf-wsn77" Aug 13 02:08:35.590127 kubelet[2718]: E0813 02:08:35.590072 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-122-171" Aug 13 02:08:35.590127 kubelet[2718]: E0813 02:08:35.590080 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-s4bl4" Aug 13 02:08:35.590127 kubelet[2718]: E0813 02:08:35.590088 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-122-171" Aug 13 02:08:35.590127 kubelet[2718]: E0813 02:08:35.590096 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-122-171" Aug 13 02:08:35.590127 kubelet[2718]: I0813 02:08:35.590106 2718 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 02:08:35.600762 kubelet[2718]: I0813 02:08:35.600728 2718 kubelet.go:2351] "Pod admission denied" podUID="1dfb6126-d63b-45ad-811c-1726e1f72063" pod="tigera-operator/tigera-operator-747864d56d-bmlvg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:35.650806 kubelet[2718]: I0813 02:08:35.650747 2718 kubelet.go:2351] "Pod admission denied" podUID="03364aa7-2d33-4a4f-bbe4-acade0419a51" pod="tigera-operator/tigera-operator-747864d56d-w6gjw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:35.744025 kubelet[2718]: I0813 02:08:35.743964 2718 kubelet.go:2351] "Pod admission denied" podUID="2aa42ded-91f6-4965-8238-3fa63931063a" pod="tigera-operator/tigera-operator-747864d56d-m9xv2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:35.847994 kubelet[2718]: I0813 02:08:35.847918 2718 kubelet.go:2351] "Pod admission denied" podUID="bdb47109-04a8-412f-a032-ea9e82777121" pod="tigera-operator/tigera-operator-747864d56d-j6cq4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:35.943474 kubelet[2718]: I0813 02:08:35.943421 2718 kubelet.go:2351] "Pod admission denied" podUID="da0c1273-5d49-4ccb-b9d7-40942f1cefcf" pod="tigera-operator/tigera-operator-747864d56d-7jb7r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:36.042717 kubelet[2718]: I0813 02:08:36.042657 2718 kubelet.go:2351] "Pod admission denied" podUID="32403612-3cd0-4934-ae4c-2a788ec0574c" pod="tigera-operator/tigera-operator-747864d56d-ql47g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:36.092612 kubelet[2718]: I0813 02:08:36.092545 2718 kubelet.go:2351] "Pod admission denied" podUID="75a2906f-26e2-4d8e-a293-99f12c22c34a" pod="tigera-operator/tigera-operator-747864d56d-k9452" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:36.191158 kubelet[2718]: I0813 02:08:36.190994 2718 kubelet.go:2351] "Pod admission denied" podUID="9b294366-a14c-44de-936c-2c1e618b348e" pod="tigera-operator/tigera-operator-747864d56d-vg286" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:36.289696 kubelet[2718]: I0813 02:08:36.289536 2718 kubelet.go:2351] "Pod admission denied" podUID="358ff572-1d53-40f6-8295-9ea7616f2955" pod="tigera-operator/tigera-operator-747864d56d-56bck" reason="Evicted" message="The node had condition: [DiskPressure]. 
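The eviction-manager block above shows why the node cannot recover on its own: ephemeral storage must be reclaimed, ten pods are ranked for eviction, but every one of them is critical (static control-plane pods plus system-node-critical/system-cluster-critical workloads), so the manager reports that it is unable to evict any pods. The following is a schematic sketch of that rank-then-filter step, not the kubelet's actual code, using a made-up Pod struct.

    // evict_sketch.go - schematic of "rank pods for eviction, skip critical ones".
    // This mirrors the shape of the log above, not the kubelet's real implementation.
    package main

    import (
        "fmt"
        "sort"
    )

    type Pod struct {
        Name           string
        EphemeralBytes uint64 // local ephemeral-storage usage
        Critical       bool   // static pod or system-(node|cluster)-critical priority
    }

    // rankAndEvict returns the pods that would actually be evicted.
    func rankAndEvict(pods []Pod) []Pod {
        // Heaviest local-storage consumers first (simplified ranking).
        sort.Slice(pods, func(i, j int) bool {
            return pods[i].EphemeralBytes > pods[j].EphemeralBytes
        })
        var victims []Pod
        for _, p := range pods {
            if p.Critical {
                fmt.Printf("cannot evict a critical pod: %s\n", p.Name)
                continue
            }
            victims = append(victims, p)
        }
        return victims
    }

    func main() {
        pods := []Pod{
            {"calico-system/calico-node-cdfxj", 200 << 20, true},
            {"kube-system/coredns-668d6bf9bc-p5qmw", 50 << 20, true},
            {"kube-system/kube-apiserver-172-236-122-171", 10 << 20, true},
        }
        if len(rankAndEvict(pods)) == 0 {
            fmt.Println("unable to evict any pods from the node")
        }
    }

Because nothing non-critical is left to evict, only image/container garbage collection or manual cleanup of the disk can clear the pressure.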
" Aug 13 02:08:36.391649 kubelet[2718]: I0813 02:08:36.391577 2718 kubelet.go:2351] "Pod admission denied" podUID="20041667-5854-4a22-9b23-dc9af8bc16bd" pod="tigera-operator/tigera-operator-747864d56d-zctwf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:36.490504 kubelet[2718]: I0813 02:08:36.490374 2718 kubelet.go:2351] "Pod admission denied" podUID="2cde40ae-5681-4bc4-a039-435a1401ead3" pod="tigera-operator/tigera-operator-747864d56d-dd574" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:36.540599 kubelet[2718]: I0813 02:08:36.540516 2718 kubelet.go:2351] "Pod admission denied" podUID="7abbec8d-7f9d-4bc8-961f-43444cd9ca1a" pod="tigera-operator/tigera-operator-747864d56d-b5bkk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:36.639480 kubelet[2718]: I0813 02:08:36.639419 2718 kubelet.go:2351] "Pod admission denied" podUID="f21bb123-2709-40e4-959d-e5f815cc904a" pod="tigera-operator/tigera-operator-747864d56d-6hdss" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:36.853338 kubelet[2718]: I0813 02:08:36.853285 2718 kubelet.go:2351] "Pod admission denied" podUID="008932b8-9c8e-4f44-98e5-94e769b7f925" pod="tigera-operator/tigera-operator-747864d56d-xpgnh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:36.944641 kubelet[2718]: I0813 02:08:36.944565 2718 kubelet.go:2351] "Pod admission denied" podUID="67092968-7b25-4e26-8e40-7c455f471fc2" pod="tigera-operator/tigera-operator-747864d56d-zhg54" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:37.042546 kubelet[2718]: I0813 02:08:37.042490 2718 kubelet.go:2351] "Pod admission denied" podUID="46c772d7-0380-47ba-9d32-c69311b030b6" pod="tigera-operator/tigera-operator-747864d56d-2745z" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:37.143441 kubelet[2718]: I0813 02:08:37.143279 2718 kubelet.go:2351] "Pod admission denied" podUID="427cbfa2-ea91-4b3b-895e-96c02fbcc742" pod="tigera-operator/tigera-operator-747864d56d-hbj92" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:37.241904 kubelet[2718]: I0813 02:08:37.241845 2718 kubelet.go:2351] "Pod admission denied" podUID="a759b52f-467a-46b0-98a8-5291a86f2ee3" pod="tigera-operator/tigera-operator-747864d56d-78slq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:37.340576 kubelet[2718]: I0813 02:08:37.340519 2718 kubelet.go:2351] "Pod admission denied" podUID="ea6cf28b-6c6c-4114-ac0d-51490bf02606" pod="tigera-operator/tigera-operator-747864d56d-qcz5p" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:37.440909 kubelet[2718]: I0813 02:08:37.440172 2718 kubelet.go:2351] "Pod admission denied" podUID="998be2e9-57de-40ef-9e79-b6250aa4ddd9" pod="tigera-operator/tigera-operator-747864d56d-w42f5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:37.543517 kubelet[2718]: I0813 02:08:37.543447 2718 kubelet.go:2351] "Pod admission denied" podUID="e816b750-15cf-41c9-b54e-f4d2de362dd4" pod="tigera-operator/tigera-operator-747864d56d-85mmx" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:08:37.640545 kubelet[2718]: I0813 02:08:37.640482 2718 kubelet.go:2351] "Pod admission denied" podUID="f03bb791-40a4-4e4f-8e52-417fee2bf3c6" pod="tigera-operator/tigera-operator-747864d56d-mtstr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:37.849612 kubelet[2718]: I0813 02:08:37.848910 2718 kubelet.go:2351] "Pod admission denied" podUID="e74272a7-148f-4729-babb-617ea96e8441" pod="tigera-operator/tigera-operator-747864d56d-rkmg5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:37.941772 kubelet[2718]: I0813 02:08:37.941718 2718 kubelet.go:2351] "Pod admission denied" podUID="3db103b0-d3d3-479e-a07f-90e4e7685835" pod="tigera-operator/tigera-operator-747864d56d-999s7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:38.041461 kubelet[2718]: I0813 02:08:38.041406 2718 kubelet.go:2351] "Pod admission denied" podUID="2a76b01b-54a0-4040-9d82-9a6bbd57cfbf" pod="tigera-operator/tigera-operator-747864d56d-qsk72" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:38.143738 kubelet[2718]: I0813 02:08:38.143573 2718 kubelet.go:2351] "Pod admission denied" podUID="224b6372-1cfe-479b-a62d-fa621c6104fe" pod="tigera-operator/tigera-operator-747864d56d-k2nxf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:38.189861 kubelet[2718]: I0813 02:08:38.189808 2718 kubelet.go:2351] "Pod admission denied" podUID="b44d61a0-6716-40dd-a52f-b07575526e46" pod="tigera-operator/tigera-operator-747864d56d-tlg57" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:38.294615 kubelet[2718]: I0813 02:08:38.294368 2718 kubelet.go:2351] "Pod admission denied" podUID="f56dd82e-9b20-4cf2-9c3d-d24e74541d1c" pod="tigera-operator/tigera-operator-747864d56d-vxjpc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:38.391789 kubelet[2718]: I0813 02:08:38.391743 2718 kubelet.go:2351] "Pod admission denied" podUID="70f8504d-628b-48bc-acc1-9a5b524e9fc7" pod="tigera-operator/tigera-operator-747864d56d-88lvd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:38.492534 kubelet[2718]: I0813 02:08:38.491982 2718 kubelet.go:2351] "Pod admission denied" podUID="39427d5a-0347-4fee-9325-4c5a794bf9d2" pod="tigera-operator/tigera-operator-747864d56d-6wh5b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:38.591866 kubelet[2718]: I0813 02:08:38.591808 2718 kubelet.go:2351] "Pod admission denied" podUID="fa84fbf6-7c52-4228-85f1-cd7c9055f077" pod="tigera-operator/tigera-operator-747864d56d-6dcw4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:38.694656 kubelet[2718]: I0813 02:08:38.694599 2718 kubelet.go:2351] "Pod admission denied" podUID="c0d27d31-3833-4869-886e-4568f4bc11f6" pod="tigera-operator/tigera-operator-747864d56d-5mlds" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:38.793780 kubelet[2718]: I0813 02:08:38.793719 2718 kubelet.go:2351] "Pod admission denied" podUID="310b32f8-8b1a-4a68-92e8-e97ac4e7de5f" pod="tigera-operator/tigera-operator-747864d56d-gb4wq" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:08:38.893057 kubelet[2718]: I0813 02:08:38.892990 2718 kubelet.go:2351] "Pod admission denied" podUID="2191061c-7094-449b-89c6-249f4109f850" pod="tigera-operator/tigera-operator-747864d56d-qldt9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:38.936452 kubelet[2718]: E0813 02:08:38.936187 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:08:38.936452 kubelet[2718]: E0813 02:08:38.936199 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:08:38.937407 containerd[1542]: time="2025-08-13T02:08:38.937126023Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p5qmw,Uid:26fd4059-1e9c-49a2-9bd9-181be9ad7bcb,Namespace:kube-system,Attempt:0,}" Aug 13 02:08:38.937407 containerd[1542]: time="2025-08-13T02:08:38.937234132Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pw6gg,Uid:e73a6876-bbb3-4e11-8a33-1945cf27a944,Namespace:kube-system,Attempt:0,}" Aug 13 02:08:39.009343 kubelet[2718]: I0813 02:08:39.009295 2718 kubelet.go:2351] "Pod admission denied" podUID="9c07d671-8d35-43e2-b38d-7022e941933e" pod="tigera-operator/tigera-operator-747864d56d-49lfx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:39.040649 containerd[1542]: time="2025-08-13T02:08:39.038102340Z" level=error msg="Failed to destroy network for sandbox \"2df5b064c9f5b3c13f6ba0ec095c823987444fda3c718a428cd6da4062bbcae5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:39.042546 containerd[1542]: time="2025-08-13T02:08:39.042312382Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p5qmw,Uid:26fd4059-1e9c-49a2-9bd9-181be9ad7bcb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2df5b064c9f5b3c13f6ba0ec095c823987444fda3c718a428cd6da4062bbcae5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:39.042729 systemd[1]: run-netns-cni\x2de327187b\x2d48f7\x2dc5b4\x2dd259\x2d46362a756ef2.mount: Deactivated successfully. 
Aug 13 02:08:39.044673 kubelet[2718]: E0813 02:08:39.043748 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2df5b064c9f5b3c13f6ba0ec095c823987444fda3c718a428cd6da4062bbcae5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:39.044673 kubelet[2718]: E0813 02:08:39.043806 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2df5b064c9f5b3c13f6ba0ec095c823987444fda3c718a428cd6da4062bbcae5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:08:39.044673 kubelet[2718]: E0813 02:08:39.043832 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2df5b064c9f5b3c13f6ba0ec095c823987444fda3c718a428cd6da4062bbcae5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:08:39.044673 kubelet[2718]: E0813 02:08:39.043869 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-p5qmw_kube-system(26fd4059-1e9c-49a2-9bd9-181be9ad7bcb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-p5qmw_kube-system(26fd4059-1e9c-49a2-9bd9-181be9ad7bcb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2df5b064c9f5b3c13f6ba0ec095c823987444fda3c718a428cd6da4062bbcae5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-p5qmw" podUID="26fd4059-1e9c-49a2-9bd9-181be9ad7bcb" Aug 13 02:08:39.057475 containerd[1542]: time="2025-08-13T02:08:39.057446945Z" level=error msg="Failed to destroy network for sandbox \"15b577184c2fc1aa85eeaef951ee279e005d75ed736e1327427eb3d545e9d8b5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:39.059781 systemd[1]: run-netns-cni\x2d515ecf04\x2d1e3a\x2df13e\x2d87be\x2d6d9304433bee.mount: Deactivated successfully. 
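Because the same two failures repeat for hundreds of entries, a summary is more useful than reading the journal linearly. The helper below is a hypothetical example, not an existing tool; it counts the log lines mentioning each failure in a saved copy of this journal, with the match strings taken from the messages above.

    // logsummary.go - count log lines mentioning each recurring failure.
    // Usage: go run logsummary.go < node.log
    package main

    import (
        "bufio"
        "fmt"
        "os"
        "strings"
    )

    func main() {
        counts := map[string]int{}
        sc := bufio.NewScanner(os.Stdin)
        sc.Buffer(make([]byte, 1024*1024), 1024*1024) // journal lines here are very long
        for sc.Scan() {
            line := sc.Text()
            // A line is counted once per category, even if the message repeats within it.
            switch {
            case strings.Contains(line, "Pod admission denied"):
                counts["admission denied (DiskPressure)"]++
            case strings.Contains(line, "stat /var/lib/calico/nodename"):
                counts["calico CNI: nodename missing"]++
            case strings.Contains(line, "no space left on device"):
                counts["image pull: no space left on device"]++
            case strings.Contains(line, "Nameserver limits exceeded"):
                counts["resolv.conf nameservers truncated"]++
            }
        }
        for msg, n := range counts {
            fmt.Printf("%6d  %s\n", n, msg)
        }
    }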
Aug 13 02:08:39.060003 containerd[1542]: time="2025-08-13T02:08:39.059917659Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pw6gg,Uid:e73a6876-bbb3-4e11-8a33-1945cf27a944,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"15b577184c2fc1aa85eeaef951ee279e005d75ed736e1327427eb3d545e9d8b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:39.060358 kubelet[2718]: E0813 02:08:39.060307 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15b577184c2fc1aa85eeaef951ee279e005d75ed736e1327427eb3d545e9d8b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:39.060414 kubelet[2718]: E0813 02:08:39.060380 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15b577184c2fc1aa85eeaef951ee279e005d75ed736e1327427eb3d545e9d8b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:08:39.060414 kubelet[2718]: E0813 02:08:39.060399 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15b577184c2fc1aa85eeaef951ee279e005d75ed736e1327427eb3d545e9d8b5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:08:39.060900 kubelet[2718]: E0813 02:08:39.060450 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-pw6gg_kube-system(e73a6876-bbb3-4e11-8a33-1945cf27a944)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-pw6gg_kube-system(e73a6876-bbb3-4e11-8a33-1945cf27a944)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"15b577184c2fc1aa85eeaef951ee279e005d75ed736e1327427eb3d545e9d8b5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-pw6gg" podUID="e73a6876-bbb3-4e11-8a33-1945cf27a944" Aug 13 02:08:39.095613 kubelet[2718]: I0813 02:08:39.094961 2718 kubelet.go:2351] "Pod admission denied" podUID="ae97e0c7-af18-4a0d-99bc-db7888f16d21" pod="tigera-operator/tigera-operator-747864d56d-mnd5j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:39.193206 kubelet[2718]: I0813 02:08:39.193154 2718 kubelet.go:2351] "Pod admission denied" podUID="6c4e0eb0-d095-4cb9-8ac9-bdc00021d0e7" pod="tigera-operator/tigera-operator-747864d56d-l4xnr" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:08:39.241864 kubelet[2718]: I0813 02:08:39.241808 2718 kubelet.go:2351] "Pod admission denied" podUID="846b9db6-0dc4-4c58-b4f7-e59b903b21bd" pod="tigera-operator/tigera-operator-747864d56d-57qcs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:39.344687 kubelet[2718]: I0813 02:08:39.344117 2718 kubelet.go:2351] "Pod admission denied" podUID="97e35296-6faa-46a0-8b84-5e173a49593c" pod="tigera-operator/tigera-operator-747864d56d-7qgcd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:39.443486 kubelet[2718]: I0813 02:08:39.443430 2718 kubelet.go:2351] "Pod admission denied" podUID="075ea4b9-3192-4783-87d7-612a11fe4c00" pod="tigera-operator/tigera-operator-747864d56d-6jcqr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:39.495172 kubelet[2718]: I0813 02:08:39.495128 2718 kubelet.go:2351] "Pod admission denied" podUID="e69ee56d-8ac0-41a5-903d-1bee836f159f" pod="tigera-operator/tigera-operator-747864d56d-wrmf4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:39.591942 kubelet[2718]: I0813 02:08:39.591865 2718 kubelet.go:2351] "Pod admission denied" podUID="b54f099a-f945-4202-bd93-6f247730cf5d" pod="tigera-operator/tigera-operator-747864d56d-r7p2k" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:39.789369 kubelet[2718]: I0813 02:08:39.789309 2718 kubelet.go:2351] "Pod admission denied" podUID="a0b2ed2d-265e-454d-826d-b48cbb89f92d" pod="tigera-operator/tigera-operator-747864d56d-v6wm8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:39.892471 kubelet[2718]: I0813 02:08:39.892401 2718 kubelet.go:2351] "Pod admission denied" podUID="13dd9d3e-f8ad-4737-8b80-68f3f6e0b115" pod="tigera-operator/tigera-operator-747864d56d-tdn22" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:39.994618 kubelet[2718]: I0813 02:08:39.993797 2718 kubelet.go:2351] "Pod admission denied" podUID="5fc55e51-7666-4b83-b955-c90f7b2c50d7" pod="tigera-operator/tigera-operator-747864d56d-zh25s" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:40.196572 kubelet[2718]: I0813 02:08:40.196416 2718 kubelet.go:2351] "Pod admission denied" podUID="f7d345cc-bd28-4034-9038-472e787495fd" pod="tigera-operator/tigera-operator-747864d56d-mt4xg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:40.295008 kubelet[2718]: I0813 02:08:40.294933 2718 kubelet.go:2351] "Pod admission denied" podUID="5e9fbdfc-6f0f-4718-8fde-df485a259d7f" pod="tigera-operator/tigera-operator-747864d56d-h9k4d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:40.342768 kubelet[2718]: I0813 02:08:40.342709 2718 kubelet.go:2351] "Pod admission denied" podUID="9ef76c3e-a9a3-4fcf-88d5-516cd778d5d4" pod="tigera-operator/tigera-operator-747864d56d-2jcz4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:40.440144 kubelet[2718]: I0813 02:08:40.440071 2718 kubelet.go:2351] "Pod admission denied" podUID="5b017bd1-879f-4ebc-94a4-1d46f16fa113" pod="tigera-operator/tigera-operator-747864d56d-q47b2" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:08:40.646449 kubelet[2718]: I0813 02:08:40.646364 2718 kubelet.go:2351] "Pod admission denied" podUID="a24da4c1-44d2-46a9-b880-a08e1bcba75b" pod="tigera-operator/tigera-operator-747864d56d-s874w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:40.748959 kubelet[2718]: I0813 02:08:40.748883 2718 kubelet.go:2351] "Pod admission denied" podUID="07e14708-85aa-4134-aac7-445210d7a7f7" pod="tigera-operator/tigera-operator-747864d56d-qmxgz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:40.853730 kubelet[2718]: I0813 02:08:40.853653 2718 kubelet.go:2351] "Pod admission denied" podUID="8ce50a18-911b-4d59-9a3f-d6f5c728d9af" pod="tigera-operator/tigera-operator-747864d56d-wrphj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:40.936612 kubelet[2718]: E0813 02:08:40.935978 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:08:40.946321 kubelet[2718]: I0813 02:08:40.946284 2718 kubelet.go:2351] "Pod admission denied" podUID="c926e46e-a15b-4dfd-93be-86e595042380" pod="tigera-operator/tigera-operator-747864d56d-j4n2w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:41.049615 kubelet[2718]: I0813 02:08:41.049044 2718 kubelet.go:2351] "Pod admission denied" podUID="8710f955-ca29-46b5-a5d1-045f1c67cbcd" pod="tigera-operator/tigera-operator-747864d56d-kjdth" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:41.145819 kubelet[2718]: I0813 02:08:41.145753 2718 kubelet.go:2351] "Pod admission denied" podUID="09e91cf9-859a-4447-b4ca-bb193f9ece94" pod="tigera-operator/tigera-operator-747864d56d-jdbtf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:41.245411 kubelet[2718]: I0813 02:08:41.244389 2718 kubelet.go:2351] "Pod admission denied" podUID="294f7cc2-f794-4b4a-be95-b99a51253efa" pod="tigera-operator/tigera-operator-747864d56d-h9mwr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:41.348388 kubelet[2718]: I0813 02:08:41.348335 2718 kubelet.go:2351] "Pod admission denied" podUID="530ae1d4-a35e-48a5-a7bc-1fa22864f2a4" pod="tigera-operator/tigera-operator-747864d56d-shsh2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:41.392356 kubelet[2718]: I0813 02:08:41.392295 2718 kubelet.go:2351] "Pod admission denied" podUID="ab66d3fe-a06d-45d3-832e-fbcf9dec28e1" pod="tigera-operator/tigera-operator-747864d56d-fnq48" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:41.499809 kubelet[2718]: I0813 02:08:41.499654 2718 kubelet.go:2351] "Pod admission denied" podUID="de3337cf-588e-4edf-9a0e-2eba6c1e4690" pod="tigera-operator/tigera-operator-747864d56d-f2v8k" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:41.597207 kubelet[2718]: I0813 02:08:41.597132 2718 kubelet.go:2351] "Pod admission denied" podUID="770c898f-38be-4c6e-ae2f-0f523b9e94a9" pod="tigera-operator/tigera-operator-747864d56d-sb2rt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:41.695330 kubelet[2718]: I0813 02:08:41.695268 2718 kubelet.go:2351] "Pod admission denied" podUID="0b5e0362-0ee8-4717-85b4-99c223085310" pod="tigera-operator/tigera-operator-747864d56d-66sks" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:08:41.811611 kubelet[2718]: I0813 02:08:41.810131 2718 kubelet.go:2351] "Pod admission denied" podUID="1ae35e35-d8b2-4b36-8922-d811a4df2881" pod="tigera-operator/tigera-operator-747864d56d-rkvpr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:41.897939 kubelet[2718]: I0813 02:08:41.897887 2718 kubelet.go:2351] "Pod admission denied" podUID="0bf08676-c93c-41ff-8d70-de8bc2674832" pod="tigera-operator/tigera-operator-747864d56d-6q7vn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:41.936380 containerd[1542]: time="2025-08-13T02:08:41.936093880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c47cf6bcb-c9c87,Uid:8d84c12b-cfd9-49af-bb2e-a10173126a4c,Namespace:calico-system,Attempt:0,}" Aug 13 02:08:42.008711 containerd[1542]: time="2025-08-13T02:08:42.008651123Z" level=error msg="Failed to destroy network for sandbox \"72444d6cdac4d18dbf9ac8f18d2f8ca56104d88aa22cabcaacb2b77578bd44f5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:42.011113 systemd[1]: run-netns-cni\x2da084e661\x2d1a9b\x2de37d\x2d1cb7\x2dcdce40852954.mount: Deactivated successfully. Aug 13 02:08:42.012331 containerd[1542]: time="2025-08-13T02:08:42.012275311Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c47cf6bcb-c9c87,Uid:8d84c12b-cfd9-49af-bb2e-a10173126a4c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"72444d6cdac4d18dbf9ac8f18d2f8ca56104d88aa22cabcaacb2b77578bd44f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:42.013785 kubelet[2718]: E0813 02:08:42.013695 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72444d6cdac4d18dbf9ac8f18d2f8ca56104d88aa22cabcaacb2b77578bd44f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:42.013785 kubelet[2718]: E0813 02:08:42.013751 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72444d6cdac4d18dbf9ac8f18d2f8ca56104d88aa22cabcaacb2b77578bd44f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:08:42.013879 kubelet[2718]: E0813 02:08:42.013857 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"72444d6cdac4d18dbf9ac8f18d2f8ca56104d88aa22cabcaacb2b77578bd44f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:08:42.014033 kubelet[2718]: E0813 02:08:42.013905 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-7c47cf6bcb-c9c87_calico-system(8d84c12b-cfd9-49af-bb2e-a10173126a4c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7c47cf6bcb-c9c87_calico-system(8d84c12b-cfd9-49af-bb2e-a10173126a4c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"72444d6cdac4d18dbf9ac8f18d2f8ca56104d88aa22cabcaacb2b77578bd44f5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" podUID="8d84c12b-cfd9-49af-bb2e-a10173126a4c" Aug 13 02:08:42.015327 kubelet[2718]: I0813 02:08:42.014898 2718 kubelet.go:2351] "Pod admission denied" podUID="fec24fd6-d66a-4124-937e-882881fbf9ba" pod="tigera-operator/tigera-operator-747864d56d-cvjvh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:42.095054 kubelet[2718]: I0813 02:08:42.094341 2718 kubelet.go:2351] "Pod admission denied" podUID="f4d2fde5-2253-4682-a003-11a4f67f978c" pod="tigera-operator/tigera-operator-747864d56d-8qzxb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:42.193346 kubelet[2718]: I0813 02:08:42.193285 2718 kubelet.go:2351] "Pod admission denied" podUID="88c38dd6-43aa-429b-be3d-50b414c9b931" pod="tigera-operator/tigera-operator-747864d56d-vvtvg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:42.300613 kubelet[2718]: I0813 02:08:42.300133 2718 kubelet.go:2351] "Pod admission denied" podUID="dc0fdad7-ef89-4ff9-a9be-6e1b49b94d24" pod="tigera-operator/tigera-operator-747864d56d-wllr9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:42.395140 kubelet[2718]: I0813 02:08:42.394989 2718 kubelet.go:2351] "Pod admission denied" podUID="4ff15566-3c35-4702-bc50-973d1260c674" pod="tigera-operator/tigera-operator-747864d56d-gtf2q" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:42.497160 kubelet[2718]: I0813 02:08:42.497091 2718 kubelet.go:2351] "Pod admission denied" podUID="a03f24e6-b34e-48cd-adef-f2e95b46a2b3" pod="tigera-operator/tigera-operator-747864d56d-pf7nd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:42.596706 kubelet[2718]: I0813 02:08:42.596643 2718 kubelet.go:2351] "Pod admission denied" podUID="6a600140-8915-4257-a471-92a9042b3076" pod="tigera-operator/tigera-operator-747864d56d-2fszj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:42.642410 kubelet[2718]: I0813 02:08:42.642356 2718 kubelet.go:2351] "Pod admission denied" podUID="7585861b-05d0-4d24-b849-896b05388913" pod="tigera-operator/tigera-operator-747864d56d-sn9zt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:42.753853 kubelet[2718]: I0813 02:08:42.752417 2718 kubelet.go:2351] "Pod admission denied" podUID="618c09e1-6f1a-484a-ac78-93e6e137e4fb" pod="tigera-operator/tigera-operator-747864d56d-zwblh" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:08:42.936583 containerd[1542]: time="2025-08-13T02:08:42.936523406Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r6mhv,Uid:b671177d-3397-4938-853c-0cced3d0e9f5,Namespace:calico-system,Attempt:0,}" Aug 13 02:08:42.937422 containerd[1542]: time="2025-08-13T02:08:42.937275662Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 02:08:42.951009 kubelet[2718]: I0813 02:08:42.950974 2718 kubelet.go:2351] "Pod admission denied" podUID="ac63ff43-8cdf-4875-a527-1bb881c3ff2a" pod="tigera-operator/tigera-operator-747864d56d-lxfgk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:43.015719 containerd[1542]: time="2025-08-13T02:08:43.015352715Z" level=error msg="Failed to destroy network for sandbox \"90554d8ff498636b563574f54dc60ce1738e2ee7bdcba6eee8ae8fbfaa8428fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:43.018340 containerd[1542]: time="2025-08-13T02:08:43.018281137Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r6mhv,Uid:b671177d-3397-4938-853c-0cced3d0e9f5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"90554d8ff498636b563574f54dc60ce1738e2ee7bdcba6eee8ae8fbfaa8428fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:43.019443 systemd[1]: run-netns-cni\x2db3be96f4\x2d2d33\x2d0d1f\x2dd4a9\x2d958da10c7a1e.mount: Deactivated successfully. Aug 13 02:08:43.021982 kubelet[2718]: E0813 02:08:43.021304 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90554d8ff498636b563574f54dc60ce1738e2ee7bdcba6eee8ae8fbfaa8428fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:43.021982 kubelet[2718]: E0813 02:08:43.021358 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90554d8ff498636b563574f54dc60ce1738e2ee7bdcba6eee8ae8fbfaa8428fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:08:43.021982 kubelet[2718]: E0813 02:08:43.021380 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90554d8ff498636b563574f54dc60ce1738e2ee7bdcba6eee8ae8fbfaa8428fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:08:43.021982 kubelet[2718]: E0813 02:08:43.021426 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r6mhv_calico-system(b671177d-3397-4938-853c-0cced3d0e9f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-r6mhv_calico-system(b671177d-3397-4938-853c-0cced3d0e9f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90554d8ff498636b563574f54dc60ce1738e2ee7bdcba6eee8ae8fbfaa8428fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r6mhv" podUID="b671177d-3397-4938-853c-0cced3d0e9f5" Aug 13 02:08:43.056156 kubelet[2718]: I0813 02:08:43.056099 2718 kubelet.go:2351] "Pod admission denied" podUID="b75e00fa-1130-4e2c-8705-933ddd51ce38" pod="tigera-operator/tigera-operator-747864d56d-gkzcj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:43.106176 kubelet[2718]: I0813 02:08:43.106126 2718 kubelet.go:2351] "Pod admission denied" podUID="3521fd00-cf98-456b-8573-708f552bcd1b" pod="tigera-operator/tigera-operator-747864d56d-qmfbp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:43.193502 kubelet[2718]: I0813 02:08:43.193454 2718 kubelet.go:2351] "Pod admission denied" podUID="811c2c78-3d4a-42f8-a904-b33897c1f02b" pod="tigera-operator/tigera-operator-747864d56d-mw8vs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:43.294249 kubelet[2718]: I0813 02:08:43.294197 2718 kubelet.go:2351] "Pod admission denied" podUID="b1b85f22-40ce-4c8d-ad08-9ff7b932d3de" pod="tigera-operator/tigera-operator-747864d56d-ltldl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:43.416791 kubelet[2718]: I0813 02:08:43.416732 2718 kubelet.go:2351] "Pod admission denied" podUID="10114dfd-c776-4af3-8dd2-864f967ed20f" pod="tigera-operator/tigera-operator-747864d56d-vdjpf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:43.605446 kubelet[2718]: I0813 02:08:43.605181 2718 kubelet.go:2351] "Pod admission denied" podUID="ce6861b3-6364-41ad-9f90-80c337c4131c" pod="tigera-operator/tigera-operator-747864d56d-p4wxg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:43.696821 kubelet[2718]: I0813 02:08:43.696778 2718 kubelet.go:2351] "Pod admission denied" podUID="85af8191-7a18-4cc1-9fe7-34a01d16a549" pod="tigera-operator/tigera-operator-747864d56d-zrsgh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:43.767201 kubelet[2718]: I0813 02:08:43.767142 2718 kubelet.go:2351] "Pod admission denied" podUID="8d080c0a-2c3d-437e-83ec-72068ff1e431" pod="tigera-operator/tigera-operator-747864d56d-ds7hq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:43.904961 kubelet[2718]: I0813 02:08:43.904415 2718 kubelet.go:2351] "Pod admission denied" podUID="45f9864e-bf3b-4d61-bdfa-59a6a80d4c6a" pod="tigera-operator/tigera-operator-747864d56d-cnrtf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:43.985135 kubelet[2718]: I0813 02:08:43.985077 2718 kubelet.go:2351] "Pod admission denied" podUID="6bd1b0e5-73f6-4e26-9279-061dd24ee310" pod="tigera-operator/tigera-operator-747864d56d-hq27k" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:44.102580 kubelet[2718]: I0813 02:08:44.102526 2718 kubelet.go:2351] "Pod admission denied" podUID="22478b8d-4559-4ce1-a8f2-621a7e1ce9be" pod="tigera-operator/tigera-operator-747864d56d-9rmcx" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:08:44.201551 kubelet[2718]: I0813 02:08:44.200921 2718 kubelet.go:2351] "Pod admission denied" podUID="b20d2062-b109-41a3-a898-2000b5cada5a" pod="tigera-operator/tigera-operator-747864d56d-x6ccb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:44.307803 kubelet[2718]: I0813 02:08:44.306761 2718 kubelet.go:2351] "Pod admission denied" podUID="70121eda-87c9-407e-be00-f728b615d4e1" pod="tigera-operator/tigera-operator-747864d56d-k4vm2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:44.424992 kubelet[2718]: I0813 02:08:44.424889 2718 kubelet.go:2351] "Pod admission denied" podUID="0e5e4002-6dd9-4f7d-84d8-759cfd62c90d" pod="tigera-operator/tigera-operator-747864d56d-tpdp7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:44.502579 kubelet[2718]: I0813 02:08:44.502308 2718 kubelet.go:2351] "Pod admission denied" podUID="2d7a933f-230e-4ef6-9b17-edf65850b0b6" pod="tigera-operator/tigera-operator-747864d56d-jx2sl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:44.636423 kubelet[2718]: I0813 02:08:44.635764 2718 kubelet.go:2351] "Pod admission denied" podUID="eb0e16db-a7d3-445e-9301-a607ae5e745c" pod="tigera-operator/tigera-operator-747864d56d-2gps2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:44.888276 kubelet[2718]: I0813 02:08:44.886575 2718 kubelet.go:2351] "Pod admission denied" podUID="a259a5c8-805d-4b2c-ae82-bfe2536fc3fc" pod="tigera-operator/tigera-operator-747864d56d-trj5l" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:45.024897 kubelet[2718]: I0813 02:08:45.024834 2718 kubelet.go:2351] "Pod admission denied" podUID="da063300-ad27-4d0d-a32a-7072290fa66f" pod="tigera-operator/tigera-operator-747864d56d-sd44m" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:45.149611 kubelet[2718]: I0813 02:08:45.149090 2718 kubelet.go:2351] "Pod admission denied" podUID="20bd9978-f9af-4594-8dce-79b22e1a2e27" pod="tigera-operator/tigera-operator-747864d56d-vzcth" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:45.250482 kubelet[2718]: I0813 02:08:45.250429 2718 kubelet.go:2351] "Pod admission denied" podUID="28969d1c-c798-4dac-b30e-9fe1a66bd362" pod="tigera-operator/tigera-operator-747864d56d-zdfdc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:45.434659 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1264763250.mount: Deactivated successfully. 
Aug 13 02:08:45.438906 containerd[1542]: time="2025-08-13T02:08:45.438804283Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1264763250: write /var/lib/containerd/tmpmounts/containerd-mount1264763250/usr/bin/calico-node: no space left on device" Aug 13 02:08:45.438906 containerd[1542]: time="2025-08-13T02:08:45.438870122Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 02:08:45.439287 kubelet[2718]: E0813 02:08:45.438986 2718 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1264763250: write /var/lib/containerd/tmpmounts/containerd-mount1264763250/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 02:08:45.439287 kubelet[2718]: E0813 02:08:45.439025 2718 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1264763250: write /var/lib/containerd/tmpmounts/containerd-mount1264763250/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 02:08:45.439649 kubelet[2718]: E0813 02:08:45.439192 2718 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSGOLDMANESERVER,Value:goldmane.calico-system.svc:7443,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSFLUSHINTERVAL,Value:15,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:first-found,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.l
ock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-j884b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-cdfxj_calico-system(e8f51745-7382-4ead-96df-a31572ad4e1f): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1264763250: write /var/lib/containerd/tmpmounts/containerd-mount1264763250/usr/bin/calico-node: no space left on device" logger="UnhandledError" Aug 13 02:08:45.441231 kubelet[2718]: E0813 02:08:45.441190 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1264763250: write /var/lib/containerd/tmpmounts/containerd-mount1264763250/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-cdfxj" podUID="e8f51745-7382-4ead-96df-a31572ad4e1f" Aug 13 02:08:45.468608 kubelet[2718]: I0813 02:08:45.467375 2718 kubelet.go:2351] "Pod admission denied" podUID="d6ac99e9-6513-4566-abc4-4af1ef5570fe" pod="tigera-operator/tigera-operator-747864d56d-5b9mt" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:08:45.606225 kubelet[2718]: I0813 02:08:45.606176 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 02:08:45.606225 kubelet[2718]: I0813 02:08:45.606218 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 02:08:45.608131 kubelet[2718]: I0813 02:08:45.608114 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 02:08:45.610814 kubelet[2718]: I0813 02:08:45.610751 2718 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" size=18562039 runtimeHandler="" Aug 13 02:08:45.611324 containerd[1542]: time="2025-08-13T02:08:45.611253937Z" level=info msg="RemoveImage \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 02:08:45.612764 containerd[1542]: time="2025-08-13T02:08:45.612728398Z" level=info msg="ImageDelete event name:\"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 02:08:45.613428 containerd[1542]: time="2025-08-13T02:08:45.613388014Z" level=info msg="ImageDelete event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\"" Aug 13 02:08:45.613980 containerd[1542]: time="2025-08-13T02:08:45.613921121Z" level=info msg="RemoveImage \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" returns successfully" Aug 13 02:08:45.614208 containerd[1542]: time="2025-08-13T02:08:45.614036100Z" level=info msg="ImageDelete event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 02:08:45.614351 kubelet[2718]: I0813 02:08:45.614174 2718 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc" size=57680541 runtimeHandler="" Aug 13 02:08:45.614486 containerd[1542]: time="2025-08-13T02:08:45.614454888Z" level=info msg="RemoveImage \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Aug 13 02:08:45.615324 containerd[1542]: time="2025-08-13T02:08:45.615291693Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd:3.5.16-0\"" Aug 13 02:08:45.615934 containerd[1542]: time="2025-08-13T02:08:45.615895079Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\"" Aug 13 02:08:45.616291 containerd[1542]: time="2025-08-13T02:08:45.616254737Z" level=info msg="RemoveImage \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" returns successfully" Aug 13 02:08:45.616376 containerd[1542]: time="2025-08-13T02:08:45.616343826Z" level=info msg="ImageDelete event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Aug 13 02:08:45.629205 kubelet[2718]: I0813 02:08:45.629180 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 02:08:45.629322 kubelet[2718]: I0813 02:08:45.629296 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" 
pods=["kube-system/coredns-668d6bf9bc-p5qmw","calico-system/calico-kube-controllers-7c47cf6bcb-c9c87","kube-system/coredns-668d6bf9bc-pw6gg","calico-system/csi-node-driver-r6mhv","calico-system/calico-node-cdfxj","calico-system/calico-typha-67c8447dcf-wsn77","kube-system/kube-controller-manager-172-236-122-171","kube-system/kube-proxy-s4bl4","kube-system/kube-apiserver-172-236-122-171","kube-system/kube-scheduler-172-236-122-171"] Aug 13 02:08:45.629412 kubelet[2718]: E0813 02:08:45.629332 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:08:45.629412 kubelet[2718]: E0813 02:08:45.629345 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:08:45.629412 kubelet[2718]: E0813 02:08:45.629354 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:08:45.629412 kubelet[2718]: E0813 02:08:45.629362 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:08:45.629412 kubelet[2718]: E0813 02:08:45.629369 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-cdfxj" Aug 13 02:08:45.629412 kubelet[2718]: E0813 02:08:45.629379 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-67c8447dcf-wsn77" Aug 13 02:08:45.629412 kubelet[2718]: E0813 02:08:45.629395 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-122-171" Aug 13 02:08:45.629412 kubelet[2718]: E0813 02:08:45.629405 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-s4bl4" Aug 13 02:08:45.629412 kubelet[2718]: E0813 02:08:45.629414 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-122-171" Aug 13 02:08:45.629412 kubelet[2718]: E0813 02:08:45.629423 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-122-171" Aug 13 02:08:45.629696 kubelet[2718]: I0813 02:08:45.629433 2718 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 02:08:45.695758 kubelet[2718]: I0813 02:08:45.694881 2718 kubelet.go:2351] "Pod admission denied" podUID="1d978a14-f6ef-423c-b8ce-4a9fa41d99cd" pod="tigera-operator/tigera-operator-747864d56d-h4rgv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:45.790765 kubelet[2718]: I0813 02:08:45.790704 2718 kubelet.go:2351] "Pod admission denied" podUID="dec51941-21b0-4a2b-bcf8-b048c1eda0dd" pod="tigera-operator/tigera-operator-747864d56d-frhsw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:45.997091 kubelet[2718]: I0813 02:08:45.996940 2718 kubelet.go:2351] "Pod admission denied" podUID="a41ecb2c-147c-4fbe-bf0b-54699ddb24b5" pod="tigera-operator/tigera-operator-747864d56d-bnk9p" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:46.090773 kubelet[2718]: I0813 02:08:46.090720 2718 kubelet.go:2351] "Pod admission denied" podUID="ae55cbec-fe77-4fd1-9e46-bb2789f144e6" pod="tigera-operator/tigera-operator-747864d56d-mrmdg" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:08:46.201883 kubelet[2718]: I0813 02:08:46.201777 2718 kubelet.go:2351] "Pod admission denied" podUID="57842b9f-5f83-4df1-b784-de81073fe13f" pod="tigera-operator/tigera-operator-747864d56d-5r6jn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:46.290606 kubelet[2718]: I0813 02:08:46.290555 2718 kubelet.go:2351] "Pod admission denied" podUID="b6823516-d743-42b8-a483-82635ad93206" pod="tigera-operator/tigera-operator-747864d56d-sfdbd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:46.338204 kubelet[2718]: I0813 02:08:46.338151 2718 kubelet.go:2351] "Pod admission denied" podUID="5788aa26-55a1-464a-a8de-611611e3e2c4" pod="tigera-operator/tigera-operator-747864d56d-x4cr5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:46.444407 kubelet[2718]: I0813 02:08:46.444343 2718 kubelet.go:2351] "Pod admission denied" podUID="08439a6f-0627-48cc-a41d-e4d384119355" pod="tigera-operator/tigera-operator-747864d56d-j5p4k" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:46.654716 kubelet[2718]: I0813 02:08:46.653811 2718 kubelet.go:2351] "Pod admission denied" podUID="a045e5cc-55bf-4eec-a5e6-a8f638343332" pod="tigera-operator/tigera-operator-747864d56d-qctfd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:46.756618 kubelet[2718]: I0813 02:08:46.756217 2718 kubelet.go:2351] "Pod admission denied" podUID="f1b13f46-7922-4c94-98ef-cac1b4262a01" pod="tigera-operator/tigera-operator-747864d56d-x8hvt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:46.847802 kubelet[2718]: I0813 02:08:46.847720 2718 kubelet.go:2351] "Pod admission denied" podUID="9dc707ed-5427-4fa4-a366-49ecb1c1aad5" pod="tigera-operator/tigera-operator-747864d56d-524w2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:46.947582 kubelet[2718]: I0813 02:08:46.947416 2718 kubelet.go:2351] "Pod admission denied" podUID="8cfda47f-bc26-479c-8ec4-005256b719c1" pod="tigera-operator/tigera-operator-747864d56d-s4k7j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:47.058128 kubelet[2718]: I0813 02:08:47.057727 2718 kubelet.go:2351] "Pod admission denied" podUID="826461c5-7586-4020-b0c5-d0a839bdafb4" pod="tigera-operator/tigera-operator-747864d56d-f79lr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:47.144538 kubelet[2718]: I0813 02:08:47.144478 2718 kubelet.go:2351] "Pod admission denied" podUID="8c387fe5-6a5f-4602-8ee2-e982592ec5f3" pod="tigera-operator/tigera-operator-747864d56d-8mxvt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:47.245854 kubelet[2718]: I0813 02:08:47.245216 2718 kubelet.go:2351] "Pod admission denied" podUID="6aaca913-1997-4a4c-8bff-8a6f0a8d8678" pod="tigera-operator/tigera-operator-747864d56d-4sd5c" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:47.460618 kubelet[2718]: I0813 02:08:47.459545 2718 kubelet.go:2351] "Pod admission denied" podUID="1ad5887a-7c8e-4e5b-ac52-be83a8028f9a" pod="tigera-operator/tigera-operator-747864d56d-5554s" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 02:08:47.547540 kubelet[2718]: I0813 02:08:47.547477 2718 kubelet.go:2351] "Pod admission denied" podUID="70994df1-46bc-49e3-9a48-759bb591e9a2" pod="tigera-operator/tigera-operator-747864d56d-brf9c" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:47.645716 kubelet[2718]: I0813 02:08:47.645654 2718 kubelet.go:2351] "Pod admission denied" podUID="15101ba1-4cda-45e8-b275-f41741892a34" pod="tigera-operator/tigera-operator-747864d56d-6w72k" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:47.748454 kubelet[2718]: I0813 02:08:47.748384 2718 kubelet.go:2351] "Pod admission denied" podUID="77502488-45c8-4aba-9dfe-2d6e06fa109f" pod="tigera-operator/tigera-operator-747864d56d-q9499" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:47.795092 kubelet[2718]: I0813 02:08:47.795026 2718 kubelet.go:2351] "Pod admission denied" podUID="6a4f5611-4a9f-4539-9f45-4499fa5f437a" pod="tigera-operator/tigera-operator-747864d56d-pbxh5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:47.907836 kubelet[2718]: I0813 02:08:47.907312 2718 kubelet.go:2351] "Pod admission denied" podUID="5d76294a-b25c-43f5-b93e-d286fb62896c" pod="tigera-operator/tigera-operator-747864d56d-ft5f5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:48.099148 kubelet[2718]: I0813 02:08:48.099094 2718 kubelet.go:2351] "Pod admission denied" podUID="d7a5b9de-23c5-458a-9b80-623cafb769d5" pod="tigera-operator/tigera-operator-747864d56d-v2hxv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:48.198931 kubelet[2718]: I0813 02:08:48.198773 2718 kubelet.go:2351] "Pod admission denied" podUID="d6413c99-f252-4c9f-a1f5-d130e00964f2" pod="tigera-operator/tigera-operator-747864d56d-6sxct" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:48.296539 kubelet[2718]: I0813 02:08:48.296458 2718 kubelet.go:2351] "Pod admission denied" podUID="6a8f4f66-9701-4e1b-bbc4-f705470da8f6" pod="tigera-operator/tigera-operator-747864d56d-bvbk8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:48.498009 kubelet[2718]: I0813 02:08:48.497822 2718 kubelet.go:2351] "Pod admission denied" podUID="57774c21-7c93-42e9-ae71-f7cac4a84e31" pod="tigera-operator/tigera-operator-747864d56d-whbxk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:48.610212 kubelet[2718]: I0813 02:08:48.610158 2718 kubelet.go:2351] "Pod admission denied" podUID="4a149f9c-1ad5-475b-91b4-c35bfb21d237" pod="tigera-operator/tigera-operator-747864d56d-54c5q" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:48.696860 kubelet[2718]: I0813 02:08:48.696795 2718 kubelet.go:2351] "Pod admission denied" podUID="47b420b5-ebb5-430c-bdb6-895a20b56708" pod="tigera-operator/tigera-operator-747864d56d-9tz7r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 02:08:49.317437 systemd[1]: Started sshd@10-172.236.122.171:22-147.75.109.163:34778.service - OpenSSH per-connection server daemon (147.75.109.163:34778). Aug 13 02:08:49.658046 sshd[4671]: Accepted publickey for core from 147.75.109.163 port 34778 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:08:49.660322 sshd-session[4671]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:08:49.665868 systemd-logind[1527]: New session 8 of user core. 
Aug 13 02:08:49.676703 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 13 02:08:49.979929 sshd[4673]: Connection closed by 147.75.109.163 port 34778 Aug 13 02:08:49.980895 sshd-session[4671]: pam_unix(sshd:session): session closed for user core Aug 13 02:08:49.986068 systemd[1]: sshd@10-172.236.122.171:22-147.75.109.163:34778.service: Deactivated successfully. Aug 13 02:08:49.986724 systemd-logind[1527]: Session 8 logged out. Waiting for processes to exit. Aug 13 02:08:49.988923 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 02:08:49.996792 systemd-logind[1527]: Removed session 8. Aug 13 02:08:51.935652 kubelet[2718]: E0813 02:08:51.935515 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:08:51.937097 containerd[1542]: time="2025-08-13T02:08:51.937043718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p5qmw,Uid:26fd4059-1e9c-49a2-9bd9-181be9ad7bcb,Namespace:kube-system,Attempt:0,}" Aug 13 02:08:52.002999 containerd[1542]: time="2025-08-13T02:08:52.002924579Z" level=error msg="Failed to destroy network for sandbox \"a331babf39799035287ce0e7ebf4e2f7cd5d68c965533ead7dc705a49e833d2d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:52.005417 systemd[1]: run-netns-cni\x2d0c20afee\x2d91f1\x2d7507\x2dcf6e\x2d5081b18df59e.mount: Deactivated successfully. Aug 13 02:08:52.006837 containerd[1542]: time="2025-08-13T02:08:52.006704707Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p5qmw,Uid:26fd4059-1e9c-49a2-9bd9-181be9ad7bcb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a331babf39799035287ce0e7ebf4e2f7cd5d68c965533ead7dc705a49e833d2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:52.007035 kubelet[2718]: E0813 02:08:52.006993 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a331babf39799035287ce0e7ebf4e2f7cd5d68c965533ead7dc705a49e833d2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:52.007096 kubelet[2718]: E0813 02:08:52.007062 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a331babf39799035287ce0e7ebf4e2f7cd5d68c965533ead7dc705a49e833d2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:08:52.007125 kubelet[2718]: E0813 02:08:52.007090 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a331babf39799035287ce0e7ebf4e2f7cd5d68c965533ead7dc705a49e833d2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:08:52.007172 kubelet[2718]: E0813 02:08:52.007140 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-p5qmw_kube-system(26fd4059-1e9c-49a2-9bd9-181be9ad7bcb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-p5qmw_kube-system(26fd4059-1e9c-49a2-9bd9-181be9ad7bcb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a331babf39799035287ce0e7ebf4e2f7cd5d68c965533ead7dc705a49e833d2d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-p5qmw" podUID="26fd4059-1e9c-49a2-9bd9-181be9ad7bcb" Aug 13 02:08:53.936337 kubelet[2718]: E0813 02:08:53.935996 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:08:53.937506 containerd[1542]: time="2025-08-13T02:08:53.937230029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pw6gg,Uid:e73a6876-bbb3-4e11-8a33-1945cf27a944,Namespace:kube-system,Attempt:0,}" Aug 13 02:08:53.981308 containerd[1542]: time="2025-08-13T02:08:53.981255199Z" level=error msg="Failed to destroy network for sandbox \"f013f61a4036ad138264d0a301fc82825218b8583369730d0d08ec536d00fa45\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:53.982793 containerd[1542]: time="2025-08-13T02:08:53.982759880Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pw6gg,Uid:e73a6876-bbb3-4e11-8a33-1945cf27a944,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f013f61a4036ad138264d0a301fc82825218b8583369730d0d08ec536d00fa45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:53.983129 kubelet[2718]: E0813 02:08:53.983099 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f013f61a4036ad138264d0a301fc82825218b8583369730d0d08ec536d00fa45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:53.983293 kubelet[2718]: E0813 02:08:53.983242 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f013f61a4036ad138264d0a301fc82825218b8583369730d0d08ec536d00fa45\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:08:53.983293 kubelet[2718]: E0813 02:08:53.983267 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f013f61a4036ad138264d0a301fc82825218b8583369730d0d08ec536d00fa45\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:08:53.983410 kubelet[2718]: E0813 02:08:53.983383 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-pw6gg_kube-system(e73a6876-bbb3-4e11-8a33-1945cf27a944)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-pw6gg_kube-system(e73a6876-bbb3-4e11-8a33-1945cf27a944)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f013f61a4036ad138264d0a301fc82825218b8583369730d0d08ec536d00fa45\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-pw6gg" podUID="e73a6876-bbb3-4e11-8a33-1945cf27a944" Aug 13 02:08:53.984190 systemd[1]: run-netns-cni\x2d6e217471\x2d953b\x2d8eb5\x2d281c\x2d499232a62d31.mount: Deactivated successfully. Aug 13 02:08:54.936724 containerd[1542]: time="2025-08-13T02:08:54.936675786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r6mhv,Uid:b671177d-3397-4938-853c-0cced3d0e9f5,Namespace:calico-system,Attempt:0,}" Aug 13 02:08:54.986865 containerd[1542]: time="2025-08-13T02:08:54.986805473Z" level=error msg="Failed to destroy network for sandbox \"3ad22bc833c2c23764e511ae590744a6d17434afa94d1577486c8fec25eccd2f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:54.989488 containerd[1542]: time="2025-08-13T02:08:54.988644263Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r6mhv,Uid:b671177d-3397-4938-853c-0cced3d0e9f5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ad22bc833c2c23764e511ae590744a6d17434afa94d1577486c8fec25eccd2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:54.989690 kubelet[2718]: E0813 02:08:54.989642 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ad22bc833c2c23764e511ae590744a6d17434afa94d1577486c8fec25eccd2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:54.991151 kubelet[2718]: E0813 02:08:54.989739 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ad22bc833c2c23764e511ae590744a6d17434afa94d1577486c8fec25eccd2f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:08:54.991151 kubelet[2718]: E0813 02:08:54.989764 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ad22bc833c2c23764e511ae590744a6d17434afa94d1577486c8fec25eccd2f\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:08:54.991151 kubelet[2718]: E0813 02:08:54.989833 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r6mhv_calico-system(b671177d-3397-4938-853c-0cced3d0e9f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-r6mhv_calico-system(b671177d-3397-4938-853c-0cced3d0e9f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ad22bc833c2c23764e511ae590744a6d17434afa94d1577486c8fec25eccd2f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r6mhv" podUID="b671177d-3397-4938-853c-0cced3d0e9f5" Aug 13 02:08:54.991535 systemd[1]: run-netns-cni\x2dfedbd4b1\x2dd2f9\x2d27d3\x2db912\x2d59c084977a82.mount: Deactivated successfully. Aug 13 02:08:55.042558 systemd[1]: Started sshd@11-172.236.122.171:22-147.75.109.163:34790.service - OpenSSH per-connection server daemon (147.75.109.163:34790). Aug 13 02:08:55.381883 sshd[4765]: Accepted publickey for core from 147.75.109.163 port 34790 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:08:55.384836 sshd-session[4765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:08:55.391415 systemd-logind[1527]: New session 9 of user core. Aug 13 02:08:55.394912 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 02:08:55.678482 sshd[4767]: Connection closed by 147.75.109.163 port 34790 Aug 13 02:08:55.679468 sshd-session[4765]: pam_unix(sshd:session): session closed for user core Aug 13 02:08:55.684507 systemd[1]: sshd@11-172.236.122.171:22-147.75.109.163:34790.service: Deactivated successfully. Aug 13 02:08:55.687273 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 02:08:55.689011 systemd-logind[1527]: Session 9 logged out. Waiting for processes to exit. Aug 13 02:08:55.690259 systemd-logind[1527]: Removed session 9. 
Aug 13 02:08:55.937107 containerd[1542]: time="2025-08-13T02:08:55.936578601Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c47cf6bcb-c9c87,Uid:8d84c12b-cfd9-49af-bb2e-a10173126a4c,Namespace:calico-system,Attempt:0,}" Aug 13 02:08:55.937213 kubelet[2718]: E0813 02:08:55.936711 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:08:55.997995 containerd[1542]: time="2025-08-13T02:08:55.997758729Z" level=error msg="Failed to destroy network for sandbox \"f1ef52bcf7daccec9d06c1cddc36d1d780571cab3648cb25eefca88b12b402eb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:55.999738 containerd[1542]: time="2025-08-13T02:08:55.999519159Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c47cf6bcb-c9c87,Uid:8d84c12b-cfd9-49af-bb2e-a10173126a4c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1ef52bcf7daccec9d06c1cddc36d1d780571cab3648cb25eefca88b12b402eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:56.000619 kubelet[2718]: E0813 02:08:56.000145 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1ef52bcf7daccec9d06c1cddc36d1d780571cab3648cb25eefca88b12b402eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:08:56.000619 kubelet[2718]: E0813 02:08:56.000199 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1ef52bcf7daccec9d06c1cddc36d1d780571cab3648cb25eefca88b12b402eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:08:56.000619 kubelet[2718]: E0813 02:08:56.000226 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1ef52bcf7daccec9d06c1cddc36d1d780571cab3648cb25eefca88b12b402eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:08:56.000619 kubelet[2718]: E0813 02:08:56.000277 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7c47cf6bcb-c9c87_calico-system(8d84c12b-cfd9-49af-bb2e-a10173126a4c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7c47cf6bcb-c9c87_calico-system(8d84c12b-cfd9-49af-bb2e-a10173126a4c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f1ef52bcf7daccec9d06c1cddc36d1d780571cab3648cb25eefca88b12b402eb\\\": plugin type=\\\"calico\\\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" podUID="8d84c12b-cfd9-49af-bb2e-a10173126a4c" Aug 13 02:08:56.003806 systemd[1]: run-netns-cni\x2dab430415\x2de5ed\x2d91f2\x2dff24\x2d251c01ade719.mount: Deactivated successfully. Aug 13 02:08:59.942918 kubelet[2718]: E0813 02:08:59.942816 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1264763250: write /var/lib/containerd/tmpmounts/containerd-mount1264763250/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-cdfxj" podUID="e8f51745-7382-4ead-96df-a31572ad4e1f" Aug 13 02:09:00.743133 systemd[1]: Started sshd@12-172.236.122.171:22-147.75.109.163:43924.service - OpenSSH per-connection server daemon (147.75.109.163:43924). Aug 13 02:09:01.078302 sshd[4807]: Accepted publickey for core from 147.75.109.163 port 43924 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:09:01.085848 sshd-session[4807]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:09:01.092278 systemd-logind[1527]: New session 10 of user core. Aug 13 02:09:01.096726 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 02:09:01.381122 sshd[4809]: Connection closed by 147.75.109.163 port 43924 Aug 13 02:09:01.382924 sshd-session[4807]: pam_unix(sshd:session): session closed for user core Aug 13 02:09:01.390418 systemd[1]: sshd@12-172.236.122.171:22-147.75.109.163:43924.service: Deactivated successfully. Aug 13 02:09:01.393721 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 02:09:01.395407 systemd-logind[1527]: Session 10 logged out. Waiting for processes to exit. Aug 13 02:09:01.397848 systemd-logind[1527]: Removed session 10. Aug 13 02:09:01.451810 systemd[1]: Started sshd@13-172.236.122.171:22-147.75.109.163:43934.service - OpenSSH per-connection server daemon (147.75.109.163:43934). Aug 13 02:09:01.792804 sshd[4821]: Accepted publickey for core from 147.75.109.163 port 43934 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:09:01.794419 sshd-session[4821]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:09:01.803376 systemd-logind[1527]: New session 11 of user core. Aug 13 02:09:01.806721 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 13 02:09:02.144453 sshd[4823]: Connection closed by 147.75.109.163 port 43934 Aug 13 02:09:02.145542 sshd-session[4821]: pam_unix(sshd:session): session closed for user core Aug 13 02:09:02.150581 systemd[1]: sshd@13-172.236.122.171:22-147.75.109.163:43934.service: Deactivated successfully. Aug 13 02:09:02.153162 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 02:09:02.154264 systemd-logind[1527]: Session 11 logged out. Waiting for processes to exit. Aug 13 02:09:02.157144 systemd-logind[1527]: Removed session 11. Aug 13 02:09:02.207054 systemd[1]: Started sshd@14-172.236.122.171:22-147.75.109.163:43950.service - OpenSSH per-connection server daemon (147.75.109.163:43950). 
Aug 13 02:09:02.559082 sshd[4833]: Accepted publickey for core from 147.75.109.163 port 43950 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:09:02.560855 sshd-session[4833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:09:02.567805 systemd-logind[1527]: New session 12 of user core. Aug 13 02:09:02.573909 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 13 02:09:02.643057 systemd[1]: Started sshd@15-172.236.122.171:22-165.154.201.122:35340.service - OpenSSH per-connection server daemon (165.154.201.122:35340). Aug 13 02:09:02.891302 sshd[4835]: Connection closed by 147.75.109.163 port 43950 Aug 13 02:09:02.891999 sshd-session[4833]: pam_unix(sshd:session): session closed for user core Aug 13 02:09:02.898849 systemd-logind[1527]: Session 12 logged out. Waiting for processes to exit. Aug 13 02:09:02.900026 systemd[1]: sshd@14-172.236.122.171:22-147.75.109.163:43950.service: Deactivated successfully. Aug 13 02:09:02.903483 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 02:09:02.906019 systemd-logind[1527]: Removed session 12. Aug 13 02:09:04.197426 sshd[4837]: Received disconnect from 165.154.201.122 port 35340:11: Bye Bye [preauth] Aug 13 02:09:04.197426 sshd[4837]: Disconnected from authenticating user root 165.154.201.122 port 35340 [preauth] Aug 13 02:09:04.200851 systemd[1]: sshd@15-172.236.122.171:22-165.154.201.122:35340.service: Deactivated successfully. Aug 13 02:09:06.935741 kubelet[2718]: E0813 02:09:06.935486 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:09:06.937490 containerd[1542]: time="2025-08-13T02:09:06.936139817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p5qmw,Uid:26fd4059-1e9c-49a2-9bd9-181be9ad7bcb,Namespace:kube-system,Attempt:0,}" Aug 13 02:09:06.937490 containerd[1542]: time="2025-08-13T02:09:06.936732473Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r6mhv,Uid:b671177d-3397-4938-853c-0cced3d0e9f5,Namespace:calico-system,Attempt:0,}" Aug 13 02:09:07.014342 containerd[1542]: time="2025-08-13T02:09:07.014134990Z" level=error msg="Failed to destroy network for sandbox \"9b7c5056947c5e3990b7d22bcdf33019b16d8b62075a45f1a37329c0200066b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:07.015757 containerd[1542]: time="2025-08-13T02:09:07.015686252Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r6mhv,Uid:b671177d-3397-4938-853c-0cced3d0e9f5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b7c5056947c5e3990b7d22bcdf33019b16d8b62075a45f1a37329c0200066b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:07.017027 kubelet[2718]: E0813 02:09:07.015965 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b7c5056947c5e3990b7d22bcdf33019b16d8b62075a45f1a37329c0200066b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:07.017027 kubelet[2718]: E0813 02:09:07.016021 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b7c5056947c5e3990b7d22bcdf33019b16d8b62075a45f1a37329c0200066b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:09:07.017027 kubelet[2718]: E0813 02:09:07.016044 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b7c5056947c5e3990b7d22bcdf33019b16d8b62075a45f1a37329c0200066b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:09:07.017027 kubelet[2718]: E0813 02:09:07.016091 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r6mhv_calico-system(b671177d-3397-4938-853c-0cced3d0e9f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-r6mhv_calico-system(b671177d-3397-4938-853c-0cced3d0e9f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9b7c5056947c5e3990b7d22bcdf33019b16d8b62075a45f1a37329c0200066b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r6mhv" podUID="b671177d-3397-4938-853c-0cced3d0e9f5" Aug 13 02:09:07.018353 systemd[1]: run-netns-cni\x2dc28ee524\x2d5a9f\x2d77f1\x2d8fa8\x2df10a93929922.mount: Deactivated successfully. 
Aug 13 02:09:07.027642 containerd[1542]: time="2025-08-13T02:09:07.027553740Z" level=error msg="Failed to destroy network for sandbox \"91e13a0a5d8661d7a0823248ecb8f0159cfc6ff62299da38b381c2ad21c7e8b4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:07.029435 containerd[1542]: time="2025-08-13T02:09:07.029360241Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p5qmw,Uid:26fd4059-1e9c-49a2-9bd9-181be9ad7bcb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"91e13a0a5d8661d7a0823248ecb8f0159cfc6ff62299da38b381c2ad21c7e8b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:07.029669 kubelet[2718]: E0813 02:09:07.029641 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91e13a0a5d8661d7a0823248ecb8f0159cfc6ff62299da38b381c2ad21c7e8b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:07.029863 kubelet[2718]: E0813 02:09:07.029780 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91e13a0a5d8661d7a0823248ecb8f0159cfc6ff62299da38b381c2ad21c7e8b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:09:07.029863 kubelet[2718]: E0813 02:09:07.029805 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91e13a0a5d8661d7a0823248ecb8f0159cfc6ff62299da38b381c2ad21c7e8b4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:09:07.030183 kubelet[2718]: E0813 02:09:07.029958 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-p5qmw_kube-system(26fd4059-1e9c-49a2-9bd9-181be9ad7bcb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-p5qmw_kube-system(26fd4059-1e9c-49a2-9bd9-181be9ad7bcb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"91e13a0a5d8661d7a0823248ecb8f0159cfc6ff62299da38b381c2ad21c7e8b4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-p5qmw" podUID="26fd4059-1e9c-49a2-9bd9-181be9ad7bcb" Aug 13 02:09:07.031246 systemd[1]: run-netns-cni\x2d12455e97\x2d8c7c\x2dd62c\x2ddc0d\x2df5f2d9588074.mount: Deactivated successfully. Aug 13 02:09:07.954887 systemd[1]: Started sshd@16-172.236.122.171:22-147.75.109.163:43960.service - OpenSSH per-connection server daemon (147.75.109.163:43960). 
Aug 13 02:09:08.284016 sshd[4909]: Accepted publickey for core from 147.75.109.163 port 43960 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:09:08.285510 sshd-session[4909]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:09:08.294014 systemd-logind[1527]: New session 13 of user core. Aug 13 02:09:08.298724 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 02:09:08.581895 sshd[4911]: Connection closed by 147.75.109.163 port 43960 Aug 13 02:09:08.582740 sshd-session[4909]: pam_unix(sshd:session): session closed for user core Aug 13 02:09:08.587620 systemd[1]: sshd@16-172.236.122.171:22-147.75.109.163:43960.service: Deactivated successfully. Aug 13 02:09:08.590096 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 02:09:08.591137 systemd-logind[1527]: Session 13 logged out. Waiting for processes to exit. Aug 13 02:09:08.592870 systemd-logind[1527]: Removed session 13. Aug 13 02:09:08.644357 systemd[1]: Started sshd@17-172.236.122.171:22-147.75.109.163:34688.service - OpenSSH per-connection server daemon (147.75.109.163:34688). Aug 13 02:09:08.938121 kubelet[2718]: E0813 02:09:08.937433 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:09:08.939494 containerd[1542]: time="2025-08-13T02:09:08.939383746Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pw6gg,Uid:e73a6876-bbb3-4e11-8a33-1945cf27a944,Namespace:kube-system,Attempt:0,}" Aug 13 02:09:08.983273 sshd[4922]: Accepted publickey for core from 147.75.109.163 port 34688 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:09:08.985673 sshd-session[4922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:09:08.998028 systemd-logind[1527]: New session 14 of user core. Aug 13 02:09:09.000885 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 13 02:09:09.054653 containerd[1542]: time="2025-08-13T02:09:09.054563243Z" level=error msg="Failed to destroy network for sandbox \"d75f24f56720bd091ee8dc2d2f36ee7269bc492e7b9ffe4f1823cab61b66596e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:09.057565 systemd[1]: run-netns-cni\x2dd57cf993\x2dc71d\x2d4394\x2dff0c\x2deec6c79a33d2.mount: Deactivated successfully. 
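
The kubelet dns.go:153 warnings that keep appearing mean the resolv.conf handed to pods lists more nameservers than the resolver limit of three, so only the three addresses shown are applied. A quick way to confirm that on the node is sketched below; /etc/resolv.conf is the default location and an assumption, since a kubelet started with --resolv-conf may read a different file.

```python
#!/usr/bin/env python3
"""Count nameserver entries in the resolv.conf the kubelet is using.

Sketch only: /etc/resolv.conf is the default path and an assumption here.
"""
RESOLV_CONF = "/etc/resolv.conf"
MAX_NAMESERVERS = 3  # resolver limit that triggers the kubelet warning above

with open(RESOLV_CONF) as f:
    nameservers = [line.split()[1] for line in f
                   if line.strip().startswith("nameserver") and len(line.split()) > 1]

print(f"{len(nameservers)} nameservers configured: {nameservers}")
if len(nameservers) > MAX_NAMESERVERS:
    print(f"kubelet will only apply the first {MAX_NAMESERVERS}: "
          f"{nameservers[:MAX_NAMESERVERS]}")
```
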
Aug 13 02:09:09.057907 containerd[1542]: time="2025-08-13T02:09:09.057853996Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pw6gg,Uid:e73a6876-bbb3-4e11-8a33-1945cf27a944,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d75f24f56720bd091ee8dc2d2f36ee7269bc492e7b9ffe4f1823cab61b66596e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:09.058202 kubelet[2718]: E0813 02:09:09.058161 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d75f24f56720bd091ee8dc2d2f36ee7269bc492e7b9ffe4f1823cab61b66596e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:09.058267 kubelet[2718]: E0813 02:09:09.058223 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d75f24f56720bd091ee8dc2d2f36ee7269bc492e7b9ffe4f1823cab61b66596e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:09:09.058267 kubelet[2718]: E0813 02:09:09.058252 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d75f24f56720bd091ee8dc2d2f36ee7269bc492e7b9ffe4f1823cab61b66596e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:09:09.059779 kubelet[2718]: E0813 02:09:09.058988 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-pw6gg_kube-system(e73a6876-bbb3-4e11-8a33-1945cf27a944)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-pw6gg_kube-system(e73a6876-bbb3-4e11-8a33-1945cf27a944)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d75f24f56720bd091ee8dc2d2f36ee7269bc492e7b9ffe4f1823cab61b66596e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-pw6gg" podUID="e73a6876-bbb3-4e11-8a33-1945cf27a944" Aug 13 02:09:09.540517 sshd[4944]: Connection closed by 147.75.109.163 port 34688 Aug 13 02:09:09.541140 sshd-session[4922]: pam_unix(sshd:session): session closed for user core Aug 13 02:09:09.547823 systemd-logind[1527]: Session 14 logged out. Waiting for processes to exit. Aug 13 02:09:09.548796 systemd[1]: sshd@17-172.236.122.171:22-147.75.109.163:34688.service: Deactivated successfully. Aug 13 02:09:09.552321 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 02:09:09.555070 systemd-logind[1527]: Removed session 14. Aug 13 02:09:09.606874 systemd[1]: Started sshd@18-172.236.122.171:22-147.75.109.163:34704.service - OpenSSH per-connection server daemon (147.75.109.163:34704). 
Aug 13 02:09:09.942428 kubelet[2718]: E0813 02:09:09.942292 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:09:09.943087 containerd[1542]: time="2025-08-13T02:09:09.942953416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c47cf6bcb-c9c87,Uid:8d84c12b-cfd9-49af-bb2e-a10173126a4c,Namespace:calico-system,Attempt:0,}" Aug 13 02:09:09.953632 sshd[4960]: Accepted publickey for core from 147.75.109.163 port 34704 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:09:09.955902 sshd-session[4960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:09:09.961114 systemd-logind[1527]: New session 15 of user core. Aug 13 02:09:09.968091 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 13 02:09:10.016642 containerd[1542]: time="2025-08-13T02:09:10.016563998Z" level=error msg="Failed to destroy network for sandbox \"a5b9cf00923c2d73cdeb56674edbcea0f783577d540f8fb3f0a2fb4799a28d65\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:10.018400 systemd[1]: run-netns-cni\x2d389899a9\x2d3164\x2d3659\x2de3fe\x2d84f4ec801f5e.mount: Deactivated successfully. Aug 13 02:09:10.023453 containerd[1542]: time="2025-08-13T02:09:10.023303444Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c47cf6bcb-c9c87,Uid:8d84c12b-cfd9-49af-bb2e-a10173126a4c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5b9cf00923c2d73cdeb56674edbcea0f783577d540f8fb3f0a2fb4799a28d65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:10.023901 kubelet[2718]: E0813 02:09:10.023834 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5b9cf00923c2d73cdeb56674edbcea0f783577d540f8fb3f0a2fb4799a28d65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:10.024224 kubelet[2718]: E0813 02:09:10.024148 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5b9cf00923c2d73cdeb56674edbcea0f783577d540f8fb3f0a2fb4799a28d65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:09:10.024224 kubelet[2718]: E0813 02:09:10.024188 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a5b9cf00923c2d73cdeb56674edbcea0f783577d540f8fb3f0a2fb4799a28d65\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:09:10.024376 
kubelet[2718]: E0813 02:09:10.024349 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7c47cf6bcb-c9c87_calico-system(8d84c12b-cfd9-49af-bb2e-a10173126a4c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7c47cf6bcb-c9c87_calico-system(8d84c12b-cfd9-49af-bb2e-a10173126a4c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a5b9cf00923c2d73cdeb56674edbcea0f783577d540f8fb3f0a2fb4799a28d65\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" podUID="8d84c12b-cfd9-49af-bb2e-a10173126a4c" Aug 13 02:09:10.749944 sshd[4972]: Connection closed by 147.75.109.163 port 34704 Aug 13 02:09:10.751689 sshd-session[4960]: pam_unix(sshd:session): session closed for user core Aug 13 02:09:10.760325 systemd-logind[1527]: Session 15 logged out. Waiting for processes to exit. Aug 13 02:09:10.762418 systemd[1]: sshd@18-172.236.122.171:22-147.75.109.163:34704.service: Deactivated successfully. Aug 13 02:09:10.765114 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 02:09:10.768169 systemd-logind[1527]: Removed session 15. Aug 13 02:09:10.808955 systemd[1]: Started sshd@19-172.236.122.171:22-147.75.109.163:34712.service - OpenSSH per-connection server daemon (147.75.109.163:34712). Aug 13 02:09:11.142841 sshd[5007]: Accepted publickey for core from 147.75.109.163 port 34712 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:09:11.144802 sshd-session[5007]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:09:11.150870 systemd-logind[1527]: New session 16 of user core. Aug 13 02:09:11.154720 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 13 02:09:11.554687 sshd[5009]: Connection closed by 147.75.109.163 port 34712 Aug 13 02:09:11.555461 sshd-session[5007]: pam_unix(sshd:session): session closed for user core Aug 13 02:09:11.560379 systemd-logind[1527]: Session 16 logged out. Waiting for processes to exit. Aug 13 02:09:11.561276 systemd[1]: sshd@19-172.236.122.171:22-147.75.109.163:34712.service: Deactivated successfully. Aug 13 02:09:11.564099 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 02:09:11.566626 systemd-logind[1527]: Removed session 16. Aug 13 02:09:11.621483 systemd[1]: Started sshd@20-172.236.122.171:22-147.75.109.163:34718.service - OpenSSH per-connection server daemon (147.75.109.163:34718). Aug 13 02:09:11.965752 sshd[5019]: Accepted publickey for core from 147.75.109.163 port 34718 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:09:11.967544 sshd-session[5019]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:09:11.976462 systemd-logind[1527]: New session 17 of user core. Aug 13 02:09:11.981724 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 13 02:09:12.275789 sshd[5021]: Connection closed by 147.75.109.163 port 34718 Aug 13 02:09:12.276479 sshd-session[5019]: pam_unix(sshd:session): session closed for user core Aug 13 02:09:12.282012 systemd[1]: sshd@20-172.236.122.171:22-147.75.109.163:34718.service: Deactivated successfully. Aug 13 02:09:12.284911 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 02:09:12.286188 systemd-logind[1527]: Session 17 logged out. 
Waiting for processes to exit. Aug 13 02:09:12.287867 systemd-logind[1527]: Removed session 17. Aug 13 02:09:14.936687 kubelet[2718]: E0813 02:09:14.936347 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1264763250: write /var/lib/containerd/tmpmounts/containerd-mount1264763250/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-cdfxj" podUID="e8f51745-7382-4ead-96df-a31572ad4e1f" Aug 13 02:09:16.935509 kubelet[2718]: E0813 02:09:16.935470 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:09:17.342231 systemd[1]: Started sshd@21-172.236.122.171:22-147.75.109.163:34724.service - OpenSSH per-connection server daemon (147.75.109.163:34724). Aug 13 02:09:17.681268 sshd[5033]: Accepted publickey for core from 147.75.109.163 port 34724 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:09:17.682636 sshd-session[5033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:09:17.687679 systemd-logind[1527]: New session 18 of user core. Aug 13 02:09:17.694712 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 13 02:09:17.938915 kubelet[2718]: E0813 02:09:17.938803 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:09:17.939633 containerd[1542]: time="2025-08-13T02:09:17.939566195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r6mhv,Uid:b671177d-3397-4938-853c-0cced3d0e9f5,Namespace:calico-system,Attempt:0,}" Aug 13 02:09:17.983912 sshd[5035]: Connection closed by 147.75.109.163 port 34724 Aug 13 02:09:17.984616 sshd-session[5033]: pam_unix(sshd:session): session closed for user core Aug 13 02:09:17.989544 systemd-logind[1527]: Session 18 logged out. Waiting for processes to exit. Aug 13 02:09:17.989819 systemd[1]: sshd@21-172.236.122.171:22-147.75.109.163:34724.service: Deactivated successfully. Aug 13 02:09:17.992372 systemd[1]: session-18.scope: Deactivated successfully. 
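
The ImagePullBackOff entry above is the root cause behind the CNI failures: unpacking the ghcr.io/flatcar/calico/node:v3.30.2 layer fails with "no space left on device" under /var/lib/containerd, so calico-node never starts and never writes /var/lib/calico/nodename. A free-space check for the filesystem backing containerd is sketched here; the 2 GiB threshold is a hypothetical value, not something from the log.

```python
#!/usr/bin/env python3
"""Report free space on the filesystem backing containerd's state directory."""
import shutil

CONTAINERD_DIR = "/var/lib/containerd"  # directory named in the failed layer extract above
LOW_WATERMARK_GIB = 2                   # hypothetical threshold, not from the log

usage = shutil.disk_usage(CONTAINERD_DIR)
gib = 1024 ** 3
print(f"total {usage.total / gib:.1f} GiB, used {usage.used / gib:.1f} GiB, "
      f"free {usage.free / gib:.1f} GiB")
if usage.free < LOW_WATERMARK_GIB * gib:
    # With this little headroom the unpack of calico/node keeps hitting ENOSPC,
    # which is the write error shown in the ImagePullBackOff message above.
    print("low free space: image pulls will likely keep failing with ENOSPC")
```
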
Aug 13 02:09:17.993427 containerd[1542]: time="2025-08-13T02:09:17.993369058Z" level=error msg="Failed to destroy network for sandbox \"a3c773d71fbb6bb3304e22f36a3e960e75a148d3d741ced941c297b274ef797a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:17.996048 containerd[1542]: time="2025-08-13T02:09:17.995984745Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r6mhv,Uid:b671177d-3397-4938-853c-0cced3d0e9f5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3c773d71fbb6bb3304e22f36a3e960e75a148d3d741ced941c297b274ef797a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:17.997002 kubelet[2718]: E0813 02:09:17.996964 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3c773d71fbb6bb3304e22f36a3e960e75a148d3d741ced941c297b274ef797a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:17.997072 kubelet[2718]: E0813 02:09:17.997024 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3c773d71fbb6bb3304e22f36a3e960e75a148d3d741ced941c297b274ef797a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:09:17.997072 kubelet[2718]: E0813 02:09:17.997046 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a3c773d71fbb6bb3304e22f36a3e960e75a148d3d741ced941c297b274ef797a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:09:17.997126 kubelet[2718]: E0813 02:09:17.997084 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r6mhv_calico-system(b671177d-3397-4938-853c-0cced3d0e9f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-r6mhv_calico-system(b671177d-3397-4938-853c-0cced3d0e9f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a3c773d71fbb6bb3304e22f36a3e960e75a148d3d741ced941c297b274ef797a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r6mhv" podUID="b671177d-3397-4938-853c-0cced3d0e9f5" Aug 13 02:09:17.999100 systemd[1]: run-netns-cni\x2d3d41114a\x2df146\x2dd903\x2d38cb\x2dc098b6701b1d.mount: Deactivated successfully. Aug 13 02:09:18.002937 systemd-logind[1527]: Removed session 18. 
Aug 13 02:09:19.941402 kubelet[2718]: E0813 02:09:19.940778 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:09:19.942896 containerd[1542]: time="2025-08-13T02:09:19.941446413Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p5qmw,Uid:26fd4059-1e9c-49a2-9bd9-181be9ad7bcb,Namespace:kube-system,Attempt:0,}" Aug 13 02:09:20.004631 containerd[1542]: time="2025-08-13T02:09:20.004547223Z" level=error msg="Failed to destroy network for sandbox \"8bfae9da513cacb242deea38094300bff44a08a8045c33fd5d8065201280a546\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:20.007420 systemd[1]: run-netns-cni\x2d5de6a0df\x2de583\x2dbe86\x2da1cd\x2d8f72e205316a.mount: Deactivated successfully. Aug 13 02:09:20.009346 containerd[1542]: time="2025-08-13T02:09:20.009247630Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p5qmw,Uid:26fd4059-1e9c-49a2-9bd9-181be9ad7bcb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bfae9da513cacb242deea38094300bff44a08a8045c33fd5d8065201280a546\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:20.010267 kubelet[2718]: E0813 02:09:20.010216 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bfae9da513cacb242deea38094300bff44a08a8045c33fd5d8065201280a546\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:20.010267 kubelet[2718]: E0813 02:09:20.010261 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bfae9da513cacb242deea38094300bff44a08a8045c33fd5d8065201280a546\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:09:20.010422 kubelet[2718]: E0813 02:09:20.010281 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bfae9da513cacb242deea38094300bff44a08a8045c33fd5d8065201280a546\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:09:20.010422 kubelet[2718]: E0813 02:09:20.010317 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-p5qmw_kube-system(26fd4059-1e9c-49a2-9bd9-181be9ad7bcb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-p5qmw_kube-system(26fd4059-1e9c-49a2-9bd9-181be9ad7bcb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8bfae9da513cacb242deea38094300bff44a08a8045c33fd5d8065201280a546\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-p5qmw" podUID="26fd4059-1e9c-49a2-9bd9-181be9ad7bcb" Aug 13 02:09:23.043198 systemd[1]: Started sshd@22-172.236.122.171:22-147.75.109.163:55644.service - OpenSSH per-connection server daemon (147.75.109.163:55644). Aug 13 02:09:23.371460 sshd[5102]: Accepted publickey for core from 147.75.109.163 port 55644 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:09:23.374116 sshd-session[5102]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:09:23.380954 systemd-logind[1527]: New session 19 of user core. Aug 13 02:09:23.385872 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 13 02:09:23.728897 sshd[5104]: Connection closed by 147.75.109.163 port 55644 Aug 13 02:09:23.729506 sshd-session[5102]: pam_unix(sshd:session): session closed for user core Aug 13 02:09:23.734266 systemd-logind[1527]: Session 19 logged out. Waiting for processes to exit. Aug 13 02:09:23.735201 systemd[1]: sshd@22-172.236.122.171:22-147.75.109.163:55644.service: Deactivated successfully. Aug 13 02:09:23.737328 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 02:09:23.740557 systemd-logind[1527]: Removed session 19. Aug 13 02:09:23.938670 kubelet[2718]: E0813 02:09:23.935785 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:09:23.939156 containerd[1542]: time="2025-08-13T02:09:23.938959645Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pw6gg,Uid:e73a6876-bbb3-4e11-8a33-1945cf27a944,Namespace:kube-system,Attempt:0,}" Aug 13 02:09:23.940291 containerd[1542]: time="2025-08-13T02:09:23.939798701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c47cf6bcb-c9c87,Uid:8d84c12b-cfd9-49af-bb2e-a10173126a4c,Namespace:calico-system,Attempt:0,}" Aug 13 02:09:24.029514 containerd[1542]: time="2025-08-13T02:09:24.029379287Z" level=error msg="Failed to destroy network for sandbox \"fe311b9a4fe6929422e02481a6c922ed729d8d6748fc2343d603c3a506b4817a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:24.034064 containerd[1542]: time="2025-08-13T02:09:24.034035135Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pw6gg,Uid:e73a6876-bbb3-4e11-8a33-1945cf27a944,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe311b9a4fe6929422e02481a6c922ed729d8d6748fc2343d603c3a506b4817a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:24.034923 systemd[1]: run-netns-cni\x2dae121ac8\x2d7d00\x2d1a2d\x2d0090\x2db2c51f276390.mount: Deactivated successfully. 
Aug 13 02:09:24.037167 kubelet[2718]: E0813 02:09:24.037133 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe311b9a4fe6929422e02481a6c922ed729d8d6748fc2343d603c3a506b4817a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:24.037476 kubelet[2718]: E0813 02:09:24.037454 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe311b9a4fe6929422e02481a6c922ed729d8d6748fc2343d603c3a506b4817a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:09:24.037577 kubelet[2718]: E0813 02:09:24.037555 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe311b9a4fe6929422e02481a6c922ed729d8d6748fc2343d603c3a506b4817a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:09:24.037895 kubelet[2718]: E0813 02:09:24.037844 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-pw6gg_kube-system(e73a6876-bbb3-4e11-8a33-1945cf27a944)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-pw6gg_kube-system(e73a6876-bbb3-4e11-8a33-1945cf27a944)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe311b9a4fe6929422e02481a6c922ed729d8d6748fc2343d603c3a506b4817a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-pw6gg" podUID="e73a6876-bbb3-4e11-8a33-1945cf27a944" Aug 13 02:09:24.060341 containerd[1542]: time="2025-08-13T02:09:24.060271998Z" level=error msg="Failed to destroy network for sandbox \"48bb556121d3c350c495d847cf17f9fd2e5db89fe5d94f84ef91d1870be7ec68\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:24.062315 systemd[1]: run-netns-cni\x2d5f4eaa78\x2dcf37\x2d42e1\x2ddcc0\x2debf5b582e50d.mount: Deactivated successfully. 
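
The run-netns-cni\x2d....mount units that systemd deactivates after each failed attempt are the per-pod network-namespace mounts; \x2d is systemd's escape for a literal "-" inside a unit name, while unescaped "-" separates path components. A small sketch for turning one of the logged unit names back into its mount path, handling only the \xNN form that appears in this log:

```python
#!/usr/bin/env python3
"""Decode a systemd mount unit name like the run-netns-cni\\x2d... units above."""
import re

def unit_to_path(unit: str) -> str:
    # In mount unit names "-" separates path components and a literal "-" is
    # escaped as \x2d, so undo the separators first and the escapes second.
    body = unit.removesuffix(".mount").replace("-", "/")
    body = re.sub(r"\\x([0-9a-fA-F]{2})",
                  lambda m: chr(int(m.group(1), 16)), body)
    return "/" + body

print(unit_to_path(r"run-netns-cni\x2d5f4eaa78\x2dcf37\x2d42e1\x2ddcc0\x2debf5b582e50d.mount"))
# -> /run/netns/cni-5f4eaa78-cf37-42e1-dcc0-ebf5b582e50d
```
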
Aug 13 02:09:24.066033 containerd[1542]: time="2025-08-13T02:09:24.065976580Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c47cf6bcb-c9c87,Uid:8d84c12b-cfd9-49af-bb2e-a10173126a4c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"48bb556121d3c350c495d847cf17f9fd2e5db89fe5d94f84ef91d1870be7ec68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:24.066328 kubelet[2718]: E0813 02:09:24.066259 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48bb556121d3c350c495d847cf17f9fd2e5db89fe5d94f84ef91d1870be7ec68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:24.066368 kubelet[2718]: E0813 02:09:24.066337 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48bb556121d3c350c495d847cf17f9fd2e5db89fe5d94f84ef91d1870be7ec68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:09:24.066368 kubelet[2718]: E0813 02:09:24.066361 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"48bb556121d3c350c495d847cf17f9fd2e5db89fe5d94f84ef91d1870be7ec68\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:09:24.066462 kubelet[2718]: E0813 02:09:24.066419 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7c47cf6bcb-c9c87_calico-system(8d84c12b-cfd9-49af-bb2e-a10173126a4c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7c47cf6bcb-c9c87_calico-system(8d84c12b-cfd9-49af-bb2e-a10173126a4c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"48bb556121d3c350c495d847cf17f9fd2e5db89fe5d94f84ef91d1870be7ec68\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" podUID="8d84c12b-cfd9-49af-bb2e-a10173126a4c" Aug 13 02:09:26.938838 kubelet[2718]: E0813 02:09:26.938215 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1264763250: write /var/lib/containerd/tmpmounts/containerd-mount1264763250/usr/bin/calico-node: no space left on device\"" 
pod="calico-system/calico-node-cdfxj" podUID="e8f51745-7382-4ead-96df-a31572ad4e1f" Aug 13 02:09:28.789611 systemd[1]: Started sshd@23-172.236.122.171:22-147.75.109.163:40340.service - OpenSSH per-connection server daemon (147.75.109.163:40340). Aug 13 02:09:29.125687 sshd[5170]: Accepted publickey for core from 147.75.109.163 port 40340 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:09:29.127158 sshd-session[5170]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:09:29.134360 systemd-logind[1527]: New session 20 of user core. Aug 13 02:09:29.139802 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 13 02:09:29.432645 sshd[5172]: Connection closed by 147.75.109.163 port 40340 Aug 13 02:09:29.433359 sshd-session[5170]: pam_unix(sshd:session): session closed for user core Aug 13 02:09:29.439467 systemd[1]: sshd@23-172.236.122.171:22-147.75.109.163:40340.service: Deactivated successfully. Aug 13 02:09:29.442825 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 02:09:29.443808 systemd-logind[1527]: Session 20 logged out. Waiting for processes to exit. Aug 13 02:09:29.446097 systemd-logind[1527]: Removed session 20. Aug 13 02:09:29.942326 containerd[1542]: time="2025-08-13T02:09:29.941906886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r6mhv,Uid:b671177d-3397-4938-853c-0cced3d0e9f5,Namespace:calico-system,Attempt:0,}" Aug 13 02:09:30.007215 containerd[1542]: time="2025-08-13T02:09:30.007136815Z" level=error msg="Failed to destroy network for sandbox \"63b30f8b1d31d30e043c9225c0972845577782c80beb9096f161e13db99f749d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:30.009565 systemd[1]: run-netns-cni\x2da287eb7f\x2deb07\x2d6811\x2d86ef\x2dd23d3a69b10d.mount: Deactivated successfully. 
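
The error text keeps advising to "check that the calico/node container is running"; given the ImagePullBackOff above, it is not. One way to see that from the API server is sketched below with the official kubernetes Python client; the k8s-app=calico-node label selector is the conventional one for this DaemonSet and an assumption here.

```python
#!/usr/bin/env python3
"""List calico-node pods and their container states, as the CNI error above suggests checking.

Sketch using the kubernetes Python client; the label selector is an assumption.
"""
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() when run inside the cluster
v1 = client.CoreV1Api()

pods = v1.list_namespaced_pod("calico-system", label_selector="k8s-app=calico-node")
for pod in pods.items:
    print(pod.metadata.name, pod.status.phase)
    for cs in pod.status.container_statuses or []:
        waiting = cs.state.waiting.reason if cs.state.waiting else None
        # In this log the reason would be ImagePullBackOff until disk space is freed.
        print(f"  {cs.name}: ready={cs.ready} waiting={waiting}")
```
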
Aug 13 02:09:30.011039 containerd[1542]: time="2025-08-13T02:09:30.010959757Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r6mhv,Uid:b671177d-3397-4938-853c-0cced3d0e9f5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"63b30f8b1d31d30e043c9225c0972845577782c80beb9096f161e13db99f749d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:30.011876 kubelet[2718]: E0813 02:09:30.011834 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63b30f8b1d31d30e043c9225c0972845577782c80beb9096f161e13db99f749d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:30.012349 kubelet[2718]: E0813 02:09:30.011900 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63b30f8b1d31d30e043c9225c0972845577782c80beb9096f161e13db99f749d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:09:30.012349 kubelet[2718]: E0813 02:09:30.011923 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63b30f8b1d31d30e043c9225c0972845577782c80beb9096f161e13db99f749d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:09:30.012349 kubelet[2718]: E0813 02:09:30.011976 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r6mhv_calico-system(b671177d-3397-4938-853c-0cced3d0e9f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-r6mhv_calico-system(b671177d-3397-4938-853c-0cced3d0e9f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"63b30f8b1d31d30e043c9225c0972845577782c80beb9096f161e13db99f749d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r6mhv" podUID="b671177d-3397-4938-853c-0cced3d0e9f5" Aug 13 02:09:32.936090 kubelet[2718]: E0813 02:09:32.935700 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:09:32.938304 containerd[1542]: time="2025-08-13T02:09:32.937562701Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p5qmw,Uid:26fd4059-1e9c-49a2-9bd9-181be9ad7bcb,Namespace:kube-system,Attempt:0,}" Aug 13 02:09:33.018524 containerd[1542]: time="2025-08-13T02:09:33.018464769Z" level=error msg="Failed to destroy network for sandbox \"84d9083a411641797d4ed58cdb4cc020bce3cd5b448d89f082a4a1683d3ef394\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:33.021850 containerd[1542]: time="2025-08-13T02:09:33.021788693Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p5qmw,Uid:26fd4059-1e9c-49a2-9bd9-181be9ad7bcb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"84d9083a411641797d4ed58cdb4cc020bce3cd5b448d89f082a4a1683d3ef394\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:33.023084 systemd[1]: run-netns-cni\x2d2cd10736\x2d6bd2\x2dd9bb\x2d3b29\x2d3989351fb9ec.mount: Deactivated successfully. Aug 13 02:09:33.025065 kubelet[2718]: E0813 02:09:33.023899 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84d9083a411641797d4ed58cdb4cc020bce3cd5b448d89f082a4a1683d3ef394\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:33.025065 kubelet[2718]: E0813 02:09:33.023968 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84d9083a411641797d4ed58cdb4cc020bce3cd5b448d89f082a4a1683d3ef394\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:09:33.025065 kubelet[2718]: E0813 02:09:33.023994 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"84d9083a411641797d4ed58cdb4cc020bce3cd5b448d89f082a4a1683d3ef394\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:09:33.025065 kubelet[2718]: E0813 02:09:33.024040 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-p5qmw_kube-system(26fd4059-1e9c-49a2-9bd9-181be9ad7bcb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-p5qmw_kube-system(26fd4059-1e9c-49a2-9bd9-181be9ad7bcb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"84d9083a411641797d4ed58cdb4cc020bce3cd5b448d89f082a4a1683d3ef394\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-p5qmw" podUID="26fd4059-1e9c-49a2-9bd9-181be9ad7bcb" Aug 13 02:09:34.500745 systemd[1]: Started sshd@24-172.236.122.171:22-147.75.109.163:40342.service - OpenSSH per-connection server daemon (147.75.109.163:40342). 
Aug 13 02:09:34.853645 sshd[5235]: Accepted publickey for core from 147.75.109.163 port 40342 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:09:34.855151 sshd-session[5235]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:09:34.860743 systemd-logind[1527]: New session 21 of user core. Aug 13 02:09:34.863782 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 13 02:09:35.156314 sshd[5237]: Connection closed by 147.75.109.163 port 40342 Aug 13 02:09:35.156945 sshd-session[5235]: pam_unix(sshd:session): session closed for user core Aug 13 02:09:35.164087 systemd[1]: sshd@24-172.236.122.171:22-147.75.109.163:40342.service: Deactivated successfully. Aug 13 02:09:35.166450 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 02:09:35.169636 systemd-logind[1527]: Session 21 logged out. Waiting for processes to exit. Aug 13 02:09:35.170859 systemd-logind[1527]: Removed session 21. Aug 13 02:09:38.935898 kubelet[2718]: E0813 02:09:38.935412 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:09:38.937405 containerd[1542]: time="2025-08-13T02:09:38.937017141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c47cf6bcb-c9c87,Uid:8d84c12b-cfd9-49af-bb2e-a10173126a4c,Namespace:calico-system,Attempt:0,}" Aug 13 02:09:38.937939 containerd[1542]: time="2025-08-13T02:09:38.937630448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pw6gg,Uid:e73a6876-bbb3-4e11-8a33-1945cf27a944,Namespace:kube-system,Attempt:0,}" Aug 13 02:09:39.036758 containerd[1542]: time="2025-08-13T02:09:39.036688748Z" level=error msg="Failed to destroy network for sandbox \"b3da2f184337d245478c491218a0947771fc9acf984bf07d4716395cc02fda5c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:39.039383 systemd[1]: run-netns-cni\x2dedf28ef8\x2de24f\x2d79a7\x2dbae1\x2dbf5da50b0efa.mount: Deactivated successfully. 
Aug 13 02:09:39.040555 containerd[1542]: time="2025-08-13T02:09:39.040467740Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pw6gg,Uid:e73a6876-bbb3-4e11-8a33-1945cf27a944,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3da2f184337d245478c491218a0947771fc9acf984bf07d4716395cc02fda5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:39.041471 kubelet[2718]: E0813 02:09:39.040947 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3da2f184337d245478c491218a0947771fc9acf984bf07d4716395cc02fda5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:39.041471 kubelet[2718]: E0813 02:09:39.041021 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3da2f184337d245478c491218a0947771fc9acf984bf07d4716395cc02fda5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:09:39.041471 kubelet[2718]: E0813 02:09:39.041045 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3da2f184337d245478c491218a0947771fc9acf984bf07d4716395cc02fda5c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:09:39.041471 kubelet[2718]: E0813 02:09:39.041091 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-pw6gg_kube-system(e73a6876-bbb3-4e11-8a33-1945cf27a944)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-pw6gg_kube-system(e73a6876-bbb3-4e11-8a33-1945cf27a944)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b3da2f184337d245478c491218a0947771fc9acf984bf07d4716395cc02fda5c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-pw6gg" podUID="e73a6876-bbb3-4e11-8a33-1945cf27a944" Aug 13 02:09:39.059423 containerd[1542]: time="2025-08-13T02:09:39.059328273Z" level=error msg="Failed to destroy network for sandbox \"22b2fb7145b6460ffadc3a6962653bde4282c997ee91e96715d2c439f5f7dab0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:39.063353 systemd[1]: run-netns-cni\x2df48fda49\x2d2959\x2d0497\x2d76bb\x2d4643530c3d79.mount: Deactivated successfully. 
Aug 13 02:09:39.064210 containerd[1542]: time="2025-08-13T02:09:39.064146650Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c47cf6bcb-c9c87,Uid:8d84c12b-cfd9-49af-bb2e-a10173126a4c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"22b2fb7145b6460ffadc3a6962653bde4282c997ee91e96715d2c439f5f7dab0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:39.064822 kubelet[2718]: E0813 02:09:39.064774 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22b2fb7145b6460ffadc3a6962653bde4282c997ee91e96715d2c439f5f7dab0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:39.064993 kubelet[2718]: E0813 02:09:39.064841 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22b2fb7145b6460ffadc3a6962653bde4282c997ee91e96715d2c439f5f7dab0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:09:39.064993 kubelet[2718]: E0813 02:09:39.064871 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"22b2fb7145b6460ffadc3a6962653bde4282c997ee91e96715d2c439f5f7dab0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:09:39.064993 kubelet[2718]: E0813 02:09:39.064916 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7c47cf6bcb-c9c87_calico-system(8d84c12b-cfd9-49af-bb2e-a10173126a4c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7c47cf6bcb-c9c87_calico-system(8d84c12b-cfd9-49af-bb2e-a10173126a4c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"22b2fb7145b6460ffadc3a6962653bde4282c997ee91e96715d2c439f5f7dab0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" podUID="8d84c12b-cfd9-49af-bb2e-a10173126a4c" Aug 13 02:09:39.944018 kubelet[2718]: E0813 02:09:39.943938 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1264763250: write /var/lib/containerd/tmpmounts/containerd-mount1264763250/usr/bin/calico-node: no space left on device\"" 
pod="calico-system/calico-node-cdfxj" podUID="e8f51745-7382-4ead-96df-a31572ad4e1f" Aug 13 02:09:40.219045 systemd[1]: Started sshd@25-172.236.122.171:22-147.75.109.163:45176.service - OpenSSH per-connection server daemon (147.75.109.163:45176). Aug 13 02:09:40.558554 sshd[5307]: Accepted publickey for core from 147.75.109.163 port 45176 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:09:40.560163 sshd-session[5307]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:09:40.566504 systemd-logind[1527]: New session 22 of user core. Aug 13 02:09:40.572748 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 13 02:09:40.882704 sshd[5309]: Connection closed by 147.75.109.163 port 45176 Aug 13 02:09:40.883626 sshd-session[5307]: pam_unix(sshd:session): session closed for user core Aug 13 02:09:40.889983 systemd[1]: sshd@25-172.236.122.171:22-147.75.109.163:45176.service: Deactivated successfully. Aug 13 02:09:40.892442 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 02:09:40.894877 systemd-logind[1527]: Session 22 logged out. Waiting for processes to exit. Aug 13 02:09:40.896815 systemd-logind[1527]: Removed session 22. Aug 13 02:09:41.937268 containerd[1542]: time="2025-08-13T02:09:41.937207415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r6mhv,Uid:b671177d-3397-4938-853c-0cced3d0e9f5,Namespace:calico-system,Attempt:0,}" Aug 13 02:09:42.009633 containerd[1542]: time="2025-08-13T02:09:42.009548701Z" level=error msg="Failed to destroy network for sandbox \"69003717eb451c60f78d4ed3a4ecd6ae65636ad7259df12e0fe7fa1f3c5be1c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:42.011082 containerd[1542]: time="2025-08-13T02:09:42.010976404Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r6mhv,Uid:b671177d-3397-4938-853c-0cced3d0e9f5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"69003717eb451c60f78d4ed3a4ecd6ae65636ad7259df12e0fe7fa1f3c5be1c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:42.011303 kubelet[2718]: E0813 02:09:42.011269 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69003717eb451c60f78d4ed3a4ecd6ae65636ad7259df12e0fe7fa1f3c5be1c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:42.012181 kubelet[2718]: E0813 02:09:42.011754 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"69003717eb451c60f78d4ed3a4ecd6ae65636ad7259df12e0fe7fa1f3c5be1c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:09:42.012181 kubelet[2718]: E0813 02:09:42.011803 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"69003717eb451c60f78d4ed3a4ecd6ae65636ad7259df12e0fe7fa1f3c5be1c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:09:42.012181 kubelet[2718]: E0813 02:09:42.011913 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r6mhv_calico-system(b671177d-3397-4938-853c-0cced3d0e9f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-r6mhv_calico-system(b671177d-3397-4938-853c-0cced3d0e9f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"69003717eb451c60f78d4ed3a4ecd6ae65636ad7259df12e0fe7fa1f3c5be1c7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r6mhv" podUID="b671177d-3397-4938-853c-0cced3d0e9f5" Aug 13 02:09:42.013365 systemd[1]: run-netns-cni\x2d8e565175\x2dcd1b\x2d228f\x2d518a\x2d54222ad488e3.mount: Deactivated successfully. Aug 13 02:09:42.936522 kubelet[2718]: E0813 02:09:42.936450 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:09:45.946787 systemd[1]: Started sshd@26-172.236.122.171:22-147.75.109.163:45178.service - OpenSSH per-connection server daemon (147.75.109.163:45178). Aug 13 02:09:46.296453 sshd[5347]: Accepted publickey for core from 147.75.109.163 port 45178 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:09:46.298227 sshd-session[5347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:09:46.306636 systemd-logind[1527]: New session 23 of user core. Aug 13 02:09:46.316151 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 13 02:09:46.603816 sshd[5349]: Connection closed by 147.75.109.163 port 45178 Aug 13 02:09:46.605564 sshd-session[5347]: pam_unix(sshd:session): session closed for user core Aug 13 02:09:46.610670 systemd-logind[1527]: Session 23 logged out. Waiting for processes to exit. Aug 13 02:09:46.611258 systemd[1]: sshd@26-172.236.122.171:22-147.75.109.163:45178.service: Deactivated successfully. Aug 13 02:09:46.613523 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 02:09:46.615503 systemd-logind[1527]: Removed session 23. 
Aug 13 02:09:47.936907 kubelet[2718]: E0813 02:09:47.936430 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:09:47.940438 containerd[1542]: time="2025-08-13T02:09:47.937309673Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p5qmw,Uid:26fd4059-1e9c-49a2-9bd9-181be9ad7bcb,Namespace:kube-system,Attempt:0,}" Aug 13 02:09:47.998027 containerd[1542]: time="2025-08-13T02:09:47.997740556Z" level=error msg="Failed to destroy network for sandbox \"4a36cf20c2b8aa7dd069b439c4b421a1f4b5cd756ad9ce135b98ae5a17e2d229\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:48.001306 systemd[1]: run-netns-cni\x2d518fca15\x2d49d4\x2d6900\x2da0d4\x2d9efde86a5e3c.mount: Deactivated successfully. Aug 13 02:09:48.002834 containerd[1542]: time="2025-08-13T02:09:48.002647284Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p5qmw,Uid:26fd4059-1e9c-49a2-9bd9-181be9ad7bcb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a36cf20c2b8aa7dd069b439c4b421a1f4b5cd756ad9ce135b98ae5a17e2d229\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:48.004003 kubelet[2718]: E0813 02:09:48.003975 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a36cf20c2b8aa7dd069b439c4b421a1f4b5cd756ad9ce135b98ae5a17e2d229\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:48.004288 kubelet[2718]: E0813 02:09:48.004020 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a36cf20c2b8aa7dd069b439c4b421a1f4b5cd756ad9ce135b98ae5a17e2d229\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:09:48.004288 kubelet[2718]: E0813 02:09:48.004060 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a36cf20c2b8aa7dd069b439c4b421a1f4b5cd756ad9ce135b98ae5a17e2d229\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:09:48.004288 kubelet[2718]: E0813 02:09:48.004092 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-p5qmw_kube-system(26fd4059-1e9c-49a2-9bd9-181be9ad7bcb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-p5qmw_kube-system(26fd4059-1e9c-49a2-9bd9-181be9ad7bcb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4a36cf20c2b8aa7dd069b439c4b421a1f4b5cd756ad9ce135b98ae5a17e2d229\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-p5qmw" podUID="26fd4059-1e9c-49a2-9bd9-181be9ad7bcb" Aug 13 02:09:50.937115 containerd[1542]: time="2025-08-13T02:09:50.937053152Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c47cf6bcb-c9c87,Uid:8d84c12b-cfd9-49af-bb2e-a10173126a4c,Namespace:calico-system,Attempt:0,}" Aug 13 02:09:51.002894 containerd[1542]: time="2025-08-13T02:09:51.002752883Z" level=error msg="Failed to destroy network for sandbox \"5983b035b7ce1ad20b3f1188eedf393521511fd8df264a17857b4418258ce635\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:51.005530 systemd[1]: run-netns-cni\x2dbc455ab1\x2d6f0c\x2d3d9c\x2d5c21\x2dbec5180a0650.mount: Deactivated successfully. Aug 13 02:09:51.007351 containerd[1542]: time="2025-08-13T02:09:51.007242853Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c47cf6bcb-c9c87,Uid:8d84c12b-cfd9-49af-bb2e-a10173126a4c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5983b035b7ce1ad20b3f1188eedf393521511fd8df264a17857b4418258ce635\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:51.007839 kubelet[2718]: E0813 02:09:51.007796 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5983b035b7ce1ad20b3f1188eedf393521511fd8df264a17857b4418258ce635\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:51.008121 kubelet[2718]: E0813 02:09:51.007847 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5983b035b7ce1ad20b3f1188eedf393521511fd8df264a17857b4418258ce635\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:09:51.008121 kubelet[2718]: E0813 02:09:51.007879 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5983b035b7ce1ad20b3f1188eedf393521511fd8df264a17857b4418258ce635\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:09:51.008121 kubelet[2718]: E0813 02:09:51.007940 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7c47cf6bcb-c9c87_calico-system(8d84c12b-cfd9-49af-bb2e-a10173126a4c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7c47cf6bcb-c9c87_calico-system(8d84c12b-cfd9-49af-bb2e-a10173126a4c)\\\": rpc error: code = Unknown desc = 
failed to setup network for sandbox \\\"5983b035b7ce1ad20b3f1188eedf393521511fd8df264a17857b4418258ce635\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" podUID="8d84c12b-cfd9-49af-bb2e-a10173126a4c" Aug 13 02:09:51.667226 systemd[1]: Started sshd@27-172.236.122.171:22-147.75.109.163:43596.service - OpenSSH per-connection server daemon (147.75.109.163:43596). Aug 13 02:09:52.023311 sshd[5415]: Accepted publickey for core from 147.75.109.163 port 43596 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:09:52.024918 sshd-session[5415]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:09:52.033049 systemd-logind[1527]: New session 24 of user core. Aug 13 02:09:52.039948 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 13 02:09:52.336208 sshd[5417]: Connection closed by 147.75.109.163 port 43596 Aug 13 02:09:52.338052 sshd-session[5415]: pam_unix(sshd:session): session closed for user core Aug 13 02:09:52.341975 systemd-logind[1527]: Session 24 logged out. Waiting for processes to exit. Aug 13 02:09:52.343974 systemd[1]: sshd@27-172.236.122.171:22-147.75.109.163:43596.service: Deactivated successfully. Aug 13 02:09:52.346810 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 02:09:52.349168 systemd-logind[1527]: Removed session 24. Aug 13 02:09:52.936445 containerd[1542]: time="2025-08-13T02:09:52.936396239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r6mhv,Uid:b671177d-3397-4938-853c-0cced3d0e9f5,Namespace:calico-system,Attempt:0,}" Aug 13 02:09:52.987571 containerd[1542]: time="2025-08-13T02:09:52.987475177Z" level=error msg="Failed to destroy network for sandbox \"90a3849a984bbab86a8dff08313523571d8e5d18a1693d2a694f81ed89b6b706\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:52.989670 containerd[1542]: time="2025-08-13T02:09:52.989555858Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r6mhv,Uid:b671177d-3397-4938-853c-0cced3d0e9f5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"90a3849a984bbab86a8dff08313523571d8e5d18a1693d2a694f81ed89b6b706\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:52.990314 kubelet[2718]: E0813 02:09:52.990174 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90a3849a984bbab86a8dff08313523571d8e5d18a1693d2a694f81ed89b6b706\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:52.990314 kubelet[2718]: E0813 02:09:52.990252 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90a3849a984bbab86a8dff08313523571d8e5d18a1693d2a694f81ed89b6b706\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check 
that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:09:52.990314 kubelet[2718]: E0813 02:09:52.990278 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"90a3849a984bbab86a8dff08313523571d8e5d18a1693d2a694f81ed89b6b706\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:09:52.991880 kubelet[2718]: E0813 02:09:52.990575 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r6mhv_calico-system(b671177d-3397-4938-853c-0cced3d0e9f5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-r6mhv_calico-system(b671177d-3397-4938-853c-0cced3d0e9f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"90a3849a984bbab86a8dff08313523571d8e5d18a1693d2a694f81ed89b6b706\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r6mhv" podUID="b671177d-3397-4938-853c-0cced3d0e9f5" Aug 13 02:09:52.991952 systemd[1]: run-netns-cni\x2dc3f00a1e\x2da87d\x2d26b4\x2d0927\x2d4f80cefdbb93.mount: Deactivated successfully. Aug 13 02:09:53.936950 kubelet[2718]: E0813 02:09:53.936186 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:09:53.937382 containerd[1542]: time="2025-08-13T02:09:53.937253323Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pw6gg,Uid:e73a6876-bbb3-4e11-8a33-1945cf27a944,Namespace:kube-system,Attempt:0,}" Aug 13 02:09:53.993620 containerd[1542]: time="2025-08-13T02:09:53.993545267Z" level=error msg="Failed to destroy network for sandbox \"578403617ffdf0dc09a04e7269e29096585299ef486ec58f3d04b35d1240bc7c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:53.995602 systemd[1]: run-netns-cni\x2dcbd21a6c\x2de2f1\x2d7891\x2d6aa5\x2d8355aa39eef8.mount: Deactivated successfully. 
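The recurring dns.go "Nameserver limits exceeded" errors mean the node's resolv.conf lists more resolvers than the kubelet will propagate into pod resolv.conf files; the applied line in the log keeps exactly three (172.232.0.16, 172.232.0.21, 172.232.0.13), consistent with a three-nameserver cap. A minimal sketch of that truncation, assuming the limit is 3 (the constant itself is not visible in the log):

package main

import "fmt"

// maxNameservers is assumed to be 3, matching the three servers that survive
// in "the applied nameserver line" reported above.
const maxNameservers = 3

// capNameservers keeps the first maxNameservers entries and reports whether
// anything was dropped, mirroring the logged warning.
func capNameservers(servers []string) (applied []string, truncated bool) {
	if len(servers) <= maxNameservers {
		return servers, false
	}
	return servers[:maxNameservers], true
}

func main() {
	// Hypothetical host resolv.conf contents with one nameserver too many.
	servers := []string{"172.232.0.16", "172.232.0.21", "172.232.0.13", "10.0.0.53"}
	if applied, truncated := capNameservers(servers); truncated {
		fmt.Println("Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is:", applied)
	}
}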
Aug 13 02:09:53.999408 containerd[1542]: time="2025-08-13T02:09:53.999366071Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pw6gg,Uid:e73a6876-bbb3-4e11-8a33-1945cf27a944,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"578403617ffdf0dc09a04e7269e29096585299ef486ec58f3d04b35d1240bc7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:53.999693 kubelet[2718]: E0813 02:09:53.999650 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"578403617ffdf0dc09a04e7269e29096585299ef486ec58f3d04b35d1240bc7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:09:53.999954 kubelet[2718]: E0813 02:09:53.999710 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"578403617ffdf0dc09a04e7269e29096585299ef486ec58f3d04b35d1240bc7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:09:53.999954 kubelet[2718]: E0813 02:09:53.999732 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"578403617ffdf0dc09a04e7269e29096585299ef486ec58f3d04b35d1240bc7c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:09:53.999954 kubelet[2718]: E0813 02:09:53.999791 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-pw6gg_kube-system(e73a6876-bbb3-4e11-8a33-1945cf27a944)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-pw6gg_kube-system(e73a6876-bbb3-4e11-8a33-1945cf27a944)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"578403617ffdf0dc09a04e7269e29096585299ef486ec58f3d04b35d1240bc7c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-pw6gg" podUID="e73a6876-bbb3-4e11-8a33-1945cf27a944" Aug 13 02:09:54.936931 kubelet[2718]: E0813 02:09:54.936889 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1264763250: write /var/lib/containerd/tmpmounts/containerd-mount1264763250/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-cdfxj" podUID="e8f51745-7382-4ead-96df-a31572ad4e1f" Aug 13 02:09:57.396408 
systemd[1]: Started sshd@28-172.236.122.171:22-147.75.109.163:43612.service - OpenSSH per-connection server daemon (147.75.109.163:43612). Aug 13 02:09:57.731361 sshd[5481]: Accepted publickey for core from 147.75.109.163 port 43612 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:09:57.735452 sshd-session[5481]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:09:57.743334 systemd-logind[1527]: New session 25 of user core. Aug 13 02:09:57.748887 systemd[1]: Started session-25.scope - Session 25 of User core. Aug 13 02:09:58.062623 sshd[5483]: Connection closed by 147.75.109.163 port 43612 Aug 13 02:09:58.063300 sshd-session[5481]: pam_unix(sshd:session): session closed for user core Aug 13 02:09:58.068690 systemd[1]: sshd@28-172.236.122.171:22-147.75.109.163:43612.service: Deactivated successfully. Aug 13 02:09:58.071558 systemd[1]: session-25.scope: Deactivated successfully. Aug 13 02:09:58.072928 systemd-logind[1527]: Session 25 logged out. Waiting for processes to exit. Aug 13 02:09:58.075309 systemd-logind[1527]: Removed session 25. Aug 13 02:10:00.936067 kubelet[2718]: E0813 02:10:00.935777 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:10:00.936484 containerd[1542]: time="2025-08-13T02:10:00.936374458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p5qmw,Uid:26fd4059-1e9c-49a2-9bd9-181be9ad7bcb,Namespace:kube-system,Attempt:0,}" Aug 13 02:10:00.990409 containerd[1542]: time="2025-08-13T02:10:00.990364363Z" level=error msg="Failed to destroy network for sandbox \"63d723de64a02d9017f82b4cf3063b3be872da87c384ceaece23c5e69430b003\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:10:00.992419 systemd[1]: run-netns-cni\x2d62e9da9e\x2de3b2\x2da637\x2d35a0\x2d388500068c78.mount: Deactivated successfully. 
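The calico-node pod cannot start because its image pull keeps failing: an earlier attempt ran out of disk while extracting a layer under /var/lib/containerd/tmpmounts, and the kubelet now reports ImagePullBackOff, i.e. it waits an increasing interval before retrying the pull. A rough sketch of that retry pattern, assuming an initial delay that doubles up to a fixed cap (the exact values are an illustration, not taken from this log or its configuration):

package main

import (
	"fmt"
	"time"
)

// Assumed backoff parameters for the sketch; the log only shows the
// "Back-off pulling image" messages, not the configured intervals.
const (
	initialDelay = 10 * time.Second
	maxDelay     = 5 * time.Minute
)

// nextDelay returns the wait before retry n (0-based) under simple
// exponential backoff with a ceiling.
func nextDelay(n int) time.Duration {
	d := initialDelay
	for i := 0; i < n; i++ {
		d *= 2
		if d >= maxDelay {
			return maxDelay
		}
	}
	return d
}

func main() {
	for n := 0; n < 7; n++ {
		fmt.Printf("pull retry %d after %s\n", n, nextDelay(n))
	}
}

In this log, a pull attempt started at 02:10:08 eventually completes at 02:10:13, after which calico-node starts.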
Aug 13 02:10:00.994261 containerd[1542]: time="2025-08-13T02:10:00.994225214Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p5qmw,Uid:26fd4059-1e9c-49a2-9bd9-181be9ad7bcb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"63d723de64a02d9017f82b4cf3063b3be872da87c384ceaece23c5e69430b003\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:10:00.995203 kubelet[2718]: E0813 02:10:00.994748 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63d723de64a02d9017f82b4cf3063b3be872da87c384ceaece23c5e69430b003\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:10:00.995350 kubelet[2718]: E0813 02:10:00.995311 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63d723de64a02d9017f82b4cf3063b3be872da87c384ceaece23c5e69430b003\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:10:00.995659 kubelet[2718]: E0813 02:10:00.995408 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"63d723de64a02d9017f82b4cf3063b3be872da87c384ceaece23c5e69430b003\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:10:00.995659 kubelet[2718]: E0813 02:10:00.995458 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-p5qmw_kube-system(26fd4059-1e9c-49a2-9bd9-181be9ad7bcb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-p5qmw_kube-system(26fd4059-1e9c-49a2-9bd9-181be9ad7bcb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"63d723de64a02d9017f82b4cf3063b3be872da87c384ceaece23c5e69430b003\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-p5qmw" podUID="26fd4059-1e9c-49a2-9bd9-181be9ad7bcb" Aug 13 02:10:03.132542 systemd[1]: Started sshd@29-172.236.122.171:22-147.75.109.163:52940.service - OpenSSH per-connection server daemon (147.75.109.163:52940). Aug 13 02:10:03.467876 sshd[5522]: Accepted publickey for core from 147.75.109.163 port 52940 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:10:03.469386 sshd-session[5522]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:10:03.476648 systemd-logind[1527]: New session 26 of user core. Aug 13 02:10:03.480746 systemd[1]: Started session-26.scope - Session 26 of User core. 
Aug 13 02:10:03.777423 sshd[5524]: Connection closed by 147.75.109.163 port 52940 Aug 13 02:10:03.778902 sshd-session[5522]: pam_unix(sshd:session): session closed for user core Aug 13 02:10:03.785569 systemd[1]: sshd@29-172.236.122.171:22-147.75.109.163:52940.service: Deactivated successfully. Aug 13 02:10:03.787930 systemd[1]: session-26.scope: Deactivated successfully. Aug 13 02:10:03.789222 systemd-logind[1527]: Session 26 logged out. Waiting for processes to exit. Aug 13 02:10:03.791323 systemd-logind[1527]: Removed session 26. Aug 13 02:10:03.937383 containerd[1542]: time="2025-08-13T02:10:03.937349148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r6mhv,Uid:b671177d-3397-4938-853c-0cced3d0e9f5,Namespace:calico-system,Attempt:0,}" Aug 13 02:10:04.016326 containerd[1542]: time="2025-08-13T02:10:04.014346010Z" level=error msg="Failed to destroy network for sandbox \"2bd63eb0fb0507151373474c9e0bf381a1862e967042b8e60c6b6ebf350870fa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:10:04.016758 systemd[1]: run-netns-cni\x2d17b0a1cb\x2dcf62\x2d5eb1\x2d0bd8\x2d294937f99635.mount: Deactivated successfully. Aug 13 02:10:04.017997 containerd[1542]: time="2025-08-13T02:10:04.017762945Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r6mhv,Uid:b671177d-3397-4938-853c-0cced3d0e9f5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bd63eb0fb0507151373474c9e0bf381a1862e967042b8e60c6b6ebf350870fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:10:04.018193 kubelet[2718]: E0813 02:10:04.018001 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bd63eb0fb0507151373474c9e0bf381a1862e967042b8e60c6b6ebf350870fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:10:04.018193 kubelet[2718]: E0813 02:10:04.018056 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bd63eb0fb0507151373474c9e0bf381a1862e967042b8e60c6b6ebf350870fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:10:04.018193 kubelet[2718]: E0813 02:10:04.018079 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2bd63eb0fb0507151373474c9e0bf381a1862e967042b8e60c6b6ebf350870fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:10:04.018193 kubelet[2718]: E0813 02:10:04.018151 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-r6mhv_calico-system(b671177d-3397-4938-853c-0cced3d0e9f5)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-r6mhv_calico-system(b671177d-3397-4938-853c-0cced3d0e9f5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2bd63eb0fb0507151373474c9e0bf381a1862e967042b8e60c6b6ebf350870fa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-r6mhv" podUID="b671177d-3397-4938-853c-0cced3d0e9f5" Aug 13 02:10:05.938669 containerd[1542]: time="2025-08-13T02:10:05.937857865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c47cf6bcb-c9c87,Uid:8d84c12b-cfd9-49af-bb2e-a10173126a4c,Namespace:calico-system,Attempt:0,}" Aug 13 02:10:06.008861 containerd[1542]: time="2025-08-13T02:10:06.008793527Z" level=error msg="Failed to destroy network for sandbox \"77155a7263545a0edfafab8723976911b5529ea94deb4970b065037b0bbaefe9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:10:06.012872 systemd[1]: run-netns-cni\x2d23cecc04\x2dbd71\x2d9ab3\x2d2307\x2df9f5aecaaf0a.mount: Deactivated successfully. Aug 13 02:10:06.014362 containerd[1542]: time="2025-08-13T02:10:06.013729611Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c47cf6bcb-c9c87,Uid:8d84c12b-cfd9-49af-bb2e-a10173126a4c,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"77155a7263545a0edfafab8723976911b5529ea94deb4970b065037b0bbaefe9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:10:06.014467 kubelet[2718]: E0813 02:10:06.014243 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77155a7263545a0edfafab8723976911b5529ea94deb4970b065037b0bbaefe9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:10:06.014467 kubelet[2718]: E0813 02:10:06.014302 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77155a7263545a0edfafab8723976911b5529ea94deb4970b065037b0bbaefe9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:10:06.014467 kubelet[2718]: E0813 02:10:06.014321 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"77155a7263545a0edfafab8723976911b5529ea94deb4970b065037b0bbaefe9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:10:06.014467 kubelet[2718]: E0813 02:10:06.014357 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-7c47cf6bcb-c9c87_calico-system(8d84c12b-cfd9-49af-bb2e-a10173126a4c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7c47cf6bcb-c9c87_calico-system(8d84c12b-cfd9-49af-bb2e-a10173126a4c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"77155a7263545a0edfafab8723976911b5529ea94deb4970b065037b0bbaefe9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" podUID="8d84c12b-cfd9-49af-bb2e-a10173126a4c" Aug 13 02:10:08.846653 systemd[1]: Started sshd@30-172.236.122.171:22-147.75.109.163:48776.service - OpenSSH per-connection server daemon (147.75.109.163:48776). Aug 13 02:10:08.936022 kubelet[2718]: E0813 02:10:08.935975 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:10:08.936832 containerd[1542]: time="2025-08-13T02:10:08.936769260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pw6gg,Uid:e73a6876-bbb3-4e11-8a33-1945cf27a944,Namespace:kube-system,Attempt:0,}" Aug 13 02:10:08.938660 containerd[1542]: time="2025-08-13T02:10:08.938094221Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 02:10:09.015633 containerd[1542]: time="2025-08-13T02:10:09.015562355Z" level=error msg="Failed to destroy network for sandbox \"6e256e8066c5c809b2cae0157a63032a6233b7c340f31adab4856c2bdc5705c7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:10:09.018424 containerd[1542]: time="2025-08-13T02:10:09.018377715Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pw6gg,Uid:e73a6876-bbb3-4e11-8a33-1945cf27a944,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e256e8066c5c809b2cae0157a63032a6233b7c340f31adab4856c2bdc5705c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:10:09.018719 kubelet[2718]: E0813 02:10:09.018654 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e256e8066c5c809b2cae0157a63032a6233b7c340f31adab4856c2bdc5705c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:10:09.018769 kubelet[2718]: E0813 02:10:09.018719 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e256e8066c5c809b2cae0157a63032a6233b7c340f31adab4856c2bdc5705c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:10:09.018769 kubelet[2718]: E0813 02:10:09.018743 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"6e256e8066c5c809b2cae0157a63032a6233b7c340f31adab4856c2bdc5705c7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:10:09.018895 kubelet[2718]: E0813 02:10:09.018794 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-pw6gg_kube-system(e73a6876-bbb3-4e11-8a33-1945cf27a944)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-pw6gg_kube-system(e73a6876-bbb3-4e11-8a33-1945cf27a944)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6e256e8066c5c809b2cae0157a63032a6233b7c340f31adab4856c2bdc5705c7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-pw6gg" podUID="e73a6876-bbb3-4e11-8a33-1945cf27a944" Aug 13 02:10:09.019330 systemd[1]: run-netns-cni\x2dcae680a4\x2dc3ec\x2df0c7\x2d71b7\x2d4dffaaa8cd48.mount: Deactivated successfully. Aug 13 02:10:09.188316 sshd[5590]: Accepted publickey for core from 147.75.109.163 port 48776 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:10:09.190458 sshd-session[5590]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:10:09.197492 systemd-logind[1527]: New session 27 of user core. Aug 13 02:10:09.201754 systemd[1]: Started session-27.scope - Session 27 of User core. Aug 13 02:10:09.498133 sshd[5619]: Connection closed by 147.75.109.163 port 48776 Aug 13 02:10:09.498916 sshd-session[5590]: pam_unix(sshd:session): session closed for user core Aug 13 02:10:09.504018 systemd-logind[1527]: Session 27 logged out. Waiting for processes to exit. Aug 13 02:10:09.504689 systemd[1]: sshd@30-172.236.122.171:22-147.75.109.163:48776.service: Deactivated successfully. Aug 13 02:10:09.507433 systemd[1]: session-27.scope: Deactivated successfully. Aug 13 02:10:09.509863 systemd-logind[1527]: Removed session 27. 
Aug 13 02:10:11.939772 kubelet[2718]: E0813 02:10:11.939730 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:10:12.937296 kubelet[2718]: E0813 02:10:12.937239 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:10:12.938632 containerd[1542]: time="2025-08-13T02:10:12.938380125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p5qmw,Uid:26fd4059-1e9c-49a2-9bd9-181be9ad7bcb,Namespace:kube-system,Attempt:0,}" Aug 13 02:10:13.043519 containerd[1542]: time="2025-08-13T02:10:13.043442138Z" level=error msg="Failed to destroy network for sandbox \"586ceec1fcfeaf44ea6a153f14eb8259f9ee0c9b97715ef2a8e0985635796e87\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:10:13.047818 containerd[1542]: time="2025-08-13T02:10:13.046513086Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p5qmw,Uid:26fd4059-1e9c-49a2-9bd9-181be9ad7bcb,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"586ceec1fcfeaf44ea6a153f14eb8259f9ee0c9b97715ef2a8e0985635796e87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:10:13.046989 systemd[1]: run-netns-cni\x2d03a7652f\x2de475\x2dd9ee\x2d2a94\x2daf03a98dda65.mount: Deactivated successfully. 
Aug 13 02:10:13.050066 kubelet[2718]: E0813 02:10:13.048811 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"586ceec1fcfeaf44ea6a153f14eb8259f9ee0c9b97715ef2a8e0985635796e87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 02:10:13.050066 kubelet[2718]: E0813 02:10:13.049175 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"586ceec1fcfeaf44ea6a153f14eb8259f9ee0c9b97715ef2a8e0985635796e87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:10:13.050066 kubelet[2718]: E0813 02:10:13.049208 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"586ceec1fcfeaf44ea6a153f14eb8259f9ee0c9b97715ef2a8e0985635796e87\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:10:13.050066 kubelet[2718]: E0813 02:10:13.049260 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-p5qmw_kube-system(26fd4059-1e9c-49a2-9bd9-181be9ad7bcb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-p5qmw_kube-system(26fd4059-1e9c-49a2-9bd9-181be9ad7bcb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"586ceec1fcfeaf44ea6a153f14eb8259f9ee0c9b97715ef2a8e0985635796e87\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-p5qmw" podUID="26fd4059-1e9c-49a2-9bd9-181be9ad7bcb" Aug 13 02:10:13.811979 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount973175438.mount: Deactivated successfully. 
Aug 13 02:10:13.851374 containerd[1542]: time="2025-08-13T02:10:13.851267908Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 02:10:13.852527 containerd[1542]: time="2025-08-13T02:10:13.852508489Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 02:10:13.853187 containerd[1542]: time="2025-08-13T02:10:13.853134085Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 02:10:13.855002 containerd[1542]: time="2025-08-13T02:10:13.854954162Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 02:10:13.855920 containerd[1542]: time="2025-08-13T02:10:13.855893616Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 4.917747735s" Aug 13 02:10:13.855997 containerd[1542]: time="2025-08-13T02:10:13.855981675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 13 02:10:13.875002 containerd[1542]: time="2025-08-13T02:10:13.873860420Z" level=info msg="CreateContainer within sandbox \"b8fbca66d16757f0d8830ae86752b685e2ed86c07fd8628668ed375bbac8cad0\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 02:10:13.889562 containerd[1542]: time="2025-08-13T02:10:13.889528371Z" level=info msg="Container 86c92af398b1911d4ecaf673a7372fd9f418693eb51c8702fd8bfe1f17c11143: CDI devices from CRI Config.CDIDevices: []" Aug 13 02:10:13.913236 containerd[1542]: time="2025-08-13T02:10:13.913175775Z" level=info msg="CreateContainer within sandbox \"b8fbca66d16757f0d8830ae86752b685e2ed86c07fd8628668ed375bbac8cad0\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"86c92af398b1911d4ecaf673a7372fd9f418693eb51c8702fd8bfe1f17c11143\"" Aug 13 02:10:13.914627 containerd[1542]: time="2025-08-13T02:10:13.914563165Z" level=info msg="StartContainer for \"86c92af398b1911d4ecaf673a7372fd9f418693eb51c8702fd8bfe1f17c11143\"" Aug 13 02:10:13.916298 containerd[1542]: time="2025-08-13T02:10:13.916257004Z" level=info msg="connecting to shim 86c92af398b1911d4ecaf673a7372fd9f418693eb51c8702fd8bfe1f17c11143" address="unix:///run/containerd/s/1134d6fedd26ca70851a85e307217bbf45d02fc285a9ed9dbeebafeb7ceefd25" protocol=ttrpc version=3 Aug 13 02:10:13.941178 systemd[1]: Started cri-containerd-86c92af398b1911d4ecaf673a7372fd9f418693eb51c8702fd8bfe1f17c11143.scope - libcontainer container 86c92af398b1911d4ecaf673a7372fd9f418693eb51c8702fd8bfe1f17c11143. Aug 13 02:10:14.017793 containerd[1542]: time="2025-08-13T02:10:14.017745294Z" level=info msg="StartContainer for \"86c92af398b1911d4ecaf673a7372fd9f418693eb51c8702fd8bfe1f17c11143\" returns successfully" Aug 13 02:10:14.111820 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 02:10:14.111993 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . 
All Rights Reserved. Aug 13 02:10:14.565808 systemd[1]: Started sshd@31-172.236.122.171:22-147.75.109.163:48784.service - OpenSSH per-connection server daemon (147.75.109.163:48784). Aug 13 02:10:14.598473 kubelet[2718]: I0813 02:10:14.598370 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-cdfxj" podStartSLOduration=1.303213318 podStartE2EDuration="3m14.598353885s" podCreationTimestamp="2025-08-13 02:07:00 +0000 UTC" firstStartedPulling="2025-08-13 02:07:00.562505227 +0000 UTC m=+20.718396131" lastFinishedPulling="2025-08-13 02:10:13.857645794 +0000 UTC m=+214.013536698" observedRunningTime="2025-08-13 02:10:14.589851194 +0000 UTC m=+214.745742098" watchObservedRunningTime="2025-08-13 02:10:14.598353885 +0000 UTC m=+214.754244789" Aug 13 02:10:14.691171 containerd[1542]: time="2025-08-13T02:10:14.691080040Z" level=info msg="TaskExit event in podsandbox handler container_id:\"86c92af398b1911d4ecaf673a7372fd9f418693eb51c8702fd8bfe1f17c11143\" id:\"7ebe73e268794e34f191e5faa5e9aa283771d768c844bbdc4d675b7fe591c1de\" pid:5743 exit_status:1 exited_at:{seconds:1755051014 nanos:690665303}" Aug 13 02:10:14.916174 sshd[5727]: Accepted publickey for core from 147.75.109.163 port 48784 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:10:14.917975 sshd-session[5727]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:10:14.923872 systemd-logind[1527]: New session 28 of user core. Aug 13 02:10:14.928717 systemd[1]: Started session-28.scope - Session 28 of User core. Aug 13 02:10:15.219136 sshd[5755]: Connection closed by 147.75.109.163 port 48784 Aug 13 02:10:15.219361 sshd-session[5727]: pam_unix(sshd:session): session closed for user core Aug 13 02:10:15.224759 systemd[1]: sshd@31-172.236.122.171:22-147.75.109.163:48784.service: Deactivated successfully. Aug 13 02:10:15.227884 systemd[1]: session-28.scope: Deactivated successfully. Aug 13 02:10:15.229549 systemd-logind[1527]: Session 28 logged out. Waiting for processes to exit. Aug 13 02:10:15.231470 systemd-logind[1527]: Removed session 28. 
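The startup-latency record above is internally consistent: podStartSLOduration equals the end-to-end start time minus the window spent pulling images (this subtraction is inferred from the numbers in the entry, not from any formula shown in the log). A quick check using the monotonic m=+ offsets it reports:

package main

import "fmt"

// Monotonic offsets in seconds ("m=+") copied from the
// pod_startup_latency_tracker entry above for calico-node-cdfxj.
const (
	firstStartedPulling = 20.718396131
	lastFinishedPulling = 214.013536698
	podStartE2E         = 194.598353885 // "3m14.598353885s"
)

func main() {
	pulling := lastFinishedPulling - firstStartedPulling
	slo := podStartE2E - pulling
	fmt.Printf("time spent pulling images: %.9fs\n", pulling) // 193.295140567s
	fmt.Printf("podStartSLOduration:       %.9fs\n", slo)     // 1.303213318s, matching the logged value
}

Floating-point output may differ in the last digit, but the arithmetic reproduces the logged podStartSLOduration=1.303213318.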
Aug 13 02:10:15.827219 kubelet[2718]: I0813 02:10:15.827145 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 02:10:15.827219 kubelet[2718]: I0813 02:10:15.827222 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 02:10:15.830761 kubelet[2718]: I0813 02:10:15.830403 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 02:10:15.833401 kubelet[2718]: I0813 02:10:15.833379 2718 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93" size=25052538 runtimeHandler="" Aug 13 02:10:15.834492 containerd[1542]: time="2025-08-13T02:10:15.834431784Z" level=info msg="RemoveImage \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 02:10:15.836130 containerd[1542]: time="2025-08-13T02:10:15.836090722Z" level=info msg="ImageDelete event name:\"quay.io/tigera/operator:v1.38.3\"" Aug 13 02:10:15.837221 containerd[1542]: time="2025-08-13T02:10:15.837180335Z" level=info msg="ImageDelete event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\"" Aug 13 02:10:15.837814 containerd[1542]: time="2025-08-13T02:10:15.837779821Z" level=info msg="RemoveImage \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" returns successfully" Aug 13 02:10:15.837944 containerd[1542]: time="2025-08-13T02:10:15.837864040Z" level=info msg="ImageDelete event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 02:10:15.853352 containerd[1542]: time="2025-08-13T02:10:15.852818636Z" level=info msg="TaskExit event in podsandbox handler container_id:\"86c92af398b1911d4ecaf673a7372fd9f418693eb51c8702fd8bfe1f17c11143\" id:\"76136c9d8367b3abfafdaff409ce3970440b74ba6245ee7f71df0ec8a139def2\" pid:5864 exit_status:1 exited_at:{seconds:1755051015 nanos:846705119}" Aug 13 02:10:15.873514 kubelet[2718]: I0813 02:10:15.873495 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 02:10:15.873739 kubelet[2718]: I0813 02:10:15.873720 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-668d6bf9bc-p5qmw","calico-system/calico-kube-controllers-7c47cf6bcb-c9c87","kube-system/coredns-668d6bf9bc-pw6gg","calico-system/csi-node-driver-r6mhv","calico-system/calico-typha-67c8447dcf-wsn77","kube-system/kube-controller-manager-172-236-122-171","calico-system/calico-node-cdfxj","kube-system/kube-proxy-s4bl4","kube-system/kube-apiserver-172-236-122-171","kube-system/kube-scheduler-172-236-122-171"] Aug 13 02:10:15.874061 kubelet[2718]: E0813 02:10:15.874048 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:10:15.874147 kubelet[2718]: E0813 02:10:15.874137 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:10:15.874214 kubelet[2718]: E0813 02:10:15.874187 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:10:15.874265 kubelet[2718]: E0813 02:10:15.874255 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:10:15.874343 kubelet[2718]: E0813 02:10:15.874332 2718 eviction_manager.go:609] "Eviction 
manager: cannot evict a critical pod" pod="calico-system/calico-typha-67c8447dcf-wsn77" Aug 13 02:10:15.874460 kubelet[2718]: E0813 02:10:15.874380 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-122-171" Aug 13 02:10:15.874460 kubelet[2718]: E0813 02:10:15.874414 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-cdfxj" Aug 13 02:10:15.874460 kubelet[2718]: E0813 02:10:15.874422 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-s4bl4" Aug 13 02:10:15.874460 kubelet[2718]: E0813 02:10:15.874431 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-122-171" Aug 13 02:10:15.874460 kubelet[2718]: E0813 02:10:15.874438 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-122-171" Aug 13 02:10:15.874460 kubelet[2718]: I0813 02:10:15.874448 2718 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 02:10:15.937629 kubelet[2718]: E0813 02:10:15.936728 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:10:16.198824 systemd-networkd[1461]: vxlan.calico: Link UP Aug 13 02:10:16.198839 systemd-networkd[1461]: vxlan.calico: Gained carrier Aug 13 02:10:17.496872 systemd-networkd[1461]: vxlan.calico: Gained IPv6LL Aug 13 02:10:18.939255 containerd[1542]: time="2025-08-13T02:10:18.938141138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r6mhv,Uid:b671177d-3397-4938-853c-0cced3d0e9f5,Namespace:calico-system,Attempt:0,}" Aug 13 02:10:19.081176 systemd-networkd[1461]: calib1cfaf71a38: Link UP Aug 13 02:10:19.083267 systemd-networkd[1461]: calib1cfaf71a38: Gained carrier Aug 13 02:10:19.112787 containerd[1542]: 2025-08-13 02:10:19.004 [INFO][5984] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--122--171-k8s-csi--node--driver--r6mhv-eth0 csi-node-driver- calico-system b671177d-3397-4938-853c-0cced3d0e9f5 706 0 2025-08-13 02:07:00 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-236-122-171 csi-node-driver-r6mhv eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calib1cfaf71a38 [] [] }} ContainerID="7e85ce337b33ae60d77b08e1429391f7b61226cc43847f6063c8ebe224822479" Namespace="calico-system" Pod="csi-node-driver-r6mhv" WorkloadEndpoint="172--236--122--171-k8s-csi--node--driver--r6mhv-" Aug 13 02:10:19.112787 containerd[1542]: 2025-08-13 02:10:19.005 [INFO][5984] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7e85ce337b33ae60d77b08e1429391f7b61226cc43847f6063c8ebe224822479" Namespace="calico-system" Pod="csi-node-driver-r6mhv" WorkloadEndpoint="172--236--122--171-k8s-csi--node--driver--r6mhv-eth0" Aug 13 02:10:19.112787 containerd[1542]: 2025-08-13 02:10:19.036 [INFO][5992] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="7e85ce337b33ae60d77b08e1429391f7b61226cc43847f6063c8ebe224822479" HandleID="k8s-pod-network.7e85ce337b33ae60d77b08e1429391f7b61226cc43847f6063c8ebe224822479" Workload="172--236--122--171-k8s-csi--node--driver--r6mhv-eth0" Aug 13 02:10:19.113516 containerd[1542]: 2025-08-13 02:10:19.036 [INFO][5992] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="7e85ce337b33ae60d77b08e1429391f7b61226cc43847f6063c8ebe224822479" HandleID="k8s-pod-network.7e85ce337b33ae60d77b08e1429391f7b61226cc43847f6063c8ebe224822479" Workload="172--236--122--171-k8s-csi--node--driver--r6mhv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5870), Attrs:map[string]string{"namespace":"calico-system", "node":"172-236-122-171", "pod":"csi-node-driver-r6mhv", "timestamp":"2025-08-13 02:10:19.036749857 +0000 UTC"}, Hostname:"172-236-122-171", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 02:10:19.113516 containerd[1542]: 2025-08-13 02:10:19.036 [INFO][5992] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 02:10:19.113516 containerd[1542]: 2025-08-13 02:10:19.037 [INFO][5992] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 02:10:19.113516 containerd[1542]: 2025-08-13 02:10:19.037 [INFO][5992] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-122-171' Aug 13 02:10:19.113516 containerd[1542]: 2025-08-13 02:10:19.048 [INFO][5992] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7e85ce337b33ae60d77b08e1429391f7b61226cc43847f6063c8ebe224822479" host="172-236-122-171" Aug 13 02:10:19.113516 containerd[1542]: 2025-08-13 02:10:19.053 [INFO][5992] ipam/ipam.go 394: Looking up existing affinities for host host="172-236-122-171" Aug 13 02:10:19.113516 containerd[1542]: 2025-08-13 02:10:19.057 [INFO][5992] ipam/ipam.go 511: Trying affinity for 192.168.99.64/26 host="172-236-122-171" Aug 13 02:10:19.113516 containerd[1542]: 2025-08-13 02:10:19.059 [INFO][5992] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.64/26 host="172-236-122-171" Aug 13 02:10:19.113516 containerd[1542]: 2025-08-13 02:10:19.061 [INFO][5992] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.64/26 host="172-236-122-171" Aug 13 02:10:19.113516 containerd[1542]: 2025-08-13 02:10:19.061 [INFO][5992] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.99.64/26 handle="k8s-pod-network.7e85ce337b33ae60d77b08e1429391f7b61226cc43847f6063c8ebe224822479" host="172-236-122-171" Aug 13 02:10:19.115848 containerd[1542]: 2025-08-13 02:10:19.062 [INFO][5992] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.7e85ce337b33ae60d77b08e1429391f7b61226cc43847f6063c8ebe224822479 Aug 13 02:10:19.115848 containerd[1542]: 2025-08-13 02:10:19.066 [INFO][5992] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.99.64/26 handle="k8s-pod-network.7e85ce337b33ae60d77b08e1429391f7b61226cc43847f6063c8ebe224822479" host="172-236-122-171" Aug 13 02:10:19.115848 containerd[1542]: 2025-08-13 02:10:19.070 [INFO][5992] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.99.65/26] block=192.168.99.64/26 handle="k8s-pod-network.7e85ce337b33ae60d77b08e1429391f7b61226cc43847f6063c8ebe224822479" host="172-236-122-171" Aug 13 02:10:19.115848 containerd[1542]: 2025-08-13 02:10:19.070 [INFO][5992] ipam/ipam.go 878: Auto-assigned 1 out 
of 1 IPv4s: [192.168.99.65/26] handle="k8s-pod-network.7e85ce337b33ae60d77b08e1429391f7b61226cc43847f6063c8ebe224822479" host="172-236-122-171" Aug 13 02:10:19.115848 containerd[1542]: 2025-08-13 02:10:19.070 [INFO][5992] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 02:10:19.115848 containerd[1542]: 2025-08-13 02:10:19.070 [INFO][5992] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.65/26] IPv6=[] ContainerID="7e85ce337b33ae60d77b08e1429391f7b61226cc43847f6063c8ebe224822479" HandleID="k8s-pod-network.7e85ce337b33ae60d77b08e1429391f7b61226cc43847f6063c8ebe224822479" Workload="172--236--122--171-k8s-csi--node--driver--r6mhv-eth0" Aug 13 02:10:19.115961 containerd[1542]: 2025-08-13 02:10:19.075 [INFO][5984] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7e85ce337b33ae60d77b08e1429391f7b61226cc43847f6063c8ebe224822479" Namespace="calico-system" Pod="csi-node-driver-r6mhv" WorkloadEndpoint="172--236--122--171-k8s-csi--node--driver--r6mhv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--122--171-k8s-csi--node--driver--r6mhv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b671177d-3397-4938-853c-0cced3d0e9f5", ResourceVersion:"706", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 2, 7, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-122-171", ContainerID:"", Pod:"csi-node-driver-r6mhv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib1cfaf71a38", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 02:10:19.116016 containerd[1542]: 2025-08-13 02:10:19.075 [INFO][5984] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.65/32] ContainerID="7e85ce337b33ae60d77b08e1429391f7b61226cc43847f6063c8ebe224822479" Namespace="calico-system" Pod="csi-node-driver-r6mhv" WorkloadEndpoint="172--236--122--171-k8s-csi--node--driver--r6mhv-eth0" Aug 13 02:10:19.116016 containerd[1542]: 2025-08-13 02:10:19.075 [INFO][5984] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib1cfaf71a38 ContainerID="7e85ce337b33ae60d77b08e1429391f7b61226cc43847f6063c8ebe224822479" Namespace="calico-system" Pod="csi-node-driver-r6mhv" WorkloadEndpoint="172--236--122--171-k8s-csi--node--driver--r6mhv-eth0" Aug 13 02:10:19.116016 containerd[1542]: 2025-08-13 02:10:19.082 [INFO][5984] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7e85ce337b33ae60d77b08e1429391f7b61226cc43847f6063c8ebe224822479" Namespace="calico-system" Pod="csi-node-driver-r6mhv" 
WorkloadEndpoint="172--236--122--171-k8s-csi--node--driver--r6mhv-eth0" Aug 13 02:10:19.116081 containerd[1542]: 2025-08-13 02:10:19.082 [INFO][5984] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7e85ce337b33ae60d77b08e1429391f7b61226cc43847f6063c8ebe224822479" Namespace="calico-system" Pod="csi-node-driver-r6mhv" WorkloadEndpoint="172--236--122--171-k8s-csi--node--driver--r6mhv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--122--171-k8s-csi--node--driver--r6mhv-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"b671177d-3397-4938-853c-0cced3d0e9f5", ResourceVersion:"706", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 2, 7, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-122-171", ContainerID:"7e85ce337b33ae60d77b08e1429391f7b61226cc43847f6063c8ebe224822479", Pod:"csi-node-driver-r6mhv", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.99.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calib1cfaf71a38", MAC:"0a:46:23:6e:12:43", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 02:10:19.116127 containerd[1542]: 2025-08-13 02:10:19.099 [INFO][5984] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7e85ce337b33ae60d77b08e1429391f7b61226cc43847f6063c8ebe224822479" Namespace="calico-system" Pod="csi-node-driver-r6mhv" WorkloadEndpoint="172--236--122--171-k8s-csi--node--driver--r6mhv-eth0" Aug 13 02:10:19.163907 containerd[1542]: time="2025-08-13T02:10:19.163747965Z" level=info msg="connecting to shim 7e85ce337b33ae60d77b08e1429391f7b61226cc43847f6063c8ebe224822479" address="unix:///run/containerd/s/9060eb0aa373584c555169317aace954fd9a8f7a84a1c15a64213dcb1735f137" namespace=k8s.io protocol=ttrpc version=3 Aug 13 02:10:19.232032 systemd[1]: Started cri-containerd-7e85ce337b33ae60d77b08e1429391f7b61226cc43847f6063c8ebe224822479.scope - libcontainer container 7e85ce337b33ae60d77b08e1429391f7b61226cc43847f6063c8ebe224822479. 
Aug 13 02:10:19.268436 containerd[1542]: time="2025-08-13T02:10:19.268153147Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-r6mhv,Uid:b671177d-3397-4938-853c-0cced3d0e9f5,Namespace:calico-system,Attempt:0,} returns sandbox id \"7e85ce337b33ae60d77b08e1429391f7b61226cc43847f6063c8ebe224822479\"" Aug 13 02:10:19.270280 containerd[1542]: time="2025-08-13T02:10:19.270201823Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 02:10:20.079398 containerd[1542]: time="2025-08-13T02:10:20.079091836Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 02:10:20.080818 containerd[1542]: time="2025-08-13T02:10:20.080038950Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Aug 13 02:10:20.083616 containerd[1542]: time="2025-08-13T02:10:20.082701872Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 02:10:20.087684 containerd[1542]: time="2025-08-13T02:10:20.087458749Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 02:10:20.089874 containerd[1542]: time="2025-08-13T02:10:20.089828513Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 819.572311ms" Aug 13 02:10:20.090378 containerd[1542]: time="2025-08-13T02:10:20.089973002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Aug 13 02:10:20.098748 containerd[1542]: time="2025-08-13T02:10:20.098712973Z" level=info msg="CreateContainer within sandbox \"7e85ce337b33ae60d77b08e1429391f7b61226cc43847f6063c8ebe224822479\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 02:10:20.110916 containerd[1542]: time="2025-08-13T02:10:20.110842361Z" level=info msg="Container c9f941b0c6a9037bc6a7d50513915d54792d6e75535ae92d033f8de9be98edf2: CDI devices from CRI Config.CDIDevices: []" Aug 13 02:10:20.125174 containerd[1542]: time="2025-08-13T02:10:20.125133635Z" level=info msg="CreateContainer within sandbox \"7e85ce337b33ae60d77b08e1429391f7b61226cc43847f6063c8ebe224822479\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"c9f941b0c6a9037bc6a7d50513915d54792d6e75535ae92d033f8de9be98edf2\"" Aug 13 02:10:20.126236 containerd[1542]: time="2025-08-13T02:10:20.125816480Z" level=info msg="StartContainer for \"c9f941b0c6a9037bc6a7d50513915d54792d6e75535ae92d033f8de9be98edf2\"" Aug 13 02:10:20.129686 containerd[1542]: time="2025-08-13T02:10:20.129664184Z" level=info msg="connecting to shim c9f941b0c6a9037bc6a7d50513915d54792d6e75535ae92d033f8de9be98edf2" address="unix:///run/containerd/s/9060eb0aa373584c555169317aace954fd9a8f7a84a1c15a64213dcb1735f137" protocol=ttrpc version=3 Aug 13 02:10:20.153828 systemd[1]: Started cri-containerd-c9f941b0c6a9037bc6a7d50513915d54792d6e75535ae92d033f8de9be98edf2.scope - libcontainer container 
c9f941b0c6a9037bc6a7d50513915d54792d6e75535ae92d033f8de9be98edf2. Aug 13 02:10:20.197176 containerd[1542]: time="2025-08-13T02:10:20.197104329Z" level=info msg="StartContainer for \"c9f941b0c6a9037bc6a7d50513915d54792d6e75535ae92d033f8de9be98edf2\" returns successfully" Aug 13 02:10:20.200332 containerd[1542]: time="2025-08-13T02:10:20.200268828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 02:10:20.287454 systemd[1]: Started sshd@32-172.236.122.171:22-147.75.109.163:59160.service - OpenSSH per-connection server daemon (147.75.109.163:59160). Aug 13 02:10:20.628655 sshd[6091]: Accepted publickey for core from 147.75.109.163 port 59160 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:10:20.631264 sshd-session[6091]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:10:20.639236 systemd-logind[1527]: New session 29 of user core. Aug 13 02:10:20.644735 systemd[1]: Started session-29.scope - Session 29 of User core. Aug 13 02:10:20.825933 systemd-networkd[1461]: calib1cfaf71a38: Gained IPv6LL Aug 13 02:10:20.937453 containerd[1542]: time="2025-08-13T02:10:20.936629974Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c47cf6bcb-c9c87,Uid:8d84c12b-cfd9-49af-bb2e-a10173126a4c,Namespace:calico-system,Attempt:0,}" Aug 13 02:10:20.958942 sshd[6093]: Connection closed by 147.75.109.163 port 59160 Aug 13 02:10:20.957491 sshd-session[6091]: pam_unix(sshd:session): session closed for user core Aug 13 02:10:20.965519 systemd[1]: sshd@32-172.236.122.171:22-147.75.109.163:59160.service: Deactivated successfully. Aug 13 02:10:20.968819 systemd[1]: session-29.scope: Deactivated successfully. Aug 13 02:10:20.970670 systemd-logind[1527]: Session 29 logged out. Waiting for processes to exit. Aug 13 02:10:20.972487 systemd-logind[1527]: Removed session 29. 
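"calib1cfaf71a38: Gained IPv6LL" just above is the kernel bringing up a link-local address on the new veth whose MAC (0a:46:23:6e:12:43) was written to the endpoint earlier. Assuming the default EUI-64 address generation (stable-privacy or other addr_gen_mode settings would produce a different address, and the actual address is not in the log), a sketch of the derivation:

package main

import (
	"fmt"
	"net"
	"net/netip"
)

// linkLocalFromMAC builds the modified-EUI-64 IPv6 link-local address for a MAC:
// flip the universal/local bit of the first octet and splice ff:fe into the middle.
func linkLocalFromMAC(mac net.HardwareAddr) netip.Addr {
	var b [16]byte
	b[0], b[1] = 0xfe, 0x80 // fe80::/64 link-local prefix
	b[8] = mac[0] ^ 0x02    // flip the universal/local bit
	b[9], b[10], b[11] = mac[1], mac[2], 0xff
	b[12], b[13], b[14], b[15] = 0xfe, mac[3], mac[4], mac[5]
	return netip.AddrFrom16(b)
}

func main() {
	mac, err := net.ParseMAC("0a:46:23:6e:12:43") // MAC recorded for calib1cfaf71a38 above
	if err != nil {
		panic(err)
	}
	fmt.Println(linkLocalFromMAC(mac)) // fe80::846:23ff:fe6e:1243 under EUI-64 rules
}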
Aug 13 02:10:21.114288 systemd-networkd[1461]: calidc0649e6b5c: Link UP Aug 13 02:10:21.114476 systemd-networkd[1461]: calidc0649e6b5c: Gained carrier Aug 13 02:10:21.137816 containerd[1542]: 2025-08-13 02:10:21.012 [INFO][6105] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--122--171-k8s-calico--kube--controllers--7c47cf6bcb--c9c87-eth0 calico-kube-controllers-7c47cf6bcb- calico-system 8d84c12b-cfd9-49af-bb2e-a10173126a4c 799 0 2025-08-13 02:07:00 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7c47cf6bcb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-236-122-171 calico-kube-controllers-7c47cf6bcb-c9c87 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calidc0649e6b5c [] [] }} ContainerID="c81de72da8318e1079b62311473387cca9c17d633fcf9ce0337cb3eccd3924db" Namespace="calico-system" Pod="calico-kube-controllers-7c47cf6bcb-c9c87" WorkloadEndpoint="172--236--122--171-k8s-calico--kube--controllers--7c47cf6bcb--c9c87-" Aug 13 02:10:21.137816 containerd[1542]: 2025-08-13 02:10:21.013 [INFO][6105] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c81de72da8318e1079b62311473387cca9c17d633fcf9ce0337cb3eccd3924db" Namespace="calico-system" Pod="calico-kube-controllers-7c47cf6bcb-c9c87" WorkloadEndpoint="172--236--122--171-k8s-calico--kube--controllers--7c47cf6bcb--c9c87-eth0" Aug 13 02:10:21.137816 containerd[1542]: 2025-08-13 02:10:21.053 [INFO][6122] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c81de72da8318e1079b62311473387cca9c17d633fcf9ce0337cb3eccd3924db" HandleID="k8s-pod-network.c81de72da8318e1079b62311473387cca9c17d633fcf9ce0337cb3eccd3924db" Workload="172--236--122--171-k8s-calico--kube--controllers--7c47cf6bcb--c9c87-eth0" Aug 13 02:10:21.139520 containerd[1542]: 2025-08-13 02:10:21.053 [INFO][6122] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c81de72da8318e1079b62311473387cca9c17d633fcf9ce0337cb3eccd3924db" HandleID="k8s-pod-network.c81de72da8318e1079b62311473387cca9c17d633fcf9ce0337cb3eccd3924db" Workload="172--236--122--171-k8s-calico--kube--controllers--7c47cf6bcb--c9c87-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7bc0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-236-122-171", "pod":"calico-kube-controllers-7c47cf6bcb-c9c87", "timestamp":"2025-08-13 02:10:21.053519657 +0000 UTC"}, Hostname:"172-236-122-171", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 02:10:21.139520 containerd[1542]: 2025-08-13 02:10:21.054 [INFO][6122] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 02:10:21.139520 containerd[1542]: 2025-08-13 02:10:21.054 [INFO][6122] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
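The workload names in these entries ("172--236--122--171-k8s-calico--kube--controllers--7c47cf6bcb--c9c87-eth0" here, and the csi-node-driver variant earlier) follow a visible pattern: dashes inside the node and pod names are doubled, and single dashes then separate node, the literal "k8s", pod, and interface. A short reproduction of that format as observed in this log (a description of what the entries show, not Calico's authoritative code):

package main

import (
	"fmt"
	"strings"
)

// workloadEndpointName reproduces the naming pattern visible in the log:
// inner dashes are escaped by doubling so single dashes can act as the
// field separators in "<node>-k8s-<pod>-<iface>".
func workloadEndpointName(node, pod, iface string) string {
	esc := func(s string) string { return strings.ReplaceAll(s, "-", "--") }
	return esc(node) + "-k8s-" + esc(pod) + "-" + iface
}

func main() {
	fmt.Println(workloadEndpointName("172-236-122-171", "calico-kube-controllers-7c47cf6bcb-c9c87", "eth0"))
	// 172--236--122--171-k8s-calico--kube--controllers--7c47cf6bcb--c9c87-eth0
}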
Aug 13 02:10:21.139520 containerd[1542]: 2025-08-13 02:10:21.054 [INFO][6122] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-122-171' Aug 13 02:10:21.139520 containerd[1542]: 2025-08-13 02:10:21.060 [INFO][6122] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c81de72da8318e1079b62311473387cca9c17d633fcf9ce0337cb3eccd3924db" host="172-236-122-171" Aug 13 02:10:21.139520 containerd[1542]: 2025-08-13 02:10:21.068 [INFO][6122] ipam/ipam.go 394: Looking up existing affinities for host host="172-236-122-171" Aug 13 02:10:21.139520 containerd[1542]: 2025-08-13 02:10:21.074 [INFO][6122] ipam/ipam.go 511: Trying affinity for 192.168.99.64/26 host="172-236-122-171" Aug 13 02:10:21.139520 containerd[1542]: 2025-08-13 02:10:21.079 [INFO][6122] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.64/26 host="172-236-122-171" Aug 13 02:10:21.139520 containerd[1542]: 2025-08-13 02:10:21.083 [INFO][6122] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.64/26 host="172-236-122-171" Aug 13 02:10:21.140551 containerd[1542]: 2025-08-13 02:10:21.083 [INFO][6122] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.99.64/26 handle="k8s-pod-network.c81de72da8318e1079b62311473387cca9c17d633fcf9ce0337cb3eccd3924db" host="172-236-122-171" Aug 13 02:10:21.140551 containerd[1542]: 2025-08-13 02:10:21.087 [INFO][6122] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c81de72da8318e1079b62311473387cca9c17d633fcf9ce0337cb3eccd3924db Aug 13 02:10:21.140551 containerd[1542]: 2025-08-13 02:10:21.095 [INFO][6122] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.99.64/26 handle="k8s-pod-network.c81de72da8318e1079b62311473387cca9c17d633fcf9ce0337cb3eccd3924db" host="172-236-122-171" Aug 13 02:10:21.140551 containerd[1542]: 2025-08-13 02:10:21.103 [INFO][6122] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.99.66/26] block=192.168.99.64/26 handle="k8s-pod-network.c81de72da8318e1079b62311473387cca9c17d633fcf9ce0337cb3eccd3924db" host="172-236-122-171" Aug 13 02:10:21.140551 containerd[1542]: 2025-08-13 02:10:21.103 [INFO][6122] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.66/26] handle="k8s-pod-network.c81de72da8318e1079b62311473387cca9c17d633fcf9ce0337cb3eccd3924db" host="172-236-122-171" Aug 13 02:10:21.140551 containerd[1542]: 2025-08-13 02:10:21.104 [INFO][6122] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
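Each assignment above is bracketed by "Acquired host-wide IPAM lock" / "Released host-wide IPAM lock": concurrent CNI ADDs on one node serialize their allocations so two pods cannot claim the same address. As a loose illustration of that serialization only (Calico uses its own lock, not the advisory file lock on a scratch path shown here), a sketch:

package main

import (
	"fmt"
	"os"
	"syscall"
)

// withHostLock runs fn while holding an exclusive advisory lock on a file.
// This only illustrates the mutual exclusion the "host-wide IPAM lock"
// entries imply; the path and mechanism are assumptions for the example.
func withHostLock(path string, fn func() error) error {
	f, err := os.OpenFile(path, os.O_CREATE|os.O_RDWR, 0o600)
	if err != nil {
		return err
	}
	defer f.Close()
	if err := syscall.Flock(int(f.Fd()), syscall.LOCK_EX); err != nil {
		return err
	}
	defer syscall.Flock(int(f.Fd()), syscall.LOCK_UN)
	return fn()
}

func main() {
	_ = withHostLock("/tmp/ipam.lock", func() error {
		fmt.Println("assigning an address while no other CNI invocation on this host can")
		return nil
	})
}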
Aug 13 02:10:21.140551 containerd[1542]: 2025-08-13 02:10:21.104 [INFO][6122] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.66/26] IPv6=[] ContainerID="c81de72da8318e1079b62311473387cca9c17d633fcf9ce0337cb3eccd3924db" HandleID="k8s-pod-network.c81de72da8318e1079b62311473387cca9c17d633fcf9ce0337cb3eccd3924db" Workload="172--236--122--171-k8s-calico--kube--controllers--7c47cf6bcb--c9c87-eth0" Aug 13 02:10:21.141364 containerd[1542]: 2025-08-13 02:10:21.110 [INFO][6105] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c81de72da8318e1079b62311473387cca9c17d633fcf9ce0337cb3eccd3924db" Namespace="calico-system" Pod="calico-kube-controllers-7c47cf6bcb-c9c87" WorkloadEndpoint="172--236--122--171-k8s-calico--kube--controllers--7c47cf6bcb--c9c87-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--122--171-k8s-calico--kube--controllers--7c47cf6bcb--c9c87-eth0", GenerateName:"calico-kube-controllers-7c47cf6bcb-", Namespace:"calico-system", SelfLink:"", UID:"8d84c12b-cfd9-49af-bb2e-a10173126a4c", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 2, 7, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c47cf6bcb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-122-171", ContainerID:"", Pod:"calico-kube-controllers-7c47cf6bcb-c9c87", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.99.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidc0649e6b5c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 02:10:21.141418 containerd[1542]: 2025-08-13 02:10:21.110 [INFO][6105] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.66/32] ContainerID="c81de72da8318e1079b62311473387cca9c17d633fcf9ce0337cb3eccd3924db" Namespace="calico-system" Pod="calico-kube-controllers-7c47cf6bcb-c9c87" WorkloadEndpoint="172--236--122--171-k8s-calico--kube--controllers--7c47cf6bcb--c9c87-eth0" Aug 13 02:10:21.141418 containerd[1542]: 2025-08-13 02:10:21.110 [INFO][6105] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidc0649e6b5c ContainerID="c81de72da8318e1079b62311473387cca9c17d633fcf9ce0337cb3eccd3924db" Namespace="calico-system" Pod="calico-kube-controllers-7c47cf6bcb-c9c87" WorkloadEndpoint="172--236--122--171-k8s-calico--kube--controllers--7c47cf6bcb--c9c87-eth0" Aug 13 02:10:21.141418 containerd[1542]: 2025-08-13 02:10:21.113 [INFO][6105] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c81de72da8318e1079b62311473387cca9c17d633fcf9ce0337cb3eccd3924db" Namespace="calico-system" Pod="calico-kube-controllers-7c47cf6bcb-c9c87" WorkloadEndpoint="172--236--122--171-k8s-calico--kube--controllers--7c47cf6bcb--c9c87-eth0" Aug 13 02:10:21.141475 containerd[1542]: 2025-08-13 
02:10:21.114 [INFO][6105] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c81de72da8318e1079b62311473387cca9c17d633fcf9ce0337cb3eccd3924db" Namespace="calico-system" Pod="calico-kube-controllers-7c47cf6bcb-c9c87" WorkloadEndpoint="172--236--122--171-k8s-calico--kube--controllers--7c47cf6bcb--c9c87-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--122--171-k8s-calico--kube--controllers--7c47cf6bcb--c9c87-eth0", GenerateName:"calico-kube-controllers-7c47cf6bcb-", Namespace:"calico-system", SelfLink:"", UID:"8d84c12b-cfd9-49af-bb2e-a10173126a4c", ResourceVersion:"799", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 2, 7, 0, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7c47cf6bcb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-122-171", ContainerID:"c81de72da8318e1079b62311473387cca9c17d633fcf9ce0337cb3eccd3924db", Pod:"calico-kube-controllers-7c47cf6bcb-c9c87", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.99.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calidc0649e6b5c", MAC:"1a:a8:14:24:bd:b5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 02:10:21.141521 containerd[1542]: 2025-08-13 02:10:21.128 [INFO][6105] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c81de72da8318e1079b62311473387cca9c17d633fcf9ce0337cb3eccd3924db" Namespace="calico-system" Pod="calico-kube-controllers-7c47cf6bcb-c9c87" WorkloadEndpoint="172--236--122--171-k8s-calico--kube--controllers--7c47cf6bcb--c9c87-eth0" Aug 13 02:10:21.205398 containerd[1542]: time="2025-08-13T02:10:21.205259777Z" level=info msg="connecting to shim c81de72da8318e1079b62311473387cca9c17d633fcf9ce0337cb3eccd3924db" address="unix:///run/containerd/s/abb70ca124f0c66d90cb25064dc7ff6726a5673ff36cba9a5efa08561657bbe5" namespace=k8s.io protocol=ttrpc version=3 Aug 13 02:10:21.244950 systemd[1]: Started cri-containerd-c81de72da8318e1079b62311473387cca9c17d633fcf9ce0337cb3eccd3924db.scope - libcontainer container c81de72da8318e1079b62311473387cca9c17d633fcf9ce0337cb3eccd3924db. 
Aug 13 02:10:21.361118 containerd[1542]: time="2025-08-13T02:10:21.361056019Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 02:10:21.363098 containerd[1542]: time="2025-08-13T02:10:21.363071906Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Aug 13 02:10:21.364683 containerd[1542]: time="2025-08-13T02:10:21.364389967Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 02:10:21.369090 containerd[1542]: time="2025-08-13T02:10:21.369069065Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7c47cf6bcb-c9c87,Uid:8d84c12b-cfd9-49af-bb2e-a10173126a4c,Namespace:calico-system,Attempt:0,} returns sandbox id \"c81de72da8318e1079b62311473387cca9c17d633fcf9ce0337cb3eccd3924db\"" Aug 13 02:10:21.369282 containerd[1542]: time="2025-08-13T02:10:21.369265044Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 02:10:21.373718 containerd[1542]: time="2025-08-13T02:10:21.373646765Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 1.173335038s" Aug 13 02:10:21.374212 containerd[1542]: time="2025-08-13T02:10:21.374180951Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Aug 13 02:10:21.377109 containerd[1542]: time="2025-08-13T02:10:21.377075372Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 02:10:21.379821 containerd[1542]: time="2025-08-13T02:10:21.378797400Z" level=info msg="CreateContainer within sandbox \"7e85ce337b33ae60d77b08e1429391f7b61226cc43847f6063c8ebe224822479\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 13 02:10:21.394089 containerd[1542]: time="2025-08-13T02:10:21.393774199Z" level=info msg="Container 7c0fb2d198de9f754d0ff53056ddca2a7a6e2d190851f2ecf8b24ce59917fd28: CDI devices from CRI Config.CDIDevices: []" Aug 13 02:10:21.399857 containerd[1542]: time="2025-08-13T02:10:21.399836299Z" level=info msg="CreateContainer within sandbox \"7e85ce337b33ae60d77b08e1429391f7b61226cc43847f6063c8ebe224822479\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"7c0fb2d198de9f754d0ff53056ddca2a7a6e2d190851f2ecf8b24ce59917fd28\"" Aug 13 02:10:21.399868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2684493475.mount: Deactivated successfully. 
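The pulls above report both the bytes containerd fetched and the wall time: 14,703,784 bytes in 1.173335038s for node-driver-registrar here, and 8,759,190 bytes in 819.572311ms for the csi image earlier. A quick way to turn those figures into an approximate transfer rate (note the unpacked sizes reported alongside, e.g. 16196439, are larger than the bytes fetched):

package main

import (
	"fmt"
	"time"
)

// rate prints an approximate transfer rate from a byte count and a
// containerd-style duration string.
func rate(bytesRead int64, elapsed string) {
	d, err := time.ParseDuration(elapsed)
	if err != nil {
		panic(err)
	}
	mib := float64(bytesRead) / (1 << 20)
	fmt.Printf("%.1f MiB in %s ≈ %.1f MiB/s\n", mib, d, mib/d.Seconds())
}

func main() {
	// Figures copied from the containerd entries above.
	rate(8759190, "819.572311ms")  // calico/csi:v3.30.2
	rate(14703784, "1.173335038s") // calico/node-driver-registrar:v3.30.2
}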
Aug 13 02:10:21.400933 containerd[1542]: time="2025-08-13T02:10:21.400913411Z" level=info msg="StartContainer for \"7c0fb2d198de9f754d0ff53056ddca2a7a6e2d190851f2ecf8b24ce59917fd28\"" Aug 13 02:10:21.403936 containerd[1542]: time="2025-08-13T02:10:21.403906391Z" level=info msg="connecting to shim 7c0fb2d198de9f754d0ff53056ddca2a7a6e2d190851f2ecf8b24ce59917fd28" address="unix:///run/containerd/s/9060eb0aa373584c555169317aace954fd9a8f7a84a1c15a64213dcb1735f137" protocol=ttrpc version=3 Aug 13 02:10:21.427837 systemd[1]: Started cri-containerd-7c0fb2d198de9f754d0ff53056ddca2a7a6e2d190851f2ecf8b24ce59917fd28.scope - libcontainer container 7c0fb2d198de9f754d0ff53056ddca2a7a6e2d190851f2ecf8b24ce59917fd28. Aug 13 02:10:21.494527 containerd[1542]: time="2025-08-13T02:10:21.494377683Z" level=info msg="StartContainer for \"7c0fb2d198de9f754d0ff53056ddca2a7a6e2d190851f2ecf8b24ce59917fd28\" returns successfully" Aug 13 02:10:21.621792 kubelet[2718]: I0813 02:10:21.621445 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-r6mhv" podStartSLOduration=199.514773049 podStartE2EDuration="3m21.62142404s" podCreationTimestamp="2025-08-13 02:07:00 +0000 UTC" firstStartedPulling="2025-08-13 02:10:19.269507067 +0000 UTC m=+219.425397971" lastFinishedPulling="2025-08-13 02:10:21.376158058 +0000 UTC m=+221.532048962" observedRunningTime="2025-08-13 02:10:21.607639822 +0000 UTC m=+221.763530726" watchObservedRunningTime="2025-08-13 02:10:21.62142404 +0000 UTC m=+221.777314944" Aug 13 02:10:22.174778 kubelet[2718]: I0813 02:10:22.174417 2718 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 13 02:10:22.174778 kubelet[2718]: I0813 02:10:22.174506 2718 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 13 02:10:22.569773 containerd[1542]: time="2025-08-13T02:10:22.569715733Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/80/fs/usr/bin/kube-controllers: no space left on device" Aug 13 02:10:22.571250 containerd[1542]: time="2025-08-13T02:10:22.569813593Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Aug 13 02:10:22.571288 kubelet[2718]: E0813 02:10:22.570541 2718 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/80/fs/usr/bin/kube-controllers: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 02:10:22.571288 kubelet[2718]: E0813 02:10:22.570624 2718 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write 
/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/80/fs/usr/bin/kube-controllers: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 02:10:22.571288 kubelet[2718]: E0813 02:10:22.570779 2718 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5bmxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7c47cf6bcb-c9c87_calico-system(8d84c12b-cfd9-49af-bb2e-a10173126a4c): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/80/fs/usr/bin/kube-controllers: no space left on device" logger="UnhandledError" Aug 13 02:10:22.572020 kubelet[2718]: E0813 02:10:22.571989 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to extract layer 
sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/80/fs/usr/bin/kube-controllers: no space left on device\"" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" podUID="8d84c12b-cfd9-49af-bb2e-a10173126a4c" Aug 13 02:10:22.602479 kubelet[2718]: E0813 02:10:22.602426 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/80/fs/usr/bin/kube-controllers: no space left on device\"" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" podUID="8d84c12b-cfd9-49af-bb2e-a10173126a4c" Aug 13 02:10:22.616745 systemd-networkd[1461]: calidc0649e6b5c: Gained IPv6LL Aug 13 02:10:23.936158 kubelet[2718]: E0813 02:10:23.935815 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:10:23.936982 containerd[1542]: time="2025-08-13T02:10:23.936919376Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pw6gg,Uid:e73a6876-bbb3-4e11-8a33-1945cf27a944,Namespace:kube-system,Attempt:0,}" Aug 13 02:10:24.072571 systemd-networkd[1461]: calif1d750848e8: Link UP Aug 13 02:10:24.073962 systemd-networkd[1461]: calif1d750848e8: Gained carrier Aug 13 02:10:24.105622 containerd[1542]: 2025-08-13 02:10:23.980 [INFO][6225] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--122--171-k8s-coredns--668d6bf9bc--pw6gg-eth0 coredns-668d6bf9bc- kube-system e73a6876-bbb3-4e11-8a33-1945cf27a944 800 0 2025-08-13 02:06:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-236-122-171 coredns-668d6bf9bc-pw6gg eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calif1d750848e8 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="3118100dbba6add4757e7fb4f4c272b22ddd53e092238b9d44797d20ff9b781d" Namespace="kube-system" Pod="coredns-668d6bf9bc-pw6gg" WorkloadEndpoint="172--236--122--171-k8s-coredns--668d6bf9bc--pw6gg-" Aug 13 02:10:24.105622 containerd[1542]: 2025-08-13 02:10:23.981 [INFO][6225] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3118100dbba6add4757e7fb4f4c272b22ddd53e092238b9d44797d20ff9b781d" Namespace="kube-system" Pod="coredns-668d6bf9bc-pw6gg" WorkloadEndpoint="172--236--122--171-k8s-coredns--668d6bf9bc--pw6gg-eth0" Aug 13 02:10:24.105622 containerd[1542]: 2025-08-13 02:10:24.021 [INFO][6233] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3118100dbba6add4757e7fb4f4c272b22ddd53e092238b9d44797d20ff9b781d" HandleID="k8s-pod-network.3118100dbba6add4757e7fb4f4c272b22ddd53e092238b9d44797d20ff9b781d" Workload="172--236--122--171-k8s-coredns--668d6bf9bc--pw6gg-eth0" Aug 13 02:10:24.105833 containerd[1542]: 2025-08-13 02:10:24.021 [INFO][6233] ipam/ipam_plugin.go 265: Auto assigning IP 
ContainerID="3118100dbba6add4757e7fb4f4c272b22ddd53e092238b9d44797d20ff9b781d" HandleID="k8s-pod-network.3118100dbba6add4757e7fb4f4c272b22ddd53e092238b9d44797d20ff9b781d" Workload="172--236--122--171-k8s-coredns--668d6bf9bc--pw6gg-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5120), Attrs:map[string]string{"namespace":"kube-system", "node":"172-236-122-171", "pod":"coredns-668d6bf9bc-pw6gg", "timestamp":"2025-08-13 02:10:24.021303715 +0000 UTC"}, Hostname:"172-236-122-171", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 02:10:24.105833 containerd[1542]: 2025-08-13 02:10:24.021 [INFO][6233] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 02:10:24.105833 containerd[1542]: 2025-08-13 02:10:24.021 [INFO][6233] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 02:10:24.105833 containerd[1542]: 2025-08-13 02:10:24.021 [INFO][6233] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-122-171' Aug 13 02:10:24.105833 containerd[1542]: 2025-08-13 02:10:24.030 [INFO][6233] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3118100dbba6add4757e7fb4f4c272b22ddd53e092238b9d44797d20ff9b781d" host="172-236-122-171" Aug 13 02:10:24.105833 containerd[1542]: 2025-08-13 02:10:24.043 [INFO][6233] ipam/ipam.go 394: Looking up existing affinities for host host="172-236-122-171" Aug 13 02:10:24.105833 containerd[1542]: 2025-08-13 02:10:24.047 [INFO][6233] ipam/ipam.go 511: Trying affinity for 192.168.99.64/26 host="172-236-122-171" Aug 13 02:10:24.105833 containerd[1542]: 2025-08-13 02:10:24.050 [INFO][6233] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.64/26 host="172-236-122-171" Aug 13 02:10:24.105833 containerd[1542]: 2025-08-13 02:10:24.053 [INFO][6233] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.64/26 host="172-236-122-171" Aug 13 02:10:24.105833 containerd[1542]: 2025-08-13 02:10:24.053 [INFO][6233] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.99.64/26 handle="k8s-pod-network.3118100dbba6add4757e7fb4f4c272b22ddd53e092238b9d44797d20ff9b781d" host="172-236-122-171" Aug 13 02:10:24.106302 containerd[1542]: 2025-08-13 02:10:24.054 [INFO][6233] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.3118100dbba6add4757e7fb4f4c272b22ddd53e092238b9d44797d20ff9b781d Aug 13 02:10:24.106302 containerd[1542]: 2025-08-13 02:10:24.058 [INFO][6233] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.99.64/26 handle="k8s-pod-network.3118100dbba6add4757e7fb4f4c272b22ddd53e092238b9d44797d20ff9b781d" host="172-236-122-171" Aug 13 02:10:24.106302 containerd[1542]: 2025-08-13 02:10:24.065 [INFO][6233] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.99.67/26] block=192.168.99.64/26 handle="k8s-pod-network.3118100dbba6add4757e7fb4f4c272b22ddd53e092238b9d44797d20ff9b781d" host="172-236-122-171" Aug 13 02:10:24.106302 containerd[1542]: 2025-08-13 02:10:24.065 [INFO][6233] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.67/26] handle="k8s-pod-network.3118100dbba6add4757e7fb4f4c272b22ddd53e092238b9d44797d20ff9b781d" host="172-236-122-171" Aug 13 02:10:24.106302 containerd[1542]: 2025-08-13 02:10:24.065 [INFO][6233] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 02:10:24.106302 containerd[1542]: 2025-08-13 02:10:24.065 [INFO][6233] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.67/26] IPv6=[] ContainerID="3118100dbba6add4757e7fb4f4c272b22ddd53e092238b9d44797d20ff9b781d" HandleID="k8s-pod-network.3118100dbba6add4757e7fb4f4c272b22ddd53e092238b9d44797d20ff9b781d" Workload="172--236--122--171-k8s-coredns--668d6bf9bc--pw6gg-eth0" Aug 13 02:10:24.106428 containerd[1542]: 2025-08-13 02:10:24.068 [INFO][6225] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3118100dbba6add4757e7fb4f4c272b22ddd53e092238b9d44797d20ff9b781d" Namespace="kube-system" Pod="coredns-668d6bf9bc-pw6gg" WorkloadEndpoint="172--236--122--171-k8s-coredns--668d6bf9bc--pw6gg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--122--171-k8s-coredns--668d6bf9bc--pw6gg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e73a6876-bbb3-4e11-8a33-1945cf27a944", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 2, 6, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-122-171", ContainerID:"", Pod:"coredns-668d6bf9bc-pw6gg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif1d750848e8", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 02:10:24.106428 containerd[1542]: 2025-08-13 02:10:24.068 [INFO][6225] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.67/32] ContainerID="3118100dbba6add4757e7fb4f4c272b22ddd53e092238b9d44797d20ff9b781d" Namespace="kube-system" Pod="coredns-668d6bf9bc-pw6gg" WorkloadEndpoint="172--236--122--171-k8s-coredns--668d6bf9bc--pw6gg-eth0" Aug 13 02:10:24.106428 containerd[1542]: 2025-08-13 02:10:24.068 [INFO][6225] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif1d750848e8 ContainerID="3118100dbba6add4757e7fb4f4c272b22ddd53e092238b9d44797d20ff9b781d" Namespace="kube-system" Pod="coredns-668d6bf9bc-pw6gg" WorkloadEndpoint="172--236--122--171-k8s-coredns--668d6bf9bc--pw6gg-eth0" Aug 13 02:10:24.106428 containerd[1542]: 2025-08-13 02:10:24.074 [INFO][6225] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3118100dbba6add4757e7fb4f4c272b22ddd53e092238b9d44797d20ff9b781d" Namespace="kube-system" Pod="coredns-668d6bf9bc-pw6gg" 
WorkloadEndpoint="172--236--122--171-k8s-coredns--668d6bf9bc--pw6gg-eth0" Aug 13 02:10:24.106428 containerd[1542]: 2025-08-13 02:10:24.075 [INFO][6225] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3118100dbba6add4757e7fb4f4c272b22ddd53e092238b9d44797d20ff9b781d" Namespace="kube-system" Pod="coredns-668d6bf9bc-pw6gg" WorkloadEndpoint="172--236--122--171-k8s-coredns--668d6bf9bc--pw6gg-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--122--171-k8s-coredns--668d6bf9bc--pw6gg-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"e73a6876-bbb3-4e11-8a33-1945cf27a944", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 2, 6, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-122-171", ContainerID:"3118100dbba6add4757e7fb4f4c272b22ddd53e092238b9d44797d20ff9b781d", Pod:"coredns-668d6bf9bc-pw6gg", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calif1d750848e8", MAC:"82:fa:e9:0d:ec:bb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 02:10:24.106428 containerd[1542]: 2025-08-13 02:10:24.092 [INFO][6225] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="3118100dbba6add4757e7fb4f4c272b22ddd53e092238b9d44797d20ff9b781d" Namespace="kube-system" Pod="coredns-668d6bf9bc-pw6gg" WorkloadEndpoint="172--236--122--171-k8s-coredns--668d6bf9bc--pw6gg-eth0" Aug 13 02:10:24.159615 containerd[1542]: time="2025-08-13T02:10:24.159550649Z" level=info msg="connecting to shim 3118100dbba6add4757e7fb4f4c272b22ddd53e092238b9d44797d20ff9b781d" address="unix:///run/containerd/s/c2f38e6baec166800e9bc59fb32a5b2fc64d13fec7aeea0dc9850ef4b3fba7c7" namespace=k8s.io protocol=ttrpc version=3 Aug 13 02:10:24.201901 systemd[1]: Started cri-containerd-3118100dbba6add4757e7fb4f4c272b22ddd53e092238b9d44797d20ff9b781d.scope - libcontainer container 3118100dbba6add4757e7fb4f4c272b22ddd53e092238b9d44797d20ff9b781d. 
Aug 13 02:10:24.271306 containerd[1542]: time="2025-08-13T02:10:24.271245268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-pw6gg,Uid:e73a6876-bbb3-4e11-8a33-1945cf27a944,Namespace:kube-system,Attempt:0,} returns sandbox id \"3118100dbba6add4757e7fb4f4c272b22ddd53e092238b9d44797d20ff9b781d\"" Aug 13 02:10:24.272853 kubelet[2718]: E0813 02:10:24.272822 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:10:24.274705 containerd[1542]: time="2025-08-13T02:10:24.274640846Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 02:10:25.028831 systemd[1]: Started sshd@33-172.236.122.171:22-165.154.201.122:33404.service - OpenSSH per-connection server daemon (165.154.201.122:33404). Aug 13 02:10:25.107542 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount329697371.mount: Deactivated successfully. Aug 13 02:10:25.368778 systemd-networkd[1461]: calif1d750848e8: Gained IPv6LL Aug 13 02:10:25.945159 kubelet[2718]: I0813 02:10:25.945092 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 02:10:25.945159 kubelet[2718]: I0813 02:10:25.945142 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 02:10:25.952580 kubelet[2718]: I0813 02:10:25.952426 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 02:10:25.979197 kubelet[2718]: I0813 02:10:25.979151 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 02:10:25.979746 kubelet[2718]: I0813 02:10:25.979723 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-668d6bf9bc-p5qmw","calico-system/calico-kube-controllers-7c47cf6bcb-c9c87","kube-system/coredns-668d6bf9bc-pw6gg","calico-system/calico-typha-67c8447dcf-wsn77","calico-system/calico-node-cdfxj","kube-system/kube-controller-manager-172-236-122-171","kube-system/kube-proxy-s4bl4","calico-system/csi-node-driver-r6mhv","kube-system/kube-apiserver-172-236-122-171","kube-system/kube-scheduler-172-236-122-171"] Aug 13 02:10:25.980045 kubelet[2718]: E0813 02:10:25.979971 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:10:25.980164 kubelet[2718]: E0813 02:10:25.980125 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:10:25.980413 kubelet[2718]: E0813 02:10:25.980215 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:10:25.980413 kubelet[2718]: E0813 02:10:25.980262 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-67c8447dcf-wsn77" Aug 13 02:10:25.980413 kubelet[2718]: E0813 02:10:25.980276 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-cdfxj" Aug 13 02:10:25.980413 kubelet[2718]: E0813 02:10:25.980285 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-122-171" Aug 13 02:10:25.980413 kubelet[2718]: E0813 02:10:25.980295 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-s4bl4" Aug 
13 02:10:25.980748 kubelet[2718]: E0813 02:10:25.980505 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:10:25.980748 kubelet[2718]: E0813 02:10:25.980522 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-122-171" Aug 13 02:10:25.980748 kubelet[2718]: E0813 02:10:25.980532 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-122-171" Aug 13 02:10:25.980748 kubelet[2718]: I0813 02:10:25.980542 2718 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 02:10:26.020796 systemd[1]: Started sshd@34-172.236.122.171:22-147.75.109.163:59170.service - OpenSSH per-connection server daemon (147.75.109.163:59170). Aug 13 02:10:26.212051 containerd[1542]: time="2025-08-13T02:10:26.211499781Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 02:10:26.213322 containerd[1542]: time="2025-08-13T02:10:26.213283010Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Aug 13 02:10:26.214627 containerd[1542]: time="2025-08-13T02:10:26.214193824Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 02:10:26.218616 containerd[1542]: time="2025-08-13T02:10:26.217066175Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 02:10:26.218616 containerd[1542]: time="2025-08-13T02:10:26.218468426Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.94379339s" Aug 13 02:10:26.218616 containerd[1542]: time="2025-08-13T02:10:26.218535485Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 02:10:26.221348 containerd[1542]: time="2025-08-13T02:10:26.221300567Z" level=info msg="CreateContainer within sandbox \"3118100dbba6add4757e7fb4f4c272b22ddd53e092238b9d44797d20ff9b781d\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 02:10:26.231945 containerd[1542]: time="2025-08-13T02:10:26.230912584Z" level=info msg="Container 2cfd42efa842bd3a7cd40ece45ef50eda3d03f024b45a52dd03ec91b5f1ab207: CDI devices from CRI Config.CDIDevices: []" Aug 13 02:10:26.245842 containerd[1542]: time="2025-08-13T02:10:26.245788466Z" level=info msg="CreateContainer within sandbox \"3118100dbba6add4757e7fb4f4c272b22ddd53e092238b9d44797d20ff9b781d\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2cfd42efa842bd3a7cd40ece45ef50eda3d03f024b45a52dd03ec91b5f1ab207\"" Aug 13 02:10:26.247907 containerd[1542]: time="2025-08-13T02:10:26.247886832Z" level=info msg="StartContainer for \"2cfd42efa842bd3a7cd40ece45ef50eda3d03f024b45a52dd03ec91b5f1ab207\"" Aug 13 02:10:26.249293 containerd[1542]: 
time="2025-08-13T02:10:26.249155564Z" level=info msg="connecting to shim 2cfd42efa842bd3a7cd40ece45ef50eda3d03f024b45a52dd03ec91b5f1ab207" address="unix:///run/containerd/s/c2f38e6baec166800e9bc59fb32a5b2fc64d13fec7aeea0dc9850ef4b3fba7c7" protocol=ttrpc version=3 Aug 13 02:10:26.274720 systemd[1]: Started cri-containerd-2cfd42efa842bd3a7cd40ece45ef50eda3d03f024b45a52dd03ec91b5f1ab207.scope - libcontainer container 2cfd42efa842bd3a7cd40ece45ef50eda3d03f024b45a52dd03ec91b5f1ab207. Aug 13 02:10:26.323034 containerd[1542]: time="2025-08-13T02:10:26.322992069Z" level=info msg="StartContainer for \"2cfd42efa842bd3a7cd40ece45ef50eda3d03f024b45a52dd03ec91b5f1ab207\" returns successfully" Aug 13 02:10:26.378211 sshd[6350]: Accepted publickey for core from 147.75.109.163 port 59170 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:10:26.381932 sshd-session[6350]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:10:26.388758 systemd-logind[1527]: New session 30 of user core. Aug 13 02:10:26.396834 systemd[1]: Started session-30.scope - Session 30 of User core. Aug 13 02:10:26.559674 sshd[6294]: Received disconnect from 165.154.201.122 port 33404:11: Bye Bye [preauth] Aug 13 02:10:26.559674 sshd[6294]: Disconnected from authenticating user root 165.154.201.122 port 33404 [preauth] Aug 13 02:10:26.561356 systemd[1]: sshd@33-172.236.122.171:22-165.154.201.122:33404.service: Deactivated successfully. Aug 13 02:10:26.616156 kubelet[2718]: E0813 02:10:26.616106 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:10:26.674839 kubelet[2718]: I0813 02:10:26.674742 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-pw6gg" podStartSLOduration=217.728771732 podStartE2EDuration="3m39.674715538s" podCreationTimestamp="2025-08-13 02:06:47 +0000 UTC" firstStartedPulling="2025-08-13 02:10:24.273890421 +0000 UTC m=+224.429781325" lastFinishedPulling="2025-08-13 02:10:26.219834227 +0000 UTC m=+226.375725131" observedRunningTime="2025-08-13 02:10:26.634108095 +0000 UTC m=+226.789998999" watchObservedRunningTime="2025-08-13 02:10:26.674715538 +0000 UTC m=+226.830606442" Aug 13 02:10:26.773442 sshd[6384]: Connection closed by 147.75.109.163 port 59170 Aug 13 02:10:26.776193 sshd-session[6350]: pam_unix(sshd:session): session closed for user core Aug 13 02:10:26.784351 systemd[1]: sshd@34-172.236.122.171:22-147.75.109.163:59170.service: Deactivated successfully. Aug 13 02:10:26.789139 systemd[1]: session-30.scope: Deactivated successfully. Aug 13 02:10:26.791628 systemd-logind[1527]: Session 30 logged out. Waiting for processes to exit. Aug 13 02:10:26.796104 systemd-logind[1527]: Removed session 30. 
Aug 13 02:10:26.936321 kubelet[2718]: E0813 02:10:26.935964 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:10:26.937854 containerd[1542]: time="2025-08-13T02:10:26.937819070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p5qmw,Uid:26fd4059-1e9c-49a2-9bd9-181be9ad7bcb,Namespace:kube-system,Attempt:0,}" Aug 13 02:10:27.094360 systemd-networkd[1461]: calid0744b21c68: Link UP Aug 13 02:10:27.096376 systemd-networkd[1461]: calid0744b21c68: Gained carrier Aug 13 02:10:27.118817 containerd[1542]: 2025-08-13 02:10:27.001 [INFO][6407] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--236--122--171-k8s-coredns--668d6bf9bc--p5qmw-eth0 coredns-668d6bf9bc- kube-system 26fd4059-1e9c-49a2-9bd9-181be9ad7bcb 801 0 2025-08-13 02:06:47 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-236-122-171 coredns-668d6bf9bc-p5qmw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calid0744b21c68 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="ed42f0fd0af1fad3d0a62032bdbd37790e5524bc11f77cc7f275713b820b2d97" Namespace="kube-system" Pod="coredns-668d6bf9bc-p5qmw" WorkloadEndpoint="172--236--122--171-k8s-coredns--668d6bf9bc--p5qmw-" Aug 13 02:10:27.118817 containerd[1542]: 2025-08-13 02:10:27.001 [INFO][6407] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ed42f0fd0af1fad3d0a62032bdbd37790e5524bc11f77cc7f275713b820b2d97" Namespace="kube-system" Pod="coredns-668d6bf9bc-p5qmw" WorkloadEndpoint="172--236--122--171-k8s-coredns--668d6bf9bc--p5qmw-eth0" Aug 13 02:10:27.118817 containerd[1542]: 2025-08-13 02:10:27.037 [INFO][6421] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ed42f0fd0af1fad3d0a62032bdbd37790e5524bc11f77cc7f275713b820b2d97" HandleID="k8s-pod-network.ed42f0fd0af1fad3d0a62032bdbd37790e5524bc11f77cc7f275713b820b2d97" Workload="172--236--122--171-k8s-coredns--668d6bf9bc--p5qmw-eth0" Aug 13 02:10:27.118817 containerd[1542]: 2025-08-13 02:10:27.038 [INFO][6421] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="ed42f0fd0af1fad3d0a62032bdbd37790e5524bc11f77cc7f275713b820b2d97" HandleID="k8s-pod-network.ed42f0fd0af1fad3d0a62032bdbd37790e5524bc11f77cc7f275713b820b2d97" Workload="172--236--122--171-k8s-coredns--668d6bf9bc--p5qmw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-236-122-171", "pod":"coredns-668d6bf9bc-p5qmw", "timestamp":"2025-08-13 02:10:27.037958574 +0000 UTC"}, Hostname:"172-236-122-171", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 02:10:27.118817 containerd[1542]: 2025-08-13 02:10:27.038 [INFO][6421] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 02:10:27.118817 containerd[1542]: 2025-08-13 02:10:27.038 [INFO][6421] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 02:10:27.118817 containerd[1542]: 2025-08-13 02:10:27.038 [INFO][6421] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-236-122-171' Aug 13 02:10:27.118817 containerd[1542]: 2025-08-13 02:10:27.045 [INFO][6421] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ed42f0fd0af1fad3d0a62032bdbd37790e5524bc11f77cc7f275713b820b2d97" host="172-236-122-171" Aug 13 02:10:27.118817 containerd[1542]: 2025-08-13 02:10:27.050 [INFO][6421] ipam/ipam.go 394: Looking up existing affinities for host host="172-236-122-171" Aug 13 02:10:27.118817 containerd[1542]: 2025-08-13 02:10:27.056 [INFO][6421] ipam/ipam.go 511: Trying affinity for 192.168.99.64/26 host="172-236-122-171" Aug 13 02:10:27.118817 containerd[1542]: 2025-08-13 02:10:27.057 [INFO][6421] ipam/ipam.go 158: Attempting to load block cidr=192.168.99.64/26 host="172-236-122-171" Aug 13 02:10:27.118817 containerd[1542]: 2025-08-13 02:10:27.068 [INFO][6421] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.99.64/26 host="172-236-122-171" Aug 13 02:10:27.118817 containerd[1542]: 2025-08-13 02:10:27.069 [INFO][6421] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.99.64/26 handle="k8s-pod-network.ed42f0fd0af1fad3d0a62032bdbd37790e5524bc11f77cc7f275713b820b2d97" host="172-236-122-171" Aug 13 02:10:27.118817 containerd[1542]: 2025-08-13 02:10:27.071 [INFO][6421] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.ed42f0fd0af1fad3d0a62032bdbd37790e5524bc11f77cc7f275713b820b2d97 Aug 13 02:10:27.118817 containerd[1542]: 2025-08-13 02:10:27.074 [INFO][6421] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.99.64/26 handle="k8s-pod-network.ed42f0fd0af1fad3d0a62032bdbd37790e5524bc11f77cc7f275713b820b2d97" host="172-236-122-171" Aug 13 02:10:27.118817 containerd[1542]: 2025-08-13 02:10:27.080 [INFO][6421] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.99.68/26] block=192.168.99.64/26 handle="k8s-pod-network.ed42f0fd0af1fad3d0a62032bdbd37790e5524bc11f77cc7f275713b820b2d97" host="172-236-122-171" Aug 13 02:10:27.118817 containerd[1542]: 2025-08-13 02:10:27.081 [INFO][6421] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.99.68/26] handle="k8s-pod-network.ed42f0fd0af1fad3d0a62032bdbd37790e5524bc11f77cc7f275713b820b2d97" host="172-236-122-171" Aug 13 02:10:27.118817 containerd[1542]: 2025-08-13 02:10:27.081 [INFO][6421] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
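The IPAM lines above show the node holding an affinity for the 192.168.99.64/26 block and claiming the next free address (192.168.99.68) for the pod. The arithmetic behind that claim, as a sketch only: the hard-coded in-use addresses below are assumptions for illustration, and Calico's real allocator tracks usage in its datastore under the host-wide lock logged above.

    // Walk a /26 block and hand out the first address not already in use.
    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        block := netip.MustParsePrefix("192.168.99.64/26")
        // Assumed already-allocated addresses, for illustration.
        used := map[netip.Addr]bool{
            netip.MustParseAddr("192.168.99.65"): true,
            netip.MustParseAddr("192.168.99.66"): true,
            netip.MustParseAddr("192.168.99.67"): true,
        }
        for a := block.Addr(); block.Contains(a); a = a.Next() {
            if a == block.Addr() { // skip the block's first address
                continue
            }
            if !used[a] {
                fmt.Printf("assign %s/26 from block %s\n", a, block) // 192.168.99.68/26
                return
            }
        }
        fmt.Println("block exhausted")
    }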
Aug 13 02:10:27.118817 containerd[1542]: 2025-08-13 02:10:27.081 [INFO][6421] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.99.68/26] IPv6=[] ContainerID="ed42f0fd0af1fad3d0a62032bdbd37790e5524bc11f77cc7f275713b820b2d97" HandleID="k8s-pod-network.ed42f0fd0af1fad3d0a62032bdbd37790e5524bc11f77cc7f275713b820b2d97" Workload="172--236--122--171-k8s-coredns--668d6bf9bc--p5qmw-eth0" Aug 13 02:10:27.120004 containerd[1542]: 2025-08-13 02:10:27.086 [INFO][6407] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ed42f0fd0af1fad3d0a62032bdbd37790e5524bc11f77cc7f275713b820b2d97" Namespace="kube-system" Pod="coredns-668d6bf9bc-p5qmw" WorkloadEndpoint="172--236--122--171-k8s-coredns--668d6bf9bc--p5qmw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--122--171-k8s-coredns--668d6bf9bc--p5qmw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"26fd4059-1e9c-49a2-9bd9-181be9ad7bcb", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 2, 6, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-122-171", ContainerID:"", Pod:"coredns-668d6bf9bc-p5qmw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid0744b21c68", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 02:10:27.120004 containerd[1542]: 2025-08-13 02:10:27.086 [INFO][6407] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.99.68/32] ContainerID="ed42f0fd0af1fad3d0a62032bdbd37790e5524bc11f77cc7f275713b820b2d97" Namespace="kube-system" Pod="coredns-668d6bf9bc-p5qmw" WorkloadEndpoint="172--236--122--171-k8s-coredns--668d6bf9bc--p5qmw-eth0" Aug 13 02:10:27.120004 containerd[1542]: 2025-08-13 02:10:27.086 [INFO][6407] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid0744b21c68 ContainerID="ed42f0fd0af1fad3d0a62032bdbd37790e5524bc11f77cc7f275713b820b2d97" Namespace="kube-system" Pod="coredns-668d6bf9bc-p5qmw" WorkloadEndpoint="172--236--122--171-k8s-coredns--668d6bf9bc--p5qmw-eth0" Aug 13 02:10:27.120004 containerd[1542]: 2025-08-13 02:10:27.097 [INFO][6407] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ed42f0fd0af1fad3d0a62032bdbd37790e5524bc11f77cc7f275713b820b2d97" Namespace="kube-system" Pod="coredns-668d6bf9bc-p5qmw" 
WorkloadEndpoint="172--236--122--171-k8s-coredns--668d6bf9bc--p5qmw-eth0" Aug 13 02:10:27.120004 containerd[1542]: 2025-08-13 02:10:27.098 [INFO][6407] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ed42f0fd0af1fad3d0a62032bdbd37790e5524bc11f77cc7f275713b820b2d97" Namespace="kube-system" Pod="coredns-668d6bf9bc-p5qmw" WorkloadEndpoint="172--236--122--171-k8s-coredns--668d6bf9bc--p5qmw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--236--122--171-k8s-coredns--668d6bf9bc--p5qmw-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"26fd4059-1e9c-49a2-9bd9-181be9ad7bcb", ResourceVersion:"801", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 2, 6, 47, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-236-122-171", ContainerID:"ed42f0fd0af1fad3d0a62032bdbd37790e5524bc11f77cc7f275713b820b2d97", Pod:"coredns-668d6bf9bc-p5qmw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.99.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calid0744b21c68", MAC:"4e:9f:dc:7c:b5:8b", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 02:10:27.120004 containerd[1542]: 2025-08-13 02:10:27.112 [INFO][6407] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ed42f0fd0af1fad3d0a62032bdbd37790e5524bc11f77cc7f275713b820b2d97" Namespace="kube-system" Pod="coredns-668d6bf9bc-p5qmw" WorkloadEndpoint="172--236--122--171-k8s-coredns--668d6bf9bc--p5qmw-eth0" Aug 13 02:10:27.165335 containerd[1542]: time="2025-08-13T02:10:27.164339817Z" level=info msg="connecting to shim ed42f0fd0af1fad3d0a62032bdbd37790e5524bc11f77cc7f275713b820b2d97" address="unix:///run/containerd/s/d8f514bca9891d140f6ccf2968564152b06dc5ad30d4e8c940aec6569e4b8bf3" namespace=k8s.io protocol=ttrpc version=3 Aug 13 02:10:27.201965 systemd[1]: Started cri-containerd-ed42f0fd0af1fad3d0a62032bdbd37790e5524bc11f77cc7f275713b820b2d97.scope - libcontainer container ed42f0fd0af1fad3d0a62032bdbd37790e5524bc11f77cc7f275713b820b2d97. 
Aug 13 02:10:27.269152 containerd[1542]: time="2025-08-13T02:10:27.269107522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-p5qmw,Uid:26fd4059-1e9c-49a2-9bd9-181be9ad7bcb,Namespace:kube-system,Attempt:0,} returns sandbox id \"ed42f0fd0af1fad3d0a62032bdbd37790e5524bc11f77cc7f275713b820b2d97\"" Aug 13 02:10:27.271427 kubelet[2718]: E0813 02:10:27.271181 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:10:27.274295 containerd[1542]: time="2025-08-13T02:10:27.274254859Z" level=info msg="CreateContainer within sandbox \"ed42f0fd0af1fad3d0a62032bdbd37790e5524bc11f77cc7f275713b820b2d97\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 02:10:27.294876 containerd[1542]: time="2025-08-13T02:10:27.294835224Z" level=info msg="Container e3a232c712f6002ed59286646ce315991e1d54c64a00b63d3736c591e4842cce: CDI devices from CRI Config.CDIDevices: []" Aug 13 02:10:27.295287 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1330599568.mount: Deactivated successfully. Aug 13 02:10:27.302539 containerd[1542]: time="2025-08-13T02:10:27.302490694Z" level=info msg="CreateContainer within sandbox \"ed42f0fd0af1fad3d0a62032bdbd37790e5524bc11f77cc7f275713b820b2d97\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"e3a232c712f6002ed59286646ce315991e1d54c64a00b63d3736c591e4842cce\"" Aug 13 02:10:27.303622 containerd[1542]: time="2025-08-13T02:10:27.303088890Z" level=info msg="StartContainer for \"e3a232c712f6002ed59286646ce315991e1d54c64a00b63d3736c591e4842cce\"" Aug 13 02:10:27.304550 containerd[1542]: time="2025-08-13T02:10:27.304497801Z" level=info msg="connecting to shim e3a232c712f6002ed59286646ce315991e1d54c64a00b63d3736c591e4842cce" address="unix:///run/containerd/s/d8f514bca9891d140f6ccf2968564152b06dc5ad30d4e8c940aec6569e4b8bf3" protocol=ttrpc version=3 Aug 13 02:10:27.333932 systemd[1]: Started cri-containerd-e3a232c712f6002ed59286646ce315991e1d54c64a00b63d3736c591e4842cce.scope - libcontainer container e3a232c712f6002ed59286646ce315991e1d54c64a00b63d3736c591e4842cce. 
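The sandboxes and containers created here live in containerd's k8s.io namespace, which is why the shim sockets above sit under /run/containerd/s/. A sketch that lists those CRI-managed containers with the containerd Go client (the v1 module path and the default socket location are assumptions; roughly what crictl ps reports):

    package main

    import (
        "context"
        "fmt"
        "log"

        containerd "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // CRI-managed resources are kept under the k8s.io namespace.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        containers, err := client.Containers(ctx)
        if err != nil {
            log.Fatal(err)
        }
        for _, c := range containers {
            name := "<unknown image>"
            if img, err := c.Image(ctx); err == nil {
                name = img.Name()
            }
            fmt.Printf("%s  %s\n", c.ID(), name)
        }
    }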
Aug 13 02:10:27.387832 containerd[1542]: time="2025-08-13T02:10:27.387735587Z" level=info msg="StartContainer for \"e3a232c712f6002ed59286646ce315991e1d54c64a00b63d3736c591e4842cce\" returns successfully" Aug 13 02:10:27.621625 kubelet[2718]: E0813 02:10:27.620726 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:10:27.621625 kubelet[2718]: E0813 02:10:27.620925 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:10:27.649403 kubelet[2718]: I0813 02:10:27.649327 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-p5qmw" podStartSLOduration=220.649306976 podStartE2EDuration="3m40.649306976s" podCreationTimestamp="2025-08-13 02:06:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 02:10:27.633797467 +0000 UTC m=+227.789688381" watchObservedRunningTime="2025-08-13 02:10:27.649306976 +0000 UTC m=+227.805197880" Aug 13 02:10:28.623630 kubelet[2718]: E0813 02:10:28.622995 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:10:28.623630 kubelet[2718]: E0813 02:10:28.623460 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:10:28.696888 systemd-networkd[1461]: calid0744b21c68: Gained IPv6LL Aug 13 02:10:29.625448 kubelet[2718]: E0813 02:10:29.625395 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:10:31.839335 systemd[1]: Started sshd@35-172.236.122.171:22-147.75.109.163:36996.service - OpenSSH per-connection server daemon (147.75.109.163:36996). Aug 13 02:10:32.196518 sshd[6519]: Accepted publickey for core from 147.75.109.163 port 36996 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:10:32.200538 sshd-session[6519]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:10:32.211481 systemd-logind[1527]: New session 31 of user core. Aug 13 02:10:32.217743 systemd[1]: Started session-31.scope - Session 31 of User core. Aug 13 02:10:32.528521 sshd[6526]: Connection closed by 147.75.109.163 port 36996 Aug 13 02:10:32.528872 sshd-session[6519]: pam_unix(sshd:session): session closed for user core Aug 13 02:10:32.540287 systemd[1]: sshd@35-172.236.122.171:22-147.75.109.163:36996.service: Deactivated successfully. Aug 13 02:10:32.543936 systemd[1]: session-31.scope: Deactivated successfully. Aug 13 02:10:32.545472 systemd-logind[1527]: Session 31 logged out. Waiting for processes to exit. Aug 13 02:10:32.548323 systemd-logind[1527]: Removed session 31. 
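The pod_startup_latency_tracker entries report two numbers: podStartE2EDuration is the observed-running time minus pod creation, and podStartSLOduration additionally subtracts the time spent pulling images. For coredns-668d6bf9bc-p5qmw just above the pull timestamps are the zero value, so the two durations coincide; for coredns-668d6bf9bc-pw6gg earlier they differ by the roughly 1.9 s pull. Reproducing that arithmetic with the logged timestamps (sketch only, not kubelet code):

    package main

    import (
        "fmt"
        "time"
    )

    func mustParse(s string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-08-13 02:06:47 +0000 UTC")
        observed := mustParse("2025-08-13 02:10:26.674715538 +0000 UTC")
        pullStart := mustParse("2025-08-13 02:10:24.273890421 +0000 UTC")
        pullEnd := mustParse("2025-08-13 02:10:26.219834227 +0000 UTC")

        e2e := observed.Sub(created)        // podStartE2EDuration: 3m39.674715538s
        slo := e2e - pullEnd.Sub(pullStart) // podStartSLOduration: 217.728771732s
        fmt.Printf("e2e=%s slo=%s\n", e2e, slo)
    }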
Aug 13 02:10:33.941234 containerd[1542]: time="2025-08-13T02:10:33.940846206Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 02:10:34.517624 containerd[1542]: time="2025-08-13T02:10:34.517550623Z" level=error msg="failed to cleanup \"extract-388025976-qfZV sha256:8c887db5e1c1509bbe47d7287572f140b60a8c0adc0202d6183f3ae0c5f0b532\"" error="write /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db: no space left on device" Aug 13 02:10:34.518128 containerd[1542]: time="2025-08-13T02:10:34.518097680Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" Aug 13 02:10:34.518386 containerd[1542]: time="2025-08-13T02:10:34.518370928Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=22024426" Aug 13 02:10:34.518662 kubelet[2718]: E0813 02:10:34.518614 2718 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 02:10:34.519360 kubelet[2718]: E0813 02:10:34.519314 2718 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 02:10:34.523777 kubelet[2718]: E0813 02:10:34.519671 2718 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5bmxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7c47cf6bcb-c9c87_calico-system(8d84c12b-cfd9-49af-bb2e-a10173126a4c): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" logger="UnhandledError" Aug 13 02:10:34.524030 kubelet[2718]: E0813 02:10:34.523933 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device\"" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" podUID="8d84c12b-cfd9-49af-bb2e-a10173126a4c" Aug 13 02:10:36.023446 kubelet[2718]: I0813 02:10:36.023393 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 02:10:36.023446 kubelet[2718]: I0813 02:10:36.023435 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 02:10:36.029846 kubelet[2718]: I0813 02:10:36.026564 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 02:10:36.052640 kubelet[2718]: I0813 02:10:36.052615 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 02:10:36.052798 kubelet[2718]: I0813 02:10:36.052775 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-7c47cf6bcb-c9c87","calico-system/calico-typha-67c8447dcf-wsn77","kube-system/coredns-668d6bf9bc-p5qmw","kube-system/coredns-668d6bf9bc-pw6gg","calico-system/calico-node-cdfxj","kube-system/kube-controller-manager-172-236-122-171","kube-system/kube-proxy-s4bl4","calico-system/csi-node-driver-r6mhv","kube-system/kube-apiserver-172-236-122-171","kube-system/kube-scheduler-172-236-122-171"] Aug 13 02:10:36.052928 kubelet[2718]: E0813 02:10:36.052810 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:10:36.052928 kubelet[2718]: E0813 02:10:36.052825 2718 eviction_manager.go:609] "Eviction manager: cannot 
evict a critical pod" pod="calico-system/calico-typha-67c8447dcf-wsn77" Aug 13 02:10:36.052928 kubelet[2718]: E0813 02:10:36.052835 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:10:36.052928 kubelet[2718]: E0813 02:10:36.052844 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:10:36.052928 kubelet[2718]: E0813 02:10:36.052853 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-cdfxj" Aug 13 02:10:36.052928 kubelet[2718]: E0813 02:10:36.052863 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-122-171" Aug 13 02:10:36.052928 kubelet[2718]: E0813 02:10:36.052872 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-s4bl4" Aug 13 02:10:36.052928 kubelet[2718]: E0813 02:10:36.052884 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:10:36.052928 kubelet[2718]: E0813 02:10:36.052892 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-122-171" Aug 13 02:10:36.052928 kubelet[2718]: E0813 02:10:36.052900 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-122-171" Aug 13 02:10:36.052928 kubelet[2718]: I0813 02:10:36.052909 2718 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 02:10:36.936500 kubelet[2718]: E0813 02:10:36.936457 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:10:37.592305 systemd[1]: Started sshd@36-172.236.122.171:22-147.75.109.163:36998.service - OpenSSH per-connection server daemon (147.75.109.163:36998). Aug 13 02:10:37.936857 sshd[6549]: Accepted publickey for core from 147.75.109.163 port 36998 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:10:37.938451 sshd-session[6549]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:10:37.945292 systemd-logind[1527]: New session 32 of user core. Aug 13 02:10:37.950725 systemd[1]: Started session-32.scope - Session 32 of User core. Aug 13 02:10:38.258127 sshd[6551]: Connection closed by 147.75.109.163 port 36998 Aug 13 02:10:38.260004 sshd-session[6549]: pam_unix(sshd:session): session closed for user core Aug 13 02:10:38.264700 systemd[1]: sshd@36-172.236.122.171:22-147.75.109.163:36998.service: Deactivated successfully. Aug 13 02:10:38.267045 systemd[1]: session-32.scope: Deactivated successfully. Aug 13 02:10:38.270113 systemd-logind[1527]: Session 32 logged out. Waiting for processes to exit. Aug 13 02:10:38.271489 systemd-logind[1527]: Removed session 32. Aug 13 02:10:43.324997 systemd[1]: Started sshd@37-172.236.122.171:22-147.75.109.163:52718.service - OpenSSH per-connection server daemon (147.75.109.163:52718). 
Aug 13 02:10:43.673737 sshd[6565]: Accepted publickey for core from 147.75.109.163 port 52718 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:10:43.675919 sshd-session[6565]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:10:43.683222 systemd-logind[1527]: New session 33 of user core. Aug 13 02:10:43.688725 systemd[1]: Started session-33.scope - Session 33 of User core. Aug 13 02:10:43.983330 sshd[6567]: Connection closed by 147.75.109.163 port 52718 Aug 13 02:10:43.984310 sshd-session[6565]: pam_unix(sshd:session): session closed for user core Aug 13 02:10:43.989983 systemd[1]: sshd@37-172.236.122.171:22-147.75.109.163:52718.service: Deactivated successfully. Aug 13 02:10:43.992291 systemd[1]: session-33.scope: Deactivated successfully. Aug 13 02:10:43.993150 systemd-logind[1527]: Session 33 logged out. Waiting for processes to exit. Aug 13 02:10:43.996208 systemd-logind[1527]: Removed session 33. Aug 13 02:10:45.683726 containerd[1542]: time="2025-08-13T02:10:45.683468238Z" level=info msg="TaskExit event in podsandbox handler container_id:\"86c92af398b1911d4ecaf673a7372fd9f418693eb51c8702fd8bfe1f17c11143\" id:\"c5ed93945ef52230c3bb9c1aa4bef304b0924457ba34c4633c61c6f290002a29\" pid:6590 exited_at:{seconds:1755051045 nanos:681840367}" Aug 13 02:10:45.941622 kubelet[2718]: E0813 02:10:45.939587 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device\"" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" podUID="8d84c12b-cfd9-49af-bb2e-a10173126a4c" Aug 13 02:10:46.088012 kubelet[2718]: I0813 02:10:46.087909 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 02:10:46.088580 kubelet[2718]: I0813 02:10:46.088220 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 02:10:46.090895 kubelet[2718]: I0813 02:10:46.090824 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 02:10:46.112541 kubelet[2718]: I0813 02:10:46.112480 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 02:10:46.113194 kubelet[2718]: I0813 02:10:46.113158 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-7c47cf6bcb-c9c87","calico-system/calico-typha-67c8447dcf-wsn77","kube-system/coredns-668d6bf9bc-p5qmw","kube-system/coredns-668d6bf9bc-pw6gg","calico-system/calico-node-cdfxj","kube-system/kube-controller-manager-172-236-122-171","kube-system/kube-proxy-s4bl4","calico-system/csi-node-driver-r6mhv","kube-system/kube-apiserver-172-236-122-171","kube-system/kube-scheduler-172-236-122-171"] Aug 13 02:10:46.113358 kubelet[2718]: E0813 02:10:46.113326 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:10:46.113358 kubelet[2718]: E0813 02:10:46.113356 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-67c8447dcf-wsn77" Aug 13 
02:10:46.113650 kubelet[2718]: E0813 02:10:46.113369 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:10:46.113650 kubelet[2718]: E0813 02:10:46.113513 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:10:46.113650 kubelet[2718]: E0813 02:10:46.113527 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-cdfxj" Aug 13 02:10:46.113650 kubelet[2718]: E0813 02:10:46.113535 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-122-171" Aug 13 02:10:46.113650 kubelet[2718]: E0813 02:10:46.113625 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-s4bl4" Aug 13 02:10:46.113650 kubelet[2718]: E0813 02:10:46.113639 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:10:46.113650 kubelet[2718]: E0813 02:10:46.113647 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-122-171" Aug 13 02:10:46.113819 kubelet[2718]: E0813 02:10:46.113678 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-122-171" Aug 13 02:10:46.113819 kubelet[2718]: I0813 02:10:46.113774 2718 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 02:10:47.936690 kubelet[2718]: E0813 02:10:47.936047 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:10:49.040026 systemd[1]: Started sshd@38-172.236.122.171:22-147.75.109.163:56062.service - OpenSSH per-connection server daemon (147.75.109.163:56062). Aug 13 02:10:49.371675 sshd[6607]: Accepted publickey for core from 147.75.109.163 port 56062 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:10:49.372163 sshd-session[6607]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:10:49.381889 systemd-logind[1527]: New session 34 of user core. Aug 13 02:10:49.385737 systemd[1]: Started session-34.scope - Session 34 of User core. Aug 13 02:10:49.670863 sshd[6609]: Connection closed by 147.75.109.163 port 56062 Aug 13 02:10:49.672322 sshd-session[6607]: pam_unix(sshd:session): session closed for user core Aug 13 02:10:49.678650 systemd-logind[1527]: Session 34 logged out. Waiting for processes to exit. Aug 13 02:10:49.679067 systemd[1]: sshd@38-172.236.122.171:22-147.75.109.163:56062.service: Deactivated successfully. Aug 13 02:10:49.681975 systemd[1]: session-34.scope: Deactivated successfully. Aug 13 02:10:49.685762 systemd-logind[1527]: Removed session 34. Aug 13 02:10:52.936257 kubelet[2718]: E0813 02:10:52.936218 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:10:54.738786 systemd[1]: Started sshd@39-172.236.122.171:22-147.75.109.163:56074.service - OpenSSH per-connection server daemon (147.75.109.163:56074). 
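Every pod the eviction manager ranks above ends up skipped because all of them are critical: static control-plane pods plus workloads running under the system-cluster-critical or system-node-critical priority classes. A simplified rendering of that check (the priority values are the usual defaults for those classes; this is not the kubelet's code):

    package main

    import "fmt"

    const systemCriticalPriority int32 = 2000000000

    type pod struct {
        name     string
        static   bool // static/mirror pods are always treated as critical
        priority int32
    }

    func isCritical(p pod) bool {
        return p.static || p.priority >= systemCriticalPriority
    }

    func main() {
        ranked := []pod{
            {"calico-kube-controllers-7c47cf6bcb-c9c87", false, 2000000000},
            {"coredns-668d6bf9bc-p5qmw", false, 2000000000},
            {"kube-apiserver-172-236-122-171", true, 2000001000},
        }
        evicted := false
        for _, p := range ranked {
            if isCritical(p) {
                fmt.Printf("cannot evict a critical pod %q\n", p.name)
                continue
            }
            fmt.Printf("evicting %q\n", p.name)
            evicted = true
            break
        }
        if !evicted {
            fmt.Println("unable to evict any pods from the node")
        }
    }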
Aug 13 02:10:55.079834 sshd[6621]: Accepted publickey for core from 147.75.109.163 port 56074 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:10:55.082383 sshd-session[6621]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:10:55.088617 systemd-logind[1527]: New session 35 of user core. Aug 13 02:10:55.097738 systemd[1]: Started session-35.scope - Session 35 of User core. Aug 13 02:10:55.396491 sshd[6623]: Connection closed by 147.75.109.163 port 56074 Aug 13 02:10:55.398074 sshd-session[6621]: pam_unix(sshd:session): session closed for user core Aug 13 02:10:55.404276 systemd[1]: sshd@39-172.236.122.171:22-147.75.109.163:56074.service: Deactivated successfully. Aug 13 02:10:55.407211 systemd[1]: session-35.scope: Deactivated successfully. Aug 13 02:10:55.408469 systemd-logind[1527]: Session 35 logged out. Waiting for processes to exit. Aug 13 02:10:55.410132 systemd-logind[1527]: Removed session 35. Aug 13 02:10:56.144533 kubelet[2718]: I0813 02:10:56.144498 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 02:10:56.144943 kubelet[2718]: I0813 02:10:56.144547 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 02:10:56.146722 kubelet[2718]: I0813 02:10:56.146706 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 02:10:56.165142 kubelet[2718]: I0813 02:10:56.165121 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 02:10:56.165273 kubelet[2718]: I0813 02:10:56.165252 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-7c47cf6bcb-c9c87","calico-system/calico-typha-67c8447dcf-wsn77","kube-system/coredns-668d6bf9bc-pw6gg","kube-system/coredns-668d6bf9bc-p5qmw","calico-system/calico-node-cdfxj","kube-system/kube-controller-manager-172-236-122-171","kube-system/kube-proxy-s4bl4","calico-system/csi-node-driver-r6mhv","kube-system/kube-apiserver-172-236-122-171","kube-system/kube-scheduler-172-236-122-171"] Aug 13 02:10:56.165407 kubelet[2718]: E0813 02:10:56.165284 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:10:56.165407 kubelet[2718]: E0813 02:10:56.165297 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-67c8447dcf-wsn77" Aug 13 02:10:56.165407 kubelet[2718]: E0813 02:10:56.165305 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:10:56.165407 kubelet[2718]: E0813 02:10:56.165313 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:10:56.165407 kubelet[2718]: E0813 02:10:56.165322 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-cdfxj" Aug 13 02:10:56.165407 kubelet[2718]: E0813 02:10:56.165329 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-122-171" Aug 13 02:10:56.165407 kubelet[2718]: E0813 02:10:56.165336 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-s4bl4" Aug 13 02:10:56.165407 kubelet[2718]: E0813 02:10:56.165346 2718 eviction_manager.go:609] "Eviction manager: cannot evict a 
critical pod" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:10:56.165407 kubelet[2718]: E0813 02:10:56.165353 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-122-171" Aug 13 02:10:56.165407 kubelet[2718]: E0813 02:10:56.165360 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-122-171" Aug 13 02:10:56.165407 kubelet[2718]: I0813 02:10:56.165369 2718 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 02:10:59.943065 containerd[1542]: time="2025-08-13T02:10:59.943027437Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 02:11:00.460330 systemd[1]: Started sshd@40-172.236.122.171:22-147.75.109.163:58516.service - OpenSSH per-connection server daemon (147.75.109.163:58516). Aug 13 02:11:00.644294 containerd[1542]: time="2025-08-13T02:11:00.644216785Z" level=error msg="failed to cleanup \"extract-474736996-gWPP sha256:8c887db5e1c1509bbe47d7287572f140b60a8c0adc0202d6183f3ae0c5f0b532\"" error="write /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db: no space left on device" Aug 13 02:11:00.645294 containerd[1542]: time="2025-08-13T02:11:00.645264319Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" Aug 13 02:11:00.645473 containerd[1542]: time="2025-08-13T02:11:00.645378648Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=22024426" Aug 13 02:11:00.645801 kubelet[2718]: E0813 02:11:00.645759 2718 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 02:11:00.646271 kubelet[2718]: E0813 02:11:00.646245 2718 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 02:11:00.646501 kubelet[2718]: E0813 02:11:00.646462 2718 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5bmxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7c47cf6bcb-c9c87_calico-system(8d84c12b-cfd9-49af-bb2e-a10173126a4c): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" logger="UnhandledError" Aug 13 02:11:00.648212 kubelet[2718]: E0813 02:11:00.648164 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device\"" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" podUID="8d84c12b-cfd9-49af-bb2e-a10173126a4c" Aug 13 02:11:00.801255 sshd[6641]: Accepted publickey for core from 147.75.109.163 port 58516 ssh2: RSA 
SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:11:00.803545 sshd-session[6641]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:11:00.808863 systemd-logind[1527]: New session 36 of user core. Aug 13 02:11:00.816736 systemd[1]: Started session-36.scope - Session 36 of User core. Aug 13 02:11:01.128689 sshd[6643]: Connection closed by 147.75.109.163 port 58516 Aug 13 02:11:01.129427 sshd-session[6641]: pam_unix(sshd:session): session closed for user core Aug 13 02:11:01.135954 systemd[1]: sshd@40-172.236.122.171:22-147.75.109.163:58516.service: Deactivated successfully. Aug 13 02:11:01.139587 systemd[1]: session-36.scope: Deactivated successfully. Aug 13 02:11:01.141329 systemd-logind[1527]: Session 36 logged out. Waiting for processes to exit. Aug 13 02:11:01.144444 systemd-logind[1527]: Removed session 36. Aug 13 02:11:03.924776 update_engine[1529]: I20250813 02:11:03.924700 1529 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Aug 13 02:11:03.924776 update_engine[1529]: I20250813 02:11:03.924758 1529 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Aug 13 02:11:03.925447 update_engine[1529]: I20250813 02:11:03.924991 1529 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Aug 13 02:11:03.925802 update_engine[1529]: I20250813 02:11:03.925765 1529 omaha_request_params.cc:62] Current group set to beta Aug 13 02:11:03.926153 update_engine[1529]: I20250813 02:11:03.925882 1529 update_attempter.cc:499] Already updated boot flags. Skipping. Aug 13 02:11:03.926153 update_engine[1529]: I20250813 02:11:03.925899 1529 update_attempter.cc:643] Scheduling an action processor start. Aug 13 02:11:03.926153 update_engine[1529]: I20250813 02:11:03.925916 1529 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Aug 13 02:11:03.926153 update_engine[1529]: I20250813 02:11:03.925942 1529 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Aug 13 02:11:03.926153 update_engine[1529]: I20250813 02:11:03.926002 1529 omaha_request_action.cc:271] Posting an Omaha request to disabled Aug 13 02:11:03.926153 update_engine[1529]: I20250813 02:11:03.926012 1529 omaha_request_action.cc:272] Request: Aug 13 02:11:03.926153 update_engine[1529]: Aug 13 02:11:03.926153 update_engine[1529]: Aug 13 02:11:03.926153 update_engine[1529]: Aug 13 02:11:03.926153 update_engine[1529]: Aug 13 02:11:03.926153 update_engine[1529]: Aug 13 02:11:03.926153 update_engine[1529]: Aug 13 02:11:03.926153 update_engine[1529]: Aug 13 02:11:03.926153 update_engine[1529]: Aug 13 02:11:03.926153 update_engine[1529]: I20250813 02:11:03.926018 1529 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 02:11:03.931656 update_engine[1529]: I20250813 02:11:03.929807 1529 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 02:11:03.931790 locksmithd[1571]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Aug 13 02:11:03.932798 update_engine[1529]: I20250813 02:11:03.932010 1529 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
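The update_engine errors that follow are expected on this machine: the Omaha endpoint is configured as the literal string "disabled", so every periodic check fails with "Could not resolve host: disabled" and is retried. A tiny sketch of that failure mode (the SERVER=disabled convention is assumed from Flatcar's update configuration; this is not update_engine code):

    package main

    import (
        "fmt"
        "net"
    )

    func main() {
        server := "disabled" // stand-in for the configured Omaha endpoint
        if _, err := net.LookupHost(server); err != nil {
            fmt.Printf("Unable to get http response code: Could not resolve host: %s (%v)\n",
                server, err)
            return
        }
        fmt.Println("update check would proceed")
    }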
Aug 13 02:11:03.953966 update_engine[1529]: E20250813 02:11:03.953838 1529 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 02:11:03.953966 update_engine[1529]: I20250813 02:11:03.953933 1529 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Aug 13 02:11:06.197055 systemd[1]: Started sshd@41-172.236.122.171:22-147.75.109.163:58528.service - OpenSSH per-connection server daemon (147.75.109.163:58528). Aug 13 02:11:06.219769 kubelet[2718]: I0813 02:11:06.218765 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 02:11:06.219769 kubelet[2718]: I0813 02:11:06.218825 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 02:11:06.222547 kubelet[2718]: I0813 02:11:06.222515 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 02:11:06.252114 kubelet[2718]: I0813 02:11:06.252053 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 02:11:06.252301 kubelet[2718]: I0813 02:11:06.252261 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-7c47cf6bcb-c9c87","calico-system/calico-typha-67c8447dcf-wsn77","kube-system/coredns-668d6bf9bc-p5qmw","kube-system/coredns-668d6bf9bc-pw6gg","calico-system/calico-node-cdfxj","kube-system/kube-controller-manager-172-236-122-171","kube-system/kube-proxy-s4bl4","calico-system/csi-node-driver-r6mhv","kube-system/kube-apiserver-172-236-122-171","kube-system/kube-scheduler-172-236-122-171"] Aug 13 02:11:06.252392 kubelet[2718]: E0813 02:11:06.252306 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:11:06.252392 kubelet[2718]: E0813 02:11:06.252322 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-67c8447dcf-wsn77" Aug 13 02:11:06.252392 kubelet[2718]: E0813 02:11:06.252333 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:11:06.252392 kubelet[2718]: E0813 02:11:06.252342 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:11:06.252392 kubelet[2718]: E0813 02:11:06.252353 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-cdfxj" Aug 13 02:11:06.252392 kubelet[2718]: E0813 02:11:06.252362 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-122-171" Aug 13 02:11:06.252392 kubelet[2718]: E0813 02:11:06.252370 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-s4bl4" Aug 13 02:11:06.252392 kubelet[2718]: E0813 02:11:06.252384 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:11:06.252392 kubelet[2718]: E0813 02:11:06.252393 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-122-171" Aug 13 02:11:06.252392 kubelet[2718]: E0813 02:11:06.252401 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-122-171" Aug 13 02:11:06.252674 kubelet[2718]: I0813 02:11:06.252411 2718 
eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 02:11:06.544101 sshd[6656]: Accepted publickey for core from 147.75.109.163 port 58528 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:11:06.546043 sshd-session[6656]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:11:06.551514 systemd-logind[1527]: New session 37 of user core. Aug 13 02:11:06.557759 systemd[1]: Started session-37.scope - Session 37 of User core. Aug 13 02:11:06.849665 sshd[6658]: Connection closed by 147.75.109.163 port 58528 Aug 13 02:11:06.850762 sshd-session[6656]: pam_unix(sshd:session): session closed for user core Aug 13 02:11:06.855998 systemd-logind[1527]: Session 37 logged out. Waiting for processes to exit. Aug 13 02:11:06.856863 systemd[1]: sshd@41-172.236.122.171:22-147.75.109.163:58528.service: Deactivated successfully. Aug 13 02:11:06.859407 systemd[1]: session-37.scope: Deactivated successfully. Aug 13 02:11:06.861574 systemd-logind[1527]: Removed session 37. Aug 13 02:11:11.915773 systemd[1]: Started sshd@42-172.236.122.171:22-147.75.109.163:58068.service - OpenSSH per-connection server daemon (147.75.109.163:58068). Aug 13 02:11:12.255280 sshd[6673]: Accepted publickey for core from 147.75.109.163 port 58068 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:11:12.256189 sshd-session[6673]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:11:12.262762 systemd-logind[1527]: New session 38 of user core. Aug 13 02:11:12.267766 systemd[1]: Started session-38.scope - Session 38 of User core. Aug 13 02:11:12.573817 sshd[6675]: Connection closed by 147.75.109.163 port 58068 Aug 13 02:11:12.574721 sshd-session[6673]: pam_unix(sshd:session): session closed for user core Aug 13 02:11:12.581373 systemd-logind[1527]: Session 38 logged out. Waiting for processes to exit. Aug 13 02:11:12.581530 systemd[1]: sshd@42-172.236.122.171:22-147.75.109.163:58068.service: Deactivated successfully. Aug 13 02:11:12.586071 systemd[1]: session-38.scope: Deactivated successfully. Aug 13 02:11:12.593378 systemd-logind[1527]: Removed session 38. Aug 13 02:11:12.935804 kubelet[2718]: E0813 02:11:12.935657 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device\"" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" podUID="8d84c12b-cfd9-49af-bb2e-a10173126a4c" Aug 13 02:11:13.923764 update_engine[1529]: I20250813 02:11:13.923554 1529 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 02:11:13.924215 update_engine[1529]: I20250813 02:11:13.923887 1529 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 02:11:13.924215 update_engine[1529]: I20250813 02:11:13.924147 1529 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Aug 13 02:11:13.924932 update_engine[1529]: E20250813 02:11:13.924898 1529 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 02:11:13.924992 update_engine[1529]: I20250813 02:11:13.924945 1529 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Aug 13 02:11:15.658645 containerd[1542]: time="2025-08-13T02:11:15.658559883Z" level=info msg="TaskExit event in podsandbox handler container_id:\"86c92af398b1911d4ecaf673a7372fd9f418693eb51c8702fd8bfe1f17c11143\" id:\"af4aaa76f2724d3137908e990c850a281bb3e4077b4fc066711bd7eef7273534\" pid:6699 exited_at:{seconds:1755051075 nanos:658080986}" Aug 13 02:11:16.272074 kubelet[2718]: I0813 02:11:16.272033 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 02:11:16.272552 kubelet[2718]: I0813 02:11:16.272094 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 02:11:16.274277 kubelet[2718]: I0813 02:11:16.274250 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 02:11:16.295235 kubelet[2718]: I0813 02:11:16.294916 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 02:11:16.295235 kubelet[2718]: I0813 02:11:16.295101 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-7c47cf6bcb-c9c87","calico-system/calico-typha-67c8447dcf-wsn77","kube-system/coredns-668d6bf9bc-p5qmw","kube-system/coredns-668d6bf9bc-pw6gg","calico-system/calico-node-cdfxj","kube-system/kube-controller-manager-172-236-122-171","kube-system/kube-proxy-s4bl4","calico-system/csi-node-driver-r6mhv","kube-system/kube-apiserver-172-236-122-171","kube-system/kube-scheduler-172-236-122-171"] Aug 13 02:11:16.295235 kubelet[2718]: E0813 02:11:16.295130 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:11:16.295235 kubelet[2718]: E0813 02:11:16.295143 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-67c8447dcf-wsn77" Aug 13 02:11:16.295235 kubelet[2718]: E0813 02:11:16.295151 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:11:16.295235 kubelet[2718]: E0813 02:11:16.295159 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:11:16.295235 kubelet[2718]: E0813 02:11:16.295168 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-cdfxj" Aug 13 02:11:16.295235 kubelet[2718]: E0813 02:11:16.295177 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-122-171" Aug 13 02:11:16.295235 kubelet[2718]: E0813 02:11:16.295185 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-s4bl4" Aug 13 02:11:16.295235 kubelet[2718]: E0813 02:11:16.295194 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:11:16.295235 kubelet[2718]: E0813 02:11:16.295202 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-122-171" Aug 13 02:11:16.295235 kubelet[2718]: E0813 02:11:16.295210 2718 eviction_manager.go:609] 
"Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-122-171" Aug 13 02:11:16.295235 kubelet[2718]: I0813 02:11:16.295219 2718 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 02:11:17.635345 systemd[1]: Started sshd@43-172.236.122.171:22-147.75.109.163:58078.service - OpenSSH per-connection server daemon (147.75.109.163:58078). Aug 13 02:11:17.967693 sshd[6711]: Accepted publickey for core from 147.75.109.163 port 58078 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:11:17.968871 sshd-session[6711]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:11:17.974212 systemd-logind[1527]: New session 39 of user core. Aug 13 02:11:17.978810 systemd[1]: Started session-39.scope - Session 39 of User core. Aug 13 02:11:18.277620 sshd[6715]: Connection closed by 147.75.109.163 port 58078 Aug 13 02:11:18.280399 sshd-session[6711]: pam_unix(sshd:session): session closed for user core Aug 13 02:11:18.284800 systemd[1]: sshd@43-172.236.122.171:22-147.75.109.163:58078.service: Deactivated successfully. Aug 13 02:11:18.287441 systemd[1]: session-39.scope: Deactivated successfully. Aug 13 02:11:18.289043 systemd-logind[1527]: Session 39 logged out. Waiting for processes to exit. Aug 13 02:11:18.292088 systemd-logind[1527]: Removed session 39. Aug 13 02:11:23.342421 systemd[1]: Started sshd@44-172.236.122.171:22-147.75.109.163:57792.service - OpenSSH per-connection server daemon (147.75.109.163:57792). Aug 13 02:11:23.682429 sshd[6728]: Accepted publickey for core from 147.75.109.163 port 57792 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:11:23.684617 sshd-session[6728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:11:23.691573 systemd-logind[1527]: New session 40 of user core. Aug 13 02:11:23.701791 systemd[1]: Started session-40.scope - Session 40 of User core. Aug 13 02:11:23.922790 update_engine[1529]: I20250813 02:11:23.922631 1529 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 02:11:23.923194 update_engine[1529]: I20250813 02:11:23.922987 1529 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 02:11:23.923297 update_engine[1529]: I20250813 02:11:23.923259 1529 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Aug 13 02:11:23.924048 update_engine[1529]: E20250813 02:11:23.924026 1529 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 02:11:23.924177 update_engine[1529]: I20250813 02:11:23.924157 1529 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Aug 13 02:11:24.004510 sshd[6730]: Connection closed by 147.75.109.163 port 57792 Aug 13 02:11:24.006042 sshd-session[6728]: pam_unix(sshd:session): session closed for user core Aug 13 02:11:24.009536 systemd-logind[1527]: Session 40 logged out. Waiting for processes to exit. Aug 13 02:11:24.010551 systemd[1]: sshd@44-172.236.122.171:22-147.75.109.163:57792.service: Deactivated successfully. Aug 13 02:11:24.013296 systemd[1]: session-40.scope: Deactivated successfully. Aug 13 02:11:24.018644 systemd-logind[1527]: Removed session 40. 
Aug 13 02:11:26.314300 kubelet[2718]: I0813 02:11:26.314264 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 02:11:26.314900 kubelet[2718]: I0813 02:11:26.314320 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 02:11:26.316874 kubelet[2718]: I0813 02:11:26.316846 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 02:11:26.334211 kubelet[2718]: I0813 02:11:26.334147 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 02:11:26.334479 kubelet[2718]: I0813 02:11:26.334440 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-7c47cf6bcb-c9c87","calico-system/calico-typha-67c8447dcf-wsn77","kube-system/coredns-668d6bf9bc-p5qmw","kube-system/coredns-668d6bf9bc-pw6gg","calico-system/calico-node-cdfxj","kube-system/kube-controller-manager-172-236-122-171","kube-system/kube-proxy-s4bl4","calico-system/csi-node-driver-r6mhv","kube-system/kube-apiserver-172-236-122-171","kube-system/kube-scheduler-172-236-122-171"] Aug 13 02:11:26.334717 kubelet[2718]: E0813 02:11:26.334660 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:11:26.334717 kubelet[2718]: E0813 02:11:26.334679 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-67c8447dcf-wsn77" Aug 13 02:11:26.334887 kubelet[2718]: E0813 02:11:26.334690 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:11:26.334887 kubelet[2718]: E0813 02:11:26.334821 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:11:26.334887 kubelet[2718]: E0813 02:11:26.334831 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-cdfxj" Aug 13 02:11:26.334887 kubelet[2718]: E0813 02:11:26.334839 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-122-171" Aug 13 02:11:26.334887 kubelet[2718]: E0813 02:11:26.334847 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-s4bl4" Aug 13 02:11:26.334887 kubelet[2718]: E0813 02:11:26.334858 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:11:26.335161 kubelet[2718]: E0813 02:11:26.334979 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-122-171" Aug 13 02:11:26.335161 kubelet[2718]: E0813 02:11:26.334992 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-122-171" Aug 13 02:11:26.335161 kubelet[2718]: I0813 02:11:26.335001 2718 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 02:11:27.938060 kubelet[2718]: E0813 02:11:27.937846 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed 
to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device\"" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" podUID="8d84c12b-cfd9-49af-bb2e-a10173126a4c" Aug 13 02:11:28.935334 kubelet[2718]: E0813 02:11:28.935286 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:11:29.064079 systemd[1]: Started sshd@45-172.236.122.171:22-147.75.109.163:42246.service - OpenSSH per-connection server daemon (147.75.109.163:42246). Aug 13 02:11:29.398163 sshd[6741]: Accepted publickey for core from 147.75.109.163 port 42246 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:11:29.400002 sshd-session[6741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:11:29.404417 systemd-logind[1527]: New session 41 of user core. Aug 13 02:11:29.410741 systemd[1]: Started session-41.scope - Session 41 of User core. Aug 13 02:11:29.711430 sshd[6743]: Connection closed by 147.75.109.163 port 42246 Aug 13 02:11:29.712278 sshd-session[6741]: pam_unix(sshd:session): session closed for user core Aug 13 02:11:29.716733 systemd[1]: sshd@45-172.236.122.171:22-147.75.109.163:42246.service: Deactivated successfully. Aug 13 02:11:29.718759 systemd[1]: session-41.scope: Deactivated successfully. Aug 13 02:11:29.719687 systemd-logind[1527]: Session 41 logged out. Waiting for processes to exit. Aug 13 02:11:29.721472 systemd-logind[1527]: Removed session 41. Aug 13 02:11:33.927629 update_engine[1529]: I20250813 02:11:33.926661 1529 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 02:11:33.927629 update_engine[1529]: I20250813 02:11:33.927103 1529 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 02:11:33.927629 update_engine[1529]: I20250813 02:11:33.927379 1529 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Aug 13 02:11:33.928404 update_engine[1529]: E20250813 02:11:33.928117 1529 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 02:11:33.928404 update_engine[1529]: I20250813 02:11:33.928158 1529 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Aug 13 02:11:33.928404 update_engine[1529]: I20250813 02:11:33.928166 1529 omaha_request_action.cc:617] Omaha request response: Aug 13 02:11:33.928404 update_engine[1529]: E20250813 02:11:33.928246 1529 omaha_request_action.cc:636] Omaha request network transfer failed. Aug 13 02:11:33.928404 update_engine[1529]: I20250813 02:11:33.928275 1529 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Aug 13 02:11:33.928404 update_engine[1529]: I20250813 02:11:33.928282 1529 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Aug 13 02:11:33.928404 update_engine[1529]: I20250813 02:11:33.928287 1529 update_attempter.cc:306] Processing Done. Aug 13 02:11:33.928404 update_engine[1529]: E20250813 02:11:33.928300 1529 update_attempter.cc:619] Update failed. 
Aug 13 02:11:33.928404 update_engine[1529]: I20250813 02:11:33.928306 1529 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Aug 13 02:11:33.928404 update_engine[1529]: I20250813 02:11:33.928311 1529 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Aug 13 02:11:33.928404 update_engine[1529]: I20250813 02:11:33.928318 1529 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Aug 13 02:11:33.928990 update_engine[1529]: I20250813 02:11:33.928797 1529 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Aug 13 02:11:33.928990 update_engine[1529]: I20250813 02:11:33.928825 1529 omaha_request_action.cc:271] Posting an Omaha request to disabled Aug 13 02:11:33.928990 update_engine[1529]: I20250813 02:11:33.928831 1529 omaha_request_action.cc:272] Request: Aug 13 02:11:33.928990 update_engine[1529]: Aug 13 02:11:33.928990 update_engine[1529]: Aug 13 02:11:33.928990 update_engine[1529]: Aug 13 02:11:33.928990 update_engine[1529]: Aug 13 02:11:33.928990 update_engine[1529]: Aug 13 02:11:33.928990 update_engine[1529]: Aug 13 02:11:33.928990 update_engine[1529]: I20250813 02:11:33.928838 1529 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 02:11:33.929574 locksmithd[1571]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Aug 13 02:11:33.930334 update_engine[1529]: I20250813 02:11:33.929100 1529 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 02:11:33.930334 update_engine[1529]: I20250813 02:11:33.929277 1529 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Aug 13 02:11:33.930334 update_engine[1529]: E20250813 02:11:33.929991 1529 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 02:11:33.930334 update_engine[1529]: I20250813 02:11:33.930140 1529 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Aug 13 02:11:33.930334 update_engine[1529]: I20250813 02:11:33.930147 1529 omaha_request_action.cc:617] Omaha request response: Aug 13 02:11:33.930334 update_engine[1529]: I20250813 02:11:33.930153 1529 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Aug 13 02:11:33.930910 update_engine[1529]: I20250813 02:11:33.930158 1529 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Aug 13 02:11:33.930910 update_engine[1529]: I20250813 02:11:33.930557 1529 update_attempter.cc:306] Processing Done. Aug 13 02:11:33.930910 update_engine[1529]: I20250813 02:11:33.930568 1529 update_attempter.cc:310] Error event sent. Aug 13 02:11:33.930910 update_engine[1529]: I20250813 02:11:33.930577 1529 update_check_scheduler.cc:74] Next update check in 47m36s Aug 13 02:11:33.931080 locksmithd[1571]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Aug 13 02:11:34.778727 systemd[1]: Started sshd@46-172.236.122.171:22-147.75.109.163:42248.service - OpenSSH per-connection server daemon (147.75.109.163:42248). Aug 13 02:11:35.123242 sshd[6755]: Accepted publickey for core from 147.75.109.163 port 42248 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:11:35.124646 sshd-session[6755]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:11:35.130407 systemd-logind[1527]: New session 42 of user core. 
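The update_engine block just above shows an Omaha update check being posted to a host literally named "disabled", failing DNS resolution ("Could not resolve host: disabled"), converting the failure to kActionCodeOmahaErrorInHTTPResponse, and rescheduling the next check for 47m36s later. This is the pattern produced when the update server has been pointed at a placeholder rather than a real endpoint. A hedged Go sketch of how one might confirm that from an update.conf-style file; the /etc/flatcar/update.conf path and the SERVER= key are assumptions about this node's configuration, not something shown in the log:

// Sketch: report whether the update server has been pointed at the literal
// host "disabled", which would explain the resolver errors above.
package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	f, err := os.Open("/etc/flatcar/update.conf") // assumed location
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if v, ok := strings.CutPrefix(line, "SERVER="); ok && v == "disabled" {
			fmt.Println("update checks are pointed at the placeholder host \"disabled\"")
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}
}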
Aug 13 02:11:35.136751 systemd[1]: Started session-42.scope - Session 42 of User core. Aug 13 02:11:35.430038 sshd[6757]: Connection closed by 147.75.109.163 port 42248 Aug 13 02:11:35.430783 sshd-session[6755]: pam_unix(sshd:session): session closed for user core Aug 13 02:11:35.434801 systemd[1]: sshd@46-172.236.122.171:22-147.75.109.163:42248.service: Deactivated successfully. Aug 13 02:11:35.439197 systemd[1]: session-42.scope: Deactivated successfully. Aug 13 02:11:35.443068 systemd-logind[1527]: Session 42 logged out. Waiting for processes to exit. Aug 13 02:11:35.445476 systemd-logind[1527]: Removed session 42. Aug 13 02:11:35.895045 containerd[1542]: time="2025-08-13T02:11:35.894954809Z" level=warning msg="container event discarded" container=137a58b372d61626b210f0cba11b764d0abfef60ec202176f36b2812433ed26d type=CONTAINER_CREATED_EVENT Aug 13 02:11:35.907243 containerd[1542]: time="2025-08-13T02:11:35.907192994Z" level=warning msg="container event discarded" container=137a58b372d61626b210f0cba11b764d0abfef60ec202176f36b2812433ed26d type=CONTAINER_STARTED_EVENT Aug 13 02:11:35.937678 kubelet[2718]: E0813 02:11:35.937571 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:11:35.939812 containerd[1542]: time="2025-08-13T02:11:35.938228018Z" level=warning msg="container event discarded" container=a4c616ece71e63776f71028093692ce62128d02fc45f7b97f12f4f50f233a211 type=CONTAINER_CREATED_EVENT Aug 13 02:11:35.953970 containerd[1542]: time="2025-08-13T02:11:35.953941205Z" level=warning msg="container event discarded" container=557b5e0c662024645ca962a532047f30386677862fc00453dcdce47105b368d8 type=CONTAINER_CREATED_EVENT Aug 13 02:11:35.953970 containerd[1542]: time="2025-08-13T02:11:35.953966555Z" level=warning msg="container event discarded" container=557b5e0c662024645ca962a532047f30386677862fc00453dcdce47105b368d8 type=CONTAINER_STARTED_EVENT Aug 13 02:11:35.970172 containerd[1542]: time="2025-08-13T02:11:35.970138628Z" level=warning msg="container event discarded" container=e2205c91467123e706bf2d032faf1b746ba173fac74b989cb011b2ac1b42d4cb type=CONTAINER_CREATED_EVENT Aug 13 02:11:35.970172 containerd[1542]: time="2025-08-13T02:11:35.970156778Z" level=warning msg="container event discarded" container=e2205c91467123e706bf2d032faf1b746ba173fac74b989cb011b2ac1b42d4cb type=CONTAINER_STARTED_EVENT Aug 13 02:11:35.970172 containerd[1542]: time="2025-08-13T02:11:35.970165998Z" level=warning msg="container event discarded" container=8581aadafe0537fa57df985018d153d8340ff3b2b943983a26b2e2d6513291d0 type=CONTAINER_CREATED_EVENT Aug 13 02:11:35.990608 containerd[1542]: time="2025-08-13T02:11:35.990545130Z" level=warning msg="container event discarded" container=078ce7f1dd0187be5124efdf74c604c53098e028cf2ed857a52c6580a3ec7adb type=CONTAINER_CREATED_EVENT Aug 13 02:11:36.053707 containerd[1542]: time="2025-08-13T02:11:36.053624634Z" level=warning msg="container event discarded" container=a4c616ece71e63776f71028093692ce62128d02fc45f7b97f12f4f50f233a211 type=CONTAINER_STARTED_EVENT Aug 13 02:11:36.085857 containerd[1542]: time="2025-08-13T02:11:36.085820233Z" level=warning msg="container event discarded" container=8581aadafe0537fa57df985018d153d8340ff3b2b943983a26b2e2d6513291d0 type=CONTAINER_STARTED_EVENT Aug 13 02:11:36.147917 containerd[1542]: time="2025-08-13T02:11:36.147813773Z" level=warning msg="container event discarded" 
container=078ce7f1dd0187be5124efdf74c604c53098e028cf2ed857a52c6580a3ec7adb type=CONTAINER_STARTED_EVENT Aug 13 02:11:36.354798 kubelet[2718]: I0813 02:11:36.354770 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 02:11:36.354798 kubelet[2718]: I0813 02:11:36.354804 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 02:11:36.358048 kubelet[2718]: I0813 02:11:36.358007 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 02:11:36.382325 kubelet[2718]: I0813 02:11:36.382293 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 02:11:36.382785 kubelet[2718]: I0813 02:11:36.382488 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-7c47cf6bcb-c9c87","calico-system/calico-typha-67c8447dcf-wsn77","kube-system/coredns-668d6bf9bc-pw6gg","kube-system/coredns-668d6bf9bc-p5qmw","calico-system/calico-node-cdfxj","kube-system/kube-controller-manager-172-236-122-171","kube-system/kube-proxy-s4bl4","calico-system/csi-node-driver-r6mhv","kube-system/kube-apiserver-172-236-122-171","kube-system/kube-scheduler-172-236-122-171"] Aug 13 02:11:36.382785 kubelet[2718]: E0813 02:11:36.382540 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:11:36.382785 kubelet[2718]: E0813 02:11:36.382553 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-67c8447dcf-wsn77" Aug 13 02:11:36.382785 kubelet[2718]: E0813 02:11:36.382566 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:11:36.382785 kubelet[2718]: E0813 02:11:36.382609 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:11:36.382785 kubelet[2718]: E0813 02:11:36.382619 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-cdfxj" Aug 13 02:11:36.382785 kubelet[2718]: E0813 02:11:36.382627 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-122-171" Aug 13 02:11:36.382785 kubelet[2718]: E0813 02:11:36.382634 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-s4bl4" Aug 13 02:11:36.382785 kubelet[2718]: E0813 02:11:36.382644 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:11:36.382785 kubelet[2718]: E0813 02:11:36.382652 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-122-171" Aug 13 02:11:36.382785 kubelet[2718]: E0813 02:11:36.382660 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-122-171" Aug 13 02:11:36.382785 kubelet[2718]: I0813 02:11:36.382689 2718 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 02:11:37.937381 kubelet[2718]: E0813 02:11:37.936302 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:11:38.935353 kubelet[2718]: 
E0813 02:11:38.935321 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:11:39.944979 kubelet[2718]: I0813 02:11:39.944888 2718 image_gc_manager.go:383] "Disk usage on image filesystem is over the high threshold, trying to free bytes down to the low threshold" usage=100 highThreshold=85 amountToFree=411531673 lowThreshold=80 Aug 13 02:11:39.944979 kubelet[2718]: E0813 02:11:39.944963 2718 kubelet.go:1551] "Image garbage collection failed multiple times in a row" err="Failed to garbage collect required amount of images. Attempted to free 411531673 bytes, but only found 0 bytes eligible to free." Aug 13 02:11:40.492102 systemd[1]: Started sshd@47-172.236.122.171:22-147.75.109.163:42130.service - OpenSSH per-connection server daemon (147.75.109.163:42130). Aug 13 02:11:40.828879 sshd[6777]: Accepted publickey for core from 147.75.109.163 port 42130 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:11:40.830391 sshd-session[6777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:11:40.837546 systemd-logind[1527]: New session 43 of user core. Aug 13 02:11:40.841718 systemd[1]: Started session-43.scope - Session 43 of User core. Aug 13 02:11:40.937718 containerd[1542]: time="2025-08-13T02:11:40.937662534Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 02:11:41.140958 sshd[6779]: Connection closed by 147.75.109.163 port 42130 Aug 13 02:11:41.141754 sshd-session[6777]: pam_unix(sshd:session): session closed for user core Aug 13 02:11:41.146145 systemd[1]: sshd@47-172.236.122.171:22-147.75.109.163:42130.service: Deactivated successfully. Aug 13 02:11:41.148702 systemd[1]: session-43.scope: Deactivated successfully. Aug 13 02:11:41.149985 systemd-logind[1527]: Session 43 logged out. Waiting for processes to exit. Aug 13 02:11:41.152206 systemd-logind[1527]: Removed session 43. 
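The image_gc_manager entries above report usage=100, highThreshold=85, lowThreshold=80 and amountToFree=411531673, and garbage collection then fails because nothing is eligible to free. Assuming the kubelet's usual formula (free enough to bring usage down to the low threshold, i.e. amountToFree = usage - lowThreshold/100 * capacity), those numbers imply an image filesystem of only about 2 GB that is completely full, which lines up with the "no space left on device" pull failures that follow. A small Go sketch of that back-of-the-envelope calculation; the formula is an assumption based on the kubelet's documented GC behaviour, not something stated in the log:

package main

import "fmt"

func main() {
	// Values reported by image_gc_manager.go:383 in the log above.
	const (
		usagePercent = 100.0     // "usage=100"
		lowThreshold = 80.0      // "lowThreshold=80"
		amountToFree = 411531673 // bytes the kubelet tried (and failed) to free
	)

	// Assumed formula: amountToFree = usage - (lowThreshold/100)*capacity,
	// which at 100% usage reduces to capacity*(1 - lowThreshold/100).
	capacity := float64(amountToFree) / (usagePercent/100 - lowThreshold/100)

	fmt.Printf("implied image filesystem capacity: %.0f bytes (~%.2f GiB)\n",
		capacity, capacity/(1<<30))
	// Prints roughly 2.06e9 bytes, i.e. about 1.92 GiB, all of it in use.
}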
Aug 13 02:11:41.631896 containerd[1542]: time="2025-08-13T02:11:41.631825175Z" level=error msg="failed to cleanup \"extract-492643259-7Q1U sha256:8c887db5e1c1509bbe47d7287572f140b60a8c0adc0202d6183f3ae0c5f0b532\"" error="write /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db: no space left on device" Aug 13 02:11:41.633737 containerd[1542]: time="2025-08-13T02:11:41.633708615Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=22024426" Aug 13 02:11:41.634661 containerd[1542]: time="2025-08-13T02:11:41.634614551Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" Aug 13 02:11:41.634983 kubelet[2718]: E0813 02:11:41.634895 2718 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 02:11:41.634983 kubelet[2718]: E0813 02:11:41.634985 2718 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 02:11:41.635643 kubelet[2718]: E0813 02:11:41.635413 2718 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5bmxt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7c47cf6bcb-c9c87_calico-system(8d84c12b-cfd9-49af-bb2e-a10173126a4c): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" logger="UnhandledError" Aug 13 02:11:41.636656 kubelet[2718]: E0813 02:11:41.636545 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device\"" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" podUID="8d84c12b-cfd9-49af-bb2e-a10173126a4c" Aug 13 02:11:43.936480 kubelet[2718]: E0813 02:11:43.935834 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:11:45.640062 containerd[1542]: time="2025-08-13T02:11:45.639999685Z" level=info msg="TaskExit event in podsandbox handler container_id:\"86c92af398b1911d4ecaf673a7372fd9f418693eb51c8702fd8bfe1f17c11143\" id:\"9c3eb285270a2452e47469f4b65c6e2a148fea13d59c441855374a9d0996c177\" pid:6803 exited_at:{seconds:1755051105 nanos:639636757}" Aug 13 02:11:46.200401 systemd[1]: Started sshd@48-172.236.122.171:22-147.75.109.163:42132.service - OpenSSH per-connection server daemon (147.75.109.163:42132). 
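The failed pull above dies with ENOSPC while writing under /var/lib/containerd/io.containerd.content.v1.content/ingest/, and even the cleanup of the partial layer fails for the same reason, so the pod stays cycling through ErrImagePull and ImagePullBackOff. As an illustration only (the path is taken from the log; the check itself is not part of anything shown here), a Linux-only Go sketch that reports how full the filesystem backing the containerd content store is:

package main

import (
	"fmt"
	"log"
	"syscall"
)

func main() {
	const path = "/var/lib/containerd" // filesystem that ran out of space in the log

	var fs syscall.Statfs_t
	if err := syscall.Statfs(path, &fs); err != nil {
		log.Fatalf("statfs %s: %v", path, err)
	}

	total := fs.Blocks * uint64(fs.Bsize)
	avail := fs.Bavail * uint64(fs.Bsize)

	fmt.Printf("%s: %d of %d bytes available (%.1f%% used)\n",
		path, avail, total, 100*float64(total-avail)/float64(total))
}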
Aug 13 02:11:46.402288 kubelet[2718]: I0813 02:11:46.402260 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 02:11:46.402288 kubelet[2718]: I0813 02:11:46.402291 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 02:11:46.403995 kubelet[2718]: I0813 02:11:46.403937 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 02:11:46.424829 kubelet[2718]: I0813 02:11:46.424758 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 02:11:46.424942 kubelet[2718]: I0813 02:11:46.424894 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-7c47cf6bcb-c9c87","calico-system/calico-typha-67c8447dcf-wsn77","kube-system/coredns-668d6bf9bc-p5qmw","kube-system/coredns-668d6bf9bc-pw6gg","calico-system/calico-node-cdfxj","kube-system/kube-controller-manager-172-236-122-171","kube-system/kube-proxy-s4bl4","calico-system/csi-node-driver-r6mhv","kube-system/kube-apiserver-172-236-122-171","kube-system/kube-scheduler-172-236-122-171"] Aug 13 02:11:46.424942 kubelet[2718]: E0813 02:11:46.424922 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:11:46.424942 kubelet[2718]: E0813 02:11:46.424933 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-67c8447dcf-wsn77" Aug 13 02:11:46.424942 kubelet[2718]: E0813 02:11:46.424942 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:11:46.425077 kubelet[2718]: E0813 02:11:46.424949 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:11:46.425077 kubelet[2718]: E0813 02:11:46.424958 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-cdfxj" Aug 13 02:11:46.425077 kubelet[2718]: E0813 02:11:46.424965 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-122-171" Aug 13 02:11:46.425077 kubelet[2718]: E0813 02:11:46.424972 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-s4bl4" Aug 13 02:11:46.425077 kubelet[2718]: E0813 02:11:46.424983 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:11:46.425077 kubelet[2718]: E0813 02:11:46.424990 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-122-171" Aug 13 02:11:46.425077 kubelet[2718]: E0813 02:11:46.424997 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-122-171" Aug 13 02:11:46.425077 kubelet[2718]: I0813 02:11:46.425005 2718 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 02:11:46.538358 sshd[6815]: Accepted publickey for core from 147.75.109.163 port 42132 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:11:46.540298 sshd-session[6815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:11:46.545705 systemd-logind[1527]: New session 44 of user core. 
Aug 13 02:11:46.551796 systemd[1]: Started session-44.scope - Session 44 of User core. Aug 13 02:11:46.861216 sshd[6817]: Connection closed by 147.75.109.163 port 42132 Aug 13 02:11:46.862615 sshd-session[6815]: pam_unix(sshd:session): session closed for user core Aug 13 02:11:46.866630 systemd[1]: sshd@48-172.236.122.171:22-147.75.109.163:42132.service: Deactivated successfully. Aug 13 02:11:46.868492 systemd[1]: session-44.scope: Deactivated successfully. Aug 13 02:11:46.870190 systemd-logind[1527]: Session 44 logged out. Waiting for processes to exit. Aug 13 02:11:46.872147 systemd-logind[1527]: Removed session 44. Aug 13 02:11:47.566780 containerd[1542]: time="2025-08-13T02:11:47.566669805Z" level=warning msg="container event discarded" container=9bfac0bddf29d19494b26e221103c4cb7df9e726dfb25491fdb4f1eb635bfc47 type=CONTAINER_CREATED_EVENT Aug 13 02:11:47.566780 containerd[1542]: time="2025-08-13T02:11:47.566748935Z" level=warning msg="container event discarded" container=9bfac0bddf29d19494b26e221103c4cb7df9e726dfb25491fdb4f1eb635bfc47 type=CONTAINER_STARTED_EVENT Aug 13 02:11:47.585332 containerd[1542]: time="2025-08-13T02:11:47.585281198Z" level=warning msg="container event discarded" container=bf251838e1c2083246e6f389d829ca721cce59cb3b5daa5a784b931c75894f79 type=CONTAINER_CREATED_EVENT Aug 13 02:11:47.645422 containerd[1542]: time="2025-08-13T02:11:47.645336525Z" level=warning msg="container event discarded" container=bf251838e1c2083246e6f389d829ca721cce59cb3b5daa5a784b931c75894f79 type=CONTAINER_STARTED_EVENT Aug 13 02:11:47.822677 containerd[1542]: time="2025-08-13T02:11:47.822513630Z" level=warning msg="container event discarded" container=c8d4440f154b4962b783d6ce716a8d0fb14accf61d4280ad1fa7e9d8af99d14d type=CONTAINER_CREATED_EVENT Aug 13 02:11:47.822677 containerd[1542]: time="2025-08-13T02:11:47.822547470Z" level=warning msg="container event discarded" container=c8d4440f154b4962b783d6ce716a8d0fb14accf61d4280ad1fa7e9d8af99d14d type=CONTAINER_STARTED_EVENT Aug 13 02:11:48.835269 systemd[1]: Started sshd@49-172.236.122.171:22-165.154.201.122:59700.service - OpenSSH per-connection server daemon (165.154.201.122:59700). Aug 13 02:11:50.367364 sshd[6835]: Received disconnect from 165.154.201.122 port 59700:11: Bye Bye [preauth] Aug 13 02:11:50.367364 sshd[6835]: Disconnected from authenticating user root 165.154.201.122 port 59700 [preauth] Aug 13 02:11:50.369533 systemd[1]: sshd@49-172.236.122.171:22-165.154.201.122:59700.service: Deactivated successfully. Aug 13 02:11:50.439771 containerd[1542]: time="2025-08-13T02:11:50.439716816Z" level=warning msg="container event discarded" container=889a71525439d331d4e5748d987e7898240f32388c37ef85698875c3cec4282a type=CONTAINER_CREATED_EVENT Aug 13 02:11:50.495976 containerd[1542]: time="2025-08-13T02:11:50.495937864Z" level=warning msg="container event discarded" container=889a71525439d331d4e5748d987e7898240f32388c37ef85698875c3cec4282a type=CONTAINER_STARTED_EVENT Aug 13 02:11:51.925849 systemd[1]: Started sshd@50-172.236.122.171:22-147.75.109.163:46388.service - OpenSSH per-connection server daemon (147.75.109.163:46388). Aug 13 02:11:52.098412 systemd[1]: Started sshd@51-172.236.122.171:22-14.103.122.187:35112.service - OpenSSH per-connection server daemon (14.103.122.187:35112). 
Aug 13 02:11:52.254820 sshd[6841]: Accepted publickey for core from 147.75.109.163 port 46388 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:11:52.256512 sshd-session[6841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:11:52.260673 systemd-logind[1527]: New session 45 of user core. Aug 13 02:11:52.267711 systemd[1]: Started session-45.scope - Session 45 of User core. Aug 13 02:11:52.548038 sshd[6846]: Connection closed by 147.75.109.163 port 46388 Aug 13 02:11:52.549693 sshd-session[6841]: pam_unix(sshd:session): session closed for user core Aug 13 02:11:52.553154 systemd[1]: sshd@50-172.236.122.171:22-147.75.109.163:46388.service: Deactivated successfully. Aug 13 02:11:52.555355 systemd[1]: session-45.scope: Deactivated successfully. Aug 13 02:11:52.556741 systemd-logind[1527]: Session 45 logged out. Waiting for processes to exit. Aug 13 02:11:52.558790 systemd-logind[1527]: Removed session 45. Aug 13 02:11:53.397235 sshd[6844]: Received disconnect from 14.103.122.187 port 35112:11: Bye Bye [preauth] Aug 13 02:11:53.397235 sshd[6844]: Disconnected from authenticating user root 14.103.122.187 port 35112 [preauth] Aug 13 02:11:53.399789 systemd[1]: sshd@51-172.236.122.171:22-14.103.122.187:35112.service: Deactivated successfully. Aug 13 02:11:54.936212 kubelet[2718]: E0813 02:11:54.936178 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:11:55.938346 kubelet[2718]: E0813 02:11:55.938287 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device\"" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" podUID="8d84c12b-cfd9-49af-bb2e-a10173126a4c" Aug 13 02:11:56.465134 kubelet[2718]: I0813 02:11:56.465099 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 02:11:56.465545 kubelet[2718]: I0813 02:11:56.465271 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 02:11:56.467678 kubelet[2718]: I0813 02:11:56.467664 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 02:11:56.492106 kubelet[2718]: I0813 02:11:56.492079 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 02:11:56.492310 kubelet[2718]: I0813 02:11:56.492293 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-7c47cf6bcb-c9c87","calico-system/calico-typha-67c8447dcf-wsn77","kube-system/coredns-668d6bf9bc-p5qmw","kube-system/coredns-668d6bf9bc-pw6gg","calico-system/calico-node-cdfxj","kube-system/kube-controller-manager-172-236-122-171","kube-system/kube-proxy-s4bl4","calico-system/csi-node-driver-r6mhv","kube-system/kube-apiserver-172-236-122-171","kube-system/kube-scheduler-172-236-122-171"] Aug 13 02:11:56.492410 kubelet[2718]: E0813 02:11:56.492398 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" 
pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:11:56.492549 kubelet[2718]: E0813 02:11:56.492457 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-67c8447dcf-wsn77" Aug 13 02:11:56.492549 kubelet[2718]: E0813 02:11:56.492469 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:11:56.492549 kubelet[2718]: E0813 02:11:56.492479 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:11:56.492549 kubelet[2718]: E0813 02:11:56.492488 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-cdfxj" Aug 13 02:11:56.492549 kubelet[2718]: E0813 02:11:56.492494 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-122-171" Aug 13 02:11:56.492549 kubelet[2718]: E0813 02:11:56.492502 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-s4bl4" Aug 13 02:11:56.492549 kubelet[2718]: E0813 02:11:56.492514 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:11:56.492549 kubelet[2718]: E0813 02:11:56.492523 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-122-171" Aug 13 02:11:56.492549 kubelet[2718]: E0813 02:11:56.492531 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-122-171" Aug 13 02:11:56.492549 kubelet[2718]: I0813 02:11:56.492539 2718 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 02:11:57.616405 systemd[1]: Started sshd@52-172.236.122.171:22-147.75.109.163:46390.service - OpenSSH per-connection server daemon (147.75.109.163:46390). Aug 13 02:11:57.957339 sshd[6874]: Accepted publickey for core from 147.75.109.163 port 46390 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:11:57.958611 sshd-session[6874]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:11:57.963458 systemd-logind[1527]: New session 46 of user core. Aug 13 02:11:57.970710 systemd[1]: Started session-46.scope - Session 46 of User core. Aug 13 02:11:58.271664 sshd[6876]: Connection closed by 147.75.109.163 port 46390 Aug 13 02:11:58.273089 sshd-session[6874]: pam_unix(sshd:session): session closed for user core Aug 13 02:11:58.276457 systemd-logind[1527]: Session 46 logged out. Waiting for processes to exit. Aug 13 02:11:58.277318 systemd[1]: sshd@52-172.236.122.171:22-147.75.109.163:46390.service: Deactivated successfully. Aug 13 02:11:58.281172 systemd[1]: session-46.scope: Deactivated successfully. Aug 13 02:11:58.284056 systemd-logind[1527]: Removed session 46. 
Aug 13 02:12:00.308000 containerd[1542]: time="2025-08-13T02:12:00.307935811Z" level=warning msg="container event discarded" container=d455610107366cbcc93bc60280d9920d0c35dfd542d4aada989959d014fae7b4 type=CONTAINER_CREATED_EVENT Aug 13 02:12:00.308000 containerd[1542]: time="2025-08-13T02:12:00.307976801Z" level=warning msg="container event discarded" container=d455610107366cbcc93bc60280d9920d0c35dfd542d4aada989959d014fae7b4 type=CONTAINER_STARTED_EVENT Aug 13 02:12:00.570744 containerd[1542]: time="2025-08-13T02:12:00.570469076Z" level=warning msg="container event discarded" container=b8fbca66d16757f0d8830ae86752b685e2ed86c07fd8628668ed375bbac8cad0 type=CONTAINER_CREATED_EVENT Aug 13 02:12:00.570744 containerd[1542]: time="2025-08-13T02:12:00.570515706Z" level=warning msg="container event discarded" container=b8fbca66d16757f0d8830ae86752b685e2ed86c07fd8628668ed375bbac8cad0 type=CONTAINER_STARTED_EVENT Aug 13 02:12:01.543793 containerd[1542]: time="2025-08-13T02:12:01.543724424Z" level=warning msg="container event discarded" container=8b02ab9bb15f144dcde9953640ebf41a5002c97efc812194a2fe4a3df71ea980 type=CONTAINER_CREATED_EVENT Aug 13 02:12:01.630925 containerd[1542]: time="2025-08-13T02:12:01.630860748Z" level=warning msg="container event discarded" container=8b02ab9bb15f144dcde9953640ebf41a5002c97efc812194a2fe4a3df71ea980 type=CONTAINER_STARTED_EVENT Aug 13 02:12:02.156605 containerd[1542]: time="2025-08-13T02:12:02.156522680Z" level=warning msg="container event discarded" container=e45ceaa457815629371fe359688809fb7cabeb078eb1372ffd2bd8e63d683f75 type=CONTAINER_CREATED_EVENT Aug 13 02:12:02.236707 containerd[1542]: time="2025-08-13T02:12:02.236651631Z" level=warning msg="container event discarded" container=e45ceaa457815629371fe359688809fb7cabeb078eb1372ffd2bd8e63d683f75 type=CONTAINER_STARTED_EVENT Aug 13 02:12:02.322609 containerd[1542]: time="2025-08-13T02:12:02.322542182Z" level=warning msg="container event discarded" container=e45ceaa457815629371fe359688809fb7cabeb078eb1372ffd2bd8e63d683f75 type=CONTAINER_STOPPED_EVENT Aug 13 02:12:03.334845 systemd[1]: Started sshd@53-172.236.122.171:22-147.75.109.163:57540.service - OpenSSH per-connection server daemon (147.75.109.163:57540). Aug 13 02:12:03.678430 sshd[6888]: Accepted publickey for core from 147.75.109.163 port 57540 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:12:03.680545 sshd-session[6888]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:12:03.686643 systemd-logind[1527]: New session 47 of user core. Aug 13 02:12:03.692713 systemd[1]: Started session-47.scope - Session 47 of User core. Aug 13 02:12:03.993540 sshd[6890]: Connection closed by 147.75.109.163 port 57540 Aug 13 02:12:03.993928 sshd-session[6888]: pam_unix(sshd:session): session closed for user core Aug 13 02:12:04.000515 systemd[1]: sshd@53-172.236.122.171:22-147.75.109.163:57540.service: Deactivated successfully. Aug 13 02:12:04.003566 systemd[1]: session-47.scope: Deactivated successfully. Aug 13 02:12:04.007965 systemd-logind[1527]: Session 47 logged out. Waiting for processes to exit. Aug 13 02:12:04.010431 systemd-logind[1527]: Removed session 47. 
Aug 13 02:12:04.994993 containerd[1542]: time="2025-08-13T02:12:04.994820529Z" level=warning msg="container event discarded" container=0dc0d0a49b528a825e6bbdc2e02aa7bfff9f45bad017d991804d8a6486a852be type=CONTAINER_CREATED_EVENT Aug 13 02:12:05.059312 containerd[1542]: time="2025-08-13T02:12:05.059260051Z" level=warning msg="container event discarded" container=0dc0d0a49b528a825e6bbdc2e02aa7bfff9f45bad017d991804d8a6486a852be type=CONTAINER_STARTED_EVENT Aug 13 02:12:05.700727 containerd[1542]: time="2025-08-13T02:12:05.700665696Z" level=warning msg="container event discarded" container=0dc0d0a49b528a825e6bbdc2e02aa7bfff9f45bad017d991804d8a6486a852be type=CONTAINER_STOPPED_EVENT Aug 13 02:12:06.513214 kubelet[2718]: I0813 02:12:06.513183 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 02:12:06.513214 kubelet[2718]: I0813 02:12:06.513218 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 02:12:06.514570 kubelet[2718]: I0813 02:12:06.514548 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 02:12:06.526312 kubelet[2718]: I0813 02:12:06.526289 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 02:12:06.526493 kubelet[2718]: I0813 02:12:06.526464 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-7c47cf6bcb-c9c87","calico-system/calico-typha-67c8447dcf-wsn77","kube-system/coredns-668d6bf9bc-p5qmw","kube-system/coredns-668d6bf9bc-pw6gg","calico-system/calico-node-cdfxj","kube-system/kube-controller-manager-172-236-122-171","kube-system/kube-proxy-s4bl4","calico-system/csi-node-driver-r6mhv","kube-system/kube-apiserver-172-236-122-171","kube-system/kube-scheduler-172-236-122-171"] Aug 13 02:12:06.526608 kubelet[2718]: E0813 02:12:06.526581 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:12:06.526685 kubelet[2718]: E0813 02:12:06.526674 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-67c8447dcf-wsn77" Aug 13 02:12:06.526732 kubelet[2718]: E0813 02:12:06.526724 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:12:06.526780 kubelet[2718]: E0813 02:12:06.526771 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:12:06.526822 kubelet[2718]: E0813 02:12:06.526814 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-cdfxj" Aug 13 02:12:06.526862 kubelet[2718]: E0813 02:12:06.526854 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-122-171" Aug 13 02:12:06.526901 kubelet[2718]: E0813 02:12:06.526893 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-s4bl4" Aug 13 02:12:06.526952 kubelet[2718]: E0813 02:12:06.526943 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:12:06.526993 kubelet[2718]: E0813 02:12:06.526985 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-122-171" Aug 13 02:12:06.527033 kubelet[2718]: E0813 
02:12:06.527025 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-122-171" Aug 13 02:12:06.527072 kubelet[2718]: I0813 02:12:06.527065 2718 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 02:12:09.056524 systemd[1]: Started sshd@54-172.236.122.171:22-147.75.109.163:51850.service - OpenSSH per-connection server daemon (147.75.109.163:51850). Aug 13 02:12:09.393080 sshd[6902]: Accepted publickey for core from 147.75.109.163 port 51850 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:12:09.394526 sshd-session[6902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:12:09.399215 systemd-logind[1527]: New session 48 of user core. Aug 13 02:12:09.406747 systemd[1]: Started session-48.scope - Session 48 of User core. Aug 13 02:12:09.705948 sshd[6904]: Connection closed by 147.75.109.163 port 51850 Aug 13 02:12:09.706883 sshd-session[6902]: pam_unix(sshd:session): session closed for user core Aug 13 02:12:09.711397 systemd-logind[1527]: Session 48 logged out. Waiting for processes to exit. Aug 13 02:12:09.712422 systemd[1]: sshd@54-172.236.122.171:22-147.75.109.163:51850.service: Deactivated successfully. Aug 13 02:12:09.714317 systemd[1]: session-48.scope: Deactivated successfully. Aug 13 02:12:09.717070 systemd-logind[1527]: Removed session 48. Aug 13 02:12:09.943776 kubelet[2718]: E0813 02:12:09.943726 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device\"" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" podUID="8d84c12b-cfd9-49af-bb2e-a10173126a4c" Aug 13 02:12:14.770777 systemd[1]: Started sshd@55-172.236.122.171:22-147.75.109.163:51862.service - OpenSSH per-connection server daemon (147.75.109.163:51862). Aug 13 02:12:14.935655 kubelet[2718]: E0813 02:12:14.935628 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 02:12:15.109126 sshd[6916]: Accepted publickey for core from 147.75.109.163 port 51862 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:12:15.110489 sshd-session[6916]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:12:15.115082 systemd-logind[1527]: New session 49 of user core. Aug 13 02:12:15.124732 systemd[1]: Started session-49.scope - Session 49 of User core. Aug 13 02:12:15.411966 sshd[6918]: Connection closed by 147.75.109.163 port 51862 Aug 13 02:12:15.412196 sshd-session[6916]: pam_unix(sshd:session): session closed for user core Aug 13 02:12:15.416269 systemd[1]: sshd@55-172.236.122.171:22-147.75.109.163:51862.service: Deactivated successfully. Aug 13 02:12:15.418374 systemd[1]: session-49.scope: Deactivated successfully. Aug 13 02:12:15.419983 systemd-logind[1527]: Session 49 logged out. Waiting for processes to exit. Aug 13 02:12:15.421423 systemd-logind[1527]: Removed session 49. 
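The dns.go:153 "Nameserver limits exceeded" warnings that recur through this log (most recently a few entries above) mean the node's resolv.conf lists more nameservers than the kubelet will pass through to pods; the applied line keeps only 172.232.0.16, 172.232.0.21 and 172.232.0.13. A short Go sketch of the same cap, assuming the kubelet's conventional maximum of three nameservers (the limit itself is not printed in the log):

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

// maxDNSNameservers mirrors the kubelet's conventional cap of three
// nameservers (an assumption; the log only shows the truncated, applied list).
const maxDNSNameservers = 3

func main() {
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}

	if len(servers) > maxDNSNameservers {
		fmt.Printf("resolv.conf has %d nameservers; only the first %d would be applied: %v\n",
			len(servers), maxDNSNameservers, servers[:maxDNSNameservers])
	} else {
		fmt.Printf("resolv.conf nameservers: %v\n", servers)
	}
}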
Aug 13 02:12:15.657830 containerd[1542]: time="2025-08-13T02:12:15.657788478Z" level=info msg="TaskExit event in podsandbox handler container_id:\"86c92af398b1911d4ecaf673a7372fd9f418693eb51c8702fd8bfe1f17c11143\" id:\"c9be508fedb96c4a60debd67aa8a900b99e676c131f8c62944b8d149f0299b73\" pid:6943 exited_at:{seconds:1755051135 nanos:656288205}" Aug 13 02:12:16.555649 kubelet[2718]: I0813 02:12:16.555606 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 02:12:16.555649 kubelet[2718]: I0813 02:12:16.555642 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 02:12:16.557892 kubelet[2718]: I0813 02:12:16.557852 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 02:12:16.572052 kubelet[2718]: I0813 02:12:16.572010 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 02:12:16.572181 kubelet[2718]: I0813 02:12:16.572146 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-7c47cf6bcb-c9c87","calico-system/calico-typha-67c8447dcf-wsn77","kube-system/coredns-668d6bf9bc-p5qmw","kube-system/coredns-668d6bf9bc-pw6gg","calico-system/calico-node-cdfxj","kube-system/kube-controller-manager-172-236-122-171","kube-system/kube-proxy-s4bl4","calico-system/csi-node-driver-r6mhv","kube-system/kube-apiserver-172-236-122-171","kube-system/kube-scheduler-172-236-122-171"] Aug 13 02:12:16.572181 kubelet[2718]: E0813 02:12:16.572179 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" Aug 13 02:12:16.572302 kubelet[2718]: E0813 02:12:16.572192 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-67c8447dcf-wsn77" Aug 13 02:12:16.572302 kubelet[2718]: E0813 02:12:16.572201 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-p5qmw" Aug 13 02:12:16.572302 kubelet[2718]: E0813 02:12:16.572208 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-pw6gg" Aug 13 02:12:16.572302 kubelet[2718]: E0813 02:12:16.572218 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-cdfxj" Aug 13 02:12:16.572302 kubelet[2718]: E0813 02:12:16.572225 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-122-171" Aug 13 02:12:16.572302 kubelet[2718]: E0813 02:12:16.572232 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-s4bl4" Aug 13 02:12:16.572302 kubelet[2718]: E0813 02:12:16.572241 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-r6mhv" Aug 13 02:12:16.572302 kubelet[2718]: E0813 02:12:16.572249 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-122-171" Aug 13 02:12:16.572302 kubelet[2718]: E0813 02:12:16.572256 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-122-171" Aug 13 02:12:16.572302 kubelet[2718]: I0813 02:12:16.572265 2718 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 02:12:20.475026 systemd[1]: Started 
sshd@56-172.236.122.171:22-147.75.109.163:58454.service - OpenSSH per-connection server daemon (147.75.109.163:58454). Aug 13 02:12:20.815671 sshd[6957]: Accepted publickey for core from 147.75.109.163 port 58454 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:12:20.817265 sshd-session[6957]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:12:20.821654 systemd-logind[1527]: New session 50 of user core. Aug 13 02:12:20.823845 systemd[1]: Started session-50.scope - Session 50 of User core. Aug 13 02:12:20.937470 kubelet[2718]: E0813 02:12:20.936901 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device\"" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" podUID="8d84c12b-cfd9-49af-bb2e-a10173126a4c" Aug 13 02:12:21.142923 sshd[6959]: Connection closed by 147.75.109.163 port 58454 Aug 13 02:12:21.144554 sshd-session[6957]: pam_unix(sshd:session): session closed for user core Aug 13 02:12:21.148532 systemd[1]: sshd@56-172.236.122.171:22-147.75.109.163:58454.service: Deactivated successfully. Aug 13 02:12:21.150964 systemd[1]: session-50.scope: Deactivated successfully. Aug 13 02:12:21.152306 systemd-logind[1527]: Session 50 logged out. Waiting for processes to exit. Aug 13 02:12:21.154580 systemd-logind[1527]: Removed session 50. Aug 13 02:12:26.211409 systemd[1]: Started sshd@57-172.236.122.171:22-147.75.109.163:58458.service - OpenSSH per-connection server daemon (147.75.109.163:58458). Aug 13 02:12:26.550645 sshd[6973]: Accepted publickey for core from 147.75.109.163 port 58458 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik Aug 13 02:12:26.552091 sshd-session[6973]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 02:12:26.557288 systemd-logind[1527]: New session 51 of user core. Aug 13 02:12:26.564735 systemd[1]: Started session-51.scope - Session 51 of User core. 
Aug 13 02:12:26.604031 kubelet[2718]: I0813 02:12:26.603988 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 02:12:26.604390 kubelet[2718]: I0813 02:12:26.604046 2718 container_gc.go:86] "Attempting to delete unused containers"
Aug 13 02:12:26.606165 kubelet[2718]: I0813 02:12:26.606142 2718 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 02:12:26.622152 kubelet[2718]: I0813 02:12:26.622134 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 02:12:26.622454 kubelet[2718]: I0813 02:12:26.622438 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-7c47cf6bcb-c9c87","calico-system/calico-typha-67c8447dcf-wsn77","kube-system/coredns-668d6bf9bc-p5qmw","kube-system/coredns-668d6bf9bc-pw6gg","calico-system/calico-node-cdfxj","kube-system/kube-controller-manager-172-236-122-171","kube-system/kube-proxy-s4bl4","calico-system/csi-node-driver-r6mhv","kube-system/kube-apiserver-172-236-122-171","kube-system/kube-scheduler-172-236-122-171"]
Aug 13 02:12:26.622580 kubelet[2718]: E0813 02:12:26.622569 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87"
Aug 13 02:12:26.622739 kubelet[2718]: E0813 02:12:26.622639 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-67c8447dcf-wsn77"
Aug 13 02:12:26.622739 kubelet[2718]: E0813 02:12:26.622650 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-p5qmw"
Aug 13 02:12:26.622739 kubelet[2718]: E0813 02:12:26.622659 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-pw6gg"
Aug 13 02:12:26.622739 kubelet[2718]: E0813 02:12:26.622669 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-cdfxj"
Aug 13 02:12:26.622739 kubelet[2718]: E0813 02:12:26.622677 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-122-171"
Aug 13 02:12:26.622739 kubelet[2718]: E0813 02:12:26.622686 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-s4bl4"
Aug 13 02:12:26.622921 kubelet[2718]: E0813 02:12:26.622880 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-r6mhv"
Aug 13 02:12:26.622921 kubelet[2718]: E0813 02:12:26.622894 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-122-171"
Aug 13 02:12:26.622921 kubelet[2718]: E0813 02:12:26.622901 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-122-171"
Aug 13 02:12:26.622921 kubelet[2718]: I0813 02:12:26.622911 2718 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node"
Aug 13 02:12:26.861581 sshd[6975]: Connection closed by 147.75.109.163 port 58458
Aug 13 02:12:26.862839 sshd-session[6973]: pam_unix(sshd:session): session closed for user core
Aug 13 02:12:26.867168 systemd[1]: sshd@57-172.236.122.171:22-147.75.109.163:58458.service: Deactivated successfully.
Aug 13 02:12:26.869534 systemd[1]: session-51.scope: Deactivated successfully.
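[Editor's note] Every pod the eviction manager ranks in this cycle is skipped with "cannot evict a critical pod": static control-plane pods and pods running at system-critical priority are exempt from eviction, which is why each pass ends with "unable to evict any pods from the node". Below is a rough, self-contained sketch of that rule; the struct fields and the priority threshold are assumptions for illustration, not the kubelet's actual types.

package main

import "fmt"

// Pod is a simplified stand-in for the kubelet's view of a pod; field names
// here are assumptions made for this sketch.
type Pod struct {
	Name      string
	Namespace string
	Static    bool  // static/mirror pods such as kube-apiserver on the node
	Priority  int32 // effective pod priority
}

// systemCriticalPriority mirrors the value Kubernetes documents for the
// system-cluster-critical priority class; treat the exact number as an assumption.
const systemCriticalPriority int32 = 2000000000

// isCritical sketches the rule behind the "cannot evict a critical pod"
// messages: static pods and pods at or above system-critical priority are
// never chosen by the eviction manager.
func isCritical(p Pod) bool {
	return p.Static || p.Priority >= systemCriticalPriority
}

func main() {
	pods := []Pod{
		{Name: "calico-node-cdfxj", Namespace: "calico-system", Priority: systemCriticalPriority},
		{Name: "some-app", Namespace: "default", Priority: 0},
	}
	for _, p := range pods {
		fmt.Printf("%s/%s critical=%v\n", p.Namespace, p.Name, isCritical(p))
	}
}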
Aug 13 02:12:26.870554 systemd-logind[1527]: Session 51 logged out. Waiting for processes to exit.
Aug 13 02:12:26.872743 systemd-logind[1527]: Removed session 51.
Aug 13 02:12:31.928784 systemd[1]: Started sshd@58-172.236.122.171:22-147.75.109.163:42186.service - OpenSSH per-connection server daemon (147.75.109.163:42186).
Aug 13 02:12:32.262706 sshd[6986]: Accepted publickey for core from 147.75.109.163 port 42186 ssh2: RSA SHA256:exoNnO2Oq2Iy6Xf6WaegRXx7UNf8nTQL5Vm2watipik
Aug 13 02:12:32.264062 sshd-session[6986]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 02:12:32.269087 systemd-logind[1527]: New session 52 of user core.
Aug 13 02:12:32.274733 systemd[1]: Started session-52.scope - Session 52 of User core.
Aug 13 02:12:32.588948 sshd[6988]: Connection closed by 147.75.109.163 port 42186
Aug 13 02:12:32.590747 sshd-session[6986]: pam_unix(sshd:session): session closed for user core
Aug 13 02:12:32.595830 systemd[1]: sshd@58-172.236.122.171:22-147.75.109.163:42186.service: Deactivated successfully.
Aug 13 02:12:32.599426 systemd[1]: session-52.scope: Deactivated successfully.
Aug 13 02:12:32.600850 systemd-logind[1527]: Session 52 logged out. Waiting for processes to exit.
Aug 13 02:12:32.602800 systemd-logind[1527]: Removed session 52.
Aug 13 02:12:34.338267 containerd[1542]: time="2025-08-13T02:12:34.338188683Z" level=warning msg="container event discarded" container=889a71525439d331d4e5748d987e7898240f32388c37ef85698875c3cec4282a type=CONTAINER_STOPPED_EVENT
Aug 13 02:12:34.393977 containerd[1542]: time="2025-08-13T02:12:34.393919387Z" level=warning msg="container event discarded" container=c8d4440f154b4962b783d6ce716a8d0fb14accf61d4280ad1fa7e9d8af99d14d type=CONTAINER_STOPPED_EVENT
Aug 13 02:12:35.169407 containerd[1542]: time="2025-08-13T02:12:35.169350992Z" level=warning msg="container event discarded" container=889a71525439d331d4e5748d987e7898240f32388c37ef85698875c3cec4282a type=CONTAINER_DELETED_EVENT
Aug 13 02:12:35.937571 kubelet[2718]: E0813 02:12:35.937195 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device\"" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87" podUID="8d84c12b-cfd9-49af-bb2e-a10173126a4c"
Aug 13 02:12:36.647506 kubelet[2718]: I0813 02:12:36.647474 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 02:12:36.647506 kubelet[2718]: I0813 02:12:36.647508 2718 container_gc.go:86] "Attempting to delete unused containers"
Aug 13 02:12:36.649233 kubelet[2718]: I0813 02:12:36.649215 2718 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 02:12:36.664047 kubelet[2718]: I0813 02:12:36.664014 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 02:12:36.664163 kubelet[2718]: I0813 02:12:36.664142 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-7c47cf6bcb-c9c87","calico-system/calico-typha-67c8447dcf-wsn77","kube-system/coredns-668d6bf9bc-p5qmw","kube-system/coredns-668d6bf9bc-pw6gg","calico-system/calico-node-cdfxj","kube-system/kube-controller-manager-172-236-122-171","kube-system/kube-proxy-s4bl4","calico-system/csi-node-driver-r6mhv","kube-system/kube-apiserver-172-236-122-171","kube-system/kube-scheduler-172-236-122-171"]
Aug 13 02:12:36.664263 kubelet[2718]: E0813 02:12:36.664172 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-7c47cf6bcb-c9c87"
Aug 13 02:12:36.664263 kubelet[2718]: E0813 02:12:36.664184 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-67c8447dcf-wsn77"
Aug 13 02:12:36.664263 kubelet[2718]: E0813 02:12:36.664192 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-p5qmw"
Aug 13 02:12:36.664263 kubelet[2718]: E0813 02:12:36.664200 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-pw6gg"
Aug 13 02:12:36.664263 kubelet[2718]: E0813 02:12:36.664208 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-cdfxj"
Aug 13 02:12:36.664263 kubelet[2718]: E0813 02:12:36.664216 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-236-122-171"
Aug 13 02:12:36.664263 kubelet[2718]: E0813 02:12:36.664225 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-s4bl4"
Aug 13 02:12:36.664263 kubelet[2718]: E0813 02:12:36.664235 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-r6mhv"
Aug 13 02:12:36.664263 kubelet[2718]: E0813 02:12:36.664245 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-236-122-171"
Aug 13 02:12:36.664263 kubelet[2718]: E0813 02:12:36.664254 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-236-122-171"
Aug 13 02:12:36.664263 kubelet[2718]: I0813 02:12:36.664262 2718 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node"
Aug 13 02:12:38.936746 kubelet[2718]: E0813 02:12:38.936360 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Aug 13 02:12:39.958455 containerd[1542]: time="2025-08-13T02:12:39.958376332Z" level=warning msg="container event discarded" container=c8d4440f154b4962b783d6ce716a8d0fb14accf61d4280ad1fa7e9d8af99d14d type=CONTAINER_DELETED_EVENT
Aug 13 02:12:41.936352 kubelet[2718]: E0813 02:12:41.936313 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Aug 13 02:12:45.656461 containerd[1542]: time="2025-08-13T02:12:45.656308795Z" level=info msg="TaskExit event in podsandbox handler container_id:\"86c92af398b1911d4ecaf673a7372fd9f418693eb51c8702fd8bfe1f17c11143\" id:\"cb0015e60ba221f706a91ad947462ab8013dcd55ec36fd20c95303d1bae0f132\" pid:7013 exited_at:{seconds:1755051165 nanos:655686488}"