Aug 13 01:01:43.866974 kernel: Linux version 6.12.40-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 21:42:48 -00 2025
Aug 13 01:01:43.866998 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21
Aug 13 01:01:43.867007 kernel: BIOS-provided physical RAM map:
Aug 13 01:01:43.867015 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable
Aug 13 01:01:43.867021 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved
Aug 13 01:01:43.867027 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Aug 13 01:01:43.867033 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
Aug 13 01:01:43.867039 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved
Aug 13 01:01:43.867044 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved
Aug 13 01:01:43.867050 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved
Aug 13 01:01:43.867056 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Aug 13 01:01:43.867062 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Aug 13 01:01:43.867070 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable
Aug 13 01:01:43.867075 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved
Aug 13 01:01:43.867082 kernel: NX (Execute Disable) protection: active
Aug 13 01:01:43.867089 kernel: APIC: Static calls initialized
Aug 13 01:01:43.867095 kernel: SMBIOS 2.8 present.
Aug 13 01:01:43.867103 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified
Aug 13 01:01:43.867109 kernel: DMI: Memory slots populated: 1/1
Aug 13 01:01:43.867115 kernel: Hypervisor detected: KVM
Aug 13 01:01:43.867121 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Aug 13 01:01:43.867127 kernel: kvm-clock: using sched offset of 5719266620 cycles
Aug 13 01:01:43.867133 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Aug 13 01:01:43.867140 kernel: tsc: Detected 2000.000 MHz processor
Aug 13 01:01:43.867146 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Aug 13 01:01:43.867153 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Aug 13 01:01:43.867159 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000
Aug 13 01:01:43.867167 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Aug 13 01:01:43.867174 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT
Aug 13 01:01:43.867180 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000
Aug 13 01:01:43.867186 kernel: Using GB pages for direct mapping
Aug 13 01:01:43.867192 kernel: ACPI: Early table checksum verification disabled
Aug 13 01:01:43.867198 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS )
Aug 13 01:01:43.867205 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:01:43.867211 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:01:43.867217 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:01:43.867225 kernel: ACPI: FACS 0x000000007FFE0000 000040
Aug 13 01:01:43.867232 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:01:43.867238 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:01:43.867244 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:01:43.867253 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001)
Aug 13 01:01:43.867260 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea]
Aug 13 01:01:43.867268 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6]
Aug 13 01:01:43.867275 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f]
Aug 13 01:01:43.867281 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a]
Aug 13 01:01:43.867288 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2]
Aug 13 01:01:43.867294 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de]
Aug 13 01:01:43.867301 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306]
Aug 13 01:01:43.867308 kernel: No NUMA configuration found
Aug 13 01:01:43.867314 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff]
Aug 13 01:01:43.867322 kernel: NODE_DATA(0) allocated [mem 0x17fff6dc0-0x17fffdfff]
Aug 13 01:01:43.867329 kernel: Zone ranges:
Aug 13 01:01:43.867335 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff]
Aug 13 01:01:43.867342 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff]
Aug 13 01:01:43.867348 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff]
Aug 13 01:01:43.867355 kernel: Device empty
Aug 13 01:01:43.867361 kernel: Movable zone start for each node
Aug 13 01:01:43.867368 kernel: Early memory node ranges
Aug 13 01:01:43.867374 kernel: node 0: [mem 0x0000000000001000-0x000000000009efff]
Aug 13 01:01:43.867380 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff]
Aug 13 01:01:43.867389 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff]
Aug 13 01:01:43.867396 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff]
Aug 13 01:01:43.867402 kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Aug 13 01:01:43.867409 kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Aug 13 01:01:43.867415 kernel: On node 0, zone Normal: 35 pages in unavailable ranges
Aug 13 01:01:43.867422 kernel: ACPI: PM-Timer IO Port: 0x608
Aug 13 01:01:43.867428 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Aug 13 01:01:43.867435 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Aug 13 01:01:43.867441 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Aug 13 01:01:43.867450 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Aug 13 01:01:43.867456 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Aug 13 01:01:43.867462 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Aug 13 01:01:43.867469 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Aug 13 01:01:43.867476 kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Aug 13 01:01:43.867482 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000
Aug 13 01:01:43.867488 kernel: TSC deadline timer available
Aug 13 01:01:43.867495 kernel: CPU topo: Max. logical packages: 1
Aug 13 01:01:43.867501 kernel: CPU topo: Max. logical dies: 1
Aug 13 01:01:43.867509 kernel: CPU topo: Max. dies per package: 1
Aug 13 01:01:43.867516 kernel: CPU topo: Max. threads per core: 1
Aug 13 01:01:43.867522 kernel: CPU topo: Num. cores per package: 2
Aug 13 01:01:43.867529 kernel: CPU topo: Num. threads per package: 2
Aug 13 01:01:43.867535 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs
Aug 13 01:01:43.867542 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Aug 13 01:01:43.867548 kernel: kvm-guest: KVM setup pv remote TLB flush
Aug 13 01:01:43.867555 kernel: kvm-guest: setup PV sched yield
Aug 13 01:01:43.867561 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices
Aug 13 01:01:43.867569 kernel: Booting paravirtualized kernel on KVM
Aug 13 01:01:43.867576 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Aug 13 01:01:43.867582 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1
Aug 13 01:01:43.867589 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576
Aug 13 01:01:43.867595 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152
Aug 13 01:01:43.867602 kernel: pcpu-alloc: [0] 0 1
Aug 13 01:01:43.867608 kernel: kvm-guest: PV spinlocks enabled
Aug 13 01:01:43.867615 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear)
Aug 13 01:01:43.867622 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21
Aug 13 01:01:43.867631 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space.
Aug 13 01:01:43.867637 kernel: random: crng init done
Aug 13 01:01:43.867644 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Aug 13 01:01:43.867650 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Aug 13 01:01:43.867657 kernel: Fallback order for Node 0: 0
Aug 13 01:01:43.867663 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443
Aug 13 01:01:43.867670 kernel: Policy zone: Normal
Aug 13 01:01:43.867676 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Aug 13 01:01:43.867684 kernel: software IO TLB: area num 2.
Aug 13 01:01:43.867691 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Aug 13 01:01:43.867697 kernel: ftrace: allocating 40098 entries in 157 pages
Aug 13 01:01:43.867704 kernel: ftrace: allocated 157 pages with 5 groups
Aug 13 01:01:43.867710 kernel: Dynamic Preempt: voluntary
Aug 13 01:01:43.867717 kernel: rcu: Preemptible hierarchical RCU implementation.
Aug 13 01:01:43.867724 kernel: rcu: RCU event tracing is enabled.
Aug 13 01:01:43.867731 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Aug 13 01:01:43.867738 kernel: Trampoline variant of Tasks RCU enabled.
Aug 13 01:01:43.867744 kernel: Rude variant of Tasks RCU enabled.
Aug 13 01:01:43.867752 kernel: Tracing variant of Tasks RCU enabled.
Aug 13 01:01:43.867759 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Aug 13 01:01:43.867765 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Aug 13 01:01:43.867772 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 01:01:43.867785 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 01:01:43.867794 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Aug 13 01:01:43.867800 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16
Aug 13 01:01:43.867807 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Aug 13 01:01:43.867814 kernel: Console: colour VGA+ 80x25
Aug 13 01:01:43.867821 kernel: printk: legacy console [tty0] enabled
Aug 13 01:01:43.867828 kernel: printk: legacy console [ttyS0] enabled
Aug 13 01:01:43.867836 kernel: ACPI: Core revision 20240827
Aug 13 01:01:43.867843 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns
Aug 13 01:01:43.867850 kernel: APIC: Switch to symmetric I/O mode setup
Aug 13 01:01:43.867857 kernel: x2apic enabled
Aug 13 01:01:43.867864 kernel: APIC: Switched APIC routing to: physical x2apic
Aug 13 01:01:43.867873 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask()
Aug 13 01:01:43.867880 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself()
Aug 13 01:01:43.867886 kernel: kvm-guest: setup PV IPIs
Aug 13 01:01:43.867893 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
Aug 13 01:01:43.867900 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Aug 13 01:01:43.867907 kernel: Calibrating delay loop (skipped) preset value.. 4000.00 BogoMIPS (lpj=2000000)
Aug 13 01:01:43.867914 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Aug 13 01:01:43.867921 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Aug 13 01:01:43.867928 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Aug 13 01:01:43.867936 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Aug 13 01:01:43.867943 kernel: Spectre V2 : Mitigation: Retpolines
Aug 13 01:01:43.867950 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Aug 13 01:01:43.867985 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls
Aug 13 01:01:43.867993 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Aug 13 01:01:43.867999 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Aug 13 01:01:43.868006 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Aug 13 01:01:43.868014 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Aug 13 01:01:43.868023 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Aug 13 01:01:43.868030 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM
Aug 13 01:01:43.868037 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode
Aug 13 01:01:43.868044 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Aug 13 01:01:43.868050 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Aug 13 01:01:43.868057 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Aug 13 01:01:43.868064 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers'
Aug 13 01:01:43.868071 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256
Aug 13 01:01:43.868077 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8
Aug 13 01:01:43.868086 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format.
Aug 13 01:01:43.868092 kernel: Freeing SMP alternatives memory: 32K
Aug 13 01:01:43.868099 kernel: pid_max: default: 32768 minimum: 301
Aug 13 01:01:43.868106 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Aug 13 01:01:43.868112 kernel: landlock: Up and running.
Aug 13 01:01:43.868119 kernel: SELinux: Initializing.
Aug 13 01:01:43.868126 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 01:01:43.868133 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Aug 13 01:01:43.868139 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1)
Aug 13 01:01:43.868148 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Aug 13 01:01:43.868155 kernel: ... version:                0
Aug 13 01:01:43.868161 kernel: ... bit width:              48
Aug 13 01:01:43.868168 kernel: ... generic registers:      6
Aug 13 01:01:43.868174 kernel: ... value mask:             0000ffffffffffff
Aug 13 01:01:43.868181 kernel: ... max period:             00007fffffffffff
Aug 13 01:01:43.868188 kernel: ... fixed-purpose events:   0
Aug 13 01:01:43.868194 kernel: ... event mask:             000000000000003f
Aug 13 01:01:43.868201 kernel: signal: max sigframe size: 3376
Aug 13 01:01:43.868209 kernel: rcu: Hierarchical SRCU implementation.
Aug 13 01:01:43.868216 kernel: rcu: Max phase no-delay instances is 400.
Aug 13 01:01:43.868223 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Aug 13 01:01:43.868230 kernel: smp: Bringing up secondary CPUs ...
Aug 13 01:01:43.868236 kernel: smpboot: x86: Booting SMP configuration:
Aug 13 01:01:43.868243 kernel: .... node #0, CPUs: #1
Aug 13 01:01:43.868250 kernel: smp: Brought up 1 node, 2 CPUs
Aug 13 01:01:43.868256 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS)
Aug 13 01:01:43.868263 kernel: Memory: 3961048K/4193772K available (14336K kernel code, 2430K rwdata, 9960K rodata, 54444K init, 2524K bss, 227296K reserved, 0K cma-reserved)
Aug 13 01:01:43.868272 kernel: devtmpfs: initialized
Aug 13 01:01:43.868278 kernel: x86/mm: Memory block size: 128MB
Aug 13 01:01:43.868285 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Aug 13 01:01:43.868292 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Aug 13 01:01:43.868299 kernel: pinctrl core: initialized pinctrl subsystem
Aug 13 01:01:43.868305 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Aug 13 01:01:43.868312 kernel: audit: initializing netlink subsys (disabled)
Aug 13 01:01:43.868319 kernel: audit: type=2000 audit(1755046901.595:1): state=initialized audit_enabled=0 res=1
Aug 13 01:01:43.868326 kernel: thermal_sys: Registered thermal governor 'step_wise'
Aug 13 01:01:43.868334 kernel: thermal_sys: Registered thermal governor 'user_space'
Aug 13 01:01:43.868341 kernel: cpuidle: using governor menu
Aug 13 01:01:43.868348 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Aug 13 01:01:43.868354 kernel: dca service started, version 1.12.1
Aug 13 01:01:43.868361 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff]
Aug 13 01:01:43.868368 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry
Aug 13 01:01:43.868375 kernel: PCI: Using configuration type 1 for base access
Aug 13 01:01:43.868381 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Aug 13 01:01:43.868388 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Aug 13 01:01:43.868397 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Aug 13 01:01:43.868403 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Aug 13 01:01:43.868410 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Aug 13 01:01:43.868417 kernel: ACPI: Added _OSI(Module Device)
Aug 13 01:01:43.868423 kernel: ACPI: Added _OSI(Processor Device)
Aug 13 01:01:43.868430 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Aug 13 01:01:43.868437 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Aug 13 01:01:43.868443 kernel: ACPI: Interpreter enabled
Aug 13 01:01:43.868450 kernel: ACPI: PM: (supports S0 S3 S5)
Aug 13 01:01:43.868458 kernel: ACPI: Using IOAPIC for interrupt routing
Aug 13 01:01:43.868465 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Aug 13 01:01:43.868472 kernel: PCI: Using E820 reservations for host bridge windows
Aug 13 01:01:43.868478 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F
Aug 13 01:01:43.868485 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Aug 13 01:01:43.868662 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Aug 13 01:01:43.868778 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR]
Aug 13 01:01:43.868889 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability]
Aug 13 01:01:43.868902 kernel: PCI host bridge to bus 0000:00
Aug 13 01:01:43.869756 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window]
Aug 13 01:01:43.869866 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window]
Aug 13 01:01:43.869991 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Aug 13 01:01:43.870092 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window]
Aug 13 01:01:43.870189 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Aug 13 01:01:43.870284 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window]
Aug 13 01:01:43.870386 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Aug 13 01:01:43.870513 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint
Aug 13 01:01:43.870640 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint
Aug 13 01:01:43.870751 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref]
Aug 13 01:01:43.870857 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff]
Aug 13 01:01:43.870983 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref]
Aug 13 01:01:43.871099 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Aug 13 01:01:43.871282 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint
Aug 13 01:01:43.871394 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f]
Aug 13 01:01:43.872116 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff]
Aug 13 01:01:43.872232 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref]
Aug 13 01:01:43.872352 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Aug 13 01:01:43.872460 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f]
Aug 13 01:01:43.872593 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff]
Aug 13 01:01:43.872702 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit pref]
Aug 13 01:01:43.872808 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref]
Aug 13 01:01:43.872924 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint
Aug 13 01:01:43.874061 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO
Aug 13 01:01:43.874187 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint
Aug 13 01:01:43.874486 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df]
Aug 13 01:01:43.874591 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff]
Aug 13 01:01:43.874714 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint
Aug 13 01:01:43.874821 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f]
Aug 13 01:01:43.874831 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Aug 13 01:01:43.874838 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Aug 13 01:01:43.874845 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Aug 13 01:01:43.874852 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Aug 13 01:01:43.874862 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10
Aug 13 01:01:43.874869 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10
Aug 13 01:01:43.874876 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11
Aug 13 01:01:43.874882 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11
Aug 13 01:01:43.874889 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16
Aug 13 01:01:43.874896 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17
Aug 13 01:01:43.874902 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18
Aug 13 01:01:43.874910 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19
Aug 13 01:01:43.874917 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20
Aug 13 01:01:43.874925 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21
Aug 13 01:01:43.874932 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22
Aug 13 01:01:43.874939 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23
Aug 13 01:01:43.874945 kernel: iommu: Default domain type: Translated
Aug 13 01:01:43.875992 kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Aug 13 01:01:43.876001 kernel: PCI: Using ACPI for IRQ routing
Aug 13 01:01:43.876009 kernel: PCI: pci_cache_line_size set to 64 bytes
Aug 13 01:01:43.876016 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff]
Aug 13 01:01:43.876023 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff]
Aug 13 01:01:43.876149 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device
Aug 13 01:01:43.876257 kernel: pci 0000:00:01.0: vgaarb: bridge control possible
Aug 13 01:01:43.876363 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Aug 13 01:01:43.876372 kernel: vgaarb: loaded
Aug 13 01:01:43.876379 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0
Aug 13 01:01:43.876386 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter
Aug 13 01:01:43.876393 kernel: clocksource: Switched to clocksource kvm-clock
Aug 13 01:01:43.876400 kernel: VFS: Disk quotas dquot_6.6.0
Aug 13 01:01:43.876410 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Aug 13 01:01:43.876417 kernel: pnp: PnP ACPI init
Aug 13 01:01:43.876535 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved
Aug 13 01:01:43.876546 kernel: pnp: PnP ACPI: found 5 devices
Aug 13 01:01:43.876553 kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Aug 13 01:01:43.876560 kernel: NET: Registered PF_INET protocol family
Aug 13 01:01:43.876567 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Aug 13 01:01:43.876574 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Aug 13 01:01:43.876583 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Aug 13 01:01:43.876590 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Aug 13 01:01:43.876597 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Aug 13 01:01:43.876603 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Aug 13 01:01:43.876610 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 01:01:43.876617 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Aug 13 01:01:43.876624 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Aug 13 01:01:43.876630 kernel: NET: Registered PF_XDP protocol family
Aug 13 01:01:43.876730 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window]
Aug 13 01:01:43.876839 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window]
Aug 13 01:01:43.876937 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Aug 13 01:01:43.877052 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window]
Aug 13 01:01:43.877156 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window]
Aug 13 01:01:43.877252 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window]
Aug 13 01:01:43.877260 kernel: PCI: CLS 0 bytes, default 64
Aug 13 01:01:43.877267 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Aug 13 01:01:43.877274 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB)
Aug 13 01:01:43.877285 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns
Aug 13 01:01:43.877292 kernel: Initialise system trusted keyrings
Aug 13 01:01:43.877299 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Aug 13 01:01:43.877305 kernel: Key type asymmetric registered
Aug 13 01:01:43.877312 kernel: Asymmetric key parser 'x509' registered
Aug 13 01:01:43.877319 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Aug 13 01:01:43.877327 kernel: io scheduler mq-deadline registered
Aug 13 01:01:43.877333 kernel: io scheduler kyber registered
Aug 13 01:01:43.877340 kernel: io scheduler bfq registered
Aug 13 01:01:43.877346 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00
Aug 13 01:01:43.877356 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22
Aug 13 01:01:43.877363 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23
Aug 13 01:01:43.877369 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Aug 13 01:01:43.877376 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Aug 13 01:01:43.877383 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Aug 13 01:01:43.877390 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Aug 13 01:01:43.877396 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Aug 13 01:01:43.877403 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0
Aug 13 01:01:43.877512 kernel: rtc_cmos 00:03: RTC can wake from S4
Aug 13 01:01:43.877622 kernel: rtc_cmos 00:03: registered as rtc0
Aug 13 01:01:43.877722 kernel: rtc_cmos 00:03: setting system clock to 2025-08-13T01:01:43 UTC (1755046903)
Aug 13 01:01:43.877826 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs
Aug 13 01:01:43.877839 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Aug 13 01:01:43.877846 kernel: NET: Registered PF_INET6 protocol family
Aug 13 01:01:43.877852 kernel: Segment Routing with IPv6
Aug 13 01:01:43.877859 kernel: In-situ OAM (IOAM) with IPv6
Aug 13 01:01:43.877868 kernel: NET: Registered PF_PACKET protocol family
Aug 13 01:01:43.877875 kernel: Key type dns_resolver registered
Aug 13 01:01:43.877882 kernel: IPI shorthand broadcast: enabled
Aug 13 01:01:43.877889 kernel: sched_clock: Marking stable (2869003060, 223142990)->(3135655670, -43509620)
Aug 13 01:01:43.877896 kernel: registered taskstats version 1
Aug 13 01:01:43.877902 kernel: Loading compiled-in X.509 certificates
Aug 13 01:01:43.877909 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.40-flatcar: dee0b464d3f7f8d09744a2392f69dde258bc95c0'
Aug 13 01:01:43.877916 kernel: Demotion targets for Node 0: null
Aug 13 01:01:43.877923 kernel: Key type .fscrypt registered
Aug 13 01:01:43.877931 kernel: Key type fscrypt-provisioning registered
Aug 13 01:01:43.877938 kernel: ima: No TPM chip found, activating TPM-bypass!
Aug 13 01:01:43.877945 kernel: ima: Allocated hash algorithm: sha1
Aug 13 01:01:43.880062 kernel: ima: No architecture policies found
Aug 13 01:01:43.880074 kernel: clk: Disabling unused clocks
Aug 13 01:01:43.880082 kernel: Warning: unable to open an initial console.
Aug 13 01:01:43.880089 kernel: Freeing unused kernel image (initmem) memory: 54444K
Aug 13 01:01:43.880096 kernel: Write protecting the kernel read-only data: 24576k
Aug 13 01:01:43.880118 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K
Aug 13 01:01:43.880129 kernel: Run /init as init process
Aug 13 01:01:43.880136 kernel:   with arguments:
Aug 13 01:01:43.880143 kernel:     /init
Aug 13 01:01:43.880149 kernel:   with environment:
Aug 13 01:01:43.880157 kernel:     HOME=/
Aug 13 01:01:43.880176 kernel:     TERM=linux
Aug 13 01:01:43.880185 kernel:     BOOT_IMAGE=/flatcar/vmlinuz-a
Aug 13 01:01:43.880193 systemd[1]: Successfully made /usr/ read-only.
Aug 13 01:01:43.880203 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Aug 13 01:01:43.880213 systemd[1]: Detected virtualization kvm.
Aug 13 01:01:43.880221 systemd[1]: Detected architecture x86-64.
Aug 13 01:01:43.880228 systemd[1]: Running in initrd.
Aug 13 01:01:43.880235 systemd[1]: No hostname configured, using default hostname.
Aug 13 01:01:43.880243 systemd[1]: Hostname set to .
Aug 13 01:01:43.880250 systemd[1]: Initializing machine ID from random generator.
Aug 13 01:01:43.880257 systemd[1]: Queued start job for default target initrd.target.
Aug 13 01:01:43.880444 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Aug 13 01:01:43.880451 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Aug 13 01:01:43.880459 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Aug 13 01:01:43.880467 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Aug 13 01:01:43.880474 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Aug 13 01:01:43.880483 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Aug 13 01:01:43.880491 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Aug 13 01:01:43.880500 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Aug 13 01:01:43.880508 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Aug 13 01:01:43.880515 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Aug 13 01:01:43.880524 systemd[1]: Reached target paths.target - Path Units.
Aug 13 01:01:43.880532 systemd[1]: Reached target slices.target - Slice Units.
Aug 13 01:01:43.880539 systemd[1]: Reached target swap.target - Swaps.
Aug 13 01:01:43.880546 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 01:01:43.880554 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Aug 13 01:01:43.880563 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Aug 13 01:01:43.880570 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Aug 13 01:01:43.880577 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Aug 13 01:01:43.880585 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Aug 13 01:01:43.880592 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Aug 13 01:01:43.880599 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Aug 13 01:01:43.880607 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 01:01:43.880616 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Aug 13 01:01:43.880623 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Aug 13 01:01:43.880631 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Aug 13 01:01:43.880638 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Aug 13 01:01:43.880646 systemd[1]: Starting systemd-fsck-usr.service...
Aug 13 01:01:43.880653 systemd[1]: Starting systemd-journald.service - Journal Service...
Aug 13 01:01:43.880660 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Aug 13 01:01:43.880670 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 01:01:43.880699 systemd-journald[206]: Collecting audit messages is disabled.
Aug 13 01:01:43.880718 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Aug 13 01:01:43.880728 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Aug 13 01:01:43.880736 systemd-journald[206]: Journal started
Aug 13 01:01:43.880754 systemd-journald[206]: Runtime Journal (/run/log/journal/7e8a1b82c03441ab94489747eaba76a8) is 8M, max 78.5M, 70.5M free.
Aug 13 01:01:43.883000 systemd[1]: Finished systemd-fsck-usr.service.
Aug 13 01:01:43.888000 systemd[1]: Started systemd-journald.service - Journal Service.
Aug 13 01:01:43.893145 systemd-modules-load[207]: Inserted module 'overlay'
Aug 13 01:01:43.894077 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Aug 13 01:01:43.976146 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Aug 13 01:01:43.976175 kernel: Bridge firewalling registered
Aug 13 01:01:43.896094 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Aug 13 01:01:43.929992 systemd-modules-load[207]: Inserted module 'br_netfilter'
Aug 13 01:01:43.933992 systemd-tmpfiles[219]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Aug 13 01:01:43.979246 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Aug 13 01:01:43.980598 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 01:01:43.981449 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Aug 13 01:01:43.982662 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 01:01:43.986870 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Aug 13 01:01:43.990050 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Aug 13 01:01:43.996052 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Aug 13 01:01:44.007574 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Aug 13 01:01:44.009204 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Aug 13 01:01:44.013104 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Aug 13 01:01:44.027074 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 01:01:44.027996 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Aug 13 01:01:44.051732 dracut-cmdline[243]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21
Aug 13 01:01:44.072040 systemd-resolved[244]: Positive Trust Anchors:
Aug 13 01:01:44.072709 systemd-resolved[244]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 01:01:44.072737 systemd-resolved[244]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 01:01:44.078043 systemd-resolved[244]: Defaulting to hostname 'linux'.
Aug 13 01:01:44.080282 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 01:01:44.080879 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 01:01:44.155009 kernel: SCSI subsystem initialized
Aug 13 01:01:44.164030 kernel: Loading iSCSI transport class v2.0-870.
Aug 13 01:01:44.174998 kernel: iscsi: registered transport (tcp)
Aug 13 01:01:44.194992 kernel: iscsi: registered transport (qla4xxx)
Aug 13 01:01:44.195034 kernel: QLogic iSCSI HBA Driver
Aug 13 01:01:44.218711 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Aug 13 01:01:44.235570 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Aug 13 01:01:44.239416 systemd[1]: Reached target network-pre.target - Preparation for Network.
Aug 13 01:01:44.300478 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Aug 13 01:01:44.302842 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Aug 13 01:01:44.358987 kernel: raid6: avx2x4 gen() 31060 MB/s
Aug 13 01:01:44.376984 kernel: raid6: avx2x2 gen() 31548 MB/s
Aug 13 01:01:44.395383 kernel: raid6: avx2x1 gen() 23528 MB/s
Aug 13 01:01:44.395403 kernel: raid6: using algorithm avx2x2 gen() 31548 MB/s
Aug 13 01:01:44.414772 kernel: raid6: .... xor() 29928 MB/s, rmw enabled
Aug 13 01:01:44.414813 kernel: raid6: using avx2x2 recovery algorithm
Aug 13 01:01:44.434992 kernel: xor: automatically using best checksumming function avx
Aug 13 01:01:44.588002 kernel: Btrfs loaded, zoned=no, fsverity=no
Aug 13 01:01:44.598413 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Aug 13 01:01:44.601226 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 01:01:44.627987 systemd-udevd[454]: Using default interface naming scheme 'v255'.
Aug 13 01:01:44.632637 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 01:01:44.635921 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Aug 13 01:01:44.660449 dracut-pre-trigger[463]: rd.md=0: removing MD RAID activation
Aug 13 01:01:44.690815 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Aug 13 01:01:44.693719 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Aug 13 01:01:44.757946 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Aug 13 01:01:44.761095 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Aug 13 01:01:44.821992 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues
Aug 13 01:01:44.825003 kernel: cryptd: max_cpu_qlen set to 1000
Aug 13 01:01:44.829977 kernel: scsi host0: Virtio SCSI HBA
Aug 13 01:01:44.840036 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5
Aug 13 01:01:44.849018 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2
Aug 13 01:01:44.862041 kernel: AES CTR mode by8 optimization enabled
Aug 13 01:01:45.017094 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Aug 13 01:01:45.017462 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 01:01:45.021659 kernel: libata version 3.00 loaded.
Aug 13 01:01:45.022259 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 01:01:45.026344 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 01:01:45.046734 kernel: sd 0:0:0:0: Power-on or device reset occurred
Aug 13 01:01:45.046952 kernel: sd 0:0:0:0: [sda] 9297920 512-byte logical blocks: (4.76 GB/4.43 GiB)
Aug 13 01:01:45.047313 kernel: sd 0:0:0:0: [sda] Write Protect is off
Aug 13 01:01:45.047450 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08
Aug 13 01:01:45.047580 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA
Aug 13 01:01:45.047709 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Aug 13 01:01:45.047727 kernel: GPT:9289727 != 9297919
Aug 13 01:01:45.047736 kernel: GPT:Alternate GPT header not at the end of the disk.
Aug 13 01:01:45.047746 kernel: GPT:9289727 != 9297919
Aug 13 01:01:45.047754 kernel: GPT: Use GNU Parted to correct GPT errors.
Aug 13 01:01:45.047763 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 01:01:45.038640 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Aug 13 01:01:45.052058 kernel: sd 0:0:0:0: [sda] Attached SCSI disk
Aug 13 01:01:45.056990 kernel: ahci 0000:00:1f.2: version 3.0
Aug 13 01:01:45.062974 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16
Aug 13 01:01:45.062998 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode
Aug 13 01:01:45.063152 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f)
Aug 13 01:01:45.063283 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only
Aug 13 01:01:45.068771 kernel: scsi host1: ahci
Aug 13 01:01:45.068937 kernel: scsi host2: ahci
Aug 13 01:01:45.071987 kernel: scsi host3: ahci
Aug 13 01:01:45.074977 kernel: scsi host4: ahci
Aug 13 01:01:45.075374 kernel: scsi host5: ahci
Aug 13 01:01:45.075994 kernel: scsi host6: ahci
Aug 13 01:01:45.080413 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 lpm-pol 0
Aug 13 01:01:45.080435 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 lpm-pol 0
Aug 13 01:01:45.083012 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 lpm-pol 0
Aug 13 01:01:45.083033 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 lpm-pol 0
Aug 13 01:01:45.083048 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 lpm-pol 0
Aug 13 01:01:45.083058 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 lpm-pol 0
Aug 13 01:01:45.138907 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM.
Aug 13 01:01:45.188966 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 01:01:45.198984 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT.
Aug 13 01:01:45.217143 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A.
Aug 13 01:01:45.217738 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A.
Aug 13 01:01:45.227468 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Aug 13 01:01:45.229616 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Aug 13 01:01:45.251587 disk-uuid[624]: Primary Header is updated.
Aug 13 01:01:45.251587 disk-uuid[624]: Secondary Entries is updated.
Aug 13 01:01:45.251587 disk-uuid[624]: Secondary Header is updated.
Aug 13 01:01:45.262974 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 01:01:45.283985 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 01:01:45.403832 kernel: ata2: SATA link down (SStatus 0 SControl 300)
Aug 13 01:01:45.403900 kernel: ata4: SATA link down (SStatus 0 SControl 300)
Aug 13 01:01:45.403912 kernel: ata6: SATA link down (SStatus 0 SControl 300)
Aug 13 01:01:45.403923 kernel: ata5: SATA link down (SStatus 0 SControl 300)
Aug 13 01:01:45.403933 kernel: ata3: SATA link down (SStatus 0 SControl 300)
Aug 13 01:01:45.403943 kernel: ata1: SATA link down (SStatus 0 SControl 300)
Aug 13 01:01:45.446254 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Aug 13 01:01:45.463773 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Aug 13 01:01:45.464407 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Aug 13 01:01:45.465691 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Aug 13 01:01:45.467748 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Aug 13 01:01:45.486694 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Aug 13 01:01:46.282168 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9
Aug 13 01:01:46.282926 disk-uuid[625]: The operation has completed successfully.
Aug 13 01:01:46.332268 systemd[1]: disk-uuid.service: Deactivated successfully.
Aug 13 01:01:46.332398 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Aug 13 01:01:46.361165 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Aug 13 01:01:46.376250 sh[652]: Success
Aug 13 01:01:46.394756 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Aug 13 01:01:46.394792 kernel: device-mapper: uevent: version 1.0.3
Aug 13 01:01:46.395368 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Aug 13 01:01:46.405981 kernel: device-mapper: verity: sha256 using shash "sha256-ni"
Aug 13 01:01:46.447618 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Aug 13 01:01:46.452020 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Aug 13 01:01:46.462349 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Aug 13 01:01:46.474030 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay'
Aug 13 01:01:46.474055 kernel: BTRFS: device fsid 0c0338fb-9434-41c1-99a2-737cbe2351c4 devid 1 transid 44 /dev/mapper/usr (254:0) scanned by mount (664)
Aug 13 01:01:46.476982 kernel: BTRFS info (device dm-0): first mount of filesystem 0c0338fb-9434-41c1-99a2-737cbe2351c4
Aug 13 01:01:46.479222 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm
Aug 13 01:01:46.481922 kernel: BTRFS info (device dm-0): using free-space-tree
Aug 13 01:01:46.489131 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Aug 13 01:01:46.489944 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Aug 13 01:01:46.490811 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Aug 13 01:01:46.491467 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Aug 13 01:01:46.496896 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Aug 13 01:01:46.519987 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (699)
Aug 13 01:01:46.524219 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 01:01:46.524244 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 01:01:46.524256 kernel: BTRFS info (device sda6): using free-space-tree
Aug 13 01:01:46.533059 kernel: BTRFS info (device sda6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 01:01:46.534530 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Aug 13 01:01:46.535883 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Aug 13 01:01:46.617231 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Aug 13 01:01:46.626117 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 01:01:46.636212 ignition[762]: Ignition 2.21.0
Aug 13 01:01:46.636906 ignition[762]: Stage: fetch-offline
Aug 13 01:01:46.636947 ignition[762]: no configs at "/usr/lib/ignition/base.d"
Aug 13 01:01:46.636981 ignition[762]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:01:46.637066 ignition[762]: parsed url from cmdline: ""
Aug 13 01:01:46.637070 ignition[762]: no config URL provided
Aug 13 01:01:46.637075 ignition[762]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 01:01:46.642708 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Aug 13 01:01:46.637083 ignition[762]: no config at "/usr/lib/ignition/user.ign"
Aug 13 01:01:46.637088 ignition[762]: failed to fetch config: resource requires networking
Aug 13 01:01:46.637422 ignition[762]: Ignition finished successfully
Aug 13 01:01:46.662387 systemd-networkd[838]: lo: Link UP
Aug 13 01:01:46.662399 systemd-networkd[838]: lo: Gained carrier
Aug 13 01:01:46.663864 systemd-networkd[838]: Enumeration completed
Aug 13 01:01:46.663977 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 01:01:46.664638 systemd-networkd[838]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 01:01:46.664642 systemd-networkd[838]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 01:01:46.665476 systemd[1]: Reached target network.target - Network.
Aug 13 01:01:46.669034 systemd-networkd[838]: eth0: Link UP
Aug 13 01:01:46.669213 systemd-networkd[838]: eth0: Gained carrier
Aug 13 01:01:46.669222 systemd-networkd[838]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 01:01:46.669256 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Aug 13 01:01:46.689766 ignition[843]: Ignition 2.21.0
Aug 13 01:01:46.689783 ignition[843]: Stage: fetch
Aug 13 01:01:46.689911 ignition[843]: no configs at "/usr/lib/ignition/base.d"
Aug 13 01:01:46.689923 ignition[843]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:01:46.690027 ignition[843]: parsed url from cmdline: ""
Aug 13 01:01:46.690032 ignition[843]: no config URL provided
Aug 13 01:01:46.690037 ignition[843]: reading system config file "/usr/lib/ignition/user.ign"
Aug 13 01:01:46.690045 ignition[843]: no config at "/usr/lib/ignition/user.ign"
Aug 13 01:01:46.690085 ignition[843]: PUT http://169.254.169.254/v1/token: attempt #1
Aug 13 01:01:46.690292 ignition[843]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Aug 13 01:01:46.890934 ignition[843]: PUT http://169.254.169.254/v1/token: attempt #2
Aug 13 01:01:46.891122 ignition[843]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable
Aug 13 01:01:47.188031 systemd-networkd[838]: eth0: DHCPv4 address 172.233.209.21/24, gateway 172.233.209.1 acquired from 23.205.167.223
Aug 13 01:01:47.291295 ignition[843]: PUT http://169.254.169.254/v1/token: attempt #3
Aug 13 01:01:47.382082 ignition[843]: PUT result: OK
Aug 13 01:01:47.382161 ignition[843]: GET http://169.254.169.254/v1/user-data: attempt #1
Aug 13 01:01:47.493210 ignition[843]: GET result: OK
Aug 13 01:01:47.494698 ignition[843]: parsing config with SHA512: 67efc3524924e4b66e182bbfc0f96298eeade231d68b76349287cae18cbcc893cb7b377ee2b5425945c46ff2cbfc61e9d9f7a9e029f7cdb3396c3bd12fbb22a6
Aug 13 01:01:47.500940 unknown[843]: fetched base config from "system"
Aug 13 01:01:47.501569 unknown[843]: fetched base config from "system"
Aug 13 01:01:47.501577 unknown[843]: fetched user config from "akamai"
Aug 13 01:01:47.501833 ignition[843]: fetch: fetch complete
Aug 13 01:01:47.501838 ignition[843]: fetch: fetch passed
Aug 13 01:01:47.501879 ignition[843]: Ignition finished successfully
Aug 13 01:01:47.505547 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Aug 13 01:01:47.527060 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Aug 13 01:01:47.563541 ignition[851]: Ignition 2.21.0
Aug 13 01:01:47.563551 ignition[851]: Stage: kargs
Aug 13 01:01:47.563653 ignition[851]: no configs at "/usr/lib/ignition/base.d"
Aug 13 01:01:47.563663 ignition[851]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:01:47.566161 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Aug 13 01:01:47.564179 ignition[851]: kargs: kargs passed
Aug 13 01:01:47.564214 ignition[851]: Ignition finished successfully
Aug 13 01:01:47.569085 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Aug 13 01:01:47.595314 ignition[857]: Ignition 2.21.0
Aug 13 01:01:47.595324 ignition[857]: Stage: disks
Aug 13 01:01:47.595422 ignition[857]: no configs at "/usr/lib/ignition/base.d"
Aug 13 01:01:47.595432 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:01:47.597238 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Aug 13 01:01:47.595942 ignition[857]: disks: disks passed
Aug 13 01:01:47.598583 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Aug 13 01:01:47.595995 ignition[857]: Ignition finished successfully
Aug 13 01:01:47.599401 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Aug 13 01:01:47.600359 systemd[1]: Reached target local-fs.target - Local File Systems.
Aug 13 01:01:47.601526 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 01:01:47.602535 systemd[1]: Reached target basic.target - Basic System.
Aug 13 01:01:47.604468 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Aug 13 01:01:47.626553 systemd-fsck[866]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Aug 13 01:01:47.628400 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Aug 13 01:01:47.630805 systemd[1]: Mounting sysroot.mount - /sysroot...
Aug 13 01:01:47.729991 kernel: EXT4-fs (sda9): mounted filesystem 069caac6-7833-4acd-8940-01a7ff7d1281 r/w with ordered data mode. Quota mode: none.
Aug 13 01:01:47.730992 systemd[1]: Mounted sysroot.mount - /sysroot.
Aug 13 01:01:47.731814 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Aug 13 01:01:47.733640 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 01:01:47.736778 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Aug 13 01:01:47.738031 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Aug 13 01:01:47.738072 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Aug 13 01:01:47.738094 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Aug 13 01:01:47.745806 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Aug 13 01:01:47.748085 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Aug 13 01:01:47.757985 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (874)
Aug 13 01:01:47.762358 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 01:01:47.762380 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 01:01:47.763362 kernel: BTRFS info (device sda6): using free-space-tree
Aug 13 01:01:47.774202 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 01:01:47.794056 initrd-setup-root[898]: cut: /sysroot/etc/passwd: No such file or directory
Aug 13 01:01:47.797990 initrd-setup-root[905]: cut: /sysroot/etc/group: No such file or directory
Aug 13 01:01:47.801918 initrd-setup-root[912]: cut: /sysroot/etc/shadow: No such file or directory
Aug 13 01:01:47.805791 initrd-setup-root[919]: cut: /sysroot/etc/gshadow: No such file or directory
Aug 13 01:01:47.883896 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Aug 13 01:01:47.886261 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Aug 13 01:01:47.887919 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Aug 13 01:01:47.902618 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Aug 13 01:01:47.905001 kernel: BTRFS info (device sda6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 01:01:47.919598 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Aug 13 01:01:47.927741 ignition[986]: INFO : Ignition 2.21.0
Aug 13 01:01:47.927741 ignition[986]: INFO : Stage: mount
Aug 13 01:01:47.928965 ignition[986]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 01:01:47.928965 ignition[986]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:01:47.928965 ignition[986]: INFO : mount: mount passed
Aug 13 01:01:47.928965 ignition[986]: INFO : Ignition finished successfully
Aug 13 01:01:47.930174 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Aug 13 01:01:47.932457 systemd[1]: Starting ignition-files.service - Ignition (files)...
Aug 13 01:01:48.025129 systemd-networkd[838]: eth0: Gained IPv6LL
Aug 13 01:01:48.733810 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Aug 13 01:01:48.758977 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (999)
Aug 13 01:01:48.762634 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f
Aug 13 01:01:48.762704 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm
Aug 13 01:01:48.765640 kernel: BTRFS info (device sda6): using free-space-tree
Aug 13 01:01:48.770417 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Aug 13 01:01:48.802771 ignition[1016]: INFO : Ignition 2.21.0
Aug 13 01:01:48.802771 ignition[1016]: INFO : Stage: files
Aug 13 01:01:48.804270 ignition[1016]: INFO : no configs at "/usr/lib/ignition/base.d"
Aug 13 01:01:48.804270 ignition[1016]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai"
Aug 13 01:01:48.804270 ignition[1016]: DEBUG : files: compiled without relabeling support, skipping
Aug 13 01:01:48.806921 ignition[1016]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Aug 13 01:01:48.806921 ignition[1016]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Aug 13 01:01:48.810037 ignition[1016]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Aug 13 01:01:48.810940 ignition[1016]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Aug 13 01:01:48.810940 ignition[1016]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Aug 13 01:01:48.810495 unknown[1016]: wrote ssh authorized keys file for user: core
Aug 13 01:01:48.813282 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 01:01:48.813282 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1
Aug 13 01:01:49.285256 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Aug 13 01:01:50.482376 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz"
Aug 13 01:01:50.482376 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Aug 13 01:01:50.484918 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Aug 13 01:01:50.484918 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 01:01:50.484918 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Aug 13 01:01:50.484918 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 01:01:50.484918 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Aug 13 01:01:50.484918 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 01:01:50.484918 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Aug 13 01:01:50.484918 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 01:01:50.484918 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Aug 13 01:01:50.484918 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 01:01:50.493362 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 01:01:50.493362 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 01:01:50.493362 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1
Aug 13 01:01:51.006034 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Aug 13 01:01:51.403852 ignition[1016]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw"
Aug 13 01:01:51.403852 ignition[1016]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Aug 13 01:01:51.405901 ignition[1016]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 01:01:51.406923 ignition[1016]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Aug 13 01:01:51.406923 ignition[1016]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Aug 13 01:01:51.406923 ignition[1016]: INFO : files: op(d): [started] processing unit "coreos-metadata.service"
Aug 13 01:01:51.406923 ignition[1016]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Aug 13 01:01:51.406923 ignition[1016]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Aug 13 01:01:51.406923 ignition[1016]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service"
Aug 13 01:01:51.406923 ignition[1016]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service"
Aug 13 01:01:51.406923 ignition[1016]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service"
Aug 13 01:01:51.406923 ignition[1016]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 01:01:51.417447 ignition[1016]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json"
Aug 13 01:01:51.417447 ignition[1016]: INFO : files: files passed
Aug 13 01:01:51.417447 ignition[1016]: INFO : Ignition finished successfully
Aug 13 01:01:51.412888 systemd[1]: Finished ignition-files.service - Ignition (files).
Aug 13 01:01:51.417070 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Aug 13 01:01:51.421093 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Aug 13 01:01:51.429633 systemd[1]: ignition-quench.service: Deactivated successfully.
Aug 13 01:01:51.429737 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Aug 13 01:01:51.436395 initrd-setup-root-after-ignition[1046]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 01:01:51.437380 initrd-setup-root-after-ignition[1050]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 01:01:51.438383 initrd-setup-root-after-ignition[1046]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Aug 13 01:01:51.439082 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Aug 13 01:01:51.440288 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Aug 13 01:01:51.442082 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Aug 13 01:01:51.472808 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 01:01:51.472920 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 13 01:01:51.474372 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 13 01:01:51.475211 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 13 01:01:51.476403 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 13 01:01:51.477181 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 13 01:01:51.512774 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 01:01:51.514792 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 13 01:01:51.528688 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 13 01:01:51.529347 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 01:01:51.530595 systemd[1]: Stopped target timers.target - Timer Units. Aug 13 01:01:51.531761 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 01:01:51.531895 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 01:01:51.533068 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 13 01:01:51.533822 systemd[1]: Stopped target basic.target - Basic System. Aug 13 01:01:51.534970 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 13 01:01:51.535976 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 01:01:51.537013 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 13 01:01:51.538123 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Aug 13 01:01:51.539370 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Aug 13 01:01:51.540508 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 01:01:51.541828 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 13 01:01:51.542941 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 13 01:01:51.544202 systemd[1]: Stopped target swap.target - Swaps. Aug 13 01:01:51.545254 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 01:01:51.545347 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 13 01:01:51.546630 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 13 01:01:51.547416 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 01:01:51.548408 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 13 01:01:51.548714 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 01:01:51.549685 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 01:01:51.549775 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 13 01:01:51.551308 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 01:01:51.551453 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 01:01:51.552169 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 01:01:51.552296 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 13 01:01:51.555031 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 13 01:01:51.556184 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 01:01:51.556289 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 01:01:51.559094 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 13 01:01:51.562541 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Aug 13 01:01:51.562692 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 01:01:51.564370 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 01:01:51.564476 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 01:01:51.568839 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 01:01:51.569159 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 13 01:01:51.585044 ignition[1070]: INFO : Ignition 2.21.0 Aug 13 01:01:51.585044 ignition[1070]: INFO : Stage: umount Aug 13 01:01:51.588126 ignition[1070]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 01:01:51.588126 ignition[1070]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:01:51.588126 ignition[1070]: INFO : umount: umount passed Aug 13 01:01:51.588126 ignition[1070]: INFO : Ignition finished successfully Aug 13 01:01:51.589513 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 01:01:51.589648 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 13 01:01:51.612066 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 01:01:51.612642 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 01:01:51.612760 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 13 01:01:51.614177 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 01:01:51.614256 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 13 01:01:51.614844 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 01:01:51.614894 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 13 01:01:51.615868 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 13 01:01:51.615916 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 13 01:01:51.616861 systemd[1]: Stopped target network.target - Network. 
Aug 13 01:01:51.617782 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 01:01:51.617832 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 01:01:51.618817 systemd[1]: Stopped target paths.target - Path Units. Aug 13 01:01:51.619783 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 01:01:51.624001 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 01:01:51.624913 systemd[1]: Stopped target slices.target - Slice Units. Aug 13 01:01:51.626075 systemd[1]: Stopped target sockets.target - Socket Units. Aug 13 01:01:51.627573 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 01:01:51.627614 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 01:01:51.628579 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 01:01:51.628615 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 01:01:51.629579 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 01:01:51.629627 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 13 01:01:51.630576 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 13 01:01:51.630618 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 13 01:01:51.631600 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 01:01:51.631647 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 01:01:51.632710 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 13 01:01:51.633798 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 13 01:01:51.639818 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 01:01:51.639946 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. 
Aug 13 01:01:51.643908 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Aug 13 01:01:51.644274 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 01:01:51.644388 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 13 01:01:51.646174 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Aug 13 01:01:51.646903 systemd[1]: Stopped target network-pre.target - Preparation for Network. Aug 13 01:01:51.647737 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 01:01:51.647776 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 13 01:01:51.649847 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 13 01:01:51.651277 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 01:01:51.651328 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 01:01:51.651890 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 01:01:51.651931 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 01:01:51.654038 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 01:01:51.654084 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 13 01:01:51.655006 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 13 01:01:51.655054 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 01:01:51.656457 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 01:01:51.657805 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 13 01:01:51.657865 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Aug 13 01:01:51.670505 systemd[1]: network-cleanup.service: Deactivated successfully. 
Aug 13 01:01:51.671117 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 01:01:51.676574 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 01:01:51.676790 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 01:01:51.677919 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 01:01:51.677981 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 13 01:01:51.678938 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 01:01:51.678997 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 01:01:51.680047 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 01:01:51.680094 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 13 01:01:51.681710 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 01:01:51.681755 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 13 01:01:51.682869 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 01:01:51.682915 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 01:01:51.686051 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 13 01:01:51.687045 systemd[1]: systemd-network-generator.service: Deactivated successfully. Aug 13 01:01:51.687094 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 01:01:51.688733 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 01:01:51.688787 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 01:01:51.690849 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 01:01:51.690898 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Aug 13 01:01:51.693320 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Aug 13 01:01:51.693373 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Aug 13 01:01:51.693419 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 13 01:01:51.700362 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 01:01:51.700467 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 13 01:01:51.701803 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 13 01:01:51.703687 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 01:01:51.718872 systemd[1]: Switching root. Aug 13 01:01:51.750097 systemd-journald[206]: Journal stopped Aug 13 01:01:52.809660 systemd-journald[206]: Received SIGTERM from PID 1 (systemd). Aug 13 01:01:52.809683 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 01:01:52.809694 kernel: SELinux: policy capability open_perms=1 Aug 13 01:01:52.809706 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 01:01:52.809714 kernel: SELinux: policy capability always_check_network=0 Aug 13 01:01:52.809723 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 01:01:52.809732 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 01:01:52.809740 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 01:01:52.809749 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 01:01:52.809757 kernel: SELinux: policy capability userspace_initial_context=0 Aug 13 01:01:52.809768 kernel: audit: type=1403 audit(1755046911.910:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 01:01:52.809777 systemd[1]: Successfully loaded SELinux policy in 80.541ms. Aug 13 01:01:52.809787 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.858ms. 
Aug 13 01:01:52.809798 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 13 01:01:52.809808 systemd[1]: Detected virtualization kvm. Aug 13 01:01:52.809819 systemd[1]: Detected architecture x86-64. Aug 13 01:01:52.809828 systemd[1]: Detected first boot. Aug 13 01:01:52.809838 systemd[1]: Initializing machine ID from random generator. Aug 13 01:01:52.809847 zram_generator::config[1118]: No configuration found. Aug 13 01:01:52.809857 kernel: Guest personality initialized and is inactive Aug 13 01:01:52.809866 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Aug 13 01:01:52.809875 kernel: Initialized host personality Aug 13 01:01:52.809885 kernel: NET: Registered PF_VSOCK protocol family Aug 13 01:01:52.809894 systemd[1]: Populated /etc with preset unit settings. Aug 13 01:01:52.809906 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Aug 13 01:01:52.809917 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 13 01:01:52.809926 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 13 01:01:52.809936 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 13 01:01:52.809945 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 13 01:01:52.809971 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 13 01:01:52.809982 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 13 01:01:52.809992 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
Aug 13 01:01:52.810002 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 13 01:01:52.810011 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 13 01:01:52.810021 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 13 01:01:52.810031 systemd[1]: Created slice user.slice - User and Session Slice. Aug 13 01:01:52.810043 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 01:01:52.810052 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 01:01:52.810062 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 13 01:01:52.810071 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 13 01:01:52.810084 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 13 01:01:52.810094 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 01:01:52.810104 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 13 01:01:52.810114 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 01:01:52.810125 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 01:01:52.810135 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 13 01:01:52.810145 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 13 01:01:52.810155 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 13 01:01:52.810164 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 13 01:01:52.810174 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. 
Aug 13 01:01:52.810184 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 01:01:52.810194 systemd[1]: Reached target slices.target - Slice Units. Aug 13 01:01:52.810205 systemd[1]: Reached target swap.target - Swaps. Aug 13 01:01:52.810215 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 13 01:01:52.810224 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 13 01:01:52.810234 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Aug 13 01:01:52.810244 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 01:01:52.810256 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 01:01:52.810265 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 01:01:52.810275 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 13 01:01:52.810285 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 13 01:01:52.810294 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 13 01:01:52.810304 systemd[1]: Mounting media.mount - External Media Directory... Aug 13 01:01:52.810314 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:01:52.810324 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 13 01:01:52.810335 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 13 01:01:52.810345 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 13 01:01:52.810355 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 01:01:52.810364 systemd[1]: Reached target machines.target - Containers. 
Aug 13 01:01:52.810374 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 13 01:01:52.810384 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 01:01:52.810393 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 01:01:52.810403 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 13 01:01:52.810415 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 01:01:52.810424 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 01:01:52.810434 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 01:01:52.810444 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 13 01:01:52.810453 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 01:01:52.810464 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 01:01:52.810474 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 13 01:01:52.810483 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 13 01:01:52.810493 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 13 01:01:52.810504 systemd[1]: Stopped systemd-fsck-usr.service. Aug 13 01:01:52.810515 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 01:01:52.810525 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 01:01:52.810534 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... 
Aug 13 01:01:52.810544 kernel: loop: module loaded Aug 13 01:01:52.810553 kernel: fuse: init (API version 7.41) Aug 13 01:01:52.810562 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 01:01:52.810572 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 13 01:01:52.810583 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Aug 13 01:01:52.810593 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 01:01:52.810603 systemd[1]: verity-setup.service: Deactivated successfully. Aug 13 01:01:52.810613 systemd[1]: Stopped verity-setup.service. Aug 13 01:01:52.810623 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:01:52.810633 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 13 01:01:52.810642 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 13 01:01:52.810652 systemd[1]: Mounted media.mount - External Media Directory. Aug 13 01:01:52.810663 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 13 01:01:52.810673 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 13 01:01:52.810683 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 13 01:01:52.810692 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 13 01:01:52.810702 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 01:01:52.810712 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 01:01:52.810722 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 13 01:01:52.810731 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. 
Aug 13 01:01:52.810741 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 01:01:52.810752 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 01:01:52.810762 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 01:01:52.810773 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 01:01:52.810783 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 13 01:01:52.810793 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 01:01:52.810824 systemd-journald[1206]: Collecting audit messages is disabled. Aug 13 01:01:52.810846 kernel: ACPI: bus type drm_connector registered Aug 13 01:01:52.810856 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 01:01:52.810866 systemd-journald[1206]: Journal started Aug 13 01:01:52.810900 systemd-journald[1206]: Runtime Journal (/run/log/journal/2c913bd48fe1482c83ddeecc57979b14) is 8M, max 78.5M, 70.5M free. Aug 13 01:01:52.468317 systemd[1]: Queued start job for default target multi-user.target. Aug 13 01:01:52.481537 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Aug 13 01:01:52.482000 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 13 01:01:52.812982 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 01:01:52.816179 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 01:01:52.816567 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 01:01:52.816837 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 01:01:52.817745 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 01:01:52.818640 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 13 01:01:52.819533 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. 
Aug 13 01:01:52.833696 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 01:01:52.836032 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 13 01:01:52.839096 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 13 01:01:52.839710 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 01:01:52.839787 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 01:01:52.842762 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Aug 13 01:01:52.847112 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 13 01:01:52.849110 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 01:01:52.852098 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 13 01:01:52.855545 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 13 01:01:52.856711 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 01:01:52.859174 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 13 01:01:52.860028 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 01:01:52.862449 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 01:01:52.865230 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 13 01:01:52.878494 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 13 01:01:52.882206 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. 
Aug 13 01:01:52.883114 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 13 01:01:52.906057 systemd-journald[1206]: Time spent on flushing to /var/log/journal/2c913bd48fe1482c83ddeecc57979b14 is 23.350ms for 998 entries. Aug 13 01:01:52.906057 systemd-journald[1206]: System Journal (/var/log/journal/2c913bd48fe1482c83ddeecc57979b14) is 8M, max 195.6M, 187.6M free. Aug 13 01:01:52.958747 systemd-journald[1206]: Received client request to flush runtime journal. Aug 13 01:01:52.958792 kernel: loop0: detected capacity change from 0 to 8 Aug 13 01:01:52.958815 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 01:01:52.916090 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 13 01:01:52.917379 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 13 01:01:52.923734 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Aug 13 01:01:52.930402 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 01:01:52.961758 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 13 01:01:52.975471 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 01:01:52.976924 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Aug 13 01:01:52.980983 kernel: loop1: detected capacity change from 0 to 221472 Aug 13 01:01:52.996230 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 13 01:01:53.001200 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 01:01:53.022046 kernel: loop2: detected capacity change from 0 to 113872 Aug 13 01:01:53.055392 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. Aug 13 01:01:53.055412 systemd-tmpfiles[1259]: ACLs are not supported, ignoring. 
Aug 13 01:01:53.061435 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 01:01:53.066971 kernel: loop3: detected capacity change from 0 to 146240 Aug 13 01:01:53.104982 kernel: loop4: detected capacity change from 0 to 8 Aug 13 01:01:53.108154 kernel: loop5: detected capacity change from 0 to 221472 Aug 13 01:01:53.127733 kernel: loop6: detected capacity change from 0 to 113872 Aug 13 01:01:53.150080 kernel: loop7: detected capacity change from 0 to 146240 Aug 13 01:01:53.169916 (sd-merge)[1265]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. Aug 13 01:01:53.170724 (sd-merge)[1265]: Merged extensions into '/usr'. Aug 13 01:01:53.176512 systemd[1]: Reload requested from client PID 1240 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 01:01:53.176532 systemd[1]: Reloading... Aug 13 01:01:53.259998 zram_generator::config[1287]: No configuration found. Aug 13 01:01:53.379509 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:01:53.391434 ldconfig[1235]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 01:01:53.449912 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 01:01:53.450387 systemd[1]: Reloading finished in 273 ms. Aug 13 01:01:53.464734 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 01:01:53.465841 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 01:01:53.479076 systemd[1]: Starting ensure-sysext.service... Aug 13 01:01:53.480740 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 01:01:53.504031 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Aug 13 01:01:53.508672 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Aug 13 01:01:53.514610 systemd[1]: Reload requested from client PID 1334 ('systemctl') (unit ensure-sysext.service)...
Aug 13 01:01:53.514645 systemd[1]: Reloading...
Aug 13 01:01:53.515033 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Aug 13 01:01:53.517235 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Aug 13 01:01:53.517521 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Aug 13 01:01:53.517746 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Aug 13 01:01:53.518712 systemd-tmpfiles[1335]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Aug 13 01:01:53.519026 systemd-tmpfiles[1335]: ACLs are not supported, ignoring.
Aug 13 01:01:53.519138 systemd-tmpfiles[1335]: ACLs are not supported, ignoring.
Aug 13 01:01:53.523027 systemd-tmpfiles[1335]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 01:01:53.523037 systemd-tmpfiles[1335]: Skipping /boot
Aug 13 01:01:53.534909 systemd-tmpfiles[1335]: Detected autofs mount point /boot during canonicalization of boot.
Aug 13 01:01:53.534997 systemd-tmpfiles[1335]: Skipping /boot
Aug 13 01:01:53.562262 systemd-udevd[1338]: Using default interface naming scheme 'v255'.
Aug 13 01:01:53.588993 zram_generator::config[1369]: No configuration found.
Aug 13 01:01:53.764274 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Aug 13 01:01:53.838980 kernel: mousedev: PS/2 mouse device common for all mice
Aug 13 01:01:53.847723 systemd[1]: Reloading finished in 332 ms.
Aug 13 01:01:53.850010 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3
Aug 13 01:01:53.856976 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Aug 13 01:01:53.859373 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Aug 13 01:01:53.873887 kernel: ACPI: button: Power Button [PWRF]
Aug 13 01:01:53.873933 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt
Aug 13 01:01:53.875895 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Aug 13 01:01:53.890683 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Aug 13 01:01:53.897111 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:01:53.899546 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Aug 13 01:01:53.902829 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Aug 13 01:01:53.904126 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 01:01:53.905209 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 01:01:53.908214 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 01:01:53.914024 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 01:01:53.914697 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 01:01:53.915041 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 13 01:01:53.916683 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Aug 13 01:01:53.922071 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Aug 13 01:01:53.927175 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Aug 13 01:01:53.944871 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Aug 13 01:01:53.945571 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:01:53.949293 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 01:01:53.949517 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 01:01:53.955879 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:01:53.956484 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 01:01:53.959368 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Aug 13 01:01:53.960980 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 01:01:53.961479 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 13 01:01:53.967067 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Aug 13 01:01:53.968294 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:01:53.969732 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 01:01:53.975393 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 01:01:53.981346 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 01:01:53.983771 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 01:01:53.987930 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 01:01:53.994537 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:01:53.994785 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Aug 13 01:01:53.998753 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Aug 13 01:01:54.004807 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Aug 13 01:01:54.012421 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Aug 13 01:01:54.013650 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Aug 13 01:01:54.013811 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Aug 13 01:01:54.013939 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen).
Aug 13 01:01:54.015226 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Aug 13 01:01:54.016300 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Aug 13 01:01:54.016524 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Aug 13 01:01:54.026635 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Aug 13 01:01:54.031092 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Aug 13 01:01:54.032773 systemd[1]: Finished ensure-sysext.service.
Aug 13 01:01:54.044680 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization...
Aug 13 01:01:54.046755 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Aug 13 01:01:54.049190 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Aug 13 01:01:54.051376 systemd[1]: modprobe@drm.service: Deactivated successfully.
Aug 13 01:01:54.051740 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Aug 13 01:01:54.062151 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Aug 13 01:01:54.069747 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Aug 13 01:01:54.070041 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Aug 13 01:01:54.080350 systemd[1]: modprobe@loop.service: Deactivated successfully.
Aug 13 01:01:54.080637 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Aug 13 01:01:54.082254 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Aug 13 01:01:54.094775 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Aug 13 01:01:54.108983 kernel: EDAC MC: Ver: 3.0.0
Aug 13 01:01:54.118675 augenrules[1507]: No rules
Aug 13 01:01:54.122070 systemd[1]: audit-rules.service: Deactivated successfully.
Aug 13 01:01:54.122357 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Aug 13 01:01:54.164769 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Aug 13 01:01:54.191707 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Aug 13 01:01:54.205499 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM.
Aug 13 01:01:54.208892 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Aug 13 01:01:54.249242 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Aug 13 01:01:54.337473 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Aug 13 01:01:54.416613 systemd-networkd[1460]: lo: Link UP
Aug 13 01:01:54.416890 systemd-networkd[1460]: lo: Gained carrier
Aug 13 01:01:54.418748 systemd-networkd[1460]: Enumeration completed
Aug 13 01:01:54.418889 systemd[1]: Started systemd-networkd.service - Network Configuration.
Aug 13 01:01:54.421097 systemd-networkd[1460]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 01:01:54.421170 systemd-networkd[1460]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Aug 13 01:01:54.422069 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Aug 13 01:01:54.423671 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Aug 13 01:01:54.426112 systemd-networkd[1460]: eth0: Link UP
Aug 13 01:01:54.426294 systemd-networkd[1460]: eth0: Gained carrier
Aug 13 01:01:54.426307 systemd-networkd[1460]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Aug 13 01:01:54.439393 systemd-resolved[1461]: Positive Trust Anchors:
Aug 13 01:01:54.439630 systemd-resolved[1461]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Aug 13 01:01:54.439702 systemd-resolved[1461]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Aug 13 01:01:54.445767 systemd-resolved[1461]: Defaulting to hostname 'linux'.
Aug 13 01:01:54.450143 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Aug 13 01:01:54.450925 systemd[1]: Reached target network.target - Network.
Aug 13 01:01:54.451433 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Aug 13 01:01:54.453082 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization.
Aug 13 01:01:54.453857 systemd[1]: Reached target sysinit.target - System Initialization.
Aug 13 01:01:54.476115 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Aug 13 01:01:54.476900 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Aug 13 01:01:54.477542 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer.
Aug 13 01:01:54.478175 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Aug 13 01:01:54.478986 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Aug 13 01:01:54.479064 systemd[1]: Reached target paths.target - Path Units.
Aug 13 01:01:54.479702 systemd[1]: Reached target time-set.target - System Time Set.
Aug 13 01:01:54.480592 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Aug 13 01:01:54.481295 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Aug 13 01:01:54.481902 systemd[1]: Reached target timers.target - Timer Units.
Aug 13 01:01:54.484491 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Aug 13 01:01:54.487237 systemd[1]: Starting docker.socket - Docker Socket for the API...
Aug 13 01:01:54.490050 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local).
Aug 13 01:01:54.490755 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK).
Aug 13 01:01:54.491343 systemd[1]: Reached target ssh-access.target - SSH Access Available.
Aug 13 01:01:54.495111 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Aug 13 01:01:54.496178 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket.
Aug 13 01:01:54.497899 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd.
Aug 13 01:01:54.498654 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Aug 13 01:01:54.500469 systemd[1]: Reached target sockets.target - Socket Units.
Aug 13 01:01:54.501166 systemd[1]: Reached target basic.target - Basic System.
Aug 13 01:01:54.501725 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Aug 13 01:01:54.501761 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Aug 13 01:01:54.503026 systemd[1]: Starting containerd.service - containerd container runtime...
Aug 13 01:01:54.507070 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Aug 13 01:01:54.511689 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Aug 13 01:01:54.515447 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Aug 13 01:01:54.521046 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Aug 13 01:01:54.524975 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Aug 13 01:01:54.526021 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Aug 13 01:01:54.532565 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh...
Aug 13 01:01:54.540911 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Aug 13 01:01:54.546027 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Aug 13 01:01:54.546449 jq[1541]: false
Aug 13 01:01:54.548531 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Aug 13 01:01:54.558122 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Aug 13 01:01:54.562852 google_oslogin_nss_cache[1543]: oslogin_cache_refresh[1543]: Refreshing passwd entry cache
Aug 13 01:01:54.563098 oslogin_cache_refresh[1543]: Refreshing passwd entry cache
Aug 13 01:01:54.565182 systemd[1]: Starting systemd-logind.service - User Login Management...
Aug 13 01:01:54.566818 google_oslogin_nss_cache[1543]: oslogin_cache_refresh[1543]: Failure getting users, quitting
Aug 13 01:01:54.567752 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Aug 13 01:01:54.570047 google_oslogin_nss_cache[1543]: oslogin_cache_refresh[1543]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Aug 13 01:01:54.570047 google_oslogin_nss_cache[1543]: oslogin_cache_refresh[1543]: Refreshing group entry cache
Aug 13 01:01:54.570047 google_oslogin_nss_cache[1543]: oslogin_cache_refresh[1543]: Failure getting groups, quitting
Aug 13 01:01:54.570047 google_oslogin_nss_cache[1543]: oslogin_cache_refresh[1543]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Aug 13 01:01:54.568804 oslogin_cache_refresh[1543]: Failure getting users, quitting
Aug 13 01:01:54.568208 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details.
Aug 13 01:01:54.568821 oslogin_cache_refresh[1543]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak.
Aug 13 01:01:54.568857 oslogin_cache_refresh[1543]: Refreshing group entry cache
Aug 13 01:01:54.569345 oslogin_cache_refresh[1543]: Failure getting groups, quitting
Aug 13 01:01:54.569354 oslogin_cache_refresh[1543]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak.
Aug 13 01:01:54.570606 systemd[1]: Starting update-engine.service - Update Engine...
Aug 13 01:01:54.576101 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Aug 13 01:01:54.580981 extend-filesystems[1542]: Found /dev/sda6
Aug 13 01:01:54.583460 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Aug 13 01:01:54.586291 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Aug 13 01:01:54.589478 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Aug 13 01:01:54.589807 systemd[1]: google-oslogin-cache.service: Deactivated successfully.
Aug 13 01:01:54.590136 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh.
Aug 13 01:01:54.597039 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Aug 13 01:01:54.597312 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Aug 13 01:01:54.598540 extend-filesystems[1542]: Found /dev/sda9
Aug 13 01:01:54.608300 jq[1556]: true
Aug 13 01:01:54.615384 extend-filesystems[1542]: Checking size of /dev/sda9
Aug 13 01:01:54.641270 tar[1560]: linux-amd64/helm
Aug 13 01:01:54.645700 systemd[1]: motdgen.service: Deactivated successfully.
Aug 13 01:01:54.650764 update_engine[1554]: I20250813 01:01:54.648661 1554 main.cc:92] Flatcar Update Engine starting
Aug 13 01:01:54.648908 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Aug 13 01:01:54.649977 (ntainerd)[1575]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Aug 13 01:01:54.662841 jq[1574]: true
Aug 13 01:01:54.667545 extend-filesystems[1542]: Resized partition /dev/sda9
Aug 13 01:01:54.677771 extend-filesystems[1588]: resize2fs 1.47.2 (1-Jan-2025)
Aug 13 01:01:54.692992 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 555003 blocks
Aug 13 01:01:54.699973 kernel: EXT4-fs (sda9): resized filesystem to 555003
Aug 13 01:01:54.702662 dbus-daemon[1539]: [system] SELinux support is enabled
Aug 13 01:01:54.702874 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Aug 13 01:01:54.707439 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Aug 13 01:01:54.707464 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Aug 13 01:01:54.709189 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Aug 13 01:01:54.709206 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Aug 13 01:01:54.716310 extend-filesystems[1588]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required
Aug 13 01:01:54.716310 extend-filesystems[1588]: old_desc_blocks = 1, new_desc_blocks = 1
Aug 13 01:01:54.716310 extend-filesystems[1588]: The filesystem on /dev/sda9 is now 555003 (4k) blocks long.
Aug 13 01:01:54.735569 extend-filesystems[1542]: Resized filesystem in /dev/sda9
Aug 13 01:01:54.743112 coreos-metadata[1538]: Aug 13 01:01:54.732 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Aug 13 01:01:54.718676 systemd[1]: extend-filesystems.service: Deactivated successfully.
Aug 13 01:01:54.718936 systemd[1]: Finished extend-filesystems.service - Extend Filesystems.
Aug 13 01:01:54.750581 systemd[1]: Started update-engine.service - Update Engine.
Aug 13 01:01:54.757585 update_engine[1554]: I20250813 01:01:54.756531 1554 update_check_scheduler.cc:74] Next update check in 9m25s
Aug 13 01:01:54.776136 systemd[1]: Started locksmithd.service - Cluster reboot manager.
Aug 13 01:01:54.810644 bash[1609]: Updated "/home/core/.ssh/authorized_keys"
Aug 13 01:01:54.811526 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition.
Aug 13 01:01:54.826171 systemd[1]: Starting sshkeys.service...
Aug 13 01:01:54.830079 systemd-logind[1550]: Watching system buttons on /dev/input/event2 (Power Button)
Aug 13 01:01:54.832003 systemd-logind[1550]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard)
Aug 13 01:01:54.832282 systemd-logind[1550]: New seat seat0.
Aug 13 01:01:54.839519 systemd[1]: Started systemd-logind.service - User Login Management.
Aug 13 01:01:54.882569 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys.
Aug 13 01:01:54.888627 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)...
Aug 13 01:01:54.999023 systemd-networkd[1460]: eth0: DHCPv4 address 172.233.209.21/24, gateway 172.233.209.1 acquired from 23.205.167.223
Aug 13 01:01:55.000305 systemd-timesyncd[1484]: Network configuration changed, trying to establish connection.
Aug 13 01:01:55.004372 dbus-daemon[1539]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1460 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Aug 13 01:01:55.013025 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Aug 13 01:01:55.037176 containerd[1575]: time="2025-08-13T01:01:55Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8
Aug 13 01:01:55.044640 containerd[1575]: time="2025-08-13T01:01:55.044614200Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4
Aug 13 01:01:55.053918 coreos-metadata[1615]: Aug 13 01:01:55.052 INFO Putting http://169.254.169.254/v1/token: Attempt #1
Aug 13 01:01:55.069509 containerd[1575]: time="2025-08-13T01:01:55.069464450Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.14µs"
Aug 13 01:01:55.069509 containerd[1575]: time="2025-08-13T01:01:55.069503630Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1
Aug 13 01:01:55.069584 containerd[1575]: time="2025-08-13T01:01:55.069523030Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1
Aug 13 01:01:55.069710 containerd[1575]: time="2025-08-13T01:01:55.069681900Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1
Aug 13 01:01:55.069710 containerd[1575]: time="2025-08-13T01:01:55.069707590Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1
Aug 13 01:01:55.069750 containerd[1575]: time="2025-08-13T01:01:55.069734950Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Aug 13 01:01:55.069828 containerd[1575]: time="2025-08-13T01:01:55.069801470Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Aug 13 01:01:55.069828 containerd[1575]: time="2025-08-13T01:01:55.069821710Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Aug 13 01:01:55.074211 containerd[1575]: time="2025-08-13T01:01:55.074054730Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1
Aug 13 01:01:55.074211 containerd[1575]: time="2025-08-13T01:01:55.074077820Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Aug 13 01:01:55.074211 containerd[1575]: time="2025-08-13T01:01:55.074089920Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1
Aug 13 01:01:55.074211 containerd[1575]: time="2025-08-13T01:01:55.074099240Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1
Aug 13 01:01:55.074211 containerd[1575]: time="2025-08-13T01:01:55.074200550Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Aug 13 01:01:55.075146 containerd[1575]: time="2025-08-13T01:01:55.074421560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Aug 13 01:01:55.075146 containerd[1575]: time="2025-08-13T01:01:55.074458580Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1
Aug 13 01:01:55.075146 containerd[1575]: time="2025-08-13T01:01:55.074468190Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1
Aug 13 01:01:55.075221 containerd[1575]: time="2025-08-13T01:01:55.075181920Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1
Aug 13 01:01:55.076418 containerd[1575]: time="2025-08-13T01:01:55.076050730Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1
Aug 13 01:01:55.076418 containerd[1575]: time="2025-08-13T01:01:55.076125120Z" level=info msg="metadata content store policy set" policy=shared
Aug 13 01:01:55.079761 containerd[1575]: time="2025-08-13T01:01:55.079726860Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1
Aug 13 01:01:55.079809 containerd[1575]: time="2025-08-13T01:01:55.079778630Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1
Aug 13 01:01:55.079809 containerd[1575]: time="2025-08-13T01:01:55.079795240Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1
Aug 13 01:01:55.079809 containerd[1575]: time="2025-08-13T01:01:55.079806760Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1
Aug 13 01:01:55.080013 containerd[1575]: time="2025-08-13T01:01:55.079985880Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1
Aug 13 01:01:55.080013 containerd[1575]: time="2025-08-13T01:01:55.080008970Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1
Aug 13 01:01:55.080058 containerd[1575]: time="2025-08-13T01:01:55.080026080Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1
Aug 13 01:01:55.080058 containerd[1575]: time="2025-08-13T01:01:55.080041850Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1
Aug 13 01:01:55.080058 containerd[1575]: time="2025-08-13T01:01:55.080052240Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1
Aug 13 01:01:55.080115 containerd[1575]: time="2025-08-13T01:01:55.080061870Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1
Aug 13 01:01:55.080115 containerd[1575]: time="2025-08-13T01:01:55.080071610Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1
Aug 13 01:01:55.080115 containerd[1575]: time="2025-08-13T01:01:55.080093640Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2
Aug 13 01:01:55.081033 containerd[1575]: time="2025-08-13T01:01:55.081001720Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1
Aug 13 01:01:55.081033 containerd[1575]: time="2025-08-13T01:01:55.081030020Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1
Aug 13 01:01:55.081092 containerd[1575]: time="2025-08-13T01:01:55.081045310Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1
Aug 13 01:01:55.081092 containerd[1575]: time="2025-08-13T01:01:55.081056630Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1
Aug 13 01:01:55.081092 containerd[1575]: time="2025-08-13T01:01:55.081065970Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1
Aug 13 01:01:55.081092 containerd[1575]: time="2025-08-13T01:01:55.081076700Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1
Aug 13 01:01:55.081092 containerd[1575]: time="2025-08-13T01:01:55.081087940Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1
Aug 13 01:01:55.081183 containerd[1575]: time="2025-08-13T01:01:55.081098080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1
Aug 13 01:01:55.081183 containerd[1575]: time="2025-08-13T01:01:55.081109140Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1
Aug 13 01:01:55.081183 containerd[1575]: time="2025-08-13T01:01:55.081118590Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1
Aug 13 01:01:55.081183 containerd[1575]: time="2025-08-13T01:01:55.081128800Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1
Aug 13 01:01:55.081246 containerd[1575]: time="2025-08-13T01:01:55.081188470Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\""
Aug 13 01:01:55.081246 containerd[1575]: time="2025-08-13T01:01:55.081201380Z" level=info msg="Start snapshots syncer"
Aug 13 01:01:55.081246 containerd[1575]: time="2025-08-13T01:01:55.081231820Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1
Aug 13 01:01:55.081566 containerd[1575]: time="2025-08-13T01:01:55.081522970Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}"
Aug 13 01:01:55.081662 containerd[1575]: time="2025-08-13T01:01:55.081580250Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1
Aug 13 01:01:55.085978 containerd[1575]: time="2025-08-13T01:01:55.084002170Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1
Aug 13 01:01:55.085978 containerd[1575]: time="2025-08-13T01:01:55.084128730Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1
Aug 13 01:01:55.085978 containerd[1575]: time="2025-08-13T01:01:55.084151250Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1
Aug 13 01:01:55.085978 containerd[1575]: time="2025-08-13T01:01:55.084161150Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1
Aug 13 01:01:55.085978 containerd[1575]: time="2025-08-13T01:01:55.084178380Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1
Aug 13 01:01:55.085978 containerd[1575]: time="2025-08-13T01:01:55.084191430Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1
Aug 13 01:01:55.085978 containerd[1575]: time="2025-08-13T01:01:55.084201890Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1
Aug 13 01:01:55.085978 containerd[1575]: time="2025-08-13T01:01:55.084212190Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1
Aug 13 01:01:55.085978 containerd[1575]: time="2025-08-13T01:01:55.084233030Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1
Aug 13 01:01:55.085978 containerd[1575]: time="2025-08-13T01:01:55.084247770Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1
Aug 13 01:01:55.085978 containerd[1575]: time="2025-08-13T01:01:55.084258660Z" level=info msg="loading plugin"
id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Aug 13 01:01:55.087189 containerd[1575]: time="2025-08-13T01:01:55.086328870Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 13 01:01:55.087189 containerd[1575]: time="2025-08-13T01:01:55.086358650Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 13 01:01:55.087189 containerd[1575]: time="2025-08-13T01:01:55.086411920Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 13 01:01:55.087189 containerd[1575]: time="2025-08-13T01:01:55.086432900Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 13 01:01:55.087189 containerd[1575]: time="2025-08-13T01:01:55.086441470Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Aug 13 01:01:55.087189 containerd[1575]: time="2025-08-13T01:01:55.086451790Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Aug 13 01:01:55.087189 containerd[1575]: time="2025-08-13T01:01:55.086462820Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Aug 13 01:01:55.087189 containerd[1575]: time="2025-08-13T01:01:55.086479350Z" level=info msg="runtime interface created" Aug 13 01:01:55.087189 containerd[1575]: time="2025-08-13T01:01:55.086485120Z" level=info msg="created NRI interface" Aug 13 01:01:55.087189 containerd[1575]: time="2025-08-13T01:01:55.086492940Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Aug 13 01:01:55.087189 containerd[1575]: time="2025-08-13T01:01:55.086503860Z" level=info msg="Connect containerd service" Aug 13 
01:01:55.087189 containerd[1575]: time="2025-08-13T01:01:55.086524870Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 01:01:55.087612 containerd[1575]: time="2025-08-13T01:01:55.087402980Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 01:01:56.174025 systemd-resolved[1461]: Clock change detected. Flushing caches. Aug 13 01:01:56.174156 systemd-timesyncd[1484]: Contacted time server 23.186.168.132:123 (0.flatcar.pool.ntp.org). Aug 13 01:01:56.175220 systemd-timesyncd[1484]: Initial clock synchronization to Wed 2025-08-13 01:01:56.173734 UTC. Aug 13 01:01:56.193377 locksmithd[1598]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 01:01:56.209326 coreos-metadata[1615]: Aug 13 01:01:56.209 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Aug 13 01:01:56.254430 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Aug 13 01:01:56.259472 dbus-daemon[1539]: [system] Successfully activated service 'org.freedesktop.hostname1' Aug 13 01:01:56.262353 dbus-daemon[1539]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1621 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Aug 13 01:01:56.268211 sshd_keygen[1585]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 01:01:56.271543 systemd[1]: Starting polkit.service - Authorization Manager... Aug 13 01:01:56.310495 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 01:01:56.317304 systemd[1]: Starting issuegen.service - Generate /run/issue... 
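The `failed to load cni during init` error above means containerd's CRI plugin found no network config in `/etc/cni/net.d` (the `confDir` from the cri config logged earlier); pod networking stays down until a CNI conflist appears there. As a minimal sketch of the kind of file that would satisfy it, assuming a simple `bridge`/`host-local` setup — the network name, bridge device, and subnet below are illustrative, not recovered from this host:

```python
import json
import os
import tempfile

# A minimal CNI network config list of the shape the CRI plugin loads from
# /etc/cni/net.d. Every concrete value here is a hypothetical example.
cni_conf = {
    "cniVersion": "1.0.0",
    "name": "example-pod-network",        # hypothetical network name
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",             # hypothetical bridge device
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.85.0.0/16"  # hypothetical pod subnet
            },
        }
    ],
}

# Write to a scratch directory so this sketch is safe to run anywhere;
# on a real node the destination would be the confDir logged above.
conf_dir = tempfile.mkdtemp()
conf_path = os.path.join(conf_dir, "10-example.conflist")
with open(conf_path, "w") as f:
    json.dump(cni_conf, f, indent=2)
```

On a node managed by a CNI plugin daemonset (flannel, calico, etc.), that daemonset writes this file itself, which is why the error is expected on first boot.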
Aug 13 01:01:56.344371 coreos-metadata[1615]: Aug 13 01:01:56.344 INFO Fetch successful Aug 13 01:01:56.344631 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 01:01:56.345081 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 01:01:56.353015 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 01:01:56.360355 containerd[1575]: time="2025-08-13T01:01:56.360315217Z" level=info msg="Start subscribing containerd event" Aug 13 01:01:56.361589 containerd[1575]: time="2025-08-13T01:01:56.361538717Z" level=info msg="Start recovering state" Aug 13 01:01:56.361682 containerd[1575]: time="2025-08-13T01:01:56.361653997Z" level=info msg="Start event monitor" Aug 13 01:01:56.361682 containerd[1575]: time="2025-08-13T01:01:56.361679407Z" level=info msg="Start cni network conf syncer for default" Aug 13 01:01:56.361726 containerd[1575]: time="2025-08-13T01:01:56.361693647Z" level=info msg="Start streaming server" Aug 13 01:01:56.361726 containerd[1575]: time="2025-08-13T01:01:56.361709117Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Aug 13 01:01:56.361726 containerd[1575]: time="2025-08-13T01:01:56.361716027Z" level=info msg="runtime interface starting up..." Aug 13 01:01:56.361726 containerd[1575]: time="2025-08-13T01:01:56.361721577Z" level=info msg="starting plugins..." Aug 13 01:01:56.361796 containerd[1575]: time="2025-08-13T01:01:56.361734917Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Aug 13 01:01:56.363488 containerd[1575]: time="2025-08-13T01:01:56.362473007Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 01:01:56.363611 containerd[1575]: time="2025-08-13T01:01:56.363584377Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 01:01:56.363832 systemd[1]: Started containerd.service - containerd container runtime. 
Aug 13 01:01:56.375249 containerd[1575]: time="2025-08-13T01:01:56.374626817Z" level=info msg="containerd successfully booted in 0.282140s" Aug 13 01:01:56.385414 update-ssh-keys[1654]: Updated "/home/core/.ssh/authorized_keys" Aug 13 01:01:56.388001 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 13 01:01:56.391260 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 01:01:56.393355 systemd[1]: Finished sshkeys.service. Aug 13 01:01:56.399789 polkitd[1638]: Started polkitd version 126 Aug 13 01:01:56.402496 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 01:01:56.406852 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 13 01:01:56.409738 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 01:01:56.415317 polkitd[1638]: Loading rules from directory /etc/polkit-1/rules.d Aug 13 01:01:56.415618 polkitd[1638]: Loading rules from directory /run/polkit-1/rules.d Aug 13 01:01:56.415667 polkitd[1638]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Aug 13 01:01:56.415891 polkitd[1638]: Loading rules from directory /usr/local/share/polkit-1/rules.d Aug 13 01:01:56.415919 polkitd[1638]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Aug 13 01:01:56.415956 polkitd[1638]: Loading rules from directory /usr/share/polkit-1/rules.d Aug 13 01:01:56.417156 polkitd[1638]: Finished loading, compiling and executing 2 rules Aug 13 01:01:56.417368 systemd[1]: Started polkit.service - Authorization Manager. 
Aug 13 01:01:56.418784 dbus-daemon[1539]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Aug 13 01:01:56.419113 polkitd[1638]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Aug 13 01:01:56.426951 systemd-hostnamed[1621]: Hostname set to <172-233-209-21> (transient) Aug 13 01:01:56.426966 systemd-resolved[1461]: System hostname changed to '172-233-209-21'. Aug 13 01:01:56.522164 tar[1560]: linux-amd64/LICENSE Aug 13 01:01:56.522235 tar[1560]: linux-amd64/README.md Aug 13 01:01:56.545821 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 01:01:56.804021 coreos-metadata[1538]: Aug 13 01:01:56.803 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Aug 13 01:01:56.889350 systemd-networkd[1460]: eth0: Gained IPv6LL Aug 13 01:01:56.892147 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 01:01:56.892810 coreos-metadata[1538]: Aug 13 01:01:56.892 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Aug 13 01:01:56.894172 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 01:01:56.897161 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:01:56.900370 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 01:01:56.924042 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 01:01:57.116423 coreos-metadata[1538]: Aug 13 01:01:57.116 INFO Fetch successful Aug 13 01:01:57.116423 coreos-metadata[1538]: Aug 13 01:01:57.116 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Aug 13 01:01:57.369256 coreos-metadata[1538]: Aug 13 01:01:57.368 INFO Fetch successful Aug 13 01:01:57.472304 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 13 01:01:57.473969 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. 
Aug 13 01:01:57.745438 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:01:57.746384 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 01:01:57.749099 (kubelet)[1710]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 01:01:57.784263 systemd[1]: Startup finished in 2.957s (kernel) + 8.227s (initrd) + 4.895s (userspace) = 16.080s. Aug 13 01:01:58.248662 kubelet[1710]: E0813 01:01:58.248609 1710 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:01:58.252353 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:01:58.252549 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 01:01:58.253176 systemd[1]: kubelet.service: Consumed 849ms CPU time, 264.7M memory peak. Aug 13 01:01:59.271263 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 01:01:59.272475 systemd[1]: Started sshd@0-172.233.209.21:22-147.75.109.163:49250.service - OpenSSH per-connection server daemon (147.75.109.163:49250). Aug 13 01:01:59.613899 sshd[1723]: Accepted publickey for core from 147.75.109.163 port 49250 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:01:59.615263 sshd-session[1723]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:01:59.622211 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 01:01:59.623663 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 01:01:59.631368 systemd-logind[1550]: New session 1 of user core. 
Aug 13 01:01:59.644490 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 01:01:59.647795 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 01:01:59.658582 (systemd)[1727]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:01:59.661337 systemd-logind[1550]: New session c1 of user core. Aug 13 01:01:59.800396 systemd[1727]: Queued start job for default target default.target. Aug 13 01:01:59.817841 systemd[1727]: Created slice app.slice - User Application Slice. Aug 13 01:01:59.817871 systemd[1727]: Reached target paths.target - Paths. Aug 13 01:01:59.817914 systemd[1727]: Reached target timers.target - Timers. Aug 13 01:01:59.819344 systemd[1727]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 01:01:59.829395 systemd[1727]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 01:01:59.829561 systemd[1727]: Reached target sockets.target - Sockets. Aug 13 01:01:59.829598 systemd[1727]: Reached target basic.target - Basic System. Aug 13 01:01:59.829638 systemd[1727]: Reached target default.target - Main User Target. Aug 13 01:01:59.829668 systemd[1727]: Startup finished in 162ms. Aug 13 01:01:59.829789 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 01:01:59.842328 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 01:02:00.096863 systemd[1]: Started sshd@1-172.233.209.21:22-147.75.109.163:49256.service - OpenSSH per-connection server daemon (147.75.109.163:49256). Aug 13 01:02:00.435538 sshd[1738]: Accepted publickey for core from 147.75.109.163 port 49256 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:02:00.437338 sshd-session[1738]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:02:00.443372 systemd-logind[1550]: New session 2 of user core. Aug 13 01:02:00.452351 systemd[1]: Started session-2.scope - Session 2 of User core. 
Aug 13 01:02:00.679381 sshd[1740]: Connection closed by 147.75.109.163 port 49256 Aug 13 01:02:00.679900 sshd-session[1738]: pam_unix(sshd:session): session closed for user core Aug 13 01:02:00.683268 systemd-logind[1550]: Session 2 logged out. Waiting for processes to exit. Aug 13 01:02:00.683930 systemd[1]: sshd@1-172.233.209.21:22-147.75.109.163:49256.service: Deactivated successfully. Aug 13 01:02:00.685620 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 01:02:00.687763 systemd-logind[1550]: Removed session 2. Aug 13 01:02:00.739249 systemd[1]: Started sshd@2-172.233.209.21:22-147.75.109.163:49266.service - OpenSSH per-connection server daemon (147.75.109.163:49266). Aug 13 01:02:01.095367 sshd[1746]: Accepted publickey for core from 147.75.109.163 port 49266 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:02:01.097264 sshd-session[1746]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:02:01.103241 systemd-logind[1550]: New session 3 of user core. Aug 13 01:02:01.113329 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 01:02:01.340211 sshd[1748]: Connection closed by 147.75.109.163 port 49266 Aug 13 01:02:01.340772 sshd-session[1746]: pam_unix(sshd:session): session closed for user core Aug 13 01:02:01.345847 systemd[1]: sshd@2-172.233.209.21:22-147.75.109.163:49266.service: Deactivated successfully. Aug 13 01:02:01.348161 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 01:02:01.349046 systemd-logind[1550]: Session 3 logged out. Waiting for processes to exit. Aug 13 01:02:01.350684 systemd-logind[1550]: Removed session 3. Aug 13 01:02:01.405546 systemd[1]: Started sshd@3-172.233.209.21:22-147.75.109.163:49276.service - OpenSSH per-connection server daemon (147.75.109.163:49276). 
Aug 13 01:02:01.743124 sshd[1754]: Accepted publickey for core from 147.75.109.163 port 49276 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:02:01.745130 sshd-session[1754]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:02:01.751005 systemd-logind[1550]: New session 4 of user core. Aug 13 01:02:01.760311 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 01:02:01.987189 sshd[1756]: Connection closed by 147.75.109.163 port 49276 Aug 13 01:02:01.987781 sshd-session[1754]: pam_unix(sshd:session): session closed for user core Aug 13 01:02:01.992326 systemd-logind[1550]: Session 4 logged out. Waiting for processes to exit. Aug 13 01:02:01.992919 systemd[1]: sshd@3-172.233.209.21:22-147.75.109.163:49276.service: Deactivated successfully. Aug 13 01:02:01.995585 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 01:02:01.998206 systemd-logind[1550]: Removed session 4. Aug 13 01:02:02.055665 systemd[1]: Started sshd@4-172.233.209.21:22-147.75.109.163:49278.service - OpenSSH per-connection server daemon (147.75.109.163:49278). Aug 13 01:02:02.411278 sshd[1762]: Accepted publickey for core from 147.75.109.163 port 49278 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:02:02.412591 sshd-session[1762]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:02:02.417243 systemd-logind[1550]: New session 5 of user core. Aug 13 01:02:02.425319 systemd[1]: Started session-5.scope - Session 5 of User core. 
Aug 13 01:02:02.623462 sudo[1765]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 01:02:02.623780 sudo[1765]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 01:02:02.641332 sudo[1765]: pam_unix(sudo:session): session closed for user root Aug 13 01:02:02.694162 sshd[1764]: Connection closed by 147.75.109.163 port 49278 Aug 13 01:02:02.695140 sshd-session[1762]: pam_unix(sshd:session): session closed for user core Aug 13 01:02:02.699028 systemd[1]: sshd@4-172.233.209.21:22-147.75.109.163:49278.service: Deactivated successfully. Aug 13 01:02:02.701174 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 01:02:02.702546 systemd-logind[1550]: Session 5 logged out. Waiting for processes to exit. Aug 13 01:02:02.703839 systemd-logind[1550]: Removed session 5. Aug 13 01:02:02.753796 systemd[1]: Started sshd@5-172.233.209.21:22-147.75.109.163:49294.service - OpenSSH per-connection server daemon (147.75.109.163:49294). Aug 13 01:02:03.099120 sshd[1771]: Accepted publickey for core from 147.75.109.163 port 49294 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:02:03.101423 sshd-session[1771]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:02:03.107587 systemd-logind[1550]: New session 6 of user core. Aug 13 01:02:03.114324 systemd[1]: Started session-6.scope - Session 6 of User core. 
Aug 13 01:02:03.298949 sudo[1775]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 01:02:03.299306 sudo[1775]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 01:02:03.304501 sudo[1775]: pam_unix(sudo:session): session closed for user root Aug 13 01:02:03.310346 sudo[1774]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Aug 13 01:02:03.310638 sudo[1774]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 01:02:03.321409 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 01:02:03.364655 augenrules[1797]: No rules Aug 13 01:02:03.366382 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 01:02:03.366664 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 01:02:03.367781 sudo[1774]: pam_unix(sudo:session): session closed for user root Aug 13 01:02:03.419271 sshd[1773]: Connection closed by 147.75.109.163 port 49294 Aug 13 01:02:03.419716 sshd-session[1771]: pam_unix(sshd:session): session closed for user core Aug 13 01:02:03.423545 systemd-logind[1550]: Session 6 logged out. Waiting for processes to exit. Aug 13 01:02:03.424318 systemd[1]: sshd@5-172.233.209.21:22-147.75.109.163:49294.service: Deactivated successfully. Aug 13 01:02:03.426336 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 01:02:03.428713 systemd-logind[1550]: Removed session 6. Aug 13 01:02:03.476264 systemd[1]: Started sshd@6-172.233.209.21:22-147.75.109.163:49310.service - OpenSSH per-connection server daemon (147.75.109.163:49310). 
Aug 13 01:02:03.820969 sshd[1806]: Accepted publickey for core from 147.75.109.163 port 49310 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:02:03.822003 sshd-session[1806]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:02:03.826242 systemd-logind[1550]: New session 7 of user core. Aug 13 01:02:03.835310 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 01:02:04.016463 sudo[1809]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 01:02:04.016760 sudo[1809]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 01:02:04.294910 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 13 01:02:04.305474 (dockerd)[1827]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 01:02:04.494582 dockerd[1827]: time="2025-08-13T01:02:04.494519777Z" level=info msg="Starting up" Aug 13 01:02:04.496130 dockerd[1827]: time="2025-08-13T01:02:04.496105677Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Aug 13 01:02:04.552170 dockerd[1827]: time="2025-08-13T01:02:04.551897817Z" level=info msg="Loading containers: start." Aug 13 01:02:04.562228 kernel: Initializing XFRM netlink socket Aug 13 01:02:04.848018 systemd-networkd[1460]: docker0: Link UP Aug 13 01:02:04.852987 dockerd[1827]: time="2025-08-13T01:02:04.852172037Z" level=info msg="Loading containers: done." 
Aug 13 01:02:04.868233 dockerd[1827]: time="2025-08-13T01:02:04.868158887Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 01:02:04.868364 dockerd[1827]: time="2025-08-13T01:02:04.868274227Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Aug 13 01:02:04.868402 dockerd[1827]: time="2025-08-13T01:02:04.868378297Z" level=info msg="Initializing buildkit" Aug 13 01:02:04.870184 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1733427843-merged.mount: Deactivated successfully. Aug 13 01:02:04.894143 dockerd[1827]: time="2025-08-13T01:02:04.894101907Z" level=info msg="Completed buildkit initialization" Aug 13 01:02:04.903571 dockerd[1827]: time="2025-08-13T01:02:04.903529687Z" level=info msg="Daemon has completed initialization" Aug 13 01:02:04.903694 dockerd[1827]: time="2025-08-13T01:02:04.903658637Z" level=info msg="API listen on /run/docker.sock" Aug 13 01:02:04.904063 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 01:02:05.510495 containerd[1575]: time="2025-08-13T01:02:05.510460017Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\"" Aug 13 01:02:06.275295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3564725214.mount: Deactivated successfully. 
Aug 13 01:02:07.665211 containerd[1575]: time="2025-08-13T01:02:07.665067937Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:02:07.666272 containerd[1575]: time="2025-08-13T01:02:07.666104757Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.11: active requests=0, bytes read=28077759" Aug 13 01:02:07.667032 containerd[1575]: time="2025-08-13T01:02:07.667004657Z" level=info msg="ImageCreate event name:\"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:02:07.669426 containerd[1575]: time="2025-08-13T01:02:07.669399097Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:02:07.670498 containerd[1575]: time="2025-08-13T01:02:07.670474727Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.11\" with image id \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\", size \"28074559\" in 2.15998092s" Aug 13 01:02:07.670566 containerd[1575]: time="2025-08-13T01:02:07.670553127Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\"" Aug 13 01:02:07.671290 containerd[1575]: time="2025-08-13T01:02:07.671253947Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\"" Aug 13 01:02:08.503073 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. 
Aug 13 01:02:08.505551 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:02:08.693377 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:02:08.698516 (kubelet)[2093]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 01:02:08.738105 kubelet[2093]: E0813 01:02:08.738040 2093 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:02:08.742748 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:02:08.742927 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 01:02:08.743298 systemd[1]: kubelet.service: Consumed 196ms CPU time, 109M memory peak. 
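The kubelet crash loop above is caused by the missing `/var/lib/kubelet/config.yaml`; on a kubeadm-managed node that file is written by `kubeadm init` or `kubeadm join`, so these failures are expected until the node joins a cluster. As a rough sketch of what the generated `KubeletConfiguration` contains — every value below is illustrative, not recovered from this host:

```python
import os
import tempfile

# Approximate shape of the KubeletConfiguration kubeadm writes; the DNS IP
# and paths are hypothetical examples. cgroupDriver: systemd is consistent
# with SystemdCgroup=true in the runc options logged earlier.
kubelet_config = """\
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
staticPodPath: /etc/kubernetes/manifests
clusterDNS:
  - 10.96.0.10
clusterDomain: cluster.local
"""

# Written to a scratch path so the sketch runs anywhere; on the node itself
# the path would be /var/lib/kubelet/config.yaml.
path = os.path.join(tempfile.mkdtemp(), "config.yaml")
with open(path, "w") as f:
    f.write(kubelet_config)
```

Once `kubeadm` creates the real file (and the kubeadm flags env files the unit references), the scheduled restarts seen in the log succeed instead of exiting with status 1.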
Aug 13 01:02:09.597807 containerd[1575]: time="2025-08-13T01:02:09.597306527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:02:09.598458 containerd[1575]: time="2025-08-13T01:02:09.598417907Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.11: active requests=0, bytes read=24713245" Aug 13 01:02:09.599388 containerd[1575]: time="2025-08-13T01:02:09.598734127Z" level=info msg="ImageCreate event name:\"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:02:09.602460 containerd[1575]: time="2025-08-13T01:02:09.602104897Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:02:09.603117 containerd[1575]: time="2025-08-13T01:02:09.603081797Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.11\" with image id \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\", size \"26315079\" in 1.93180422s" Aug 13 01:02:09.603156 containerd[1575]: time="2025-08-13T01:02:09.603117217Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\"" Aug 13 01:02:09.606693 containerd[1575]: time="2025-08-13T01:02:09.606662187Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\"" Aug 13 01:02:11.168696 containerd[1575]: time="2025-08-13T01:02:11.168653327Z" level=info msg="ImageCreate event 
name:\"registry.k8s.io/kube-scheduler:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:02:11.169500 containerd[1575]: time="2025-08-13T01:02:11.169432937Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.11: active requests=0, bytes read=18783700" Aug 13 01:02:11.170171 containerd[1575]: time="2025-08-13T01:02:11.170147127Z" level=info msg="ImageCreate event name:\"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:02:11.172126 containerd[1575]: time="2025-08-13T01:02:11.172092127Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:02:11.173116 containerd[1575]: time="2025-08-13T01:02:11.173014957Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.11\" with image id \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\", size \"20385552\" in 1.56632271s" Aug 13 01:02:11.173116 containerd[1575]: time="2025-08-13T01:02:11.173039747Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\"" Aug 13 01:02:11.173600 containerd[1575]: time="2025-08-13T01:02:11.173585687Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\"" Aug 13 01:02:12.365081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3009816500.mount: Deactivated successfully. 
Aug 13 01:02:12.684256 containerd[1575]: time="2025-08-13T01:02:12.683674427Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:02:12.684701 containerd[1575]: time="2025-08-13T01:02:12.684680867Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.11: active requests=0, bytes read=30383612" Aug 13 01:02:12.684885 containerd[1575]: time="2025-08-13T01:02:12.684863627Z" level=info msg="ImageCreate event name:\"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:02:12.686326 containerd[1575]: time="2025-08-13T01:02:12.686307417Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:02:12.686717 containerd[1575]: time="2025-08-13T01:02:12.686669757Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.11\" with image id \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\", repo tag \"registry.k8s.io/kube-proxy:v1.31.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\", size \"30382631\" in 1.51306353s" Aug 13 01:02:12.686751 containerd[1575]: time="2025-08-13T01:02:12.686718567Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\"" Aug 13 01:02:12.687245 containerd[1575]: time="2025-08-13T01:02:12.687206017Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 01:02:13.446150 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount556193342.mount: Deactivated successfully. 
Aug 13 01:02:14.271065 containerd[1575]: time="2025-08-13T01:02:14.271006307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:02:14.272175 containerd[1575]: time="2025-08-13T01:02:14.272148037Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Aug 13 01:02:14.272914 containerd[1575]: time="2025-08-13T01:02:14.272857707Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:02:14.275059 containerd[1575]: time="2025-08-13T01:02:14.275022797Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:02:14.277304 containerd[1575]: time="2025-08-13T01:02:14.276078467Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.58883739s" Aug 13 01:02:14.277304 containerd[1575]: time="2025-08-13T01:02:14.276113727Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 01:02:14.277969 containerd[1575]: time="2025-08-13T01:02:14.277939117Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 01:02:14.959637 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3092478765.mount: Deactivated successfully. 
Aug 13 01:02:14.963641 containerd[1575]: time="2025-08-13T01:02:14.963586497Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 01:02:14.964431 containerd[1575]: time="2025-08-13T01:02:14.964397157Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Aug 13 01:02:14.965909 containerd[1575]: time="2025-08-13T01:02:14.964863677Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 01:02:14.966412 containerd[1575]: time="2025-08-13T01:02:14.966382967Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 01:02:14.967071 containerd[1575]: time="2025-08-13T01:02:14.967043427Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 689.07146ms" Aug 13 01:02:14.967153 containerd[1575]: time="2025-08-13T01:02:14.967136997Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 01:02:14.967844 containerd[1575]: time="2025-08-13T01:02:14.967777527Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 13 01:02:15.668033 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3962415734.mount: Deactivated 
successfully. Aug 13 01:02:17.064055 containerd[1575]: time="2025-08-13T01:02:17.064008307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:02:17.065498 containerd[1575]: time="2025-08-13T01:02:17.065333687Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Aug 13 01:02:17.066108 containerd[1575]: time="2025-08-13T01:02:17.066074037Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:02:17.069388 containerd[1575]: time="2025-08-13T01:02:17.068211527Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:02:17.069388 containerd[1575]: time="2025-08-13T01:02:17.069229957Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 2.10133451s" Aug 13 01:02:17.069388 containerd[1575]: time="2025-08-13T01:02:17.069264077Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Aug 13 01:02:18.994014 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 13 01:02:19.002808 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:02:19.112735 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 01:02:19.112976 systemd[1]: kubelet.service: Failed with result 'signal'. 
Aug 13 01:02:19.113417 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:02:19.113732 systemd[1]: kubelet.service: Consumed 97ms CPU time, 69M memory peak. Aug 13 01:02:19.125103 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:02:19.142977 systemd[1]: Reload requested from client PID 2258 ('systemctl') (unit session-7.scope)... Aug 13 01:02:19.142995 systemd[1]: Reloading... Aug 13 01:02:19.258231 zram_generator::config[2302]: No configuration found. Aug 13 01:02:19.361678 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:02:19.467585 systemd[1]: Reloading finished in 324 ms. Aug 13 01:02:19.529722 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 01:02:19.529816 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 01:02:19.530289 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:02:19.530332 systemd[1]: kubelet.service: Consumed 152ms CPU time, 98.1M memory peak. Aug 13 01:02:19.532055 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:02:19.694914 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:02:19.703664 (kubelet)[2356]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 01:02:19.746122 kubelet[2356]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:02:19.747396 kubelet[2356]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Aug 13 01:02:19.747396 kubelet[2356]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:02:19.747396 kubelet[2356]: I0813 01:02:19.746455 2356 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 01:02:19.914205 kubelet[2356]: I0813 01:02:19.914157 2356 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 01:02:19.914205 kubelet[2356]: I0813 01:02:19.914178 2356 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 01:02:19.914451 kubelet[2356]: I0813 01:02:19.914434 2356 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 01:02:19.934603 kubelet[2356]: E0813 01:02:19.934579 2356 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.233.209.21:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.233.209.21:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:02:19.935476 kubelet[2356]: I0813 01:02:19.935330 2356 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 01:02:19.945722 kubelet[2356]: I0813 01:02:19.945596 2356 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 13 01:02:19.950538 kubelet[2356]: I0813 01:02:19.950492 2356 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 01:02:19.950661 kubelet[2356]: I0813 01:02:19.950617 2356 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 01:02:19.950951 kubelet[2356]: I0813 01:02:19.950912 2356 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 01:02:19.951104 kubelet[2356]: I0813 01:02:19.950941 2356 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-233-209-21","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPo
licyOptions":null,"CgroupVersion":2} Aug 13 01:02:19.951208 kubelet[2356]: I0813 01:02:19.951109 2356 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 01:02:19.951208 kubelet[2356]: I0813 01:02:19.951117 2356 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 01:02:19.951277 kubelet[2356]: I0813 01:02:19.951255 2356 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:02:19.954236 kubelet[2356]: I0813 01:02:19.953976 2356 kubelet.go:408] "Attempting to sync node with API server" Aug 13 01:02:19.954236 kubelet[2356]: I0813 01:02:19.953999 2356 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 01:02:19.954236 kubelet[2356]: I0813 01:02:19.954030 2356 kubelet.go:314] "Adding apiserver pod source" Aug 13 01:02:19.954236 kubelet[2356]: I0813 01:02:19.954045 2356 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 01:02:19.958097 kubelet[2356]: W0813 01:02:19.958053 2356 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.233.209.21:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-233-209-21&limit=500&resourceVersion=0": dial tcp 172.233.209.21:6443: connect: connection refused Aug 13 01:02:19.958152 kubelet[2356]: E0813 01:02:19.958105 2356 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.233.209.21:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-233-209-21&limit=500&resourceVersion=0\": dial tcp 172.233.209.21:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:02:19.958489 kubelet[2356]: W0813 01:02:19.958411 2356 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.233.209.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.233.209.21:6443: connect: connection refused 
Aug 13 01:02:19.958489 kubelet[2356]: E0813 01:02:19.958453 2356 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.233.209.21:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.233.209.21:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:02:19.958553 kubelet[2356]: I0813 01:02:19.958512 2356 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Aug 13 01:02:19.959069 kubelet[2356]: I0813 01:02:19.959047 2356 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 01:02:19.959130 kubelet[2356]: W0813 01:02:19.959110 2356 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 01:02:19.961985 kubelet[2356]: I0813 01:02:19.961960 2356 server.go:1274] "Started kubelet" Aug 13 01:02:19.962435 kubelet[2356]: I0813 01:02:19.962414 2356 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 01:02:19.963487 kubelet[2356]: I0813 01:02:19.963473 2356 server.go:449] "Adding debug handlers to kubelet server" Aug 13 01:02:19.966675 kubelet[2356]: I0813 01:02:19.966633 2356 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 01:02:19.966899 kubelet[2356]: I0813 01:02:19.966863 2356 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 01:02:19.967720 kubelet[2356]: I0813 01:02:19.967706 2356 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 01:02:19.970445 kubelet[2356]: E0813 01:02:19.969400 2356 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.233.209.21:6443/api/v1/namespaces/default/events\": dial tcp 
172.233.209.21:6443: connect: connection refused" event="&Event{ObjectMeta:{172-233-209-21.185b2de4ac9155b1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-233-209-21,UID:172-233-209-21,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-233-209-21,},FirstTimestamp:2025-08-13 01:02:19.961939377 +0000 UTC m=+0.254110511,LastTimestamp:2025-08-13 01:02:19.961939377 +0000 UTC m=+0.254110511,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-233-209-21,}" Aug 13 01:02:19.971587 kubelet[2356]: I0813 01:02:19.971539 2356 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 01:02:19.974210 kubelet[2356]: E0813 01:02:19.974175 2356 kubelet.go:1478] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 01:02:19.975006 kubelet[2356]: I0813 01:02:19.974722 2356 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 01:02:19.975090 kubelet[2356]: E0813 01:02:19.975061 2356 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-233-209-21\" not found" Aug 13 01:02:19.975874 kubelet[2356]: E0813 01:02:19.975835 2356 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.233.209.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-233-209-21?timeout=10s\": dial tcp 172.233.209.21:6443: connect: connection refused" interval="200ms" Aug 13 01:02:19.976689 kubelet[2356]: I0813 01:02:19.976662 2356 factory.go:221] Registration of the containerd container factory successfully Aug 13 01:02:19.976689 kubelet[2356]: I0813 01:02:19.976681 2356 factory.go:221] Registration of the systemd container factory successfully Aug 13 01:02:19.976772 kubelet[2356]: I0813 01:02:19.976743 2356 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 01:02:19.978054 kubelet[2356]: I0813 01:02:19.977910 2356 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 01:02:19.978054 kubelet[2356]: I0813 01:02:19.977953 2356 reconciler.go:26] "Reconciler: start to sync state" Aug 13 01:02:19.980603 kubelet[2356]: W0813 01:02:19.980571 2356 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.233.209.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.233.209.21:6443: connect: connection refused Aug 13 01:02:19.980917 kubelet[2356]: E0813 01:02:19.980898 2356 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to 
watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.233.209.21:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.233.209.21:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:02:19.990296 kubelet[2356]: I0813 01:02:19.990263 2356 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 01:02:19.991607 kubelet[2356]: I0813 01:02:19.991582 2356 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 01:02:19.991607 kubelet[2356]: I0813 01:02:19.991602 2356 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 01:02:19.991677 kubelet[2356]: I0813 01:02:19.991618 2356 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 01:02:19.991677 kubelet[2356]: E0813 01:02:19.991657 2356 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 01:02:19.999219 kubelet[2356]: W0813 01:02:19.999165 2356 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.233.209.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.233.209.21:6443: connect: connection refused Aug 13 01:02:19.999314 kubelet[2356]: E0813 01:02:19.999276 2356 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.233.209.21:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.233.209.21:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:02:20.006674 kubelet[2356]: I0813 01:02:20.006659 2356 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 01:02:20.006782 kubelet[2356]: I0813 01:02:20.006750 2356 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 01:02:20.006782 kubelet[2356]: I0813 
01:02:20.006766 2356 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:02:20.008632 kubelet[2356]: I0813 01:02:20.008569 2356 policy_none.go:49] "None policy: Start" Aug 13 01:02:20.009098 kubelet[2356]: I0813 01:02:20.009086 2356 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 01:02:20.009362 kubelet[2356]: I0813 01:02:20.009131 2356 state_mem.go:35] "Initializing new in-memory state store" Aug 13 01:02:20.015235 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 01:02:20.030885 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 01:02:20.034647 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 13 01:02:20.045130 kubelet[2356]: I0813 01:02:20.044973 2356 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 01:02:20.045130 kubelet[2356]: I0813 01:02:20.045116 2356 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 01:02:20.045272 kubelet[2356]: I0813 01:02:20.045125 2356 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 01:02:20.045472 kubelet[2356]: I0813 01:02:20.045457 2356 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 01:02:20.047896 kubelet[2356]: E0813 01:02:20.047853 2356 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-233-209-21\" not found" Aug 13 01:02:20.102836 systemd[1]: Created slice kubepods-burstable-pod84e5b6d62d487c8856457964b4e5c40a.slice - libcontainer container kubepods-burstable-pod84e5b6d62d487c8856457964b4e5c40a.slice. Aug 13 01:02:20.124619 systemd[1]: Created slice kubepods-burstable-pod64d5421c8c4ac817f60c78f054d6ea67.slice - libcontainer container kubepods-burstable-pod64d5421c8c4ac817f60c78f054d6ea67.slice. 
Aug 13 01:02:20.133624 systemd[1]: Created slice kubepods-burstable-pod67a6bf939ebcfc59368f68eb98711e8e.slice - libcontainer container kubepods-burstable-pod67a6bf939ebcfc59368f68eb98711e8e.slice. Aug 13 01:02:20.148009 kubelet[2356]: I0813 01:02:20.147978 2356 kubelet_node_status.go:72] "Attempting to register node" node="172-233-209-21" Aug 13 01:02:20.148298 kubelet[2356]: E0813 01:02:20.148275 2356 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.233.209.21:6443/api/v1/nodes\": dial tcp 172.233.209.21:6443: connect: connection refused" node="172-233-209-21" Aug 13 01:02:20.176933 kubelet[2356]: E0813 01:02:20.176900 2356 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.233.209.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-233-209-21?timeout=10s\": dial tcp 172.233.209.21:6443: connect: connection refused" interval="400ms" Aug 13 01:02:20.280165 kubelet[2356]: I0813 01:02:20.280088 2356 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/64d5421c8c4ac817f60c78f054d6ea67-kubeconfig\") pod \"kube-controller-manager-172-233-209-21\" (UID: \"64d5421c8c4ac817f60c78f054d6ea67\") " pod="kube-system/kube-controller-manager-172-233-209-21" Aug 13 01:02:20.280165 kubelet[2356]: I0813 01:02:20.280134 2356 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/64d5421c8c4ac817f60c78f054d6ea67-usr-share-ca-certificates\") pod \"kube-controller-manager-172-233-209-21\" (UID: \"64d5421c8c4ac817f60c78f054d6ea67\") " pod="kube-system/kube-controller-manager-172-233-209-21" Aug 13 01:02:20.280165 kubelet[2356]: I0813 01:02:20.280154 2356 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: 
\"kubernetes.io/host-path/67a6bf939ebcfc59368f68eb98711e8e-kubeconfig\") pod \"kube-scheduler-172-233-209-21\" (UID: \"67a6bf939ebcfc59368f68eb98711e8e\") " pod="kube-system/kube-scheduler-172-233-209-21" Aug 13 01:02:20.280759 kubelet[2356]: I0813 01:02:20.280171 2356 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84e5b6d62d487c8856457964b4e5c40a-ca-certs\") pod \"kube-apiserver-172-233-209-21\" (UID: \"84e5b6d62d487c8856457964b4e5c40a\") " pod="kube-system/kube-apiserver-172-233-209-21" Aug 13 01:02:20.280759 kubelet[2356]: I0813 01:02:20.280187 2356 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/64d5421c8c4ac817f60c78f054d6ea67-k8s-certs\") pod \"kube-controller-manager-172-233-209-21\" (UID: \"64d5421c8c4ac817f60c78f054d6ea67\") " pod="kube-system/kube-controller-manager-172-233-209-21" Aug 13 01:02:20.280759 kubelet[2356]: I0813 01:02:20.280223 2356 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/64d5421c8c4ac817f60c78f054d6ea67-ca-certs\") pod \"kube-controller-manager-172-233-209-21\" (UID: \"64d5421c8c4ac817f60c78f054d6ea67\") " pod="kube-system/kube-controller-manager-172-233-209-21" Aug 13 01:02:20.280759 kubelet[2356]: I0813 01:02:20.280238 2356 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/64d5421c8c4ac817f60c78f054d6ea67-flexvolume-dir\") pod \"kube-controller-manager-172-233-209-21\" (UID: \"64d5421c8c4ac817f60c78f054d6ea67\") " pod="kube-system/kube-controller-manager-172-233-209-21" Aug 13 01:02:20.280759 kubelet[2356]: I0813 01:02:20.280253 2356 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84e5b6d62d487c8856457964b4e5c40a-k8s-certs\") pod \"kube-apiserver-172-233-209-21\" (UID: \"84e5b6d62d487c8856457964b4e5c40a\") " pod="kube-system/kube-apiserver-172-233-209-21" Aug 13 01:02:20.280865 kubelet[2356]: I0813 01:02:20.280268 2356 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84e5b6d62d487c8856457964b4e5c40a-usr-share-ca-certificates\") pod \"kube-apiserver-172-233-209-21\" (UID: \"84e5b6d62d487c8856457964b4e5c40a\") " pod="kube-system/kube-apiserver-172-233-209-21" Aug 13 01:02:20.350699 kubelet[2356]: I0813 01:02:20.350668 2356 kubelet_node_status.go:72] "Attempting to register node" node="172-233-209-21" Aug 13 01:02:20.351043 kubelet[2356]: E0813 01:02:20.350994 2356 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.233.209.21:6443/api/v1/nodes\": dial tcp 172.233.209.21:6443: connect: connection refused" node="172-233-209-21" Aug 13 01:02:20.421654 kubelet[2356]: E0813 01:02:20.421618 2356 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:02:20.422625 containerd[1575]: time="2025-08-13T01:02:20.422593267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-233-209-21,Uid:84e5b6d62d487c8856457964b4e5c40a,Namespace:kube-system,Attempt:0,}" Aug 13 01:02:20.432021 kubelet[2356]: E0813 01:02:20.431965 2356 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:02:20.433148 containerd[1575]: time="2025-08-13T01:02:20.432598887Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-controller-manager-172-233-209-21,Uid:64d5421c8c4ac817f60c78f054d6ea67,Namespace:kube-system,Attempt:0,}" Aug 13 01:02:20.437486 kubelet[2356]: E0813 01:02:20.437288 2356 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:02:20.452536 containerd[1575]: time="2025-08-13T01:02:20.452507447Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-233-209-21,Uid:67a6bf939ebcfc59368f68eb98711e8e,Namespace:kube-system,Attempt:0,}" Aug 13 01:02:20.455690 containerd[1575]: time="2025-08-13T01:02:20.455652697Z" level=info msg="connecting to shim bd651942f9d3447e48f150306a91a5368a971d0f2692282ecc148fdc85f369be" address="unix:///run/containerd/s/69b860c35ad9ecbdd10ad3ee31d65bd2073c2d244596792c5d4895a44fc8383a" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:02:20.465923 containerd[1575]: time="2025-08-13T01:02:20.465900677Z" level=info msg="connecting to shim b7d5a29b541638c5db5273cbe54b0c1bb924d30867f082edced29f91f09be526" address="unix:///run/containerd/s/f2c14616cc957896a4d0d7026951d2944b898a322689aa21507201d8367de5b7" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:02:20.484336 containerd[1575]: time="2025-08-13T01:02:20.484275037Z" level=info msg="connecting to shim 75e99c3b3cc7006464adc35ae58981cc654fb986a652f2e7217296735a7b8e4e" address="unix:///run/containerd/s/ef6897bd8f4a968ab5db1b0970b45d70fac5a00cd667d223f6fdd197025028e8" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:02:20.502332 systemd[1]: Started cri-containerd-bd651942f9d3447e48f150306a91a5368a971d0f2692282ecc148fdc85f369be.scope - libcontainer container bd651942f9d3447e48f150306a91a5368a971d0f2692282ecc148fdc85f369be. 
Aug 13 01:02:20.518333 systemd[1]: Started cri-containerd-b7d5a29b541638c5db5273cbe54b0c1bb924d30867f082edced29f91f09be526.scope - libcontainer container b7d5a29b541638c5db5273cbe54b0c1bb924d30867f082edced29f91f09be526. Aug 13 01:02:20.523564 systemd[1]: Started cri-containerd-75e99c3b3cc7006464adc35ae58981cc654fb986a652f2e7217296735a7b8e4e.scope - libcontainer container 75e99c3b3cc7006464adc35ae58981cc654fb986a652f2e7217296735a7b8e4e. Aug 13 01:02:20.570275 containerd[1575]: time="2025-08-13T01:02:20.570102107Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-233-209-21,Uid:84e5b6d62d487c8856457964b4e5c40a,Namespace:kube-system,Attempt:0,} returns sandbox id \"bd651942f9d3447e48f150306a91a5368a971d0f2692282ecc148fdc85f369be\"" Aug 13 01:02:20.571670 kubelet[2356]: E0813 01:02:20.571650 2356 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:02:20.574263 containerd[1575]: time="2025-08-13T01:02:20.573507017Z" level=info msg="CreateContainer within sandbox \"bd651942f9d3447e48f150306a91a5368a971d0f2692282ecc148fdc85f369be\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 01:02:20.577868 kubelet[2356]: E0813 01:02:20.577726 2356 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.233.209.21:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-233-209-21?timeout=10s\": dial tcp 172.233.209.21:6443: connect: connection refused" interval="800ms" Aug 13 01:02:20.580074 containerd[1575]: time="2025-08-13T01:02:20.580055667Z" level=info msg="Container af7599726c4a7bb0f84095840add605519925661b1fd0d7c44e6e916a2db03a0: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:02:20.592667 containerd[1575]: time="2025-08-13T01:02:20.592645357Z" level=info msg="CreateContainer within sandbox 
\"bd651942f9d3447e48f150306a91a5368a971d0f2692282ecc148fdc85f369be\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"af7599726c4a7bb0f84095840add605519925661b1fd0d7c44e6e916a2db03a0\"" Aug 13 01:02:20.593304 containerd[1575]: time="2025-08-13T01:02:20.593286567Z" level=info msg="StartContainer for \"af7599726c4a7bb0f84095840add605519925661b1fd0d7c44e6e916a2db03a0\"" Aug 13 01:02:20.594447 containerd[1575]: time="2025-08-13T01:02:20.594417127Z" level=info msg="connecting to shim af7599726c4a7bb0f84095840add605519925661b1fd0d7c44e6e916a2db03a0" address="unix:///run/containerd/s/69b860c35ad9ecbdd10ad3ee31d65bd2073c2d244596792c5d4895a44fc8383a" protocol=ttrpc version=3 Aug 13 01:02:20.596342 containerd[1575]: time="2025-08-13T01:02:20.596315927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-233-209-21,Uid:64d5421c8c4ac817f60c78f054d6ea67,Namespace:kube-system,Attempt:0,} returns sandbox id \"b7d5a29b541638c5db5273cbe54b0c1bb924d30867f082edced29f91f09be526\"" Aug 13 01:02:20.597902 kubelet[2356]: E0813 01:02:20.597876 2356 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:02:20.604177 containerd[1575]: time="2025-08-13T01:02:20.604150327Z" level=info msg="CreateContainer within sandbox \"b7d5a29b541638c5db5273cbe54b0c1bb924d30867f082edced29f91f09be526\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 01:02:20.619211 containerd[1575]: time="2025-08-13T01:02:20.618865887Z" level=info msg="Container 0b6e87b87d88e3bd18447e040239f34829b57f7871e191e0c1fc3e8ad09456eb: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:02:20.624332 systemd[1]: Started cri-containerd-af7599726c4a7bb0f84095840add605519925661b1fd0d7c44e6e916a2db03a0.scope - libcontainer container af7599726c4a7bb0f84095840add605519925661b1fd0d7c44e6e916a2db03a0. 
Aug 13 01:02:20.638960 containerd[1575]: time="2025-08-13T01:02:20.638720487Z" level=info msg="CreateContainer within sandbox \"b7d5a29b541638c5db5273cbe54b0c1bb924d30867f082edced29f91f09be526\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"0b6e87b87d88e3bd18447e040239f34829b57f7871e191e0c1fc3e8ad09456eb\"" Aug 13 01:02:20.639676 containerd[1575]: time="2025-08-13T01:02:20.639638287Z" level=info msg="StartContainer for \"0b6e87b87d88e3bd18447e040239f34829b57f7871e191e0c1fc3e8ad09456eb\"" Aug 13 01:02:20.640485 containerd[1575]: time="2025-08-13T01:02:20.640461097Z" level=info msg="connecting to shim 0b6e87b87d88e3bd18447e040239f34829b57f7871e191e0c1fc3e8ad09456eb" address="unix:///run/containerd/s/f2c14616cc957896a4d0d7026951d2944b898a322689aa21507201d8367de5b7" protocol=ttrpc version=3 Aug 13 01:02:20.650599 containerd[1575]: time="2025-08-13T01:02:20.650557087Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-233-209-21,Uid:67a6bf939ebcfc59368f68eb98711e8e,Namespace:kube-system,Attempt:0,} returns sandbox id \"75e99c3b3cc7006464adc35ae58981cc654fb986a652f2e7217296735a7b8e4e\"" Aug 13 01:02:20.652010 kubelet[2356]: E0813 01:02:20.651825 2356 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:02:20.654267 containerd[1575]: time="2025-08-13T01:02:20.654232587Z" level=info msg="CreateContainer within sandbox \"75e99c3b3cc7006464adc35ae58981cc654fb986a652f2e7217296735a7b8e4e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 01:02:20.663827 containerd[1575]: time="2025-08-13T01:02:20.663803767Z" level=info msg="Container 3073eeb3c8dd69d4913099e5d19baef1ad1f1065889b95e1161dfed0f2b7f499: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:02:20.676307 systemd[1]: Started 
cri-containerd-0b6e87b87d88e3bd18447e040239f34829b57f7871e191e0c1fc3e8ad09456eb.scope - libcontainer container 0b6e87b87d88e3bd18447e040239f34829b57f7871e191e0c1fc3e8ad09456eb. Aug 13 01:02:20.690889 containerd[1575]: time="2025-08-13T01:02:20.690700577Z" level=info msg="CreateContainer within sandbox \"75e99c3b3cc7006464adc35ae58981cc654fb986a652f2e7217296735a7b8e4e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3073eeb3c8dd69d4913099e5d19baef1ad1f1065889b95e1161dfed0f2b7f499\"" Aug 13 01:02:20.692686 containerd[1575]: time="2025-08-13T01:02:20.692659807Z" level=info msg="StartContainer for \"3073eeb3c8dd69d4913099e5d19baef1ad1f1065889b95e1161dfed0f2b7f499\"" Aug 13 01:02:20.693675 containerd[1575]: time="2025-08-13T01:02:20.693644817Z" level=info msg="connecting to shim 3073eeb3c8dd69d4913099e5d19baef1ad1f1065889b95e1161dfed0f2b7f499" address="unix:///run/containerd/s/ef6897bd8f4a968ab5db1b0970b45d70fac5a00cd667d223f6fdd197025028e8" protocol=ttrpc version=3 Aug 13 01:02:20.719797 containerd[1575]: time="2025-08-13T01:02:20.718892607Z" level=info msg="StartContainer for \"af7599726c4a7bb0f84095840add605519925661b1fd0d7c44e6e916a2db03a0\" returns successfully" Aug 13 01:02:20.724366 systemd[1]: Started cri-containerd-3073eeb3c8dd69d4913099e5d19baef1ad1f1065889b95e1161dfed0f2b7f499.scope - libcontainer container 3073eeb3c8dd69d4913099e5d19baef1ad1f1065889b95e1161dfed0f2b7f499. 
Aug 13 01:02:20.760616 kubelet[2356]: I0813 01:02:20.760569 2356 kubelet_node_status.go:72] "Attempting to register node" node="172-233-209-21" Aug 13 01:02:20.769287 kubelet[2356]: E0813 01:02:20.769183 2356 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.233.209.21:6443/api/v1/nodes\": dial tcp 172.233.209.21:6443: connect: connection refused" node="172-233-209-21" Aug 13 01:02:20.772833 containerd[1575]: time="2025-08-13T01:02:20.772793727Z" level=info msg="StartContainer for \"0b6e87b87d88e3bd18447e040239f34829b57f7871e191e0c1fc3e8ad09456eb\" returns successfully" Aug 13 01:02:20.838715 containerd[1575]: time="2025-08-13T01:02:20.838218937Z" level=info msg="StartContainer for \"3073eeb3c8dd69d4913099e5d19baef1ad1f1065889b95e1161dfed0f2b7f499\" returns successfully" Aug 13 01:02:21.011908 kubelet[2356]: E0813 01:02:21.011867 2356 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:02:21.012923 kubelet[2356]: E0813 01:02:21.012696 2356 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:02:21.016732 kubelet[2356]: E0813 01:02:21.016697 2356 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:02:21.575230 kubelet[2356]: I0813 01:02:21.573777 2356 kubelet_node_status.go:72] "Attempting to register node" node="172-233-209-21" Aug 13 01:02:22.022215 kubelet[2356]: E0813 01:02:22.021997 2356 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:02:22.101430 
kubelet[2356]: E0813 01:02:22.101390 2356 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-233-209-21\" not found" node="172-233-209-21" Aug 13 01:02:22.155623 kubelet[2356]: E0813 01:02:22.155534 2356 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{172-233-209-21.185b2de4ac9155b1 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-233-209-21,UID:172-233-209-21,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-233-209-21,},FirstTimestamp:2025-08-13 01:02:19.961939377 +0000 UTC m=+0.254110511,LastTimestamp:2025-08-13 01:02:19.961939377 +0000 UTC m=+0.254110511,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-233-209-21,}" Aug 13 01:02:22.192135 kubelet[2356]: I0813 01:02:22.192085 2356 kubelet_node_status.go:75] "Successfully registered node" node="172-233-209-21" Aug 13 01:02:22.959732 kubelet[2356]: I0813 01:02:22.959688 2356 apiserver.go:52] "Watching apiserver" Aug 13 01:02:22.978741 kubelet[2356]: I0813 01:02:22.978708 2356 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 01:02:24.144997 systemd[1]: Reload requested from client PID 2622 ('systemctl') (unit session-7.scope)... Aug 13 01:02:24.145336 systemd[1]: Reloading... Aug 13 01:02:24.223379 zram_generator::config[2662]: No configuration found. Aug 13 01:02:24.333180 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:02:24.452012 systemd[1]: Reloading finished in 306 ms. 
Aug 13 01:02:24.474913 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:02:24.492318 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 01:02:24.492645 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:02:24.492697 systemd[1]: kubelet.service: Consumed 653ms CPU time, 128.8M memory peak. Aug 13 01:02:24.495343 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:02:24.661557 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:02:24.670705 (kubelet)[2717]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 01:02:24.714980 kubelet[2717]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:02:24.714980 kubelet[2717]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 01:02:24.714980 kubelet[2717]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Aug 13 01:02:24.715575 kubelet[2717]: I0813 01:02:24.715016 2717 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 01:02:24.721044 kubelet[2717]: I0813 01:02:24.721028 2717 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 01:02:24.722591 kubelet[2717]: I0813 01:02:24.721706 2717 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 01:02:24.722591 kubelet[2717]: I0813 01:02:24.721912 2717 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 01:02:24.723300 kubelet[2717]: I0813 01:02:24.723264 2717 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 01:02:24.727381 kubelet[2717]: I0813 01:02:24.727023 2717 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 01:02:24.732031 kubelet[2717]: I0813 01:02:24.731976 2717 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 13 01:02:24.739013 kubelet[2717]: I0813 01:02:24.737360 2717 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 01:02:24.739013 kubelet[2717]: I0813 01:02:24.737450 2717 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 01:02:24.739013 kubelet[2717]: I0813 01:02:24.737557 2717 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 01:02:24.739013 kubelet[2717]: I0813 01:02:24.737589 2717 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-233-209-21","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPo
licyOptions":null,"CgroupVersion":2} Aug 13 01:02:24.739182 kubelet[2717]: I0813 01:02:24.738000 2717 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 01:02:24.739182 kubelet[2717]: I0813 01:02:24.738008 2717 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 01:02:24.739182 kubelet[2717]: I0813 01:02:24.738031 2717 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:02:24.739182 kubelet[2717]: I0813 01:02:24.738116 2717 kubelet.go:408] "Attempting to sync node with API server" Aug 13 01:02:24.739182 kubelet[2717]: I0813 01:02:24.738126 2717 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 01:02:24.739182 kubelet[2717]: I0813 01:02:24.738147 2717 kubelet.go:314] "Adding apiserver pod source" Aug 13 01:02:24.739182 kubelet[2717]: I0813 01:02:24.738156 2717 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 01:02:24.744262 kubelet[2717]: I0813 01:02:24.744247 2717 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Aug 13 01:02:24.744593 kubelet[2717]: I0813 01:02:24.744580 2717 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 01:02:24.745115 kubelet[2717]: I0813 01:02:24.744932 2717 server.go:1274] "Started kubelet" Aug 13 01:02:24.747216 kubelet[2717]: I0813 01:02:24.746067 2717 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 01:02:24.747216 kubelet[2717]: I0813 01:02:24.746739 2717 server.go:449] "Adding debug handlers to kubelet server" Aug 13 01:02:24.747391 kubelet[2717]: I0813 01:02:24.747351 2717 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 01:02:24.747610 kubelet[2717]: I0813 01:02:24.747597 2717 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 
01:02:24.749754 kubelet[2717]: I0813 01:02:24.749734 2717 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 01:02:24.750002 kubelet[2717]: I0813 01:02:24.749989 2717 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 01:02:24.757371 kubelet[2717]: I0813 01:02:24.756384 2717 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 01:02:24.757371 kubelet[2717]: E0813 01:02:24.756519 2717 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-233-209-21\" not found" Aug 13 01:02:24.758057 kubelet[2717]: I0813 01:02:24.757792 2717 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 01:02:24.758057 kubelet[2717]: I0813 01:02:24.757897 2717 reconciler.go:26] "Reconciler: start to sync state" Aug 13 01:02:24.759789 kubelet[2717]: E0813 01:02:24.758868 2717 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 01:02:24.759947 kubelet[2717]: I0813 01:02:24.759743 2717 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 01:02:24.761293 kubelet[2717]: I0813 01:02:24.761262 2717 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 01:02:24.762973 kubelet[2717]: I0813 01:02:24.762952 2717 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 01:02:24.762973 kubelet[2717]: I0813 01:02:24.762973 2717 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 01:02:24.763240 kubelet[2717]: I0813 01:02:24.763004 2717 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 01:02:24.763240 kubelet[2717]: E0813 01:02:24.763040 2717 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 01:02:24.764364 kubelet[2717]: I0813 01:02:24.764327 2717 factory.go:221] Registration of the containerd container factory successfully Aug 13 01:02:24.764364 kubelet[2717]: I0813 01:02:24.764340 2717 factory.go:221] Registration of the systemd container factory successfully Aug 13 01:02:24.830879 kubelet[2717]: I0813 01:02:24.830850 2717 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 01:02:24.830879 kubelet[2717]: I0813 01:02:24.830869 2717 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 01:02:24.830879 kubelet[2717]: I0813 01:02:24.830886 2717 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:02:24.831047 kubelet[2717]: I0813 01:02:24.831014 2717 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 01:02:24.831047 kubelet[2717]: I0813 01:02:24.831023 2717 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 01:02:24.831047 kubelet[2717]: I0813 01:02:24.831039 2717 policy_none.go:49] "None policy: Start" Aug 13 01:02:24.832278 kubelet[2717]: I0813 01:02:24.832264 2717 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 01:02:24.832278 kubelet[2717]: I0813 01:02:24.832309 2717 state_mem.go:35] "Initializing new in-memory state store" Aug 13 01:02:24.832278 kubelet[2717]: I0813 01:02:24.832433 2717 state_mem.go:75] "Updated machine memory state" Aug 13 01:02:24.836974 kubelet[2717]: I0813 01:02:24.836954 2717 manager.go:513] "Failed to read data from checkpoint" 
checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 01:02:24.837511 kubelet[2717]: I0813 01:02:24.837486 2717 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 01:02:24.837553 kubelet[2717]: I0813 01:02:24.837500 2717 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 01:02:24.838516 kubelet[2717]: I0813 01:02:24.838502 2717 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 01:02:24.942445 kubelet[2717]: I0813 01:02:24.942412 2717 kubelet_node_status.go:72] "Attempting to register node" node="172-233-209-21" Aug 13 01:02:24.950656 kubelet[2717]: I0813 01:02:24.950633 2717 kubelet_node_status.go:111] "Node was previously registered" node="172-233-209-21" Aug 13 01:02:24.950707 kubelet[2717]: I0813 01:02:24.950684 2717 kubelet_node_status.go:75] "Successfully registered node" node="172-233-209-21" Aug 13 01:02:25.059114 kubelet[2717]: I0813 01:02:25.059028 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/84e5b6d62d487c8856457964b4e5c40a-ca-certs\") pod \"kube-apiserver-172-233-209-21\" (UID: \"84e5b6d62d487c8856457964b4e5c40a\") " pod="kube-system/kube-apiserver-172-233-209-21" Aug 13 01:02:25.059114 kubelet[2717]: I0813 01:02:25.059056 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/84e5b6d62d487c8856457964b4e5c40a-k8s-certs\") pod \"kube-apiserver-172-233-209-21\" (UID: \"84e5b6d62d487c8856457964b4e5c40a\") " pod="kube-system/kube-apiserver-172-233-209-21" Aug 13 01:02:25.059114 kubelet[2717]: I0813 01:02:25.059074 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/64d5421c8c4ac817f60c78f054d6ea67-kubeconfig\") pod 
\"kube-controller-manager-172-233-209-21\" (UID: \"64d5421c8c4ac817f60c78f054d6ea67\") " pod="kube-system/kube-controller-manager-172-233-209-21" Aug 13 01:02:25.059114 kubelet[2717]: I0813 01:02:25.059088 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/67a6bf939ebcfc59368f68eb98711e8e-kubeconfig\") pod \"kube-scheduler-172-233-209-21\" (UID: \"67a6bf939ebcfc59368f68eb98711e8e\") " pod="kube-system/kube-scheduler-172-233-209-21" Aug 13 01:02:25.059114 kubelet[2717]: I0813 01:02:25.059108 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/84e5b6d62d487c8856457964b4e5c40a-usr-share-ca-certificates\") pod \"kube-apiserver-172-233-209-21\" (UID: \"84e5b6d62d487c8856457964b4e5c40a\") " pod="kube-system/kube-apiserver-172-233-209-21" Aug 13 01:02:25.060209 kubelet[2717]: I0813 01:02:25.059127 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/64d5421c8c4ac817f60c78f054d6ea67-ca-certs\") pod \"kube-controller-manager-172-233-209-21\" (UID: \"64d5421c8c4ac817f60c78f054d6ea67\") " pod="kube-system/kube-controller-manager-172-233-209-21" Aug 13 01:02:25.060209 kubelet[2717]: I0813 01:02:25.059139 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/64d5421c8c4ac817f60c78f054d6ea67-flexvolume-dir\") pod \"kube-controller-manager-172-233-209-21\" (UID: \"64d5421c8c4ac817f60c78f054d6ea67\") " pod="kube-system/kube-controller-manager-172-233-209-21" Aug 13 01:02:25.060209 kubelet[2717]: I0813 01:02:25.059151 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/64d5421c8c4ac817f60c78f054d6ea67-k8s-certs\") pod \"kube-controller-manager-172-233-209-21\" (UID: \"64d5421c8c4ac817f60c78f054d6ea67\") " pod="kube-system/kube-controller-manager-172-233-209-21" Aug 13 01:02:25.060209 kubelet[2717]: I0813 01:02:25.059164 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/64d5421c8c4ac817f60c78f054d6ea67-usr-share-ca-certificates\") pod \"kube-controller-manager-172-233-209-21\" (UID: \"64d5421c8c4ac817f60c78f054d6ea67\") " pod="kube-system/kube-controller-manager-172-233-209-21" Aug 13 01:02:25.171731 kubelet[2717]: E0813 01:02:25.171697 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:02:25.172219 kubelet[2717]: E0813 01:02:25.172005 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:02:25.172740 kubelet[2717]: E0813 01:02:25.172717 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:02:25.744732 kubelet[2717]: I0813 01:02:25.744493 2717 apiserver.go:52] "Watching apiserver" Aug 13 01:02:25.759249 kubelet[2717]: I0813 01:02:25.759219 2717 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 01:02:25.799569 kubelet[2717]: E0813 01:02:25.799528 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:02:25.800703 kubelet[2717]: E0813 01:02:25.800630 2717 
dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:02:25.808534 kubelet[2717]: E0813 01:02:25.806834 2717 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-172-233-209-21\" already exists" pod="kube-system/kube-apiserver-172-233-209-21" Aug 13 01:02:25.808534 kubelet[2717]: E0813 01:02:25.807495 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:02:25.837654 kubelet[2717]: I0813 01:02:25.837183 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-233-209-21" podStartSLOduration=1.837166167 podStartE2EDuration="1.837166167s" podCreationTimestamp="2025-08-13 01:02:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:02:25.836931067 +0000 UTC m=+1.161749431" watchObservedRunningTime="2025-08-13 01:02:25.837166167 +0000 UTC m=+1.161984531" Aug 13 01:02:25.838213 kubelet[2717]: I0813 01:02:25.838097 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-233-209-21" podStartSLOduration=1.838088397 podStartE2EDuration="1.838088397s" podCreationTimestamp="2025-08-13 01:02:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:02:25.828440227 +0000 UTC m=+1.153258591" watchObservedRunningTime="2025-08-13 01:02:25.838088397 +0000 UTC m=+1.162906761" Aug 13 01:02:25.848399 kubelet[2717]: I0813 01:02:25.848251 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-233-209-21" 
podStartSLOduration=1.8482370769999998 podStartE2EDuration="1.848237077s" podCreationTimestamp="2025-08-13 01:02:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:02:25.847746347 +0000 UTC m=+1.172564711" watchObservedRunningTime="2025-08-13 01:02:25.848237077 +0000 UTC m=+1.173055441" Aug 13 01:02:26.436310 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Aug 13 01:02:26.800950 kubelet[2717]: E0813 01:02:26.800452 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:02:27.804255 kubelet[2717]: E0813 01:02:27.803535 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:02:27.932590 kubelet[2717]: E0813 01:02:27.932569 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:02:30.729762 kubelet[2717]: I0813 01:02:30.729688 2717 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 01:02:30.730718 containerd[1575]: time="2025-08-13T01:02:30.730669205Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Aug 13 01:02:30.731474 kubelet[2717]: I0813 01:02:30.730972 2717 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 01:02:30.768536 kubelet[2717]: W0813 01:02:30.766519 2717 reflector.go:561] object-"kube-system"/"kube-proxy": failed to list *v1.ConfigMap: configmaps "kube-proxy" is forbidden: User "system:node:172-233-209-21" cannot list resource "configmaps" in API group "" in the namespace "kube-system": no relationship found between node '172-233-209-21' and this object Aug 13 01:02:30.768536 kubelet[2717]: E0813 01:02:30.766559 2717 reflector.go:158] "Unhandled Error" err="object-\"kube-system\"/\"kube-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:172-233-209-21\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node '172-233-209-21' and this object" logger="UnhandledError" Aug 13 01:02:30.775876 systemd[1]: Created slice kubepods-besteffort-pod3cf8726f_ad89_4c6b_a42e_e66e36065126.slice - libcontainer container kubepods-besteffort-pod3cf8726f_ad89_4c6b_a42e_e66e36065126.slice. 
Aug 13 01:02:30.796282 kubelet[2717]: I0813 01:02:30.796237 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lg8nb\" (UniqueName: \"kubernetes.io/projected/3cf8726f-ad89-4c6b-a42e-e66e36065126-kube-api-access-lg8nb\") pod \"kube-proxy-ff6qp\" (UID: \"3cf8726f-ad89-4c6b-a42e-e66e36065126\") " pod="kube-system/kube-proxy-ff6qp" Aug 13 01:02:30.796282 kubelet[2717]: I0813 01:02:30.796272 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/3cf8726f-ad89-4c6b-a42e-e66e36065126-kube-proxy\") pod \"kube-proxy-ff6qp\" (UID: \"3cf8726f-ad89-4c6b-a42e-e66e36065126\") " pod="kube-system/kube-proxy-ff6qp" Aug 13 01:02:30.796387 kubelet[2717]: I0813 01:02:30.796289 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/3cf8726f-ad89-4c6b-a42e-e66e36065126-xtables-lock\") pod \"kube-proxy-ff6qp\" (UID: \"3cf8726f-ad89-4c6b-a42e-e66e36065126\") " pod="kube-system/kube-proxy-ff6qp" Aug 13 01:02:30.796387 kubelet[2717]: I0813 01:02:30.796302 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/3cf8726f-ad89-4c6b-a42e-e66e36065126-lib-modules\") pod \"kube-proxy-ff6qp\" (UID: \"3cf8726f-ad89-4c6b-a42e-e66e36065126\") " pod="kube-system/kube-proxy-ff6qp" Aug 13 01:02:30.902211 kubelet[2717]: E0813 01:02:30.901299 2717 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Aug 13 01:02:30.902211 kubelet[2717]: E0813 01:02:30.901324 2717 projected.go:194] Error preparing data for projected volume kube-api-access-lg8nb for pod kube-system/kube-proxy-ff6qp: configmap "kube-root-ca.crt" not found Aug 13 01:02:30.902211 kubelet[2717]: E0813 01:02:30.901362 2717 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3cf8726f-ad89-4c6b-a42e-e66e36065126-kube-api-access-lg8nb podName:3cf8726f-ad89-4c6b-a42e-e66e36065126 nodeName:}" failed. No retries permitted until 2025-08-13 01:02:31.40134746 +0000 UTC m=+6.726165824 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-lg8nb" (UniqueName: "kubernetes.io/projected/3cf8726f-ad89-4c6b-a42e-e66e36065126-kube-api-access-lg8nb") pod "kube-proxy-ff6qp" (UID: "3cf8726f-ad89-4c6b-a42e-e66e36065126") : configmap "kube-root-ca.crt" not found Aug 13 01:02:31.459437 kubelet[2717]: E0813 01:02:31.459396 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:02:31.501598 kubelet[2717]: E0813 01:02:31.501569 2717 projected.go:288] Couldn't get configMap kube-system/kube-root-ca.crt: configmap "kube-root-ca.crt" not found Aug 13 01:02:31.501598 kubelet[2717]: E0813 01:02:31.501593 2717 projected.go:194] Error preparing data for projected volume kube-api-access-lg8nb for pod kube-system/kube-proxy-ff6qp: configmap "kube-root-ca.crt" not found Aug 13 01:02:31.501838 kubelet[2717]: E0813 01:02:31.501633 2717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3cf8726f-ad89-4c6b-a42e-e66e36065126-kube-api-access-lg8nb podName:3cf8726f-ad89-4c6b-a42e-e66e36065126 nodeName:}" failed. No retries permitted until 2025-08-13 01:02:32.501620202 +0000 UTC m=+7.826438566 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-lg8nb" (UniqueName: "kubernetes.io/projected/3cf8726f-ad89-4c6b-a42e-e66e36065126-kube-api-access-lg8nb") pod "kube-proxy-ff6qp" (UID: "3cf8726f-ad89-4c6b-a42e-e66e36065126") : configmap "kube-root-ca.crt" not found Aug 13 01:02:31.811053 kubelet[2717]: E0813 01:02:31.810521 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:02:31.864237 systemd[1]: Created slice kubepods-besteffort-pod131552f4_ce9b_4b8b_9d37_b3dff23e0591.slice - libcontainer container kubepods-besteffort-pod131552f4_ce9b_4b8b_9d37_b3dff23e0591.slice. Aug 13 01:02:31.898349 kubelet[2717]: E0813 01:02:31.898322 2717 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Aug 13 01:02:31.898916 kubelet[2717]: E0813 01:02:31.898654 2717 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3cf8726f-ad89-4c6b-a42e-e66e36065126-kube-proxy podName:3cf8726f-ad89-4c6b-a42e-e66e36065126 nodeName:}" failed. No retries permitted until 2025-08-13 01:02:32.398634826 +0000 UTC m=+7.723453200 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/3cf8726f-ad89-4c6b-a42e-e66e36065126-kube-proxy") pod "kube-proxy-ff6qp" (UID: "3cf8726f-ad89-4c6b-a42e-e66e36065126") : failed to sync configmap cache: timed out waiting for the condition Aug 13 01:02:31.904511 kubelet[2717]: I0813 01:02:31.904477 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/131552f4-ce9b-4b8b-9d37-b3dff23e0591-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-fdng7\" (UID: \"131552f4-ce9b-4b8b-9d37-b3dff23e0591\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-fdng7" Aug 13 01:02:31.904573 kubelet[2717]: I0813 01:02:31.904515 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-95lcs\" (UniqueName: \"kubernetes.io/projected/131552f4-ce9b-4b8b-9d37-b3dff23e0591-kube-api-access-95lcs\") pod \"tigera-operator-5bf8dfcb4-fdng7\" (UID: \"131552f4-ce9b-4b8b-9d37-b3dff23e0591\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-fdng7" Aug 13 01:02:32.101577 kubelet[2717]: E0813 01:02:32.101468 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:02:32.172325 containerd[1575]: time="2025-08-13T01:02:32.172272448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-fdng7,Uid:131552f4-ce9b-4b8b-9d37-b3dff23e0591,Namespace:tigera-operator,Attempt:0,}" Aug 13 01:02:32.189216 containerd[1575]: time="2025-08-13T01:02:32.188705044Z" level=info msg="connecting to shim dcd59683c6f8a450aedf245d711a951256bd00e0c8b8d7ea49ba965b5a9d7edd" address="unix:///run/containerd/s/103409dfe813c71d12aac06481c3a1f4542ebbe87b6ed3da994bedd4191ec3d8" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:02:32.220326 systemd[1]: Started 
cri-containerd-dcd59683c6f8a450aedf245d711a951256bd00e0c8b8d7ea49ba965b5a9d7edd.scope - libcontainer container dcd59683c6f8a450aedf245d711a951256bd00e0c8b8d7ea49ba965b5a9d7edd. Aug 13 01:02:32.263567 containerd[1575]: time="2025-08-13T01:02:32.263534684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-fdng7,Uid:131552f4-ce9b-4b8b-9d37-b3dff23e0591,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"dcd59683c6f8a450aedf245d711a951256bd00e0c8b8d7ea49ba965b5a9d7edd\"" Aug 13 01:02:32.265627 containerd[1575]: time="2025-08-13T01:02:32.265581780Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 01:02:32.584169 kubelet[2717]: E0813 01:02:32.584135 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:02:32.584878 containerd[1575]: time="2025-08-13T01:02:32.584524370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ff6qp,Uid:3cf8726f-ad89-4c6b-a42e-e66e36065126,Namespace:kube-system,Attempt:0,}" Aug 13 01:02:32.602113 containerd[1575]: time="2025-08-13T01:02:32.602056177Z" level=info msg="connecting to shim 6e4659e6647912f96c24c4c199010edceb24af93264ea85d14a1c2f555935745" address="unix:///run/containerd/s/9d5b765bfb43d238c890d1068a43af734584c468fafb35d9b18a06e8d50b00fa" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:02:32.628319 systemd[1]: Started cri-containerd-6e4659e6647912f96c24c4c199010edceb24af93264ea85d14a1c2f555935745.scope - libcontainer container 6e4659e6647912f96c24c4c199010edceb24af93264ea85d14a1c2f555935745. 
Aug 13 01:02:32.657966 containerd[1575]: time="2025-08-13T01:02:32.657934303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-ff6qp,Uid:3cf8726f-ad89-4c6b-a42e-e66e36065126,Namespace:kube-system,Attempt:0,} returns sandbox id \"6e4659e6647912f96c24c4c199010edceb24af93264ea85d14a1c2f555935745\"" Aug 13 01:02:32.658958 kubelet[2717]: E0813 01:02:32.658714 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:02:32.660862 containerd[1575]: time="2025-08-13T01:02:32.660710967Z" level=info msg="CreateContainer within sandbox \"6e4659e6647912f96c24c4c199010edceb24af93264ea85d14a1c2f555935745\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 01:02:32.672039 containerd[1575]: time="2025-08-13T01:02:32.672020008Z" level=info msg="Container 13874db445858d5039e0ae6525e003a6bee52c860065ff3b8da35836ae38375c: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:02:32.676586 containerd[1575]: time="2025-08-13T01:02:32.676512312Z" level=info msg="CreateContainer within sandbox \"6e4659e6647912f96c24c4c199010edceb24af93264ea85d14a1c2f555935745\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"13874db445858d5039e0ae6525e003a6bee52c860065ff3b8da35836ae38375c\"" Aug 13 01:02:32.677623 containerd[1575]: time="2025-08-13T01:02:32.677219761Z" level=info msg="StartContainer for \"13874db445858d5039e0ae6525e003a6bee52c860065ff3b8da35836ae38375c\"" Aug 13 01:02:32.679774 containerd[1575]: time="2025-08-13T01:02:32.679712559Z" level=info msg="connecting to shim 13874db445858d5039e0ae6525e003a6bee52c860065ff3b8da35836ae38375c" address="unix:///run/containerd/s/9d5b765bfb43d238c890d1068a43af734584c468fafb35d9b18a06e8d50b00fa" protocol=ttrpc version=3 Aug 13 01:02:32.702347 systemd[1]: Started cri-containerd-13874db445858d5039e0ae6525e003a6bee52c860065ff3b8da35836ae38375c.scope - 
libcontainer container 13874db445858d5039e0ae6525e003a6bee52c860065ff3b8da35836ae38375c. Aug 13 01:02:32.751987 containerd[1575]: time="2025-08-13T01:02:32.751866573Z" level=info msg="StartContainer for \"13874db445858d5039e0ae6525e003a6bee52c860065ff3b8da35836ae38375c\" returns successfully" Aug 13 01:02:32.817679 kubelet[2717]: E0813 01:02:32.817639 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:02:32.818950 kubelet[2717]: E0813 01:02:32.818275 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:02:32.819843 kubelet[2717]: E0813 01:02:32.819800 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:02:32.850720 kubelet[2717]: I0813 01:02:32.849312 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-ff6qp" podStartSLOduration=2.849298125 podStartE2EDuration="2.849298125s" podCreationTimestamp="2025-08-13 01:02:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:02:32.849172437 +0000 UTC m=+8.173990801" watchObservedRunningTime="2025-08-13 01:02:32.849298125 +0000 UTC m=+8.174116489" Aug 13 01:02:33.595916 containerd[1575]: time="2025-08-13T01:02:33.595875040Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:02:33.596669 containerd[1575]: time="2025-08-13T01:02:33.596599569Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes 
read=25056543" Aug 13 01:02:33.597290 containerd[1575]: time="2025-08-13T01:02:33.597262149Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:02:33.599402 containerd[1575]: time="2025-08-13T01:02:33.599268617Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:02:33.600678 containerd[1575]: time="2025-08-13T01:02:33.599945877Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 1.334340148s" Aug 13 01:02:33.600678 containerd[1575]: time="2025-08-13T01:02:33.599975116Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 01:02:33.603086 containerd[1575]: time="2025-08-13T01:02:33.603066918Z" level=info msg="CreateContainer within sandbox \"dcd59683c6f8a450aedf245d711a951256bd00e0c8b8d7ea49ba965b5a9d7edd\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 13 01:02:33.610234 containerd[1575]: time="2025-08-13T01:02:33.608759249Z" level=info msg="Container 60e2160d8b51ff7f0502d3d80c0d6bca69166c1d36c9e23988db5102a7056733: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:02:33.615533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2102800290.mount: Deactivated successfully. 
Aug 13 01:02:33.616872 containerd[1575]: time="2025-08-13T01:02:33.616838642Z" level=info msg="CreateContainer within sandbox \"dcd59683c6f8a450aedf245d711a951256bd00e0c8b8d7ea49ba965b5a9d7edd\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"60e2160d8b51ff7f0502d3d80c0d6bca69166c1d36c9e23988db5102a7056733\"" Aug 13 01:02:33.617912 containerd[1575]: time="2025-08-13T01:02:33.617878236Z" level=info msg="StartContainer for \"60e2160d8b51ff7f0502d3d80c0d6bca69166c1d36c9e23988db5102a7056733\"" Aug 13 01:02:33.618802 containerd[1575]: time="2025-08-13T01:02:33.618725652Z" level=info msg="connecting to shim 60e2160d8b51ff7f0502d3d80c0d6bca69166c1d36c9e23988db5102a7056733" address="unix:///run/containerd/s/103409dfe813c71d12aac06481c3a1f4542ebbe87b6ed3da994bedd4191ec3d8" protocol=ttrpc version=3 Aug 13 01:02:33.645317 systemd[1]: Started cri-containerd-60e2160d8b51ff7f0502d3d80c0d6bca69166c1d36c9e23988db5102a7056733.scope - libcontainer container 60e2160d8b51ff7f0502d3d80c0d6bca69166c1d36c9e23988db5102a7056733. 
Aug 13 01:02:33.681133 containerd[1575]: time="2025-08-13T01:02:33.680506185Z" level=info msg="StartContainer for \"60e2160d8b51ff7f0502d3d80c0d6bca69166c1d36c9e23988db5102a7056733\" returns successfully" Aug 13 01:02:37.936486 kubelet[2717]: E0813 01:02:37.936457 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:02:37.945855 kubelet[2717]: I0813 01:02:37.945527 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-fdng7" podStartSLOduration=5.609643425 podStartE2EDuration="6.945516748s" podCreationTimestamp="2025-08-13 01:02:31 +0000 UTC" firstStartedPulling="2025-08-13 01:02:32.264879861 +0000 UTC m=+7.589698225" lastFinishedPulling="2025-08-13 01:02:33.600753184 +0000 UTC m=+8.925571548" observedRunningTime="2025-08-13 01:02:33.829581139 +0000 UTC m=+9.154399503" watchObservedRunningTime="2025-08-13 01:02:37.945516748 +0000 UTC m=+13.270335112" Aug 13 01:02:39.376778 sudo[1809]: pam_unix(sudo:session): session closed for user root Aug 13 01:02:39.427779 sshd[1808]: Connection closed by 147.75.109.163 port 49310 Aug 13 01:02:39.429265 sshd-session[1806]: pam_unix(sshd:session): session closed for user core Aug 13 01:02:39.434148 systemd-logind[1550]: Session 7 logged out. Waiting for processes to exit. Aug 13 01:02:39.435788 systemd[1]: sshd@6-172.233.209.21:22-147.75.109.163:49310.service: Deactivated successfully. Aug 13 01:02:39.440631 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 01:02:39.441277 systemd[1]: session-7.scope: Consumed 3.954s CPU time, 222.9M memory peak. Aug 13 01:02:39.445736 systemd-logind[1550]: Removed session 7. Aug 13 01:02:40.706215 update_engine[1554]: I20250813 01:02:40.705230 1554 update_attempter.cc:509] Updating boot flags... 
Aug 13 01:02:42.719614 systemd[1]: Created slice kubepods-besteffort-podd0bc16c7_8c0c_40dd_9744_85f7decf56d7.slice - libcontainer container kubepods-besteffort-podd0bc16c7_8c0c_40dd_9744_85f7decf56d7.slice. Aug 13 01:02:42.774773 kubelet[2717]: I0813 01:02:42.774749 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d0bc16c7-8c0c-40dd-9744-85f7decf56d7-typha-certs\") pod \"calico-typha-5fdd567c68-zgxjx\" (UID: \"d0bc16c7-8c0c-40dd-9744-85f7decf56d7\") " pod="calico-system/calico-typha-5fdd567c68-zgxjx" Aug 13 01:02:42.775362 kubelet[2717]: I0813 01:02:42.775270 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d0bc16c7-8c0c-40dd-9744-85f7decf56d7-tigera-ca-bundle\") pod \"calico-typha-5fdd567c68-zgxjx\" (UID: \"d0bc16c7-8c0c-40dd-9744-85f7decf56d7\") " pod="calico-system/calico-typha-5fdd567c68-zgxjx" Aug 13 01:02:42.775362 kubelet[2717]: I0813 01:02:42.775298 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d5zsw\" (UniqueName: \"kubernetes.io/projected/d0bc16c7-8c0c-40dd-9744-85f7decf56d7-kube-api-access-d5zsw\") pod \"calico-typha-5fdd567c68-zgxjx\" (UID: \"d0bc16c7-8c0c-40dd-9744-85f7decf56d7\") " pod="calico-system/calico-typha-5fdd567c68-zgxjx" Aug 13 01:02:43.023539 kubelet[2717]: E0813 01:02:43.023418 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:02:43.025843 containerd[1575]: time="2025-08-13T01:02:43.025765495Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5fdd567c68-zgxjx,Uid:d0bc16c7-8c0c-40dd-9744-85f7decf56d7,Namespace:calico-system,Attempt:0,}" Aug 13 01:02:43.038643 systemd[1]: Created slice 
kubepods-besteffort-pode6731efa_3c96_4227_b83c_f4c3adff36c6.slice - libcontainer container kubepods-besteffort-pode6731efa_3c96_4227_b83c_f4c3adff36c6.slice. Aug 13 01:02:43.060384 containerd[1575]: time="2025-08-13T01:02:43.060303661Z" level=info msg="connecting to shim b135e3a363965c51514b25fe9381f404baf7f08abdd110257a0a2b464913aa29" address="unix:///run/containerd/s/2307250b9b71657d25937054e79e52c1f9b0bbd39531593b5c9deec8c8a678ee" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:02:43.077622 kubelet[2717]: I0813 01:02:43.077541 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/e6731efa-3c96-4227-b83c-f4c3adff36c6-var-run-calico\") pod \"calico-node-7pdcs\" (UID: \"e6731efa-3c96-4227-b83c-f4c3adff36c6\") " pod="calico-system/calico-node-7pdcs" Aug 13 01:02:43.077690 kubelet[2717]: I0813 01:02:43.077631 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/e6731efa-3c96-4227-b83c-f4c3adff36c6-xtables-lock\") pod \"calico-node-7pdcs\" (UID: \"e6731efa-3c96-4227-b83c-f4c3adff36c6\") " pod="calico-system/calico-node-7pdcs" Aug 13 01:02:43.077690 kubelet[2717]: I0813 01:02:43.077650 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmcp8\" (UniqueName: \"kubernetes.io/projected/e6731efa-3c96-4227-b83c-f4c3adff36c6-kube-api-access-pmcp8\") pod \"calico-node-7pdcs\" (UID: \"e6731efa-3c96-4227-b83c-f4c3adff36c6\") " pod="calico-system/calico-node-7pdcs" Aug 13 01:02:43.077690 kubelet[2717]: I0813 01:02:43.077665 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6731efa-3c96-4227-b83c-f4c3adff36c6-tigera-ca-bundle\") pod \"calico-node-7pdcs\" (UID: 
\"e6731efa-3c96-4227-b83c-f4c3adff36c6\") " pod="calico-system/calico-node-7pdcs" Aug 13 01:02:43.077855 kubelet[2717]: I0813 01:02:43.077728 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/e6731efa-3c96-4227-b83c-f4c3adff36c6-cni-net-dir\") pod \"calico-node-7pdcs\" (UID: \"e6731efa-3c96-4227-b83c-f4c3adff36c6\") " pod="calico-system/calico-node-7pdcs" Aug 13 01:02:43.077855 kubelet[2717]: I0813 01:02:43.077743 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/e6731efa-3c96-4227-b83c-f4c3adff36c6-node-certs\") pod \"calico-node-7pdcs\" (UID: \"e6731efa-3c96-4227-b83c-f4c3adff36c6\") " pod="calico-system/calico-node-7pdcs" Aug 13 01:02:43.077855 kubelet[2717]: I0813 01:02:43.077806 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/e6731efa-3c96-4227-b83c-f4c3adff36c6-cni-bin-dir\") pod \"calico-node-7pdcs\" (UID: \"e6731efa-3c96-4227-b83c-f4c3adff36c6\") " pod="calico-system/calico-node-7pdcs" Aug 13 01:02:43.077855 kubelet[2717]: I0813 01:02:43.077822 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/e6731efa-3c96-4227-b83c-f4c3adff36c6-cni-log-dir\") pod \"calico-node-7pdcs\" (UID: \"e6731efa-3c96-4227-b83c-f4c3adff36c6\") " pod="calico-system/calico-node-7pdcs" Aug 13 01:02:43.077855 kubelet[2717]: I0813 01:02:43.077834 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/e6731efa-3c96-4227-b83c-f4c3adff36c6-flexvol-driver-host\") pod \"calico-node-7pdcs\" (UID: \"e6731efa-3c96-4227-b83c-f4c3adff36c6\") " pod="calico-system/calico-node-7pdcs" 
Aug 13 01:02:43.077959 kubelet[2717]: I0813 01:02:43.077897 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/e6731efa-3c96-4227-b83c-f4c3adff36c6-var-lib-calico\") pod \"calico-node-7pdcs\" (UID: \"e6731efa-3c96-4227-b83c-f4c3adff36c6\") " pod="calico-system/calico-node-7pdcs" Aug 13 01:02:43.077959 kubelet[2717]: I0813 01:02:43.077913 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e6731efa-3c96-4227-b83c-f4c3adff36c6-lib-modules\") pod \"calico-node-7pdcs\" (UID: \"e6731efa-3c96-4227-b83c-f4c3adff36c6\") " pod="calico-system/calico-node-7pdcs" Aug 13 01:02:43.077959 kubelet[2717]: I0813 01:02:43.077935 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/e6731efa-3c96-4227-b83c-f4c3adff36c6-policysync\") pod \"calico-node-7pdcs\" (UID: \"e6731efa-3c96-4227-b83c-f4c3adff36c6\") " pod="calico-system/calico-node-7pdcs" Aug 13 01:02:43.086330 systemd[1]: Started cri-containerd-b135e3a363965c51514b25fe9381f404baf7f08abdd110257a0a2b464913aa29.scope - libcontainer container b135e3a363965c51514b25fe9381f404baf7f08abdd110257a0a2b464913aa29. 
Aug 13 01:02:43.148511 containerd[1575]: time="2025-08-13T01:02:43.148444317Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5fdd567c68-zgxjx,Uid:d0bc16c7-8c0c-40dd-9744-85f7decf56d7,Namespace:calico-system,Attempt:0,} returns sandbox id \"b135e3a363965c51514b25fe9381f404baf7f08abdd110257a0a2b464913aa29\"" Aug 13 01:02:43.150635 kubelet[2717]: E0813 01:02:43.150614 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:02:43.152511 containerd[1575]: time="2025-08-13T01:02:43.152458964Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 13 01:02:43.183294 kubelet[2717]: E0813 01:02:43.183261 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.183457 kubelet[2717]: W0813 01:02:43.183386 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.183457 kubelet[2717]: E0813 01:02:43.183421 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:02:43.183771 kubelet[2717]: E0813 01:02:43.183729 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.183771 kubelet[2717]: W0813 01:02:43.183740 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.183771 kubelet[2717]: E0813 01:02:43.183750 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:02:43.184991 kubelet[2717]: E0813 01:02:43.184362 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.184991 kubelet[2717]: W0813 01:02:43.184373 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.184991 kubelet[2717]: E0813 01:02:43.184382 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:02:43.185337 kubelet[2717]: E0813 01:02:43.185290 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.185337 kubelet[2717]: W0813 01:02:43.185308 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.185337 kubelet[2717]: E0813 01:02:43.185317 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:02:43.185827 kubelet[2717]: E0813 01:02:43.185793 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.185827 kubelet[2717]: W0813 01:02:43.185818 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.187227 kubelet[2717]: E0813 01:02:43.186471 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.187227 kubelet[2717]: W0813 01:02:43.186483 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.187227 kubelet[2717]: E0813 01:02:43.186496 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:02:43.187227 kubelet[2717]: E0813 01:02:43.186513 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:02:43.189354 kubelet[2717]: E0813 01:02:43.189332 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.189689 kubelet[2717]: W0813 01:02:43.189610 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.189689 kubelet[2717]: E0813 01:02:43.189628 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:02:43.193101 kubelet[2717]: E0813 01:02:43.193083 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.193101 kubelet[2717]: W0813 01:02:43.193096 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.193180 kubelet[2717]: E0813 01:02:43.193107 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:02:43.284426 kubelet[2717]: E0813 01:02:43.282720 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-84hvc" podUID="f2b74998-29fc-4213-8313-543c9154bc64" Aug 13 01:02:43.343435 containerd[1575]: time="2025-08-13T01:02:43.343381015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7pdcs,Uid:e6731efa-3c96-4227-b83c-f4c3adff36c6,Namespace:calico-system,Attempt:0,}" Aug 13 01:02:43.362226 containerd[1575]: time="2025-08-13T01:02:43.362164281Z" level=info msg="connecting to shim d12cf5aacc597b7c6167326efb88e368e58730ffb459492ad3a89d6754a56a6d" address="unix:///run/containerd/s/ea3ca65a49e80f52abda37f00293fb706217e18eb71f9546cee6ff146935258b" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:02:43.372088 kubelet[2717]: E0813 01:02:43.372052 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.372088 kubelet[2717]: W0813 01:02:43.372082 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.372270 kubelet[2717]: E0813 01:02:43.372102 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:02:43.373072 kubelet[2717]: E0813 01:02:43.373053 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.373072 kubelet[2717]: W0813 01:02:43.373070 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.373156 kubelet[2717]: E0813 01:02:43.373083 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:02:43.373756 kubelet[2717]: E0813 01:02:43.373699 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.373793 kubelet[2717]: W0813 01:02:43.373760 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.373793 kubelet[2717]: E0813 01:02:43.373773 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:02:43.374168 kubelet[2717]: E0813 01:02:43.374121 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.374253 kubelet[2717]: W0813 01:02:43.374178 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.374253 kubelet[2717]: E0813 01:02:43.374203 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:02:43.374675 kubelet[2717]: E0813 01:02:43.374658 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.374675 kubelet[2717]: W0813 01:02:43.374670 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.374675 kubelet[2717]: E0813 01:02:43.374679 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:02:43.375094 kubelet[2717]: E0813 01:02:43.375077 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.375094 kubelet[2717]: W0813 01:02:43.375089 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.375322 kubelet[2717]: E0813 01:02:43.375098 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:02:43.375693 kubelet[2717]: E0813 01:02:43.375676 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.375693 kubelet[2717]: W0813 01:02:43.375688 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.375734 kubelet[2717]: E0813 01:02:43.375696 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:02:43.375964 kubelet[2717]: E0813 01:02:43.375948 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.375964 kubelet[2717]: W0813 01:02:43.375961 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.376013 kubelet[2717]: E0813 01:02:43.375971 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:02:43.376325 kubelet[2717]: E0813 01:02:43.376297 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.376325 kubelet[2717]: W0813 01:02:43.376313 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.376325 kubelet[2717]: E0813 01:02:43.376321 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:02:43.376625 kubelet[2717]: E0813 01:02:43.376608 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.376625 kubelet[2717]: W0813 01:02:43.376621 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.376672 kubelet[2717]: E0813 01:02:43.376628 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:02:43.377295 kubelet[2717]: E0813 01:02:43.377277 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.377295 kubelet[2717]: W0813 01:02:43.377291 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.377361 kubelet[2717]: E0813 01:02:43.377302 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:02:43.378759 kubelet[2717]: E0813 01:02:43.378741 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.378759 kubelet[2717]: W0813 01:02:43.378755 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.378834 kubelet[2717]: E0813 01:02:43.378764 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:02:43.378988 kubelet[2717]: E0813 01:02:43.378970 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.378988 kubelet[2717]: W0813 01:02:43.378984 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.379056 kubelet[2717]: E0813 01:02:43.378992 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:02:43.379450 kubelet[2717]: E0813 01:02:43.379435 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.379450 kubelet[2717]: W0813 01:02:43.379447 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.379502 kubelet[2717]: E0813 01:02:43.379455 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:02:43.379625 kubelet[2717]: E0813 01:02:43.379610 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.379625 kubelet[2717]: W0813 01:02:43.379622 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.379666 kubelet[2717]: E0813 01:02:43.379630 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:02:43.379795 kubelet[2717]: E0813 01:02:43.379774 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.379795 kubelet[2717]: W0813 01:02:43.379786 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.379795 kubelet[2717]: E0813 01:02:43.379793 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:02:43.380224 kubelet[2717]: E0813 01:02:43.380183 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.380224 kubelet[2717]: W0813 01:02:43.380220 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.380280 kubelet[2717]: E0813 01:02:43.380229 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:02:43.380632 kubelet[2717]: E0813 01:02:43.380615 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.380632 kubelet[2717]: W0813 01:02:43.380628 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.380632 kubelet[2717]: E0813 01:02:43.380636 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:02:43.382342 kubelet[2717]: E0813 01:02:43.382083 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.382342 kubelet[2717]: W0813 01:02:43.382098 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.382342 kubelet[2717]: E0813 01:02:43.382108 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:02:43.383101 kubelet[2717]: E0813 01:02:43.383079 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.383101 kubelet[2717]: W0813 01:02:43.383095 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.383233 kubelet[2717]: E0813 01:02:43.383215 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:02:43.384267 kubelet[2717]: E0813 01:02:43.384222 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.384267 kubelet[2717]: W0813 01:02:43.384238 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.384267 kubelet[2717]: E0813 01:02:43.384261 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:02:43.384346 kubelet[2717]: I0813 01:02:43.384304 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/f2b74998-29fc-4213-8313-543c9154bc64-kubelet-dir\") pod \"csi-node-driver-84hvc\" (UID: \"f2b74998-29fc-4213-8313-543c9154bc64\") " pod="calico-system/csi-node-driver-84hvc" Aug 13 01:02:43.385334 kubelet[2717]: E0813 01:02:43.385305 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.385334 kubelet[2717]: W0813 01:02:43.385324 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.385383 kubelet[2717]: E0813 01:02:43.385348 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:02:43.385383 kubelet[2717]: I0813 01:02:43.385364 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/f2b74998-29fc-4213-8313-543c9154bc64-registration-dir\") pod \"csi-node-driver-84hvc\" (UID: \"f2b74998-29fc-4213-8313-543c9154bc64\") " pod="calico-system/csi-node-driver-84hvc" Aug 13 01:02:43.386665 kubelet[2717]: E0813 01:02:43.386523 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.386665 kubelet[2717]: W0813 01:02:43.386546 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.386665 kubelet[2717]: E0813 01:02:43.386587 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:02:43.386665 kubelet[2717]: I0813 01:02:43.386611 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/f2b74998-29fc-4213-8313-543c9154bc64-varrun\") pod \"csi-node-driver-84hvc\" (UID: \"f2b74998-29fc-4213-8313-543c9154bc64\") " pod="calico-system/csi-node-driver-84hvc" Aug 13 01:02:43.386815 kubelet[2717]: E0813 01:02:43.386782 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.386815 kubelet[2717]: W0813 01:02:43.386795 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.388158 kubelet[2717]: E0813 01:02:43.387838 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.388158 kubelet[2717]: W0813 01:02:43.387847 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.388293 kubelet[2717]: E0813 01:02:43.388266 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.388293 kubelet[2717]: W0813 01:02:43.388280 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.388448 kubelet[2717]: E0813 01:02:43.388424 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.388448 kubelet[2717]: W0813 01:02:43.388438 
2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.388448 kubelet[2717]: E0813 01:02:43.388446 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:02:43.388600 kubelet[2717]: E0813 01:02:43.388577 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.388600 kubelet[2717]: W0813 01:02:43.388590 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.388600 kubelet[2717]: E0813 01:02:43.388598 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:02:43.388681 kubelet[2717]: E0813 01:02:43.388616 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:02:43.388835 kubelet[2717]: E0813 01:02:43.388819 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.388835 kubelet[2717]: W0813 01:02:43.388832 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.388892 kubelet[2717]: E0813 01:02:43.388841 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:02:43.388892 kubelet[2717]: E0813 01:02:43.388852 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:02:43.389003 kubelet[2717]: E0813 01:02:43.388987 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.389003 kubelet[2717]: W0813 01:02:43.388999 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.389048 kubelet[2717]: E0813 01:02:43.389007 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:02:43.389048 kubelet[2717]: E0813 01:02:43.389018 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:02:43.389048 kubelet[2717]: I0813 01:02:43.389034 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/f2b74998-29fc-4213-8313-543c9154bc64-socket-dir\") pod \"csi-node-driver-84hvc\" (UID: \"f2b74998-29fc-4213-8313-543c9154bc64\") " pod="calico-system/csi-node-driver-84hvc" Aug 13 01:02:43.389211 kubelet[2717]: E0813 01:02:43.389176 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.389266 kubelet[2717]: W0813 01:02:43.389236 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.389266 kubelet[2717]: E0813 01:02:43.389246 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:02:43.389266 kubelet[2717]: I0813 01:02:43.389259 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c9kwh\" (UniqueName: \"kubernetes.io/projected/f2b74998-29fc-4213-8313-543c9154bc64-kube-api-access-c9kwh\") pod \"csi-node-driver-84hvc\" (UID: \"f2b74998-29fc-4213-8313-543c9154bc64\") " pod="calico-system/csi-node-driver-84hvc" Aug 13 01:02:43.389452 kubelet[2717]: E0813 01:02:43.389434 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.389452 kubelet[2717]: W0813 01:02:43.389448 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.389515 kubelet[2717]: E0813 01:02:43.389457 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:02:43.389662 kubelet[2717]: E0813 01:02:43.389642 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.389662 kubelet[2717]: W0813 01:02:43.389654 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.389662 kubelet[2717]: E0813 01:02:43.389662 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:02:43.389819 kubelet[2717]: E0813 01:02:43.389803 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.389819 kubelet[2717]: W0813 01:02:43.389816 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.389860 kubelet[2717]: E0813 01:02:43.389825 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:02:43.389965 kubelet[2717]: E0813 01:02:43.389950 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.389965 kubelet[2717]: W0813 01:02:43.389962 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.390027 kubelet[2717]: E0813 01:02:43.389969 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:02:43.392380 systemd[1]: Started cri-containerd-d12cf5aacc597b7c6167326efb88e368e58730ffb459492ad3a89d6754a56a6d.scope - libcontainer container d12cf5aacc597b7c6167326efb88e368e58730ffb459492ad3a89d6754a56a6d. 
Aug 13 01:02:43.442028 containerd[1575]: time="2025-08-13T01:02:43.441975775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-7pdcs,Uid:e6731efa-3c96-4227-b83c-f4c3adff36c6,Namespace:calico-system,Attempt:0,} returns sandbox id \"d12cf5aacc597b7c6167326efb88e368e58730ffb459492ad3a89d6754a56a6d\"" Aug 13 01:02:43.490693 kubelet[2717]: E0813 01:02:43.490650 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.490693 kubelet[2717]: W0813 01:02:43.490675 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.490693 kubelet[2717]: E0813 01:02:43.490693 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:02:43.491052 kubelet[2717]: E0813 01:02:43.491022 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.491052 kubelet[2717]: W0813 01:02:43.491044 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.491052 kubelet[2717]: E0813 01:02:43.491066 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:02:43.493342 kubelet[2717]: E0813 01:02:43.493323 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.493342 kubelet[2717]: W0813 01:02:43.493338 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.493518 kubelet[2717]: E0813 01:02:43.493352 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:02:43.493572 kubelet[2717]: E0813 01:02:43.493551 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.493572 kubelet[2717]: W0813 01:02:43.493566 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.493668 kubelet[2717]: E0813 01:02:43.493644 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:02:43.493824 kubelet[2717]: E0813 01:02:43.493805 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.493824 kubelet[2717]: W0813 01:02:43.493819 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.493917 kubelet[2717]: E0813 01:02:43.493895 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:02:43.494305 kubelet[2717]: E0813 01:02:43.494287 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.494305 kubelet[2717]: W0813 01:02:43.494301 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.494401 kubelet[2717]: E0813 01:02:43.494380 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:02:43.494545 kubelet[2717]: E0813 01:02:43.494525 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.494545 kubelet[2717]: W0813 01:02:43.494540 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.494710 kubelet[2717]: E0813 01:02:43.494553 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:02:43.494760 kubelet[2717]: E0813 01:02:43.494737 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.494760 kubelet[2717]: W0813 01:02:43.494754 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.494849 kubelet[2717]: E0813 01:02:43.494829 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:02:43.494998 kubelet[2717]: E0813 01:02:43.494978 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.494998 kubelet[2717]: W0813 01:02:43.494993 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.495088 kubelet[2717]: E0813 01:02:43.495068 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:02:43.495276 kubelet[2717]: E0813 01:02:43.495256 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.495276 kubelet[2717]: W0813 01:02:43.495271 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.495372 kubelet[2717]: E0813 01:02:43.495351 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:02:43.495510 kubelet[2717]: E0813 01:02:43.495491 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.495510 kubelet[2717]: W0813 01:02:43.495504 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.495608 kubelet[2717]: E0813 01:02:43.495588 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:02:43.497323 kubelet[2717]: E0813 01:02:43.497303 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.497323 kubelet[2717]: W0813 01:02:43.497317 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.497410 kubelet[2717]: E0813 01:02:43.497390 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:02:43.497600 kubelet[2717]: E0813 01:02:43.497590 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.497891 kubelet[2717]: W0813 01:02:43.497653 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.497940 kubelet[2717]: E0813 01:02:43.497929 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:02:43.498151 kubelet[2717]: E0813 01:02:43.498140 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.498246 kubelet[2717]: W0813 01:02:43.498235 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.498328 kubelet[2717]: E0813 01:02:43.498317 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:02:43.498521 kubelet[2717]: E0813 01:02:43.498511 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.498586 kubelet[2717]: W0813 01:02:43.498575 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.498699 kubelet[2717]: E0813 01:02:43.498688 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:02:43.498955 kubelet[2717]: E0813 01:02:43.498933 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.498955 kubelet[2717]: W0813 01:02:43.498943 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.499077 kubelet[2717]: E0813 01:02:43.499044 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:02:43.499266 kubelet[2717]: E0813 01:02:43.499255 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.499406 kubelet[2717]: W0813 01:02:43.499322 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.499453 kubelet[2717]: E0813 01:02:43.499442 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:02:43.501281 kubelet[2717]: E0813 01:02:43.501226 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.501281 kubelet[2717]: W0813 01:02:43.501237 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.501385 kubelet[2717]: E0813 01:02:43.501354 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:02:43.501592 kubelet[2717]: E0813 01:02:43.501582 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.501672 kubelet[2717]: W0813 01:02:43.501646 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.501743 kubelet[2717]: E0813 01:02:43.501733 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:02:43.502172 kubelet[2717]: E0813 01:02:43.502149 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.502172 kubelet[2717]: W0813 01:02:43.502160 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.502468 kubelet[2717]: E0813 01:02:43.502350 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:02:43.502646 kubelet[2717]: E0813 01:02:43.502636 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.502705 kubelet[2717]: W0813 01:02:43.502694 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.502817 kubelet[2717]: E0813 01:02:43.502807 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:02:43.503023 kubelet[2717]: E0813 01:02:43.503002 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.503023 kubelet[2717]: W0813 01:02:43.503012 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.503139 kubelet[2717]: E0813 01:02:43.503117 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:02:43.503582 kubelet[2717]: E0813 01:02:43.503559 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.503582 kubelet[2717]: W0813 01:02:43.503569 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.503743 kubelet[2717]: E0813 01:02:43.503719 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:02:43.505349 kubelet[2717]: E0813 01:02:43.505224 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.505349 kubelet[2717]: W0813 01:02:43.505235 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.505349 kubelet[2717]: E0813 01:02:43.505247 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:02:43.505491 kubelet[2717]: E0813 01:02:43.505480 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.505549 kubelet[2717]: W0813 01:02:43.505537 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.505591 kubelet[2717]: E0813 01:02:43.505581 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:02:43.519218 kubelet[2717]: E0813 01:02:43.518548 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:43.519218 kubelet[2717]: W0813 01:02:43.518561 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:43.519218 kubelet[2717]: E0813 01:02:43.518570 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Aug 13 01:02:44.424634 containerd[1575]: time="2025-08-13T01:02:44.424582189Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:02:44.425403 containerd[1575]: time="2025-08-13T01:02:44.425370453Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364"
Aug 13 01:02:44.426174 containerd[1575]: time="2025-08-13T01:02:44.425919359Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:02:44.427624 containerd[1575]: time="2025-08-13T01:02:44.427602306Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:02:44.428241 containerd[1575]: time="2025-08-13T01:02:44.428087092Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 1.275603398s"
Aug 13 01:02:44.428241 containerd[1575]: time="2025-08-13T01:02:44.428116722Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\""
Aug 13 01:02:44.430377 containerd[1575]: time="2025-08-13T01:02:44.430349025Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\""
Aug 13 01:02:44.445677 containerd[1575]: time="2025-08-13T01:02:44.445646457Z" level=info msg="CreateContainer within sandbox \"b135e3a363965c51514b25fe9381f404baf7f08abdd110257a0a2b464913aa29\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Aug 13 01:02:44.454724 containerd[1575]: time="2025-08-13T01:02:44.454704817Z" level=info msg="Container 6867f4b46ce5c2c14a591a7f6045f73bfd4793092b7a915b4c0c284cbf28d414: CDI devices from CRI Config.CDIDevices: []"
Aug 13 01:02:44.460256 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2179118704.mount: Deactivated successfully.
Aug 13 01:02:44.465132 containerd[1575]: time="2025-08-13T01:02:44.465095347Z" level=info msg="CreateContainer within sandbox \"b135e3a363965c51514b25fe9381f404baf7f08abdd110257a0a2b464913aa29\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6867f4b46ce5c2c14a591a7f6045f73bfd4793092b7a915b4c0c284cbf28d414\""
Aug 13 01:02:44.465720 containerd[1575]: time="2025-08-13T01:02:44.465649803Z" level=info msg="StartContainer for \"6867f4b46ce5c2c14a591a7f6045f73bfd4793092b7a915b4c0c284cbf28d414\""
Aug 13 01:02:44.467067 containerd[1575]: time="2025-08-13T01:02:44.467034882Z" level=info msg="connecting to shim 6867f4b46ce5c2c14a591a7f6045f73bfd4793092b7a915b4c0c284cbf28d414" address="unix:///run/containerd/s/2307250b9b71657d25937054e79e52c1f9b0bbd39531593b5c9deec8c8a678ee" protocol=ttrpc version=3
Aug 13 01:02:44.493336 systemd[1]: Started cri-containerd-6867f4b46ce5c2c14a591a7f6045f73bfd4793092b7a915b4c0c284cbf28d414.scope - libcontainer container 6867f4b46ce5c2c14a591a7f6045f73bfd4793092b7a915b4c0c284cbf28d414.
Aug 13 01:02:44.560972 containerd[1575]: time="2025-08-13T01:02:44.560943799Z" level=info msg="StartContainer for \"6867f4b46ce5c2c14a591a7f6045f73bfd4793092b7a915b4c0c284cbf28d414\" returns successfully"
Aug 13 01:02:44.766488 kubelet[2717]: E0813 01:02:44.766444 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-84hvc" podUID="f2b74998-29fc-4213-8313-543c9154bc64"
Aug 13 01:02:44.862240 kubelet[2717]: E0813 01:02:44.861823 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Aug 13 01:02:44.873043 kubelet[2717]: I0813 01:02:44.872845 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5fdd567c68-zgxjx" podStartSLOduration=1.5950182050000001 podStartE2EDuration="2.872830726s" podCreationTimestamp="2025-08-13 01:02:42 +0000 UTC" firstStartedPulling="2025-08-13 01:02:43.151515862 +0000 UTC m=+18.476334226" lastFinishedPulling="2025-08-13 01:02:44.429328383 +0000 UTC m=+19.754146747" observedRunningTime="2025-08-13 01:02:44.872026693 +0000 UTC m=+20.196845057" watchObservedRunningTime="2025-08-13 01:02:44.872830726 +0000 UTC m=+20.197649100"
Aug 13 01:02:44.897710 kubelet[2717]: E0813 01:02:44.897357 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:02:44.897710 kubelet[2717]: W0813 01:02:44.897375 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:02:44.897710 kubelet[2717]: E0813 01:02:44.897394 2717 plugins.go:691]
"Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:02:44.897710 kubelet[2717]: E0813 01:02:44.897612 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:44.897710 kubelet[2717]: W0813 01:02:44.897620 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:44.897710 kubelet[2717]: E0813 01:02:44.897629 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:02:44.898564 kubelet[2717]: E0813 01:02:44.898217 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:44.898564 kubelet[2717]: W0813 01:02:44.898229 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:44.898564 kubelet[2717]: E0813 01:02:44.898237 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:02:44.898564 kubelet[2717]: E0813 01:02:44.898454 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:44.898564 kubelet[2717]: W0813 01:02:44.898462 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:44.898564 kubelet[2717]: E0813 01:02:44.898469 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:02:44.899793 kubelet[2717]: E0813 01:02:44.899556 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:44.899793 kubelet[2717]: W0813 01:02:44.899570 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:44.899793 kubelet[2717]: E0813 01:02:44.899579 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:02:44.899793 kubelet[2717]: E0813 01:02:44.899746 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:44.899793 kubelet[2717]: W0813 01:02:44.899753 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:44.899793 kubelet[2717]: E0813 01:02:44.899760 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:02:44.900839 kubelet[2717]: E0813 01:02:44.900745 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:44.900839 kubelet[2717]: W0813 01:02:44.900756 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:44.900839 kubelet[2717]: E0813 01:02:44.900765 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:02:44.901261 kubelet[2717]: E0813 01:02:44.901243 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:44.901456 kubelet[2717]: W0813 01:02:44.901354 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:44.901456 kubelet[2717]: E0813 01:02:44.901368 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:02:44.901981 kubelet[2717]: E0813 01:02:44.901968 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:44.902131 kubelet[2717]: W0813 01:02:44.902025 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:44.902131 kubelet[2717]: E0813 01:02:44.902038 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:02:44.903785 kubelet[2717]: E0813 01:02:44.902718 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:44.903785 kubelet[2717]: W0813 01:02:44.902753 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:44.903785 kubelet[2717]: E0813 01:02:44.902762 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:02:44.904046 kubelet[2717]: E0813 01:02:44.904033 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:44.904095 kubelet[2717]: W0813 01:02:44.904085 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:44.904151 kubelet[2717]: E0813 01:02:44.904140 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:02:44.904541 kubelet[2717]: E0813 01:02:44.904530 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:44.904603 kubelet[2717]: W0813 01:02:44.904591 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:44.904644 kubelet[2717]: E0813 01:02:44.904635 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:02:44.904816 kubelet[2717]: E0813 01:02:44.904805 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:44.904864 kubelet[2717]: W0813 01:02:44.904855 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:44.904903 kubelet[2717]: E0813 01:02:44.904894 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:02:44.905064 kubelet[2717]: E0813 01:02:44.905054 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:44.905119 kubelet[2717]: W0813 01:02:44.905109 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:44.905159 kubelet[2717]: E0813 01:02:44.905150 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:02:44.905544 kubelet[2717]: E0813 01:02:44.905532 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:44.905607 kubelet[2717]: W0813 01:02:44.905597 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:44.905647 kubelet[2717]: E0813 01:02:44.905638 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:02:44.909651 kubelet[2717]: E0813 01:02:44.909318 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:44.909729 kubelet[2717]: W0813 01:02:44.909716 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:44.909775 kubelet[2717]: E0813 01:02:44.909765 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:02:44.911392 kubelet[2717]: E0813 01:02:44.911380 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:44.911455 kubelet[2717]: W0813 01:02:44.911444 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:44.911498 kubelet[2717]: E0813 01:02:44.911489 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:02:44.911721 kubelet[2717]: E0813 01:02:44.911709 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:44.912077 kubelet[2717]: W0813 01:02:44.911770 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:44.912077 kubelet[2717]: E0813 01:02:44.911784 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:02:44.912308 kubelet[2717]: E0813 01:02:44.912296 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:44.912358 kubelet[2717]: W0813 01:02:44.912348 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:44.912398 kubelet[2717]: E0813 01:02:44.912390 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:02:44.912770 kubelet[2717]: E0813 01:02:44.912758 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:44.912834 kubelet[2717]: W0813 01:02:44.912822 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:44.912878 kubelet[2717]: E0813 01:02:44.912867 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:02:44.913101 kubelet[2717]: E0813 01:02:44.913089 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:02:44.913156 kubelet[2717]: W0813 01:02:44.913146 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:02:44.913217 kubelet[2717]: E0813 01:02:44.913207 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Aug 13 01:02:44.913672 kubelet[2717]: E0813 01:02:44.913424 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:02:44.913672 kubelet[2717]: W0813 01:02:44.913435 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:02:44.913672 kubelet[2717]: E0813 01:02:44.913443 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:02:44.914254 kubelet[2717]: E0813 01:02:44.914241 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:02:44.914307 kubelet[2717]: W0813 01:02:44.914297 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:02:44.914348 kubelet[2717]: E0813 01:02:44.914339 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:02:44.914530 kubelet[2717]: E0813 01:02:44.914520 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:02:44.914592 kubelet[2717]: W0813 01:02:44.914581 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:02:44.914641 kubelet[2717]: E0813 01:02:44.914625 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:02:44.914816 kubelet[2717]: E0813 01:02:44.914805 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:02:44.914863 kubelet[2717]: W0813 01:02:44.914854 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:02:44.914903 kubelet[2717]: E0813 01:02:44.914894 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:02:44.915186 kubelet[2717]: E0813 01:02:44.915162 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:02:44.915186 kubelet[2717]: W0813 01:02:44.915172 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:02:44.915351 kubelet[2717]: E0813 01:02:44.915295 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:02:44.915580 kubelet[2717]: E0813 01:02:44.915555 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:02:44.915580 kubelet[2717]: W0813 01:02:44.915566 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:02:44.915701 kubelet[2717]: E0813 01:02:44.915648 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:02:44.916174 kubelet[2717]: E0813 01:02:44.916152 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:02:44.916174 kubelet[2717]: W0813 01:02:44.916162 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:02:44.916351 kubelet[2717]: E0813 01:02:44.916295 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:02:44.916535 kubelet[2717]: E0813 01:02:44.916513 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:02:44.916535 kubelet[2717]: W0813 01:02:44.916523 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:02:44.916635 kubelet[2717]: E0813 01:02:44.916597 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:02:44.916853 kubelet[2717]: E0813 01:02:44.916842 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:02:44.916926 kubelet[2717]: W0813 01:02:44.916901 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:02:44.916981 kubelet[2717]: E0813 01:02:44.916959 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:02:44.917186 kubelet[2717]: E0813 01:02:44.917165 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:02:44.917186 kubelet[2717]: W0813 01:02:44.917174 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:02:44.917337 kubelet[2717]: E0813 01:02:44.917283 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:02:44.917514 kubelet[2717]: E0813 01:02:44.917503 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:02:44.917569 kubelet[2717]: W0813 01:02:44.917558 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:02:44.917611 kubelet[2717]: E0813 01:02:44.917602 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:02:44.918229 kubelet[2717]: E0813 01:02:44.918166 2717 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Aug 13 01:02:44.918229 kubelet[2717]: W0813 01:02:44.918175 2717 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Aug 13 01:02:44.918229 kubelet[2717]: E0813 01:02:44.918183 2717 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Aug 13 01:02:45.170740 containerd[1575]: time="2025-08-13T01:02:45.170652244Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:02:45.171955 containerd[1575]: time="2025-08-13T01:02:45.171842086Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956"
Aug 13 01:02:45.172861 containerd[1575]: time="2025-08-13T01:02:45.172774499Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:02:45.174662 containerd[1575]: time="2025-08-13T01:02:45.174618796Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:02:45.175409 containerd[1575]: time="2025-08-13T01:02:45.175058722Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 744.591878ms"
Aug 13 01:02:45.175409 containerd[1575]: time="2025-08-13T01:02:45.175085362Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\""
Aug 13 01:02:45.177569 containerd[1575]: time="2025-08-13T01:02:45.177539354Z" level=info msg="CreateContainer within sandbox \"d12cf5aacc597b7c6167326efb88e368e58730ffb459492ad3a89d6754a56a6d\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Aug 13 01:02:45.186434 containerd[1575]: time="2025-08-13T01:02:45.186328831Z" level=info msg="Container b70e5d90e0f0869799ff12c4a3266cb1daf342aba56b6b79724849f65ffb2782: CDI devices from CRI Config.CDIDevices: []"
Aug 13 01:02:45.188774 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount832726510.mount: Deactivated successfully.
Aug 13 01:02:45.195020 containerd[1575]: time="2025-08-13T01:02:45.194998228Z" level=info msg="CreateContainer within sandbox \"d12cf5aacc597b7c6167326efb88e368e58730ffb459492ad3a89d6754a56a6d\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b70e5d90e0f0869799ff12c4a3266cb1daf342aba56b6b79724849f65ffb2782\""
Aug 13 01:02:45.195660 containerd[1575]: time="2025-08-13T01:02:45.195547344Z" level=info msg="StartContainer for \"b70e5d90e0f0869799ff12c4a3266cb1daf342aba56b6b79724849f65ffb2782\""
Aug 13 01:02:45.197071 containerd[1575]: time="2025-08-13T01:02:45.197045344Z" level=info msg="connecting to shim b70e5d90e0f0869799ff12c4a3266cb1daf342aba56b6b79724849f65ffb2782" address="unix:///run/containerd/s/ea3ca65a49e80f52abda37f00293fb706217e18eb71f9546cee6ff146935258b" protocol=ttrpc version=3
Aug 13 01:02:45.227464 systemd[1]: Started cri-containerd-b70e5d90e0f0869799ff12c4a3266cb1daf342aba56b6b79724849f65ffb2782.scope - libcontainer container b70e5d90e0f0869799ff12c4a3266cb1daf342aba56b6b79724849f65ffb2782.
Aug 13 01:02:45.268858 containerd[1575]: time="2025-08-13T01:02:45.268829085Z" level=info msg="StartContainer for \"b70e5d90e0f0869799ff12c4a3266cb1daf342aba56b6b79724849f65ffb2782\" returns successfully"
Aug 13 01:02:45.281959 systemd[1]: cri-containerd-b70e5d90e0f0869799ff12c4a3266cb1daf342aba56b6b79724849f65ffb2782.scope: Deactivated successfully.
Aug 13 01:02:45.286230 containerd[1575]: time="2025-08-13T01:02:45.286127250Z" level=info msg="received exit event container_id:\"b70e5d90e0f0869799ff12c4a3266cb1daf342aba56b6b79724849f65ffb2782\" id:\"b70e5d90e0f0869799ff12c4a3266cb1daf342aba56b6b79724849f65ffb2782\" pid:3409 exited_at:{seconds:1755046965 nanos:285767723}"
Aug 13 01:02:45.286687 containerd[1575]: time="2025-08-13T01:02:45.286663906Z" level=info msg="TaskExit event in podsandbox handler container_id:\"b70e5d90e0f0869799ff12c4a3266cb1daf342aba56b6b79724849f65ffb2782\" id:\"b70e5d90e0f0869799ff12c4a3266cb1daf342aba56b6b79724849f65ffb2782\" pid:3409 exited_at:{seconds:1755046965 nanos:285767723}"
Aug 13 01:02:45.311314 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b70e5d90e0f0869799ff12c4a3266cb1daf342aba56b6b79724849f65ffb2782-rootfs.mount: Deactivated successfully.
Aug 13 01:02:45.865178 kubelet[2717]: I0813 01:02:45.864717 2717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 13 01:02:45.865178 kubelet[2717]: E0813 01:02:45.865133 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Aug 13 01:02:45.866398 containerd[1575]: time="2025-08-13T01:02:45.866368540Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\""
Aug 13 01:02:46.765153 kubelet[2717]: E0813 01:02:46.764463 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-84hvc" podUID="f2b74998-29fc-4213-8313-543c9154bc64"
Aug 13 01:02:47.835117 containerd[1575]: time="2025-08-13T01:02:47.834888735Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:02:47.836059 containerd[1575]: time="2025-08-13T01:02:47.835612131Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221"
Aug 13 01:02:47.837168 containerd[1575]: time="2025-08-13T01:02:47.836767593Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:02:47.839211 containerd[1575]: time="2025-08-13T01:02:47.839158408Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Aug 13 01:02:47.839716 containerd[1575]: time="2025-08-13T01:02:47.839694575Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 1.973292525s"
Aug 13 01:02:47.839780 containerd[1575]: time="2025-08-13T01:02:47.839766234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\""
Aug 13 01:02:47.842990 containerd[1575]: time="2025-08-13T01:02:47.842943834Z" level=info msg="CreateContainer within sandbox \"d12cf5aacc597b7c6167326efb88e368e58730ffb459492ad3a89d6754a56a6d\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Aug 13 01:02:47.856285 containerd[1575]: time="2025-08-13T01:02:47.853481057Z" level=info msg="Container e45e328e632598e8efb022a9d3ea880ef0aa58b0833c2795785ba9a8063fea15: CDI devices from CRI Config.CDIDevices: []"
Aug 13 01:02:47.861950 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount107479329.mount: Deactivated successfully.
Aug 13 01:02:47.868483 containerd[1575]: time="2025-08-13T01:02:47.868433262Z" level=info msg="CreateContainer within sandbox \"d12cf5aacc597b7c6167326efb88e368e58730ffb459492ad3a89d6754a56a6d\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e45e328e632598e8efb022a9d3ea880ef0aa58b0833c2795785ba9a8063fea15\""
Aug 13 01:02:47.869911 containerd[1575]: time="2025-08-13T01:02:47.869868663Z" level=info msg="StartContainer for \"e45e328e632598e8efb022a9d3ea880ef0aa58b0833c2795785ba9a8063fea15\""
Aug 13 01:02:47.871353 containerd[1575]: time="2025-08-13T01:02:47.871318794Z" level=info msg="connecting to shim e45e328e632598e8efb022a9d3ea880ef0aa58b0833c2795785ba9a8063fea15" address="unix:///run/containerd/s/ea3ca65a49e80f52abda37f00293fb706217e18eb71f9546cee6ff146935258b" protocol=ttrpc version=3
Aug 13 01:02:47.896338 systemd[1]: Started cri-containerd-e45e328e632598e8efb022a9d3ea880ef0aa58b0833c2795785ba9a8063fea15.scope - libcontainer container e45e328e632598e8efb022a9d3ea880ef0aa58b0833c2795785ba9a8063fea15.
Aug 13 01:02:47.946665 containerd[1575]: time="2025-08-13T01:02:47.946611976Z" level=info msg="StartContainer for \"e45e328e632598e8efb022a9d3ea880ef0aa58b0833c2795785ba9a8063fea15\" returns successfully"
Aug 13 01:02:48.417482 containerd[1575]: time="2025-08-13T01:02:48.417218784Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Aug 13 01:02:48.421412 systemd[1]: cri-containerd-e45e328e632598e8efb022a9d3ea880ef0aa58b0833c2795785ba9a8063fea15.scope: Deactivated successfully.
Aug 13 01:02:48.421992 systemd[1]: cri-containerd-e45e328e632598e8efb022a9d3ea880ef0aa58b0833c2795785ba9a8063fea15.scope: Consumed 501ms CPU time, 191.2M memory peak, 171.2M written to disk.
Aug 13 01:02:48.424034 containerd[1575]: time="2025-08-13T01:02:48.424003184Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e45e328e632598e8efb022a9d3ea880ef0aa58b0833c2795785ba9a8063fea15\" id:\"e45e328e632598e8efb022a9d3ea880ef0aa58b0833c2795785ba9a8063fea15\" pid:3468 exited_at:{seconds:1755046968 nanos:423597566}"
Aug 13 01:02:48.424269 containerd[1575]: time="2025-08-13T01:02:48.424217623Z" level=info msg="received exit event container_id:\"e45e328e632598e8efb022a9d3ea880ef0aa58b0833c2795785ba9a8063fea15\" id:\"e45e328e632598e8efb022a9d3ea880ef0aa58b0833c2795785ba9a8063fea15\" pid:3468 exited_at:{seconds:1755046968 nanos:423597566}"
Aug 13 01:02:48.451612 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e45e328e632598e8efb022a9d3ea880ef0aa58b0833c2795785ba9a8063fea15-rootfs.mount: Deactivated successfully.
Aug 13 01:02:48.504704 kubelet[2717]: I0813 01:02:48.504683 2717 kubelet_node_status.go:488] "Fast updating node status as it just became ready"
Aug 13 01:02:48.539109 kubelet[2717]: I0813 01:02:48.539062 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/71e7bd3a-41d9-438e-8fb2-d3ebfa340687-goldmane-key-pair\") pod \"goldmane-58fd7646b9-n678g\" (UID: \"71e7bd3a-41d9-438e-8fb2-d3ebfa340687\") " pod="calico-system/goldmane-58fd7646b9-n678g"
Aug 13 01:02:48.539499 kubelet[2717]: I0813 01:02:48.539137 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f9c72ae5-36a4-480f-b976-d40f680cf121-whisker-backend-key-pair\") pod \"whisker-64877d9b7f-xhqjd\" (UID: \"f9c72ae5-36a4-480f-b976-d40f680cf121\") " pod="calico-system/whisker-64877d9b7f-xhqjd"
Aug 13 01:02:48.539499 kubelet[2717]: I0813 01:02:48.539165 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25dl4\" (UniqueName: \"kubernetes.io/projected/6787cc4f-56e4-4094-978c-958d0d7a35ba-kube-api-access-25dl4\") pod \"coredns-7c65d6cfc9-dk5p7\" (UID: \"6787cc4f-56e4-4094-978c-958d0d7a35ba\") " pod="kube-system/coredns-7c65d6cfc9-dk5p7"
Aug 13 01:02:48.539499 kubelet[2717]: I0813 01:02:48.539233 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jz2qk\" (UniqueName: \"kubernetes.io/projected/f76a049b-a07b-423e-bd17-f1f74f3ae333-kube-api-access-jz2qk\") pod \"calico-apiserver-76c855dbd4-dfn6z\" (UID: \"f76a049b-a07b-423e-bd17-f1f74f3ae333\") " pod="calico-apiserver/calico-apiserver-76c855dbd4-dfn6z"
Aug 13 01:02:48.539499 kubelet[2717]: I0813 01:02:48.539249 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/655bcff7-644b-4ab8-8ce6-cb473702553b-calico-apiserver-certs\") pod \"calico-apiserver-76c855dbd4-2qlmw\" (UID: \"655bcff7-644b-4ab8-8ce6-cb473702553b\") " pod="calico-apiserver/calico-apiserver-76c855dbd4-2qlmw"
Aug 13 01:02:48.539499 kubelet[2717]: I0813 01:02:48.539299 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tb5kr\" (UniqueName: \"kubernetes.io/projected/655bcff7-644b-4ab8-8ce6-cb473702553b-kube-api-access-tb5kr\") pod \"calico-apiserver-76c855dbd4-2qlmw\" (UID: \"655bcff7-644b-4ab8-8ce6-cb473702553b\") " pod="calico-apiserver/calico-apiserver-76c855dbd4-2qlmw"
Aug 13 01:02:48.539642 kubelet[2717]: I0813 01:02:48.539314 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqfjp\" (UniqueName: \"kubernetes.io/projected/a7e8405c-2c82-420c-bac7-a7277571f968-kube-api-access-vqfjp\") pod \"calico-kube-controllers-564d8b8748-ps97n\" (UID: \"a7e8405c-2c82-420c-bac7-a7277571f968\") " pod="calico-system/calico-kube-controllers-564d8b8748-ps97n"
Aug 13 01:02:48.539642 kubelet[2717]: I0813 01:02:48.539332 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71e7bd3a-41d9-438e-8fb2-d3ebfa340687-config\") pod \"goldmane-58fd7646b9-n678g\" (UID: \"71e7bd3a-41d9-438e-8fb2-d3ebfa340687\") " pod="calico-system/goldmane-58fd7646b9-n678g"
Aug 13 01:02:48.540110 kubelet[2717]: I0813 01:02:48.539897 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a7e8405c-2c82-420c-bac7-a7277571f968-tigera-ca-bundle\") pod \"calico-kube-controllers-564d8b8748-ps97n\" (UID: \"a7e8405c-2c82-420c-bac7-a7277571f968\") " pod="calico-system/calico-kube-controllers-564d8b8748-ps97n"
Aug 13 01:02:48.541402 kubelet[2717]: I0813 01:02:48.541374 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gth48\" (UniqueName: \"kubernetes.io/projected/c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af-kube-api-access-gth48\") pod \"coredns-7c65d6cfc9-tp469\" (UID: \"c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af\") " pod="kube-system/coredns-7c65d6cfc9-tp469"
Aug 13 01:02:48.541637 kubelet[2717]: I0813 01:02:48.541547 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxrvf\" (UniqueName: \"kubernetes.io/projected/f9c72ae5-36a4-480f-b976-d40f680cf121-kube-api-access-kxrvf\") pod \"whisker-64877d9b7f-xhqjd\" (UID: \"f9c72ae5-36a4-480f-b976-d40f680cf121\") " pod="calico-system/whisker-64877d9b7f-xhqjd"
Aug 13 01:02:48.541637 kubelet[2717]: I0813 01:02:48.541574 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f76a049b-a07b-423e-bd17-f1f74f3ae333-calico-apiserver-certs\") pod \"calico-apiserver-76c855dbd4-dfn6z\" (UID: \"f76a049b-a07b-423e-bd17-f1f74f3ae333\") " pod="calico-apiserver/calico-apiserver-76c855dbd4-dfn6z"
Aug 13 01:02:48.541637 kubelet[2717]: I0813 01:02:48.541605 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71e7bd3a-41d9-438e-8fb2-d3ebfa340687-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-n678g\" (UID: \"71e7bd3a-41d9-438e-8fb2-d3ebfa340687\") " pod="calico-system/goldmane-58fd7646b9-n678g"
Aug 13 01:02:48.541637 kubelet[2717]: I0813 01:02:48.541628 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9c72ae5-36a4-480f-b976-d40f680cf121-whisker-ca-bundle\") pod \"whisker-64877d9b7f-xhqjd\" (UID: \"f9c72ae5-36a4-480f-b976-d40f680cf121\") " pod="calico-system/whisker-64877d9b7f-xhqjd"
Aug 13 01:02:48.541936 kubelet[2717]: I0813 01:02:48.541660 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjm89\" (UniqueName: \"kubernetes.io/projected/71e7bd3a-41d9-438e-8fb2-d3ebfa340687-kube-api-access-vjm89\") pod \"goldmane-58fd7646b9-n678g\" (UID: \"71e7bd3a-41d9-438e-8fb2-d3ebfa340687\") " pod="calico-system/goldmane-58fd7646b9-n678g"
Aug 13 01:02:48.541936 kubelet[2717]: I0813 01:02:48.541682 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af-config-volume\") pod \"coredns-7c65d6cfc9-tp469\" (UID: \"c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af\") " pod="kube-system/coredns-7c65d6cfc9-tp469"
Aug 13 01:02:48.541936 kubelet[2717]: I0813 01:02:48.541695 2717 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/6787cc4f-56e4-4094-978c-958d0d7a35ba-config-volume\") pod \"coredns-7c65d6cfc9-dk5p7\" (UID: \"6787cc4f-56e4-4094-978c-958d0d7a35ba\") " pod="kube-system/coredns-7c65d6cfc9-dk5p7"
Aug 13 01:02:48.544159 systemd[1]: Created slice kubepods-burstable-pod6787cc4f_56e4_4094_978c_958d0d7a35ba.slice - libcontainer container kubepods-burstable-pod6787cc4f_56e4_4094_978c_958d0d7a35ba.slice.
Aug 13 01:02:48.553163 kubelet[2717]: W0813 01:02:48.552574 2717 reflector.go:561] object-"calico-system"/"whisker-ca-bundle": failed to list *v1.ConfigMap: configmaps "whisker-ca-bundle" is forbidden: User "system:node:172-233-209-21" cannot list resource "configmaps" in API group "" in the namespace "calico-system": no relationship found between node '172-233-209-21' and this object
Aug 13 01:02:48.553163 kubelet[2717]: E0813 01:02:48.552609 2717 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"whisker-ca-bundle\" is forbidden: User \"system:node:172-233-209-21\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"calico-system\": no relationship found between node '172-233-209-21' and this object" logger="UnhandledError"
Aug 13 01:02:48.553163 kubelet[2717]: W0813 01:02:48.552654 2717 reflector.go:561] object-"calico-system"/"whisker-backend-key-pair": failed to list *v1.Secret: secrets "whisker-backend-key-pair" is forbidden: User "system:node:172-233-209-21" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node '172-233-209-21' and this object
Aug 13 01:02:48.553163 kubelet[2717]: E0813 01:02:48.552667 2717 reflector.go:158] "Unhandled Error" err="object-\"calico-system\"/\"whisker-backend-key-pair\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"whisker-backend-key-pair\" is forbidden: User \"system:node:172-233-209-21\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node '172-233-209-21' and this object" logger="UnhandledError"
Aug 13 01:02:48.559956 systemd[1]: Created slice kubepods-besteffort-poda7e8405c_2c82_420c_bac7_a7277571f968.slice - libcontainer container kubepods-besteffort-poda7e8405c_2c82_420c_bac7_a7277571f968.slice.
Aug 13 01:02:48.573490 systemd[1]: Created slice kubepods-besteffort-pod71e7bd3a_41d9_438e_8fb2_d3ebfa340687.slice - libcontainer container kubepods-besteffort-pod71e7bd3a_41d9_438e_8fb2_d3ebfa340687.slice.
Aug 13 01:02:48.585677 systemd[1]: Created slice kubepods-besteffort-podf76a049b_a07b_423e_bd17_f1f74f3ae333.slice - libcontainer container kubepods-besteffort-podf76a049b_a07b_423e_bd17_f1f74f3ae333.slice.
Aug 13 01:02:48.595301 systemd[1]: Created slice kubepods-burstable-podc98cf28c_3b77_4d2a_9f0b_e0f918c9a0af.slice - libcontainer container kubepods-burstable-podc98cf28c_3b77_4d2a_9f0b_e0f918c9a0af.slice.
Aug 13 01:02:48.606136 systemd[1]: Created slice kubepods-besteffort-pod655bcff7_644b_4ab8_8ce6_cb473702553b.slice - libcontainer container kubepods-besteffort-pod655bcff7_644b_4ab8_8ce6_cb473702553b.slice.
Aug 13 01:02:48.616037 systemd[1]: Created slice kubepods-besteffort-podf9c72ae5_36a4_480f_b976_d40f680cf121.slice - libcontainer container kubepods-besteffort-podf9c72ae5_36a4_480f_b976_d40f680cf121.slice.
Aug 13 01:02:48.770522 systemd[1]: Created slice kubepods-besteffort-podf2b74998_29fc_4213_8313_543c9154bc64.slice - libcontainer container kubepods-besteffort-podf2b74998_29fc_4213_8313_543c9154bc64.slice.
Aug 13 01:02:48.772892 containerd[1575]: time="2025-08-13T01:02:48.772831838Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-84hvc,Uid:f2b74998-29fc-4213-8313-543c9154bc64,Namespace:calico-system,Attempt:0,}"
Aug 13 01:02:48.827348 containerd[1575]: time="2025-08-13T01:02:48.827297874Z" level=error msg="Failed to destroy network for sandbox \"dd65fb042a6b7f6fb730663840d5d8929337654b90f8edbaf0307b40e2a85c38\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:02:48.828661 containerd[1575]: time="2025-08-13T01:02:48.828602246Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-84hvc,Uid:f2b74998-29fc-4213-8313-543c9154bc64,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd65fb042a6b7f6fb730663840d5d8929337654b90f8edbaf0307b40e2a85c38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:02:48.829074 kubelet[2717]: E0813 01:02:48.828990 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd65fb042a6b7f6fb730663840d5d8929337654b90f8edbaf0307b40e2a85c38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:02:48.829074 kubelet[2717]: E0813 01:02:48.829050 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd65fb042a6b7f6fb730663840d5d8929337654b90f8edbaf0307b40e2a85c38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-84hvc"
Aug 13 01:02:48.829074 kubelet[2717]: E0813 01:02:48.829069 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd65fb042a6b7f6fb730663840d5d8929337654b90f8edbaf0307b40e2a85c38\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-84hvc"
Aug 13 01:02:48.829158 kubelet[2717]: E0813 01:02:48.829107 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-84hvc_calico-system(f2b74998-29fc-4213-8313-543c9154bc64)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-84hvc_calico-system(f2b74998-29fc-4213-8313-543c9154bc64)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd65fb042a6b7f6fb730663840d5d8929337654b90f8edbaf0307b40e2a85c38\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-84hvc" podUID="f2b74998-29fc-4213-8313-543c9154bc64"
Aug 13 01:02:48.864226 kubelet[2717]: E0813 01:02:48.863270 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Aug 13 01:02:48.868212 containerd[1575]: time="2025-08-13T01:02:48.866862729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-564d8b8748-ps97n,Uid:a7e8405c-2c82-420c-bac7-a7277571f968,Namespace:calico-system,Attempt:0,}"
Aug 13 01:02:48.868212 containerd[1575]: time="2025-08-13T01:02:48.867252266Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dk5p7,Uid:6787cc4f-56e4-4094-978c-958d0d7a35ba,Namespace:kube-system,Attempt:0,}"
Aug 13 01:02:48.883020 containerd[1575]: time="2025-08-13T01:02:48.882637525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-n678g,Uid:71e7bd3a-41d9-438e-8fb2-d3ebfa340687,Namespace:calico-system,Attempt:0,}"
Aug 13 01:02:48.896119 containerd[1575]: time="2025-08-13T01:02:48.896086225Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76c855dbd4-dfn6z,Uid:f76a049b-a07b-423e-bd17-f1f74f3ae333,Namespace:calico-apiserver,Attempt:0,}"
Aug 13 01:02:48.903336 kubelet[2717]: E0813 01:02:48.901808 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Aug 13 01:02:48.905401 containerd[1575]: time="2025-08-13T01:02:48.905353160Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tp469,Uid:c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af,Namespace:kube-system,Attempt:0,}"
Aug 13 01:02:48.911208 containerd[1575]: time="2025-08-13T01:02:48.911145975Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\""
Aug 13 01:02:48.918414 containerd[1575]: time="2025-08-13T01:02:48.918373022Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76c855dbd4-2qlmw,Uid:655bcff7-644b-4ab8-8ce6-cb473702553b,Namespace:calico-apiserver,Attempt:0,}"
Aug 13 01:02:49.018733 containerd[1575]: time="2025-08-13T01:02:49.018657162Z" level=error msg="Failed to destroy network for sandbox \"5d9ab2c600116f00cb24256b4200fd721ac7a3ef5ac747f6f9a16439b6926ca5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:02:49.022082 containerd[1575]: time="2025-08-13T01:02:49.021275497Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76c855dbd4-2qlmw,Uid:655bcff7-644b-4ab8-8ce6-cb473702553b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d9ab2c600116f00cb24256b4200fd721ac7a3ef5ac747f6f9a16439b6926ca5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:02:49.022688 kubelet[2717]: E0813 01:02:49.021873 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d9ab2c600116f00cb24256b4200fd721ac7a3ef5ac747f6f9a16439b6926ca5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:02:49.022688 kubelet[2717]: E0813 01:02:49.021937 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d9ab2c600116f00cb24256b4200fd721ac7a3ef5ac747f6f9a16439b6926ca5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76c855dbd4-2qlmw"
Aug 13 01:02:49.022688 kubelet[2717]: E0813 01:02:49.021957 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d9ab2c600116f00cb24256b4200fd721ac7a3ef5ac747f6f9a16439b6926ca5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76c855dbd4-2qlmw"
Aug 13 01:02:49.023321 kubelet[2717]: E0813 01:02:49.022028 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76c855dbd4-2qlmw_calico-apiserver(655bcff7-644b-4ab8-8ce6-cb473702553b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76c855dbd4-2qlmw_calico-apiserver(655bcff7-644b-4ab8-8ce6-cb473702553b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5d9ab2c600116f00cb24256b4200fd721ac7a3ef5ac747f6f9a16439b6926ca5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76c855dbd4-2qlmw" podUID="655bcff7-644b-4ab8-8ce6-cb473702553b"
Aug 13 01:02:49.076151 containerd[1575]: time="2025-08-13T01:02:49.076112771Z" level=error msg="Failed to destroy network for sandbox \"7b2c82949041ce082a12bf93f4c7de530a8cef5487a10df05cefdd922620bdaa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:02:49.078079 containerd[1575]: time="2025-08-13T01:02:49.077857012Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76c855dbd4-dfn6z,Uid:f76a049b-a07b-423e-bd17-f1f74f3ae333,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b2c82949041ce082a12bf93f4c7de530a8cef5487a10df05cefdd922620bdaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:02:49.080433 kubelet[2717]: E0813 01:02:49.079171 2717 log.go:32] "RunPodSandbox from runtime service failed"
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b2c82949041ce082a12bf93f4c7de530a8cef5487a10df05cefdd922620bdaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:02:49.080433 kubelet[2717]: E0813 01:02:49.079696 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b2c82949041ce082a12bf93f4c7de530a8cef5487a10df05cefdd922620bdaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76c855dbd4-dfn6z" Aug 13 01:02:49.080433 kubelet[2717]: E0813 01:02:49.079717 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7b2c82949041ce082a12bf93f4c7de530a8cef5487a10df05cefdd922620bdaa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76c855dbd4-dfn6z" Aug 13 01:02:49.080614 kubelet[2717]: E0813 01:02:49.079758 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76c855dbd4-dfn6z_calico-apiserver(f76a049b-a07b-423e-bd17-f1f74f3ae333)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76c855dbd4-dfn6z_calico-apiserver(f76a049b-a07b-423e-bd17-f1f74f3ae333)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7b2c82949041ce082a12bf93f4c7de530a8cef5487a10df05cefdd922620bdaa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76c855dbd4-dfn6z" podUID="f76a049b-a07b-423e-bd17-f1f74f3ae333" Aug 13 01:02:49.089849 containerd[1575]: time="2025-08-13T01:02:49.089816695Z" level=error msg="Failed to destroy network for sandbox \"569a6afcfa018f93820ddc77f44ca7609549a6db641b78e77a099928c1ab793a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:02:49.092104 containerd[1575]: time="2025-08-13T01:02:49.092064642Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-564d8b8748-ps97n,Uid:a7e8405c-2c82-420c-bac7-a7277571f968,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"569a6afcfa018f93820ddc77f44ca7609549a6db641b78e77a099928c1ab793a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:02:49.093049 kubelet[2717]: E0813 01:02:49.092364 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"569a6afcfa018f93820ddc77f44ca7609549a6db641b78e77a099928c1ab793a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:02:49.093049 kubelet[2717]: E0813 01:02:49.092401 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"569a6afcfa018f93820ddc77f44ca7609549a6db641b78e77a099928c1ab793a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" Aug 13 01:02:49.093049 kubelet[2717]: E0813 01:02:49.092419 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"569a6afcfa018f93820ddc77f44ca7609549a6db641b78e77a099928c1ab793a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" Aug 13 01:02:49.093155 kubelet[2717]: E0813 01:02:49.092451 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-564d8b8748-ps97n_calico-system(a7e8405c-2c82-420c-bac7-a7277571f968)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-564d8b8748-ps97n_calico-system(a7e8405c-2c82-420c-bac7-a7277571f968)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"569a6afcfa018f93820ddc77f44ca7609549a6db641b78e77a099928c1ab793a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" podUID="a7e8405c-2c82-420c-bac7-a7277571f968" Aug 13 01:02:49.098674 containerd[1575]: time="2025-08-13T01:02:49.098629126Z" level=error msg="Failed to destroy network for sandbox \"1a00ed614619face9d001ca34dcda5e6fdbcc557c52b066bce48b4aeda0ede31\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:02:49.101285 containerd[1575]: time="2025-08-13T01:02:49.101146232Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-58fd7646b9-n678g,Uid:71e7bd3a-41d9-438e-8fb2-d3ebfa340687,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a00ed614619face9d001ca34dcda5e6fdbcc557c52b066bce48b4aeda0ede31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:02:49.102277 kubelet[2717]: E0813 01:02:49.101946 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a00ed614619face9d001ca34dcda5e6fdbcc557c52b066bce48b4aeda0ede31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:02:49.102277 kubelet[2717]: E0813 01:02:49.101980 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a00ed614619face9d001ca34dcda5e6fdbcc557c52b066bce48b4aeda0ede31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-n678g" Aug 13 01:02:49.102277 kubelet[2717]: E0813 01:02:49.101997 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1a00ed614619face9d001ca34dcda5e6fdbcc557c52b066bce48b4aeda0ede31\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-n678g" Aug 13 01:02:49.102398 kubelet[2717]: E0813 01:02:49.102025 2717 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-n678g_calico-system(71e7bd3a-41d9-438e-8fb2-d3ebfa340687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-n678g_calico-system(71e7bd3a-41d9-438e-8fb2-d3ebfa340687)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1a00ed614619face9d001ca34dcda5e6fdbcc557c52b066bce48b4aeda0ede31\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-n678g" podUID="71e7bd3a-41d9-438e-8fb2-d3ebfa340687" Aug 13 01:02:49.109118 containerd[1575]: time="2025-08-13T01:02:49.108894689Z" level=error msg="Failed to destroy network for sandbox \"f56eadb9b444b337e926a7973c2d02515ed47f89cf0309a8d8a1e76bb666514d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:02:49.111534 containerd[1575]: time="2025-08-13T01:02:49.111480614Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dk5p7,Uid:6787cc4f-56e4-4094-978c-958d0d7a35ba,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f56eadb9b444b337e926a7973c2d02515ed47f89cf0309a8d8a1e76bb666514d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:02:49.111835 kubelet[2717]: E0813 01:02:49.111752 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f56eadb9b444b337e926a7973c2d02515ed47f89cf0309a8d8a1e76bb666514d\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:02:49.111835 kubelet[2717]: E0813 01:02:49.111789 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f56eadb9b444b337e926a7973c2d02515ed47f89cf0309a8d8a1e76bb666514d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dk5p7" Aug 13 01:02:49.111835 kubelet[2717]: E0813 01:02:49.111812 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f56eadb9b444b337e926a7973c2d02515ed47f89cf0309a8d8a1e76bb666514d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dk5p7" Aug 13 01:02:49.112375 kubelet[2717]: E0813 01:02:49.112131 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-dk5p7_kube-system(6787cc4f-56e4-4094-978c-958d0d7a35ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-dk5p7_kube-system(6787cc4f-56e4-4094-978c-958d0d7a35ba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f56eadb9b444b337e926a7973c2d02515ed47f89cf0309a8d8a1e76bb666514d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-dk5p7" podUID="6787cc4f-56e4-4094-978c-958d0d7a35ba" Aug 13 01:02:49.115530 containerd[1575]: time="2025-08-13T01:02:49.115475792Z" 
level=error msg="Failed to destroy network for sandbox \"b3d043702705b8295d8a1971979224d1f92d2c7c255bbd31407cd23e8fd94764\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:02:49.116381 containerd[1575]: time="2025-08-13T01:02:49.116277107Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tp469,Uid:c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3d043702705b8295d8a1971979224d1f92d2c7c255bbd31407cd23e8fd94764\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:02:49.116512 kubelet[2717]: E0813 01:02:49.116483 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3d043702705b8295d8a1971979224d1f92d2c7c255bbd31407cd23e8fd94764\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:02:49.116583 kubelet[2717]: E0813 01:02:49.116550 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b3d043702705b8295d8a1971979224d1f92d2c7c255bbd31407cd23e8fd94764\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-tp469" Aug 13 01:02:49.116583 kubelet[2717]: E0813 01:02:49.116569 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"b3d043702705b8295d8a1971979224d1f92d2c7c255bbd31407cd23e8fd94764\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-tp469" Aug 13 01:02:49.116686 kubelet[2717]: E0813 01:02:49.116606 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-tp469_kube-system(c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-tp469_kube-system(c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b3d043702705b8295d8a1971979224d1f92d2c7c255bbd31407cd23e8fd94764\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-tp469" podUID="c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af" Aug 13 01:02:49.519995 containerd[1575]: time="2025-08-13T01:02:49.519931986Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64877d9b7f-xhqjd,Uid:f9c72ae5-36a4-480f-b976-d40f680cf121,Namespace:calico-system,Attempt:0,}" Aug 13 01:02:49.578324 containerd[1575]: time="2025-08-13T01:02:49.578212501Z" level=error msg="Failed to destroy network for sandbox \"e39197333824edc9e44041ecf9298e65f7a470300056a4a8f1d63aba724be0af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:02:49.579636 containerd[1575]: time="2025-08-13T01:02:49.579585963Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-64877d9b7f-xhqjd,Uid:f9c72ae5-36a4-480f-b976-d40f680cf121,Namespace:calico-system,Attempt:0,} failed, 
error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e39197333824edc9e44041ecf9298e65f7a470300056a4a8f1d63aba724be0af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:02:49.579843 kubelet[2717]: E0813 01:02:49.579798 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e39197333824edc9e44041ecf9298e65f7a470300056a4a8f1d63aba724be0af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:02:49.580893 kubelet[2717]: E0813 01:02:49.579869 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e39197333824edc9e44041ecf9298e65f7a470300056a4a8f1d63aba724be0af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-64877d9b7f-xhqjd" Aug 13 01:02:49.580893 kubelet[2717]: E0813 01:02:49.579896 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e39197333824edc9e44041ecf9298e65f7a470300056a4a8f1d63aba724be0af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-64877d9b7f-xhqjd" Aug 13 01:02:49.580893 kubelet[2717]: E0813 01:02:49.579934 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-64877d9b7f-xhqjd_calico-system(f9c72ae5-36a4-480f-b976-d40f680cf121)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-64877d9b7f-xhqjd_calico-system(f9c72ae5-36a4-480f-b976-d40f680cf121)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e39197333824edc9e44041ecf9298e65f7a470300056a4a8f1d63aba724be0af\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-64877d9b7f-xhqjd" podUID="f9c72ae5-36a4-480f-b976-d40f680cf121" Aug 13 01:02:49.858603 systemd[1]: run-netns-cni\x2d57bdd1a4\x2d4c99\x2d562f\x2d638e\x2d6a192a210593.mount: Deactivated successfully. Aug 13 01:02:49.862200 systemd[1]: run-netns-cni\x2d4f23f88c\x2d59a8\x2d1766\x2d87d2\x2d0a3c94409a68.mount: Deactivated successfully. Aug 13 01:02:49.862328 systemd[1]: run-netns-cni\x2d7d996373\x2d8414\x2d13cf\x2dad47\x2dcd7f8487d9cd.mount: Deactivated successfully. Aug 13 01:02:49.862436 systemd[1]: run-netns-cni\x2d12db187c\x2db00b\x2d5f25\x2da15b\x2d68c7e6fd75e8.mount: Deactivated successfully. Aug 13 01:02:51.526143 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount58514406.mount: Deactivated successfully. 
Aug 13 01:02:51.527644 containerd[1575]: time="2025-08-13T01:02:51.527451382Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount58514406: write /var/lib/containerd/tmpmounts/containerd-mount58514406/usr/bin/calico-node: no space left on device" Aug 13 01:02:51.527644 containerd[1575]: time="2025-08-13T01:02:51.527523971Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 01:02:51.528695 kubelet[2717]: E0813 01:02:51.527855 2717 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount58514406: write /var/lib/containerd/tmpmounts/containerd-mount58514406/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 01:02:51.528695 kubelet[2717]: E0813 01:02:51.527908 2717 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount58514406: write /var/lib/containerd/tmpmounts/containerd-mount58514406/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 01:02:51.529010 kubelet[2717]: E0813 01:02:51.528098 2717 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSGOLDMANESERVER,Value:goldmane.calico-system.svc:7443,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSFLUSHINTERVAL,Value:15,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:first-found,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.9
6.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pmcp8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 
},Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-7pdcs_calico-system(e6731efa-3c96-4227-b83c-f4c3adff36c6): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount58514406: write /var/lib/containerd/tmpmounts/containerd-mount58514406/usr/bin/calico-node: no space left on device" logger="UnhandledError" Aug 13 01:02:51.529280 kubelet[2717]: E0813 01:02:51.529231 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount58514406: write /var/lib/containerd/tmpmounts/containerd-mount58514406/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-7pdcs" podUID="e6731efa-3c96-4227-b83c-f4c3adff36c6" Aug 13 01:02:51.906015 kubelet[2717]: E0813 01:02:51.905825 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\"\"" pod="calico-system/calico-node-7pdcs" podUID="e6731efa-3c96-4227-b83c-f4c3adff36c6" Aug 13 01:02:54.933722 kubelet[2717]: I0813 01:02:54.933678 2717 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:02:54.933722 kubelet[2717]: I0813 01:02:54.933720 2717 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:02:54.940505 kubelet[2717]: I0813 01:02:54.938772 2717 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:02:54.955377 kubelet[2717]: I0813 01:02:54.955309 2717 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:02:54.955472 kubelet[2717]: I0813 01:02:54.955383 2717 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" 
pods=["calico-system/whisker-64877d9b7f-xhqjd","calico-apiserver/calico-apiserver-76c855dbd4-2qlmw","calico-system/goldmane-58fd7646b9-n678g","calico-apiserver/calico-apiserver-76c855dbd4-dfn6z","calico-system/calico-kube-controllers-564d8b8748-ps97n","kube-system/coredns-7c65d6cfc9-tp469","kube-system/coredns-7c65d6cfc9-dk5p7","calico-system/csi-node-driver-84hvc","calico-system/calico-node-7pdcs","tigera-operator/tigera-operator-5bf8dfcb4-fdng7","calico-system/calico-typha-5fdd567c68-zgxjx","kube-system/kube-controller-manager-172-233-209-21","kube-system/kube-proxy-ff6qp","kube-system/kube-apiserver-172-233-209-21","kube-system/kube-scheduler-172-233-209-21"] Aug 13 01:02:54.959906 kubelet[2717]: I0813 01:02:54.959867 2717 eviction_manager.go:616] "Eviction manager: pod is evicted successfully" pod="calico-system/whisker-64877d9b7f-xhqjd" Aug 13 01:02:54.959906 kubelet[2717]: I0813 01:02:54.959890 2717 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-system/whisker-64877d9b7f-xhqjd"] Aug 13 01:02:54.981699 kubelet[2717]: I0813 01:02:54.980990 2717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9c72ae5-36a4-480f-b976-d40f680cf121-whisker-ca-bundle\") pod \"f9c72ae5-36a4-480f-b976-d40f680cf121\" (UID: \"f9c72ae5-36a4-480f-b976-d40f680cf121\") " Aug 13 01:02:54.981699 kubelet[2717]: I0813 01:02:54.981041 2717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxrvf\" (UniqueName: \"kubernetes.io/projected/f9c72ae5-36a4-480f-b976-d40f680cf121-kube-api-access-kxrvf\") pod \"f9c72ae5-36a4-480f-b976-d40f680cf121\" (UID: \"f9c72ae5-36a4-480f-b976-d40f680cf121\") " Aug 13 01:02:54.981699 kubelet[2717]: I0813 01:02:54.981062 2717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: 
\"kubernetes.io/secret/f9c72ae5-36a4-480f-b976-d40f680cf121-whisker-backend-key-pair\") pod \"f9c72ae5-36a4-480f-b976-d40f680cf121\" (UID: \"f9c72ae5-36a4-480f-b976-d40f680cf121\") " Aug 13 01:02:54.981855 kubelet[2717]: I0813 01:02:54.981815 2717 kubelet.go:2306] "Pod admission denied" podUID="49f03fba-9a2f-47dd-998e-859b683a20e5" pod="calico-system/whisker-64877d9b7f-j47fn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:02:54.984454 kubelet[2717]: I0813 01:02:54.984422 2717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f9c72ae5-36a4-480f-b976-d40f680cf121-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "f9c72ae5-36a4-480f-b976-d40f680cf121" (UID: "f9c72ae5-36a4-480f-b976-d40f680cf121"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 01:02:54.988311 kubelet[2717]: I0813 01:02:54.988275 2717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f9c72ae5-36a4-480f-b976-d40f680cf121-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f9c72ae5-36a4-480f-b976-d40f680cf121" (UID: "f9c72ae5-36a4-480f-b976-d40f680cf121"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 01:02:54.989845 systemd[1]: var-lib-kubelet-pods-f9c72ae5\x2d36a4\x2d480f\x2db976\x2dd40f680cf121-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 01:02:54.992408 kubelet[2717]: I0813 01:02:54.992369 2717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f9c72ae5-36a4-480f-b976-d40f680cf121-kube-api-access-kxrvf" (OuterVolumeSpecName: "kube-api-access-kxrvf") pod "f9c72ae5-36a4-480f-b976-d40f680cf121" (UID: "f9c72ae5-36a4-480f-b976-d40f680cf121"). InnerVolumeSpecName "kube-api-access-kxrvf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 01:02:54.994936 systemd[1]: var-lib-kubelet-pods-f9c72ae5\x2d36a4\x2d480f\x2db976\x2dd40f680cf121-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dkxrvf.mount: Deactivated successfully. Aug 13 01:02:55.015357 kubelet[2717]: I0813 01:02:55.015308 2717 kubelet.go:2306] "Pod admission denied" podUID="91476d20-b5c7-4ff0-988d-dd2d393a2b9a" pod="calico-system/whisker-64877d9b7f-2f46m" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:02:55.037230 kubelet[2717]: I0813 01:02:55.036883 2717 kubelet.go:2306] "Pod admission denied" podUID="6881078a-07d3-4afd-8484-793a388517de" pod="calico-system/whisker-64877d9b7f-v5wtm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:02:55.063766 kubelet[2717]: I0813 01:02:55.063724 2717 kubelet.go:2306] "Pod admission denied" podUID="6a35b462-a4b5-4cf0-be67-d0ea141bd8cf" pod="calico-system/whisker-64877d9b7f-s4zjj" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:02:55.082227 kubelet[2717]: I0813 01:02:55.081402 2717 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f9c72ae5-36a4-480f-b976-d40f680cf121-whisker-ca-bundle\") on node \"172-233-209-21\" DevicePath \"\"" Aug 13 01:02:55.082227 kubelet[2717]: I0813 01:02:55.081424 2717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxrvf\" (UniqueName: \"kubernetes.io/projected/f9c72ae5-36a4-480f-b976-d40f680cf121-kube-api-access-kxrvf\") on node \"172-233-209-21\" DevicePath \"\"" Aug 13 01:02:55.082227 kubelet[2717]: I0813 01:02:55.081435 2717 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f9c72ae5-36a4-480f-b976-d40f680cf121-whisker-backend-key-pair\") on node \"172-233-209-21\" DevicePath \"\"" Aug 13 01:02:55.084677 kubelet[2717]: I0813 01:02:55.084453 2717 kubelet.go:2306] "Pod admission denied" podUID="c721eabb-593b-4d35-8458-31a7899a01bc" pod="calico-system/whisker-64877d9b7f-qzn48" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:02:55.107597 kubelet[2717]: I0813 01:02:55.107570 2717 kubelet.go:2306] "Pod admission denied" podUID="ef550f79-9581-4df1-a7cc-c7ed83e68b86" pod="calico-system/whisker-64877d9b7f-vx92q" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:02:55.130875 kubelet[2717]: I0813 01:02:55.130751 2717 kubelet.go:2306] "Pod admission denied" podUID="37dfe8fa-4887-4278-be6f-1eb2108e475f" pod="calico-system/whisker-64877d9b7f-ctx6g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:02:55.157892 kubelet[2717]: I0813 01:02:55.157850 2717 kubelet.go:2306] "Pod admission denied" podUID="54a3dbe8-2601-48b6-9e32-9b1dd9b6a423" pod="calico-system/whisker-64877d9b7f-mws69" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:02:55.196217 kubelet[2717]: I0813 01:02:55.195716 2717 kubelet.go:2306] "Pod admission denied" podUID="dd8de764-7f74-47d1-a14a-2c2d29cfa5bc" pod="calico-system/whisker-64877d9b7f-dlrhr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:02:55.915622 systemd[1]: Removed slice kubepods-besteffort-podf9c72ae5_36a4_480f_b976_d40f680cf121.slice - libcontainer container kubepods-besteffort-podf9c72ae5_36a4_480f_b976_d40f680cf121.slice. Aug 13 01:02:55.960071 kubelet[2717]: I0813 01:02:55.960024 2717 eviction_manager.go:447] "Eviction manager: pods successfully cleaned up" pods=["calico-system/whisker-64877d9b7f-xhqjd"] Aug 13 01:02:59.764076 containerd[1575]: time="2025-08-13T01:02:59.764025705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76c855dbd4-2qlmw,Uid:655bcff7-644b-4ab8-8ce6-cb473702553b,Namespace:calico-apiserver,Attempt:0,}" Aug 13 01:02:59.764454 containerd[1575]: time="2025-08-13T01:02:59.764349624Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-84hvc,Uid:f2b74998-29fc-4213-8313-543c9154bc64,Namespace:calico-system,Attempt:0,}" Aug 13 01:02:59.764514 containerd[1575]: time="2025-08-13T01:02:59.764485943Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-n678g,Uid:71e7bd3a-41d9-438e-8fb2-d3ebfa340687,Namespace:calico-system,Attempt:0,}" Aug 13 01:02:59.831634 containerd[1575]: time="2025-08-13T01:02:59.831531817Z" level=error msg="Failed to destroy network for sandbox \"2f39a25c94fe0ce7d66a90db673a49de8ee85a1f2a2a81f57bae4e546e679fa1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:02:59.834320 containerd[1575]: time="2025-08-13T01:02:59.834289789Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:csi-node-driver-84hvc,Uid:f2b74998-29fc-4213-8313-543c9154bc64,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f39a25c94fe0ce7d66a90db673a49de8ee85a1f2a2a81f57bae4e546e679fa1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:02:59.834883 systemd[1]: run-netns-cni\x2d052e9a01\x2d9d82\x2dbfd7\x2dc70f\x2d2cc6ea570cef.mount: Deactivated successfully. Aug 13 01:02:59.837384 kubelet[2717]: E0813 01:02:59.835458 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f39a25c94fe0ce7d66a90db673a49de8ee85a1f2a2a81f57bae4e546e679fa1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:02:59.837384 kubelet[2717]: E0813 01:02:59.835624 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f39a25c94fe0ce7d66a90db673a49de8ee85a1f2a2a81f57bae4e546e679fa1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-84hvc" Aug 13 01:02:59.837384 kubelet[2717]: E0813 01:02:59.835646 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2f39a25c94fe0ce7d66a90db673a49de8ee85a1f2a2a81f57bae4e546e679fa1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-84hvc" Aug 13 01:02:59.837384 kubelet[2717]: E0813 01:02:59.836490 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-84hvc_calico-system(f2b74998-29fc-4213-8313-543c9154bc64)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-84hvc_calico-system(f2b74998-29fc-4213-8313-543c9154bc64)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2f39a25c94fe0ce7d66a90db673a49de8ee85a1f2a2a81f57bae4e546e679fa1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-84hvc" podUID="f2b74998-29fc-4213-8313-543c9154bc64" Aug 13 01:02:59.859063 containerd[1575]: time="2025-08-13T01:02:59.859019887Z" level=error msg="Failed to destroy network for sandbox \"102834540241aecb6e76679b21cd9cd4b21e30ee68900f55299624f98d7d619f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:02:59.863570 containerd[1575]: time="2025-08-13T01:02:59.863266194Z" level=error msg="Failed to destroy network for sandbox \"3e3fd35afb4506e4a2b3f15a91318c0e24abd1f3521e30ba33620c06e024c32f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:02:59.863403 systemd[1]: run-netns-cni\x2d7ce750f5\x2d7dc0\x2daea2\x2d812a\x2d24d8b77757b2.mount: Deactivated successfully. 
Aug 13 01:02:59.864592 containerd[1575]: time="2025-08-13T01:02:59.864456871Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-n678g,Uid:71e7bd3a-41d9-438e-8fb2-d3ebfa340687,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"102834540241aecb6e76679b21cd9cd4b21e30ee68900f55299624f98d7d619f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:02:59.865353 kubelet[2717]: E0813 01:02:59.865319 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"102834540241aecb6e76679b21cd9cd4b21e30ee68900f55299624f98d7d619f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:02:59.865405 kubelet[2717]: E0813 01:02:59.865367 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"102834540241aecb6e76679b21cd9cd4b21e30ee68900f55299624f98d7d619f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-n678g" Aug 13 01:02:59.865405 kubelet[2717]: E0813 01:02:59.865387 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"102834540241aecb6e76679b21cd9cd4b21e30ee68900f55299624f98d7d619f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-58fd7646b9-n678g" Aug 13 01:02:59.865449 kubelet[2717]: E0813 01:02:59.865421 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-n678g_calico-system(71e7bd3a-41d9-438e-8fb2-d3ebfa340687)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-n678g_calico-system(71e7bd3a-41d9-438e-8fb2-d3ebfa340687)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"102834540241aecb6e76679b21cd9cd4b21e30ee68900f55299624f98d7d619f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-n678g" podUID="71e7bd3a-41d9-438e-8fb2-d3ebfa340687" Aug 13 01:02:59.865873 containerd[1575]: time="2025-08-13T01:02:59.865780167Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76c855dbd4-2qlmw,Uid:655bcff7-644b-4ab8-8ce6-cb473702553b,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e3fd35afb4506e4a2b3f15a91318c0e24abd1f3521e30ba33620c06e024c32f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:02:59.866329 kubelet[2717]: E0813 01:02:59.866301 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e3fd35afb4506e4a2b3f15a91318c0e24abd1f3521e30ba33620c06e024c32f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:02:59.866373 kubelet[2717]: E0813 01:02:59.866359 2717 kuberuntime_sandbox.go:72] "Failed to create 
sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e3fd35afb4506e4a2b3f15a91318c0e24abd1f3521e30ba33620c06e024c32f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76c855dbd4-2qlmw" Aug 13 01:02:59.866396 kubelet[2717]: E0813 01:02:59.866378 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3e3fd35afb4506e4a2b3f15a91318c0e24abd1f3521e30ba33620c06e024c32f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76c855dbd4-2qlmw" Aug 13 01:02:59.867102 kubelet[2717]: E0813 01:02:59.866405 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76c855dbd4-2qlmw_calico-apiserver(655bcff7-644b-4ab8-8ce6-cb473702553b)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76c855dbd4-2qlmw_calico-apiserver(655bcff7-644b-4ab8-8ce6-cb473702553b)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3e3fd35afb4506e4a2b3f15a91318c0e24abd1f3521e30ba33620c06e024c32f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76c855dbd4-2qlmw" podUID="655bcff7-644b-4ab8-8ce6-cb473702553b" Aug 13 01:02:59.866861 systemd[1]: run-netns-cni\x2d34cefc50\x2db6dc\x2d93a8\x2d5263\x2d14a3fde32e5c.mount: Deactivated successfully. 
Aug 13 01:03:01.764664 kubelet[2717]: E0813 01:03:01.763767 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:03:01.765062 containerd[1575]: time="2025-08-13T01:03:01.764356246Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dk5p7,Uid:6787cc4f-56e4-4094-978c-958d0d7a35ba,Namespace:kube-system,Attempt:0,}" Aug 13 01:03:01.807729 containerd[1575]: time="2025-08-13T01:03:01.807672824Z" level=error msg="Failed to destroy network for sandbox \"6eb4f1e6ce246bb21e4755c7c5b165f1199c4e22fbaf646bfe3c9ac0d0b030a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:03:01.809974 containerd[1575]: time="2025-08-13T01:03:01.809929149Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dk5p7,Uid:6787cc4f-56e4-4094-978c-958d0d7a35ba,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6eb4f1e6ce246bb21e4755c7c5b165f1199c4e22fbaf646bfe3c9ac0d0b030a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:03:01.810487 systemd[1]: run-netns-cni\x2d0ede21bc\x2d6ca7\x2d3d20\x2d644c\x2d63276cb9e625.mount: Deactivated successfully. 
Aug 13 01:03:01.811034 kubelet[2717]: E0813 01:03:01.811001 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6eb4f1e6ce246bb21e4755c7c5b165f1199c4e22fbaf646bfe3c9ac0d0b030a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:03:01.811086 kubelet[2717]: E0813 01:03:01.811073 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6eb4f1e6ce246bb21e4755c7c5b165f1199c4e22fbaf646bfe3c9ac0d0b030a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dk5p7" Aug 13 01:03:01.811116 kubelet[2717]: E0813 01:03:01.811092 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6eb4f1e6ce246bb21e4755c7c5b165f1199c4e22fbaf646bfe3c9ac0d0b030a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dk5p7" Aug 13 01:03:01.811174 kubelet[2717]: E0813 01:03:01.811152 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-dk5p7_kube-system(6787cc4f-56e4-4094-978c-958d0d7a35ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-dk5p7_kube-system(6787cc4f-56e4-4094-978c-958d0d7a35ba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6eb4f1e6ce246bb21e4755c7c5b165f1199c4e22fbaf646bfe3c9ac0d0b030a9\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-dk5p7" podUID="6787cc4f-56e4-4094-978c-958d0d7a35ba" Aug 13 01:03:02.764975 kubelet[2717]: E0813 01:03:02.764312 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:03:02.766118 containerd[1575]: time="2025-08-13T01:03:02.765899013Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tp469,Uid:c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af,Namespace:kube-system,Attempt:0,}" Aug 13 01:03:02.823599 containerd[1575]: time="2025-08-13T01:03:02.823435125Z" level=error msg="Failed to destroy network for sandbox \"0217f391cac5df35a9217e4736dd6d52c40779f62b5fd9a40747b4ff97a28139\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:03:02.827712 containerd[1575]: time="2025-08-13T01:03:02.827678584Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tp469,Uid:c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0217f391cac5df35a9217e4736dd6d52c40779f62b5fd9a40747b4ff97a28139\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:03:02.827997 kubelet[2717]: E0813 01:03:02.827956 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0217f391cac5df35a9217e4736dd6d52c40779f62b5fd9a40747b4ff97a28139\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:03:02.828050 kubelet[2717]: E0813 01:03:02.828018 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0217f391cac5df35a9217e4736dd6d52c40779f62b5fd9a40747b4ff97a28139\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-tp469" Aug 13 01:03:02.828050 kubelet[2717]: E0813 01:03:02.828038 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0217f391cac5df35a9217e4736dd6d52c40779f62b5fd9a40747b4ff97a28139\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-tp469" Aug 13 01:03:02.828100 kubelet[2717]: E0813 01:03:02.828080 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-tp469_kube-system(c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-tp469_kube-system(c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0217f391cac5df35a9217e4736dd6d52c40779f62b5fd9a40747b4ff97a28139\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-tp469" podUID="c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af" Aug 13 01:03:02.829383 systemd[1]: 
run-netns-cni\x2dfa09902d\x2da0e9\x2da723\x2dbfcf\x2d0debd51d8d81.mount: Deactivated successfully. Aug 13 01:03:03.764295 containerd[1575]: time="2025-08-13T01:03:03.764247392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-564d8b8748-ps97n,Uid:a7e8405c-2c82-420c-bac7-a7277571f968,Namespace:calico-system,Attempt:0,}" Aug 13 01:03:03.764580 containerd[1575]: time="2025-08-13T01:03:03.764255182Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76c855dbd4-dfn6z,Uid:f76a049b-a07b-423e-bd17-f1f74f3ae333,Namespace:calico-apiserver,Attempt:0,}" Aug 13 01:03:03.828244 containerd[1575]: time="2025-08-13T01:03:03.826501441Z" level=error msg="Failed to destroy network for sandbox \"899fbbc3646a63b41604d8e9b2dc6ec41e50987b12c825583a04caaa228cbe85\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:03:03.831684 systemd[1]: run-netns-cni\x2d0dfeefd1\x2d91e3\x2dcc7a\x2d3443\x2d65a90fc99398.mount: Deactivated successfully. 
Aug 13 01:03:03.833273 containerd[1575]: time="2025-08-13T01:03:03.833229286Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-76c855dbd4-dfn6z,Uid:f76a049b-a07b-423e-bd17-f1f74f3ae333,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"899fbbc3646a63b41604d8e9b2dc6ec41e50987b12c825583a04caaa228cbe85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:03:03.833834 kubelet[2717]: E0813 01:03:03.833787 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"899fbbc3646a63b41604d8e9b2dc6ec41e50987b12c825583a04caaa228cbe85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:03:03.834587 kubelet[2717]: E0813 01:03:03.833834 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"899fbbc3646a63b41604d8e9b2dc6ec41e50987b12c825583a04caaa228cbe85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-76c855dbd4-dfn6z" Aug 13 01:03:03.834587 kubelet[2717]: E0813 01:03:03.833851 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"899fbbc3646a63b41604d8e9b2dc6ec41e50987b12c825583a04caaa228cbe85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-76c855dbd4-dfn6z"
Aug 13 01:03:03.834587 kubelet[2717]: E0813 01:03:03.833883 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-76c855dbd4-dfn6z_calico-apiserver(f76a049b-a07b-423e-bd17-f1f74f3ae333)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-76c855dbd4-dfn6z_calico-apiserver(f76a049b-a07b-423e-bd17-f1f74f3ae333)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"899fbbc3646a63b41604d8e9b2dc6ec41e50987b12c825583a04caaa228cbe85\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-76c855dbd4-dfn6z" podUID="f76a049b-a07b-423e-bd17-f1f74f3ae333"
Aug 13 01:03:03.847201 containerd[1575]: time="2025-08-13T01:03:03.846941795Z" level=error msg="Failed to destroy network for sandbox \"44c3efad31665138764fca8ed4fdb3e11889f161ee5f65ad7badcdab2ae5ee76\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:03:03.850269 containerd[1575]: time="2025-08-13T01:03:03.847963462Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-564d8b8748-ps97n,Uid:a7e8405c-2c82-420c-bac7-a7277571f968,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"44c3efad31665138764fca8ed4fdb3e11889f161ee5f65ad7badcdab2ae5ee76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:03:03.850346 kubelet[2717]: E0813 01:03:03.848413 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44c3efad31665138764fca8ed4fdb3e11889f161ee5f65ad7badcdab2ae5ee76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:03:03.850346 kubelet[2717]: E0813 01:03:03.848494 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44c3efad31665138764fca8ed4fdb3e11889f161ee5f65ad7badcdab2ae5ee76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n"
Aug 13 01:03:03.850346 kubelet[2717]: E0813 01:03:03.848515 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44c3efad31665138764fca8ed4fdb3e11889f161ee5f65ad7badcdab2ae5ee76\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n"
Aug 13 01:03:03.850346 kubelet[2717]: E0813 01:03:03.848577 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-564d8b8748-ps97n_calico-system(a7e8405c-2c82-420c-bac7-a7277571f968)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-564d8b8748-ps97n_calico-system(a7e8405c-2c82-420c-bac7-a7277571f968)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"44c3efad31665138764fca8ed4fdb3e11889f161ee5f65ad7badcdab2ae5ee76\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" podUID="a7e8405c-2c82-420c-bac7-a7277571f968"
Aug 13 01:03:03.851635 systemd[1]: run-netns-cni\x2d5e5a9097\x2d4305\x2d9ba9\x2d5b9d\x2d87af9b968c96.mount: Deactivated successfully.
Aug 13 01:03:05.765637 containerd[1575]: time="2025-08-13T01:03:05.765589100Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\""
Aug 13 01:03:05.985841 kubelet[2717]: I0813 01:03:05.985800 2717 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:03:05.985841 kubelet[2717]: I0813 01:03:05.985837 2717 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:03:05.988341 kubelet[2717]: I0813 01:03:05.988317 2717 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:03:06.002979 kubelet[2717]: I0813 01:03:06.002934 2717 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:03:06.003122 kubelet[2717]: I0813 01:03:06.003059 2717 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-apiserver/calico-apiserver-76c855dbd4-dfn6z","calico-system/goldmane-58fd7646b9-n678g","calico-apiserver/calico-apiserver-76c855dbd4-2qlmw","kube-system/coredns-7c65d6cfc9-dk5p7","kube-system/coredns-7c65d6cfc9-tp469","calico-system/calico-kube-controllers-564d8b8748-ps97n","calico-system/calico-node-7pdcs","calico-system/csi-node-driver-84hvc","tigera-operator/tigera-operator-5bf8dfcb4-fdng7","calico-system/calico-typha-5fdd567c68-zgxjx","kube-system/kube-controller-manager-172-233-209-21","kube-system/kube-proxy-ff6qp","kube-system/kube-apiserver-172-233-209-21","kube-system/kube-scheduler-172-233-209-21"]
Aug 13 01:03:06.008998 kubelet[2717]: I0813 01:03:06.008979 2717 eviction_manager.go:616] "Eviction manager: pod is evicted successfully" pod="calico-apiserver/calico-apiserver-76c855dbd4-dfn6z"
Aug 13 01:03:06.008998 kubelet[2717]: I0813 01:03:06.008996 2717 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-apiserver/calico-apiserver-76c855dbd4-dfn6z"]
Aug 13 01:03:06.142709 kubelet[2717]: I0813 01:03:06.142580 2717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f76a049b-a07b-423e-bd17-f1f74f3ae333-calico-apiserver-certs\") pod \"f76a049b-a07b-423e-bd17-f1f74f3ae333\" (UID: \"f76a049b-a07b-423e-bd17-f1f74f3ae333\") "
Aug 13 01:03:06.142709 kubelet[2717]: I0813 01:03:06.142680 2717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jz2qk\" (UniqueName: \"kubernetes.io/projected/f76a049b-a07b-423e-bd17-f1f74f3ae333-kube-api-access-jz2qk\") pod \"f76a049b-a07b-423e-bd17-f1f74f3ae333\" (UID: \"f76a049b-a07b-423e-bd17-f1f74f3ae333\") "
Aug 13 01:03:06.149034 kubelet[2717]: I0813 01:03:06.148965 2717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f76a049b-a07b-423e-bd17-f1f74f3ae333-kube-api-access-jz2qk" (OuterVolumeSpecName: "kube-api-access-jz2qk") pod "f76a049b-a07b-423e-bd17-f1f74f3ae333" (UID: "f76a049b-a07b-423e-bd17-f1f74f3ae333"). InnerVolumeSpecName "kube-api-access-jz2qk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 01:03:06.149371 systemd[1]: var-lib-kubelet-pods-f76a049b\x2da07b\x2d423e\x2dbd17\x2df1f74f3ae333-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2djz2qk.mount: Deactivated successfully.
Aug 13 01:03:06.154515 kubelet[2717]: I0813 01:03:06.154491 2717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f76a049b-a07b-423e-bd17-f1f74f3ae333-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "f76a049b-a07b-423e-bd17-f1f74f3ae333" (UID: "f76a049b-a07b-423e-bd17-f1f74f3ae333"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 01:03:06.154598 systemd[1]: var-lib-kubelet-pods-f76a049b\x2da07b\x2d423e\x2dbd17\x2df1f74f3ae333-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
Aug 13 01:03:06.242947 kubelet[2717]: I0813 01:03:06.242915 2717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jz2qk\" (UniqueName: \"kubernetes.io/projected/f76a049b-a07b-423e-bd17-f1f74f3ae333-kube-api-access-jz2qk\") on node \"172-233-209-21\" DevicePath \"\""
Aug 13 01:03:06.242947 kubelet[2717]: I0813 01:03:06.242943 2717 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/f76a049b-a07b-423e-bd17-f1f74f3ae333-calico-apiserver-certs\") on node \"172-233-209-21\" DevicePath \"\""
Aug 13 01:03:06.774455 systemd[1]: Removed slice kubepods-besteffort-podf76a049b_a07b_423e_bd17_f1f74f3ae333.slice - libcontainer container kubepods-besteffort-podf76a049b_a07b_423e_bd17_f1f74f3ae333.slice.
Aug 13 01:03:07.009165 kubelet[2717]: I0813 01:03:07.009116 2717 eviction_manager.go:447] "Eviction manager: pods successfully cleaned up" pods=["calico-apiserver/calico-apiserver-76c855dbd4-dfn6z"]
Aug 13 01:03:07.022720 kubelet[2717]: I0813 01:03:07.022694 2717 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:03:07.022804 kubelet[2717]: I0813 01:03:07.022735 2717 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:03:07.026896 kubelet[2717]: I0813 01:03:07.026796 2717 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:03:07.058634 kubelet[2717]: I0813 01:03:07.058578 2717 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:03:07.058707 kubelet[2717]: I0813 01:03:07.058687 2717 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-apiserver/calico-apiserver-76c855dbd4-2qlmw","calico-system/goldmane-58fd7646b9-n678g","kube-system/coredns-7c65d6cfc9-dk5p7","calico-system/calico-kube-controllers-564d8b8748-ps97n","kube-system/coredns-7c65d6cfc9-tp469","calico-system/csi-node-driver-84hvc","calico-system/calico-node-7pdcs","tigera-operator/tigera-operator-5bf8dfcb4-fdng7","calico-system/calico-typha-5fdd567c68-zgxjx","kube-system/kube-controller-manager-172-233-209-21","kube-system/kube-proxy-ff6qp","kube-system/kube-apiserver-172-233-209-21","kube-system/kube-scheduler-172-233-209-21"]
Aug 13 01:03:07.064528 kubelet[2717]: I0813 01:03:07.064499 2717 eviction_manager.go:616] "Eviction manager: pod is evicted successfully" pod="calico-apiserver/calico-apiserver-76c855dbd4-2qlmw"
Aug 13 01:03:07.064528 kubelet[2717]: I0813 01:03:07.064520 2717 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-apiserver/calico-apiserver-76c855dbd4-2qlmw"]
Aug 13 01:03:07.251988 kubelet[2717]: I0813 01:03:07.249364 2717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/655bcff7-644b-4ab8-8ce6-cb473702553b-calico-apiserver-certs\") pod \"655bcff7-644b-4ab8-8ce6-cb473702553b\" (UID: \"655bcff7-644b-4ab8-8ce6-cb473702553b\") "
Aug 13 01:03:07.251988 kubelet[2717]: I0813 01:03:07.249410 2717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tb5kr\" (UniqueName: \"kubernetes.io/projected/655bcff7-644b-4ab8-8ce6-cb473702553b-kube-api-access-tb5kr\") pod \"655bcff7-644b-4ab8-8ce6-cb473702553b\" (UID: \"655bcff7-644b-4ab8-8ce6-cb473702553b\") "
Aug 13 01:03:07.263967 kubelet[2717]: I0813 01:03:07.263904 2717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/655bcff7-644b-4ab8-8ce6-cb473702553b-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "655bcff7-644b-4ab8-8ce6-cb473702553b" (UID: "655bcff7-644b-4ab8-8ce6-cb473702553b"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 01:03:07.264505 systemd[1]: var-lib-kubelet-pods-655bcff7\x2d644b\x2d4ab8\x2d8ce6\x2dcb473702553b-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully.
Aug 13 01:03:07.274768 systemd[1]: var-lib-kubelet-pods-655bcff7\x2d644b\x2d4ab8\x2d8ce6\x2dcb473702553b-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtb5kr.mount: Deactivated successfully.
Aug 13 01:03:07.276875 kubelet[2717]: I0813 01:03:07.276738 2717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/655bcff7-644b-4ab8-8ce6-cb473702553b-kube-api-access-tb5kr" (OuterVolumeSpecName: "kube-api-access-tb5kr") pod "655bcff7-644b-4ab8-8ce6-cb473702553b" (UID: "655bcff7-644b-4ab8-8ce6-cb473702553b"). InnerVolumeSpecName "kube-api-access-tb5kr". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 01:03:07.350610 kubelet[2717]: I0813 01:03:07.350302 2717 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/655bcff7-644b-4ab8-8ce6-cb473702553b-calico-apiserver-certs\") on node \"172-233-209-21\" DevicePath \"\""
Aug 13 01:03:07.350610 kubelet[2717]: I0813 01:03:07.350335 2717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tb5kr\" (UniqueName: \"kubernetes.io/projected/655bcff7-644b-4ab8-8ce6-cb473702553b-kube-api-access-tb5kr\") on node \"172-233-209-21\" DevicePath \"\""
Aug 13 01:03:07.945357 systemd[1]: Removed slice kubepods-besteffort-pod655bcff7_644b_4ab8_8ce6_cb473702553b.slice - libcontainer container kubepods-besteffort-pod655bcff7_644b_4ab8_8ce6_cb473702553b.slice.
Aug 13 01:03:08.065557 kubelet[2717]: I0813 01:03:08.065485 2717 eviction_manager.go:447] "Eviction manager: pods successfully cleaned up" pods=["calico-apiserver/calico-apiserver-76c855dbd4-2qlmw"]
Aug 13 01:03:08.079931 kubelet[2717]: I0813 01:03:08.079904 2717 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:03:08.080417 kubelet[2717]: I0813 01:03:08.080036 2717 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:03:08.084999 kubelet[2717]: I0813 01:03:08.084985 2717 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:03:08.101755 kubelet[2717]: I0813 01:03:08.101479 2717 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:03:08.101755 kubelet[2717]: I0813 01:03:08.101548 2717 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/goldmane-58fd7646b9-n678g","kube-system/coredns-7c65d6cfc9-dk5p7","calico-system/calico-kube-controllers-564d8b8748-ps97n","kube-system/coredns-7c65d6cfc9-tp469","calico-system/csi-node-driver-84hvc","calico-system/calico-node-7pdcs","tigera-operator/tigera-operator-5bf8dfcb4-fdng7","calico-system/calico-typha-5fdd567c68-zgxjx","kube-system/kube-controller-manager-172-233-209-21","kube-system/kube-proxy-ff6qp","kube-system/kube-apiserver-172-233-209-21","kube-system/kube-scheduler-172-233-209-21"]
Aug 13 01:03:08.108631 kubelet[2717]: I0813 01:03:08.108612 2717 eviction_manager.go:616] "Eviction manager: pod is evicted successfully" pod="calico-system/goldmane-58fd7646b9-n678g"
Aug 13 01:03:08.108849 kubelet[2717]: I0813 01:03:08.108835 2717 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-system/goldmane-58fd7646b9-n678g"]
Aug 13 01:03:08.259547 kubelet[2717]: I0813 01:03:08.258419 2717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vjm89\" (UniqueName: \"kubernetes.io/projected/71e7bd3a-41d9-438e-8fb2-d3ebfa340687-kube-api-access-vjm89\") pod \"71e7bd3a-41d9-438e-8fb2-d3ebfa340687\" (UID: \"71e7bd3a-41d9-438e-8fb2-d3ebfa340687\") "
Aug 13 01:03:08.259547 kubelet[2717]: I0813 01:03:08.259033 2717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/71e7bd3a-41d9-438e-8fb2-d3ebfa340687-goldmane-key-pair\") pod \"71e7bd3a-41d9-438e-8fb2-d3ebfa340687\" (UID: \"71e7bd3a-41d9-438e-8fb2-d3ebfa340687\") "
Aug 13 01:03:08.263816 kubelet[2717]: I0813 01:03:08.262432 2717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71e7bd3a-41d9-438e-8fb2-d3ebfa340687-config\") pod \"71e7bd3a-41d9-438e-8fb2-d3ebfa340687\" (UID: \"71e7bd3a-41d9-438e-8fb2-d3ebfa340687\") "
Aug 13 01:03:08.263816 kubelet[2717]: I0813 01:03:08.262459 2717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71e7bd3a-41d9-438e-8fb2-d3ebfa340687-goldmane-ca-bundle\") pod \"71e7bd3a-41d9-438e-8fb2-d3ebfa340687\" (UID: \"71e7bd3a-41d9-438e-8fb2-d3ebfa340687\") "
Aug 13 01:03:08.263816 kubelet[2717]: I0813 01:03:08.262782 2717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71e7bd3a-41d9-438e-8fb2-d3ebfa340687-goldmane-ca-bundle" (OuterVolumeSpecName: "goldmane-ca-bundle") pod "71e7bd3a-41d9-438e-8fb2-d3ebfa340687" (UID: "71e7bd3a-41d9-438e-8fb2-d3ebfa340687"). InnerVolumeSpecName "goldmane-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 01:03:08.263816 kubelet[2717]: I0813 01:03:08.263024 2717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/71e7bd3a-41d9-438e-8fb2-d3ebfa340687-config" (OuterVolumeSpecName: "config") pod "71e7bd3a-41d9-438e-8fb2-d3ebfa340687" (UID: "71e7bd3a-41d9-438e-8fb2-d3ebfa340687"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Aug 13 01:03:08.267577 kubelet[2717]: I0813 01:03:08.267512 2717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/71e7bd3a-41d9-438e-8fb2-d3ebfa340687-goldmane-key-pair" (OuterVolumeSpecName: "goldmane-key-pair") pod "71e7bd3a-41d9-438e-8fb2-d3ebfa340687" (UID: "71e7bd3a-41d9-438e-8fb2-d3ebfa340687"). InnerVolumeSpecName "goldmane-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue ""
Aug 13 01:03:08.269932 systemd[1]: var-lib-kubelet-pods-71e7bd3a\x2d41d9\x2d438e\x2d8fb2\x2dd3ebfa340687-volumes-kubernetes.io\x7esecret-goldmane\x2dkey\x2dpair.mount: Deactivated successfully.
Aug 13 01:03:08.274267 kubelet[2717]: I0813 01:03:08.273781 2717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71e7bd3a-41d9-438e-8fb2-d3ebfa340687-kube-api-access-vjm89" (OuterVolumeSpecName: "kube-api-access-vjm89") pod "71e7bd3a-41d9-438e-8fb2-d3ebfa340687" (UID: "71e7bd3a-41d9-438e-8fb2-d3ebfa340687"). InnerVolumeSpecName "kube-api-access-vjm89". PluginName "kubernetes.io/projected", VolumeGidValue ""
Aug 13 01:03:08.274077 systemd[1]: var-lib-kubelet-pods-71e7bd3a\x2d41d9\x2d438e\x2d8fb2\x2dd3ebfa340687-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dvjm89.mount: Deactivated successfully.
Aug 13 01:03:08.363078 kubelet[2717]: I0813 01:03:08.363014 2717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vjm89\" (UniqueName: \"kubernetes.io/projected/71e7bd3a-41d9-438e-8fb2-d3ebfa340687-kube-api-access-vjm89\") on node \"172-233-209-21\" DevicePath \"\""
Aug 13 01:03:08.363078 kubelet[2717]: I0813 01:03:08.363060 2717 reconciler_common.go:293] "Volume detached for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/71e7bd3a-41d9-438e-8fb2-d3ebfa340687-goldmane-key-pair\") on node \"172-233-209-21\" DevicePath \"\""
Aug 13 01:03:08.363078 kubelet[2717]: I0813 01:03:08.363077 2717 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/71e7bd3a-41d9-438e-8fb2-d3ebfa340687-config\") on node \"172-233-209-21\" DevicePath \"\""
Aug 13 01:03:08.363390 kubelet[2717]: I0813 01:03:08.363090 2717 reconciler_common.go:293] "Volume detached for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/71e7bd3a-41d9-438e-8fb2-d3ebfa340687-goldmane-ca-bundle\") on node \"172-233-209-21\" DevicePath \"\""
Aug 13 01:03:08.377359 containerd[1575]: time="2025-08-13T01:03:08.375712831Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3360880273: write /var/lib/containerd/tmpmounts/containerd-mount3360880273/usr/bin/calico-node: no space left on device"
Aug 13 01:03:08.377359 containerd[1575]: time="2025-08-13T01:03:08.375799201Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163"
Aug 13 01:03:08.379162 kubelet[2717]: E0813 01:03:08.375964 2717 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3360880273: write /var/lib/containerd/tmpmounts/containerd-mount3360880273/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2"
Aug 13 01:03:08.379162 kubelet[2717]: E0813 01:03:08.376005 2717 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3360880273: write /var/lib/containerd/tmpmounts/containerd-mount3360880273/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2"
Aug 13 01:03:08.377986 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3360880273.mount: Deactivated successfully.
Aug 13 01:03:08.381028 kubelet[2717]: E0813 01:03:08.376151 2717 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSGOLDMANESERVER,Value:goldmane.calico-system.svc:7443,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSFLUSHINTERVAL,Value:15,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:first-found,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pmcp8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-7pdcs_calico-system(e6731efa-3c96-4227-b83c-f4c3adff36c6): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3360880273: write /var/lib/containerd/tmpmounts/containerd-mount3360880273/usr/bin/calico-node: no space left on device" logger="UnhandledError"
Aug 13 01:03:08.381531 kubelet[2717]: E0813 01:03:08.378631 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3360880273: write /var/lib/containerd/tmpmounts/containerd-mount3360880273/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-7pdcs" podUID="e6731efa-3c96-4227-b83c-f4c3adff36c6"
Aug 13 01:03:08.771450 systemd[1]: Removed slice kubepods-besteffort-pod71e7bd3a_41d9_438e_8fb2_d3ebfa340687.slice - libcontainer container kubepods-besteffort-pod71e7bd3a_41d9_438e_8fb2_d3ebfa340687.slice.
Aug 13 01:03:08.790893 kubelet[2717]: I0813 01:03:08.790817 2717 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Aug 13 01:03:08.791340 kubelet[2717]: E0813 01:03:08.791328 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Aug 13 01:03:08.938174 kubelet[2717]: E0813 01:03:08.937651 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Aug 13 01:03:09.109316 kubelet[2717]: I0813 01:03:09.109137 2717 eviction_manager.go:447] "Eviction manager: pods successfully cleaned up" pods=["calico-system/goldmane-58fd7646b9-n678g"]
Aug 13 01:03:14.765053 containerd[1575]: time="2025-08-13T01:03:14.764745996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-564d8b8748-ps97n,Uid:a7e8405c-2c82-420c-bac7-a7277571f968,Namespace:calico-system,Attempt:0,}"
Aug 13 01:03:14.765630 containerd[1575]: time="2025-08-13T01:03:14.764762796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-84hvc,Uid:f2b74998-29fc-4213-8313-543c9154bc64,Namespace:calico-system,Attempt:0,}"
Aug 13 01:03:14.830279 containerd[1575]: time="2025-08-13T01:03:14.830038764Z" level=error msg="Failed to destroy network for sandbox \"47c93b8689490381c6e5e90be5d5ed4e1d9207a3bf707f65b5674b53d5e6dcb1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:03:14.833025 systemd[1]: run-netns-cni\x2db58cd557\x2d479a\x2d8820\x2d400f\x2d23dda2ce22de.mount: Deactivated successfully.
Aug 13 01:03:14.834789 containerd[1575]: time="2025-08-13T01:03:14.834724839Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-84hvc,Uid:f2b74998-29fc-4213-8313-543c9154bc64,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"47c93b8689490381c6e5e90be5d5ed4e1d9207a3bf707f65b5674b53d5e6dcb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:03:14.835529 kubelet[2717]: E0813 01:03:14.835472 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47c93b8689490381c6e5e90be5d5ed4e1d9207a3bf707f65b5674b53d5e6dcb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:03:14.836258 kubelet[2717]: E0813 01:03:14.835525 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47c93b8689490381c6e5e90be5d5ed4e1d9207a3bf707f65b5674b53d5e6dcb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
pod="calico-system/csi-node-driver-84hvc"
Aug 13 01:03:14.836258 kubelet[2717]: E0813 01:03:14.835544 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"47c93b8689490381c6e5e90be5d5ed4e1d9207a3bf707f65b5674b53d5e6dcb1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-84hvc"
Aug 13 01:03:14.836258 kubelet[2717]: E0813 01:03:14.835577 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-84hvc_calico-system(f2b74998-29fc-4213-8313-543c9154bc64)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-84hvc_calico-system(f2b74998-29fc-4213-8313-543c9154bc64)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"47c93b8689490381c6e5e90be5d5ed4e1d9207a3bf707f65b5674b53d5e6dcb1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-84hvc" podUID="f2b74998-29fc-4213-8313-543c9154bc64"
Aug 13 01:03:14.836970 containerd[1575]: time="2025-08-13T01:03:14.836709316Z" level=error msg="Failed to destroy network for sandbox \"9987b7f1b5d230ee6fab217b632b08437ac33775ba894ed49784fbd015ec074a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:03:14.840349 containerd[1575]: time="2025-08-13T01:03:14.840277982Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-564d8b8748-ps97n,Uid:a7e8405c-2c82-420c-bac7-a7277571f968,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9987b7f1b5d230ee6fab217b632b08437ac33775ba894ed49784fbd015ec074a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:03:14.840667 systemd[1]: run-netns-cni\x2d58fbb118\x2de79c\x2defb5\x2dd41f\x2dbd451877a18c.mount: Deactivated successfully.
Aug 13 01:03:14.841977 kubelet[2717]: E0813 01:03:14.840666 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9987b7f1b5d230ee6fab217b632b08437ac33775ba894ed49784fbd015ec074a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:03:14.841977 kubelet[2717]: E0813 01:03:14.841014 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9987b7f1b5d230ee6fab217b632b08437ac33775ba894ed49784fbd015ec074a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n"
Aug 13 01:03:14.841977 kubelet[2717]: E0813 01:03:14.841079 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9987b7f1b5d230ee6fab217b632b08437ac33775ba894ed49784fbd015ec074a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n"
Aug 13 01:03:14.841977 kubelet[2717]: E0813 01:03:14.841239 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-564d8b8748-ps97n_calico-system(a7e8405c-2c82-420c-bac7-a7277571f968)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-564d8b8748-ps97n_calico-system(a7e8405c-2c82-420c-bac7-a7277571f968)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9987b7f1b5d230ee6fab217b632b08437ac33775ba894ed49784fbd015ec074a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" podUID="a7e8405c-2c82-420c-bac7-a7277571f968"
Aug 13 01:03:16.765367 kubelet[2717]: E0813 01:03:16.764161 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Aug 13 01:03:16.765367 kubelet[2717]: E0813 01:03:16.764775 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Aug 13 01:03:16.765862 containerd[1575]: time="2025-08-13T01:03:16.765051734Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dk5p7,Uid:6787cc4f-56e4-4094-978c-958d0d7a35ba,Namespace:kube-system,Attempt:0,}"
Aug 13 01:03:16.765862 containerd[1575]: time="2025-08-13T01:03:16.765509654Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tp469,Uid:c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af,Namespace:kube-system,Attempt:0,}"
Aug 13 01:03:16.826860 containerd[1575]: time="2025-08-13T01:03:16.826752064Z" level=error msg="Failed to destroy network for sandbox \"ba3ee4d9543ec1faf6ccb5e6e4b831770111b5058ebb0c5d05a2ac0356290e6f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:03:16.828970 systemd[1]: run-netns-cni\x2d64972f43\x2d06f9\x2de6a2\x2d6861\x2d7cf55a676b70.mount: Deactivated successfully.
Aug 13 01:03:16.830002 containerd[1575]: time="2025-08-13T01:03:16.829972471Z" level=error msg="Failed to destroy network for sandbox \"45c8a64376960b5e5ea7fcbfec33f5e445304a7b42c4308d0fe7a04769142dc6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:03:16.830209 containerd[1575]: time="2025-08-13T01:03:16.830149201Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tp469,Uid:c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba3ee4d9543ec1faf6ccb5e6e4b831770111b5058ebb0c5d05a2ac0356290e6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:03:16.832275 kubelet[2717]: E0813 01:03:16.831139 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba3ee4d9543ec1faf6ccb5e6e4b831770111b5058ebb0c5d05a2ac0356290e6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:03:16.832275 kubelet[2717]: E0813 01:03:16.831285 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba3ee4d9543ec1faf6ccb5e6e4b831770111b5058ebb0c5d05a2ac0356290e6f\": plugin
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-tp469" Aug 13 01:03:16.832275 kubelet[2717]: E0813 01:03:16.831305 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba3ee4d9543ec1faf6ccb5e6e4b831770111b5058ebb0c5d05a2ac0356290e6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-tp469" Aug 13 01:03:16.832538 kubelet[2717]: E0813 01:03:16.832275 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-tp469_kube-system(c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-tp469_kube-system(c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba3ee4d9543ec1faf6ccb5e6e4b831770111b5058ebb0c5d05a2ac0356290e6f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-tp469" podUID="c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af" Aug 13 01:03:16.834407 containerd[1575]: time="2025-08-13T01:03:16.833323178Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dk5p7,Uid:6787cc4f-56e4-4094-978c-958d0d7a35ba,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"45c8a64376960b5e5ea7fcbfec33f5e445304a7b42c4308d0fe7a04769142dc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:03:16.834468 kubelet[2717]: E0813 01:03:16.834289 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45c8a64376960b5e5ea7fcbfec33f5e445304a7b42c4308d0fe7a04769142dc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:03:16.834468 kubelet[2717]: E0813 01:03:16.834332 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45c8a64376960b5e5ea7fcbfec33f5e445304a7b42c4308d0fe7a04769142dc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dk5p7" Aug 13 01:03:16.834468 kubelet[2717]: E0813 01:03:16.834347 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45c8a64376960b5e5ea7fcbfec33f5e445304a7b42c4308d0fe7a04769142dc6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dk5p7" Aug 13 01:03:16.834468 kubelet[2717]: E0813 01:03:16.834376 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-dk5p7_kube-system(6787cc4f-56e4-4094-978c-958d0d7a35ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-dk5p7_kube-system(6787cc4f-56e4-4094-978c-958d0d7a35ba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"45c8a64376960b5e5ea7fcbfec33f5e445304a7b42c4308d0fe7a04769142dc6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-dk5p7" podUID="6787cc4f-56e4-4094-978c-958d0d7a35ba" Aug 13 01:03:16.834826 systemd[1]: run-netns-cni\x2d0a7fec90\x2d9998\x2d6e1d\x2de89b\x2d084b0f34489e.mount: Deactivated successfully. Aug 13 01:03:19.132550 kubelet[2717]: I0813 01:03:19.132520 2717 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:03:19.132550 kubelet[2717]: I0813 01:03:19.132558 2717 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:03:19.135692 kubelet[2717]: I0813 01:03:19.135642 2717 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:03:19.145913 kubelet[2717]: I0813 01:03:19.145892 2717 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:03:19.145996 kubelet[2717]: I0813 01:03:19.145954 2717 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-564d8b8748-ps97n","kube-system/coredns-7c65d6cfc9-tp469","kube-system/coredns-7c65d6cfc9-dk5p7","calico-system/csi-node-driver-84hvc","calico-system/calico-node-7pdcs","tigera-operator/tigera-operator-5bf8dfcb4-fdng7","calico-system/calico-typha-5fdd567c68-zgxjx","kube-system/kube-controller-manager-172-233-209-21","kube-system/kube-proxy-ff6qp","kube-system/kube-apiserver-172-233-209-21","kube-system/kube-scheduler-172-233-209-21"] Aug 13 01:03:19.145996 kubelet[2717]: E0813 01:03:19.145978 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" Aug 13 01:03:19.145996 kubelet[2717]: E0813 01:03:19.145987 2717 eviction_manager.go:598] "Eviction manager: 
cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-tp469" Aug 13 01:03:19.145996 kubelet[2717]: E0813 01:03:19.145993 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dk5p7" Aug 13 01:03:19.145996 kubelet[2717]: E0813 01:03:19.145999 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-84hvc" Aug 13 01:03:19.146148 kubelet[2717]: E0813 01:03:19.146006 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-7pdcs" Aug 13 01:03:19.146582 containerd[1575]: time="2025-08-13T01:03:19.146547623Z" level=info msg="StopContainer for \"60e2160d8b51ff7f0502d3d80c0d6bca69166c1d36c9e23988db5102a7056733\" with timeout 2 (s)" Aug 13 01:03:19.148366 containerd[1575]: time="2025-08-13T01:03:19.148323202Z" level=info msg="Stop container \"60e2160d8b51ff7f0502d3d80c0d6bca69166c1d36c9e23988db5102a7056733\" with signal terminated" Aug 13 01:03:19.166123 systemd[1]: cri-containerd-60e2160d8b51ff7f0502d3d80c0d6bca69166c1d36c9e23988db5102a7056733.scope: Deactivated successfully. Aug 13 01:03:19.167441 systemd[1]: cri-containerd-60e2160d8b51ff7f0502d3d80c0d6bca69166c1d36c9e23988db5102a7056733.scope: Consumed 3.951s CPU time, 90.3M memory peak. 
Aug 13 01:03:19.170164 containerd[1575]: time="2025-08-13T01:03:19.170140174Z" level=info msg="received exit event container_id:\"60e2160d8b51ff7f0502d3d80c0d6bca69166c1d36c9e23988db5102a7056733\" id:\"60e2160d8b51ff7f0502d3d80c0d6bca69166c1d36c9e23988db5102a7056733\" pid:3036 exited_at:{seconds:1755046999 nanos:169857384}" Aug 13 01:03:19.170696 containerd[1575]: time="2025-08-13T01:03:19.170362614Z" level=info msg="TaskExit event in podsandbox handler container_id:\"60e2160d8b51ff7f0502d3d80c0d6bca69166c1d36c9e23988db5102a7056733\" id:\"60e2160d8b51ff7f0502d3d80c0d6bca69166c1d36c9e23988db5102a7056733\" pid:3036 exited_at:{seconds:1755046999 nanos:169857384}" Aug 13 01:03:19.192273 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-60e2160d8b51ff7f0502d3d80c0d6bca69166c1d36c9e23988db5102a7056733-rootfs.mount: Deactivated successfully. Aug 13 01:03:19.198886 containerd[1575]: time="2025-08-13T01:03:19.198850091Z" level=info msg="StopContainer for \"60e2160d8b51ff7f0502d3d80c0d6bca69166c1d36c9e23988db5102a7056733\" returns successfully" Aug 13 01:03:19.199459 containerd[1575]: time="2025-08-13T01:03:19.199431351Z" level=info msg="StopPodSandbox for \"dcd59683c6f8a450aedf245d711a951256bd00e0c8b8d7ea49ba965b5a9d7edd\"" Aug 13 01:03:19.199621 containerd[1575]: time="2025-08-13T01:03:19.199568450Z" level=info msg="Container to stop \"60e2160d8b51ff7f0502d3d80c0d6bca69166c1d36c9e23988db5102a7056733\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 01:03:19.206119 systemd[1]: cri-containerd-dcd59683c6f8a450aedf245d711a951256bd00e0c8b8d7ea49ba965b5a9d7edd.scope: Deactivated successfully. 
Aug 13 01:03:19.211863 containerd[1575]: time="2025-08-13T01:03:19.211837781Z" level=info msg="TaskExit event in podsandbox handler container_id:\"dcd59683c6f8a450aedf245d711a951256bd00e0c8b8d7ea49ba965b5a9d7edd\" id:\"dcd59683c6f8a450aedf245d711a951256bd00e0c8b8d7ea49ba965b5a9d7edd\" pid:2798 exit_status:137 exited_at:{seconds:1755046999 nanos:211635951}" Aug 13 01:03:19.236612 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dcd59683c6f8a450aedf245d711a951256bd00e0c8b8d7ea49ba965b5a9d7edd-rootfs.mount: Deactivated successfully. Aug 13 01:03:19.238228 containerd[1575]: time="2025-08-13T01:03:19.237932650Z" level=info msg="received exit event sandbox_id:\"dcd59683c6f8a450aedf245d711a951256bd00e0c8b8d7ea49ba965b5a9d7edd\" exit_status:137 exited_at:{seconds:1755046999 nanos:211635951}" Aug 13 01:03:19.238804 containerd[1575]: time="2025-08-13T01:03:19.238781519Z" level=info msg="shim disconnected" id=dcd59683c6f8a450aedf245d711a951256bd00e0c8b8d7ea49ba965b5a9d7edd namespace=k8s.io Aug 13 01:03:19.238804 containerd[1575]: time="2025-08-13T01:03:19.238801899Z" level=warning msg="cleaning up after shim disconnected" id=dcd59683c6f8a450aedf245d711a951256bd00e0c8b8d7ea49ba965b5a9d7edd namespace=k8s.io Aug 13 01:03:19.238888 containerd[1575]: time="2025-08-13T01:03:19.238811789Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 01:03:19.239946 containerd[1575]: time="2025-08-13T01:03:19.239906668Z" level=info msg="TearDown network for sandbox \"dcd59683c6f8a450aedf245d711a951256bd00e0c8b8d7ea49ba965b5a9d7edd\" successfully" Aug 13 01:03:19.240009 containerd[1575]: time="2025-08-13T01:03:19.239929398Z" level=info msg="StopPodSandbox for \"dcd59683c6f8a450aedf245d711a951256bd00e0c8b8d7ea49ba965b5a9d7edd\" returns successfully" Aug 13 01:03:19.240578 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-dcd59683c6f8a450aedf245d711a951256bd00e0c8b8d7ea49ba965b5a9d7edd-shm.mount: Deactivated successfully. 
Aug 13 01:03:19.249141 kubelet[2717]: I0813 01:03:19.249125 2717 eviction_manager.go:616] "Eviction manager: pod is evicted successfully" pod="tigera-operator/tigera-operator-5bf8dfcb4-fdng7" Aug 13 01:03:19.249592 kubelet[2717]: I0813 01:03:19.249485 2717 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["tigera-operator/tigera-operator-5bf8dfcb4-fdng7"] Aug 13 01:03:19.271071 kubelet[2717]: I0813 01:03:19.270882 2717 kubelet.go:2306] "Pod admission denied" podUID="5a7ed436-88de-475d-860f-7e08eccfdb38" pod="tigera-operator/tigera-operator-5bf8dfcb4-sp2zv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:19.290284 kubelet[2717]: I0813 01:03:19.290243 2717 kubelet.go:2306] "Pod admission denied" podUID="d7231360-2d42-41c8-a4bd-56eda40e292b" pod="tigera-operator/tigera-operator-5bf8dfcb4-fm2zz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:19.308857 kubelet[2717]: I0813 01:03:19.308816 2717 kubelet.go:2306] "Pod admission denied" podUID="94b823dc-b625-4b5b-a92b-dd017b6120d5" pod="tigera-operator/tigera-operator-5bf8dfcb4-5q4cc" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:19.323401 kubelet[2717]: I0813 01:03:19.323378 2717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-95lcs\" (UniqueName: \"kubernetes.io/projected/131552f4-ce9b-4b8b-9d37-b3dff23e0591-kube-api-access-95lcs\") pod \"131552f4-ce9b-4b8b-9d37-b3dff23e0591\" (UID: \"131552f4-ce9b-4b8b-9d37-b3dff23e0591\") " Aug 13 01:03:19.324331 kubelet[2717]: I0813 01:03:19.323957 2717 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/131552f4-ce9b-4b8b-9d37-b3dff23e0591-var-lib-calico\") pod \"131552f4-ce9b-4b8b-9d37-b3dff23e0591\" (UID: \"131552f4-ce9b-4b8b-9d37-b3dff23e0591\") " Aug 13 01:03:19.324331 kubelet[2717]: I0813 01:03:19.324172 2717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/131552f4-ce9b-4b8b-9d37-b3dff23e0591-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "131552f4-ce9b-4b8b-9d37-b3dff23e0591" (UID: "131552f4-ce9b-4b8b-9d37-b3dff23e0591"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:03:19.330118 systemd[1]: var-lib-kubelet-pods-131552f4\x2dce9b\x2d4b8b\x2d9d37\x2db3dff23e0591-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d95lcs.mount: Deactivated successfully. Aug 13 01:03:19.332071 kubelet[2717]: I0813 01:03:19.330696 2717 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/131552f4-ce9b-4b8b-9d37-b3dff23e0591-kube-api-access-95lcs" (OuterVolumeSpecName: "kube-api-access-95lcs") pod "131552f4-ce9b-4b8b-9d37-b3dff23e0591" (UID: "131552f4-ce9b-4b8b-9d37-b3dff23e0591"). InnerVolumeSpecName "kube-api-access-95lcs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 01:03:19.334429 kubelet[2717]: I0813 01:03:19.334394 2717 kubelet.go:2306] "Pod admission denied" podUID="bf4c5ca7-0a5b-4516-b987-a98f9610aa2a" pod="tigera-operator/tigera-operator-5bf8dfcb4-kqftb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:19.354851 kubelet[2717]: I0813 01:03:19.354567 2717 kubelet.go:2306] "Pod admission denied" podUID="f0319edf-fd67-43d4-a276-cc236cb02328" pod="tigera-operator/tigera-operator-5bf8dfcb4-r6sww" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:19.376916 kubelet[2717]: I0813 01:03:19.376891 2717 kubelet.go:2306] "Pod admission denied" podUID="e2e243a3-1e89-47ad-9991-2ca548229203" pod="tigera-operator/tigera-operator-5bf8dfcb4-xn24v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:19.411968 kubelet[2717]: I0813 01:03:19.411848 2717 kubelet.go:2306] "Pod admission denied" podUID="62d00777-fad3-452b-bf49-80f92b515f73" pod="tigera-operator/tigera-operator-5bf8dfcb4-d5fqf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:19.425435 kubelet[2717]: I0813 01:03:19.425392 2717 reconciler_common.go:293] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/131552f4-ce9b-4b8b-9d37-b3dff23e0591-var-lib-calico\") on node \"172-233-209-21\" DevicePath \"\"" Aug 13 01:03:19.425435 kubelet[2717]: I0813 01:03:19.425419 2717 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-95lcs\" (UniqueName: \"kubernetes.io/projected/131552f4-ce9b-4b8b-9d37-b3dff23e0591-kube-api-access-95lcs\") on node \"172-233-209-21\" DevicePath \"\"" Aug 13 01:03:19.447160 kubelet[2717]: I0813 01:03:19.446636 2717 kubelet.go:2306] "Pod admission denied" podUID="369fbd6c-258a-4d84-9cb6-aa2afd0cec4b" pod="tigera-operator/tigera-operator-5bf8dfcb4-fnh5r" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:19.475890 kubelet[2717]: I0813 01:03:19.475836 2717 kubelet.go:2306] "Pod admission denied" podUID="3ac8ac92-439c-4524-b4d6-318ab98799ee" pod="tigera-operator/tigera-operator-5bf8dfcb4-lsntw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:19.526381 kubelet[2717]: I0813 01:03:19.525970 2717 kubelet.go:2306] "Pod admission denied" podUID="7c80c911-3014-406c-8446-b5f618da0034" pod="tigera-operator/tigera-operator-5bf8dfcb4-gvtr7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:19.670338 kubelet[2717]: I0813 01:03:19.669891 2717 kubelet.go:2306] "Pod admission denied" podUID="d9604f06-f5a1-44cc-9416-1a5ca836a35d" pod="tigera-operator/tigera-operator-5bf8dfcb4-n6mdb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:19.765954 kubelet[2717]: E0813 01:03:19.765908 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\"\"" pod="calico-system/calico-node-7pdcs" podUID="e6731efa-3c96-4227-b83c-f4c3adff36c6" Aug 13 01:03:19.819314 kubelet[2717]: I0813 01:03:19.819235 2717 kubelet.go:2306] "Pod admission denied" podUID="a4621324-4666-491a-ad9e-2d68a7d22b6d" pod="tigera-operator/tigera-operator-5bf8dfcb4-9n9ww" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:19.957396 kubelet[2717]: I0813 01:03:19.957328 2717 scope.go:117] "RemoveContainer" containerID="60e2160d8b51ff7f0502d3d80c0d6bca69166c1d36c9e23988db5102a7056733" Aug 13 01:03:19.958767 containerd[1575]: time="2025-08-13T01:03:19.958730510Z" level=info msg="RemoveContainer for \"60e2160d8b51ff7f0502d3d80c0d6bca69166c1d36c9e23988db5102a7056733\"" Aug 13 01:03:19.963501 containerd[1575]: time="2025-08-13T01:03:19.963436556Z" level=info msg="RemoveContainer for \"60e2160d8b51ff7f0502d3d80c0d6bca69166c1d36c9e23988db5102a7056733\" returns successfully" Aug 13 01:03:19.963901 kubelet[2717]: I0813 01:03:19.963884 2717 scope.go:117] "RemoveContainer" containerID="60e2160d8b51ff7f0502d3d80c0d6bca69166c1d36c9e23988db5102a7056733" Aug 13 01:03:19.964145 containerd[1575]: time="2025-08-13T01:03:19.964115005Z" level=error msg="ContainerStatus for \"60e2160d8b51ff7f0502d3d80c0d6bca69166c1d36c9e23988db5102a7056733\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"60e2160d8b51ff7f0502d3d80c0d6bca69166c1d36c9e23988db5102a7056733\": not found" Aug 13 01:03:19.964349 kubelet[2717]: E0813 01:03:19.964274 2717 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"60e2160d8b51ff7f0502d3d80c0d6bca69166c1d36c9e23988db5102a7056733\": not found" containerID="60e2160d8b51ff7f0502d3d80c0d6bca69166c1d36c9e23988db5102a7056733" Aug 13 01:03:19.964349 kubelet[2717]: I0813 01:03:19.964318 2717 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"60e2160d8b51ff7f0502d3d80c0d6bca69166c1d36c9e23988db5102a7056733"} err="failed to get container status \"60e2160d8b51ff7f0502d3d80c0d6bca69166c1d36c9e23988db5102a7056733\": rpc error: code = NotFound desc = an error occurred when try to find container \"60e2160d8b51ff7f0502d3d80c0d6bca69166c1d36c9e23988db5102a7056733\": not found" Aug 13 01:03:19.965579 
systemd[1]: Removed slice kubepods-besteffort-pod131552f4_ce9b_4b8b_9d37_b3dff23e0591.slice - libcontainer container kubepods-besteffort-pod131552f4_ce9b_4b8b_9d37_b3dff23e0591.slice. Aug 13 01:03:19.965790 systemd[1]: kubepods-besteffort-pod131552f4_ce9b_4b8b_9d37_b3dff23e0591.slice: Consumed 3.978s CPU time, 90.6M memory peak. Aug 13 01:03:19.974510 kubelet[2717]: I0813 01:03:19.974379 2717 kubelet.go:2306] "Pod admission denied" podUID="3a8f5863-00f1-4fc0-b6a4-ef6470bbed05" pod="tigera-operator/tigera-operator-5bf8dfcb4-98h6s" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:20.120982 kubelet[2717]: I0813 01:03:20.120077 2717 kubelet.go:2306] "Pod admission denied" podUID="8436f8f3-8077-4834-94c0-afb782ee7958" pod="tigera-operator/tigera-operator-5bf8dfcb4-rckvk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:20.250062 kubelet[2717]: I0813 01:03:20.249943 2717 eviction_manager.go:447] "Eviction manager: pods successfully cleaned up" pods=["tigera-operator/tigera-operator-5bf8dfcb4-fdng7"] Aug 13 01:03:20.269772 kubelet[2717]: I0813 01:03:20.269733 2717 kubelet.go:2306] "Pod admission denied" podUID="545d0059-eab2-4254-b57e-888449477e8c" pod="tigera-operator/tigera-operator-5bf8dfcb4-thl2t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:20.421756 kubelet[2717]: I0813 01:03:20.421702 2717 kubelet.go:2306] "Pod admission denied" podUID="c4c51594-df0f-44cc-9e2e-78f8e3a7039a" pod="tigera-operator/tigera-operator-5bf8dfcb4-ldg4g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:20.571247 kubelet[2717]: I0813 01:03:20.571103 2717 kubelet.go:2306] "Pod admission denied" podUID="9c75a4cc-ed49-45b3-b51f-43eef9cd245b" pod="tigera-operator/tigera-operator-5bf8dfcb4-dql5x" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:20.720853 kubelet[2717]: I0813 01:03:20.720783 2717 kubelet.go:2306] "Pod admission denied" podUID="43e3474c-008f-41d2-8d6f-0f6ecd796a6d" pod="tigera-operator/tigera-operator-5bf8dfcb4-qbwhp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:20.819679 kubelet[2717]: I0813 01:03:20.819598 2717 kubelet.go:2306] "Pod admission denied" podUID="6a1d9c79-9233-420a-a1da-6a4a43ab16e1" pod="tigera-operator/tigera-operator-5bf8dfcb4-rgd9l" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:20.969837 kubelet[2717]: I0813 01:03:20.969795 2717 kubelet.go:2306] "Pod admission denied" podUID="265e86ce-91c3-4900-bdfc-c61edcaafca8" pod="tigera-operator/tigera-operator-5bf8dfcb4-gxpr8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:21.219682 kubelet[2717]: I0813 01:03:21.219635 2717 kubelet.go:2306] "Pod admission denied" podUID="e835578c-5c2f-4a5d-b345-56a09942f5ab" pod="tigera-operator/tigera-operator-5bf8dfcb4-6www7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:21.370230 kubelet[2717]: I0813 01:03:21.369975 2717 kubelet.go:2306] "Pod admission denied" podUID="c5e6ab20-5ca3-48f9-9d74-15514e6d4bad" pod="tigera-operator/tigera-operator-5bf8dfcb4-g9z7l" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:21.520852 kubelet[2717]: I0813 01:03:21.520805 2717 kubelet.go:2306] "Pod admission denied" podUID="324c9045-d61f-40bc-b016-eee3947188c0" pod="tigera-operator/tigera-operator-5bf8dfcb4-ss25m" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:21.672453 kubelet[2717]: I0813 01:03:21.672316 2717 kubelet.go:2306] "Pod admission denied" podUID="cd8aaa29-013c-45c8-a0a7-e645a8828879" pod="tigera-operator/tigera-operator-5bf8dfcb4-c4dgt" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:21.822864 kubelet[2717]: I0813 01:03:21.822798 2717 kubelet.go:2306] "Pod admission denied" podUID="2fe3e80b-db72-401c-83ad-37565c642ca9" pod="tigera-operator/tigera-operator-5bf8dfcb4-rcpm5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:21.920833 kubelet[2717]: I0813 01:03:21.920795 2717 kubelet.go:2306] "Pod admission denied" podUID="2768798e-00a6-487a-b3c0-fb2af38f3c07" pod="tigera-operator/tigera-operator-5bf8dfcb4-xjvhp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:21.975387 kubelet[2717]: I0813 01:03:21.975334 2717 kubelet.go:2306] "Pod admission denied" podUID="119bce75-f20d-4163-abfe-f341a2ec6b1a" pod="tigera-operator/tigera-operator-5bf8dfcb4-xsx5d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:22.068995 kubelet[2717]: I0813 01:03:22.068948 2717 kubelet.go:2306] "Pod admission denied" podUID="db4af58d-247e-4542-8eb2-b478f9f261dc" pod="tigera-operator/tigera-operator-5bf8dfcb4-jhdg9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:22.270331 kubelet[2717]: I0813 01:03:22.270231 2717 kubelet.go:2306] "Pod admission denied" podUID="b4268a67-c46d-4ec6-90f0-2f7bd4dd68d5" pod="tigera-operator/tigera-operator-5bf8dfcb4-sjsgv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:22.371634 kubelet[2717]: I0813 01:03:22.371564 2717 kubelet.go:2306] "Pod admission denied" podUID="5280399f-4369-48a2-8146-91fec3f03e3a" pod="tigera-operator/tigera-operator-5bf8dfcb4-k5f26" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:22.469803 kubelet[2717]: I0813 01:03:22.469722 2717 kubelet.go:2306] "Pod admission denied" podUID="5ec0a44d-ca54-4d10-a31c-a0ab5a0867f6" pod="tigera-operator/tigera-operator-5bf8dfcb4-85vnx" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:22.569681 kubelet[2717]: I0813 01:03:22.569577 2717 kubelet.go:2306] "Pod admission denied" podUID="fbc92a69-15f9-4f72-9373-8ae86b6c92f1" pod="tigera-operator/tigera-operator-5bf8dfcb4-t7t48" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:22.670370 kubelet[2717]: I0813 01:03:22.670326 2717 kubelet.go:2306] "Pod admission denied" podUID="f22e276f-5659-4c1f-967e-4f17aba2b70e" pod="tigera-operator/tigera-operator-5bf8dfcb4-mnd4g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:22.773151 kubelet[2717]: I0813 01:03:22.773071 2717 kubelet.go:2306] "Pod admission denied" podUID="a352eaf3-7f30-4f1b-8bce-17dd168dce68" pod="tigera-operator/tigera-operator-5bf8dfcb4-92v4p" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:22.817950 kubelet[2717]: I0813 01:03:22.817913 2717 kubelet.go:2306] "Pod admission denied" podUID="9c81317c-204c-45e1-8b1c-89e36defee23" pod="tigera-operator/tigera-operator-5bf8dfcb4-vpdn9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:22.920171 kubelet[2717]: I0813 01:03:22.920066 2717 kubelet.go:2306] "Pod admission denied" podUID="9fe4ed5a-d11d-4c60-a05c-6292d240770c" pod="tigera-operator/tigera-operator-5bf8dfcb4-f84zw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:23.120799 kubelet[2717]: I0813 01:03:23.120640 2717 kubelet.go:2306] "Pod admission denied" podUID="9d952d9e-dbdc-4850-9cc5-944327077785" pod="tigera-operator/tigera-operator-5bf8dfcb4-gz8vz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:23.224748 kubelet[2717]: I0813 01:03:23.224714 2717 kubelet.go:2306] "Pod admission denied" podUID="0239e28f-e03f-435c-b6af-0e8bd9ec9a1c" pod="tigera-operator/tigera-operator-5bf8dfcb4-6mdpj" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:23.321713 kubelet[2717]: I0813 01:03:23.321670 2717 kubelet.go:2306] "Pod admission denied" podUID="042611a0-3f34-468a-b72e-8965ccb2dc82" pod="tigera-operator/tigera-operator-5bf8dfcb4-fzf98" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:23.418949 kubelet[2717]: I0813 01:03:23.418911 2717 kubelet.go:2306] "Pod admission denied" podUID="4dcd43cd-8884-4fa6-9456-fadcba19857f" pod="tigera-operator/tigera-operator-5bf8dfcb4-l95d2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:23.519637 kubelet[2717]: I0813 01:03:23.519539 2717 kubelet.go:2306] "Pod admission denied" podUID="80e3bb7e-80a7-41dd-94e3-8576dc0974af" pod="tigera-operator/tigera-operator-5bf8dfcb4-m9hzs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:23.620505 kubelet[2717]: I0813 01:03:23.620442 2717 kubelet.go:2306] "Pod admission denied" podUID="0753c023-19e5-4756-b83f-516ff467bad4" pod="tigera-operator/tigera-operator-5bf8dfcb4-cl2hm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:23.669413 kubelet[2717]: I0813 01:03:23.669180 2717 kubelet.go:2306] "Pod admission denied" podUID="484b714c-07d1-44a5-8e7e-37f767649487" pod="tigera-operator/tigera-operator-5bf8dfcb4-pjzl7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:23.770932 kubelet[2717]: I0813 01:03:23.770824 2717 kubelet.go:2306] "Pod admission denied" podUID="30b29424-1dab-4fd5-bc6e-5e654c406141" pod="tigera-operator/tigera-operator-5bf8dfcb4-2rn9g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:23.980681 kubelet[2717]: I0813 01:03:23.980446 2717 kubelet.go:2306] "Pod admission denied" podUID="6ecb3c57-e064-4f0f-ae4f-9a9a196bc615" pod="tigera-operator/tigera-operator-5bf8dfcb4-5rzxm" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:24.071603 kubelet[2717]: I0813 01:03:24.071486 2717 kubelet.go:2306] "Pod admission denied" podUID="c7832e4e-b6ca-4cb3-9db0-f6ed0bd1acef" pod="tigera-operator/tigera-operator-5bf8dfcb4-dls8r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:24.169270 kubelet[2717]: I0813 01:03:24.169150 2717 kubelet.go:2306] "Pod admission denied" podUID="6627dc15-4160-4434-bfc2-93e6cbcf6ad6" pod="tigera-operator/tigera-operator-5bf8dfcb4-rzcqt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:24.271455 kubelet[2717]: I0813 01:03:24.271408 2717 kubelet.go:2306] "Pod admission denied" podUID="43caafb5-7019-4b79-a17d-dcafa03e8bf2" pod="tigera-operator/tigera-operator-5bf8dfcb4-96fvm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:24.369866 kubelet[2717]: I0813 01:03:24.369734 2717 kubelet.go:2306] "Pod admission denied" podUID="51e00c38-4e1a-4fcc-92f2-1daa8f7d8e12" pod="tigera-operator/tigera-operator-5bf8dfcb4-xhzlp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:24.469321 kubelet[2717]: I0813 01:03:24.469268 2717 kubelet.go:2306] "Pod admission denied" podUID="f6dd4753-55d1-463e-91e1-ce78cddc26ff" pod="tigera-operator/tigera-operator-5bf8dfcb4-nmtnn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:24.571175 kubelet[2717]: I0813 01:03:24.571003 2717 kubelet.go:2306] "Pod admission denied" podUID="e3432522-9763-4b4c-8eef-a0c2d3c1018a" pod="tigera-operator/tigera-operator-5bf8dfcb4-kz5vr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:24.673439 kubelet[2717]: I0813 01:03:24.673076 2717 kubelet.go:2306] "Pod admission denied" podUID="19017dce-36a0-4773-9891-f0986344261f" pod="tigera-operator/tigera-operator-5bf8dfcb4-5r88z" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:24.758506 containerd[1575]: time="2025-08-13T01:03:24.758341018Z" level=info msg="StopPodSandbox for \"dcd59683c6f8a450aedf245d711a951256bd00e0c8b8d7ea49ba965b5a9d7edd\"" Aug 13 01:03:24.758506 containerd[1575]: time="2025-08-13T01:03:24.758453748Z" level=info msg="TearDown network for sandbox \"dcd59683c6f8a450aedf245d711a951256bd00e0c8b8d7ea49ba965b5a9d7edd\" successfully" Aug 13 01:03:24.758506 containerd[1575]: time="2025-08-13T01:03:24.758465128Z" level=info msg="StopPodSandbox for \"dcd59683c6f8a450aedf245d711a951256bd00e0c8b8d7ea49ba965b5a9d7edd\" returns successfully" Aug 13 01:03:24.760385 containerd[1575]: time="2025-08-13T01:03:24.759241927Z" level=info msg="RemovePodSandbox for \"dcd59683c6f8a450aedf245d711a951256bd00e0c8b8d7ea49ba965b5a9d7edd\"" Aug 13 01:03:24.760385 containerd[1575]: time="2025-08-13T01:03:24.759258297Z" level=info msg="Forcibly stopping sandbox \"dcd59683c6f8a450aedf245d711a951256bd00e0c8b8d7ea49ba965b5a9d7edd\"" Aug 13 01:03:24.760385 containerd[1575]: time="2025-08-13T01:03:24.759309827Z" level=info msg="TearDown network for sandbox \"dcd59683c6f8a450aedf245d711a951256bd00e0c8b8d7ea49ba965b5a9d7edd\" successfully" Aug 13 01:03:24.761735 containerd[1575]: time="2025-08-13T01:03:24.761543856Z" level=info msg="Ensure that sandbox dcd59683c6f8a450aedf245d711a951256bd00e0c8b8d7ea49ba965b5a9d7edd in task-service has been cleanup successfully" Aug 13 01:03:24.764255 containerd[1575]: time="2025-08-13T01:03:24.764237104Z" level=info msg="RemovePodSandbox \"dcd59683c6f8a450aedf245d711a951256bd00e0c8b8d7ea49ba965b5a9d7edd\" returns successfully" Aug 13 01:03:24.781119 kubelet[2717]: I0813 01:03:24.780429 2717 kubelet.go:2306] "Pod admission denied" podUID="704af3bf-fb58-4e28-a0f4-c88b154e8a4e" pod="tigera-operator/tigera-operator-5bf8dfcb4-tqvlr" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:24.971917 kubelet[2717]: I0813 01:03:24.971865 2717 kubelet.go:2306] "Pod admission denied" podUID="e2195838-8cf8-4d62-a49b-895c970a9d45" pod="tigera-operator/tigera-operator-5bf8dfcb4-dck59" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:25.070071 kubelet[2717]: I0813 01:03:25.070033 2717 kubelet.go:2306] "Pod admission denied" podUID="4647c9e7-5de4-4f33-9bb8-27c3811f24c7" pod="tigera-operator/tigera-operator-5bf8dfcb4-5s7k2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:25.172516 kubelet[2717]: I0813 01:03:25.172474 2717 kubelet.go:2306] "Pod admission denied" podUID="3ab01309-42d8-40cb-b1c8-16e53248872a" pod="tigera-operator/tigera-operator-5bf8dfcb4-r94fd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:25.270034 kubelet[2717]: I0813 01:03:25.269875 2717 kubelet.go:2306] "Pod admission denied" podUID="cac39687-3ae4-4c4b-97d0-ad812e64b473" pod="tigera-operator/tigera-operator-5bf8dfcb4-dgv87" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:25.370082 kubelet[2717]: I0813 01:03:25.370044 2717 kubelet.go:2306] "Pod admission denied" podUID="0340ad9e-c87d-4e23-a92e-9f25fdc4f130" pod="tigera-operator/tigera-operator-5bf8dfcb4-7qkzd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:25.471922 kubelet[2717]: I0813 01:03:25.471878 2717 kubelet.go:2306] "Pod admission denied" podUID="6f6ac97f-b011-4b89-9150-d93036c5332e" pod="tigera-operator/tigera-operator-5bf8dfcb4-g9665" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:25.519700 kubelet[2717]: I0813 01:03:25.519651 2717 kubelet.go:2306] "Pod admission denied" podUID="e2bd9a8a-749f-4130-8462-49f6950f19e0" pod="tigera-operator/tigera-operator-5bf8dfcb4-ng2j4" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:25.620733 kubelet[2717]: I0813 01:03:25.620507 2717 kubelet.go:2306] "Pod admission denied" podUID="dee08955-a8f3-487a-bb28-ffb7d6b7f5db" pod="tigera-operator/tigera-operator-5bf8dfcb4-xn7wn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:25.825539 kubelet[2717]: I0813 01:03:25.825505 2717 kubelet.go:2306] "Pod admission denied" podUID="7c1ee6cd-44bc-4746-853f-62ccd916cafb" pod="tigera-operator/tigera-operator-5bf8dfcb4-nb78x" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:25.921339 kubelet[2717]: I0813 01:03:25.921024 2717 kubelet.go:2306] "Pod admission denied" podUID="0cc0f16a-abdb-4fdb-8efa-3c8655d56b55" pod="tigera-operator/tigera-operator-5bf8dfcb4-qz66n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:26.020803 kubelet[2717]: I0813 01:03:26.020753 2717 kubelet.go:2306] "Pod admission denied" podUID="fbde6614-060b-4ead-8cb2-ca7a7c3e0549" pod="tigera-operator/tigera-operator-5bf8dfcb4-rzslv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:26.221581 kubelet[2717]: I0813 01:03:26.221534 2717 kubelet.go:2306] "Pod admission denied" podUID="a9ad9027-baf8-4796-b20c-86d5dc801310" pod="tigera-operator/tigera-operator-5bf8dfcb4-ftv8k" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:26.320529 kubelet[2717]: I0813 01:03:26.320464 2717 kubelet.go:2306] "Pod admission denied" podUID="8aa813d9-4591-4cae-a2bc-42c502912dcb" pod="tigera-operator/tigera-operator-5bf8dfcb4-gfwgm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:26.421960 kubelet[2717]: I0813 01:03:26.421734 2717 kubelet.go:2306] "Pod admission denied" podUID="bc6e7057-6bc5-4e51-bbaf-a286fbe5c5e0" pod="tigera-operator/tigera-operator-5bf8dfcb4-hkdhm" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:26.520473 kubelet[2717]: I0813 01:03:26.520358 2717 kubelet.go:2306] "Pod admission denied" podUID="1ecc4abb-59f8-4902-9714-56b6cfea4946" pod="tigera-operator/tigera-operator-5bf8dfcb4-2l8mf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:26.622099 kubelet[2717]: I0813 01:03:26.622040 2717 kubelet.go:2306] "Pod admission denied" podUID="8d6c45a5-2db8-43e9-9c8f-ba016276e43d" pod="tigera-operator/tigera-operator-5bf8dfcb4-d7w2x" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:26.720805 kubelet[2717]: I0813 01:03:26.720770 2717 kubelet.go:2306] "Pod admission denied" podUID="f0274dc7-3f1d-4f2c-b3bb-88d5c4e89764" pod="tigera-operator/tigera-operator-5bf8dfcb4-cwsr5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:26.766922 containerd[1575]: time="2025-08-13T01:03:26.766492767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-84hvc,Uid:f2b74998-29fc-4213-8313-543c9154bc64,Namespace:calico-system,Attempt:0,}" Aug 13 01:03:26.822024 containerd[1575]: time="2025-08-13T01:03:26.821625569Z" level=error msg="Failed to destroy network for sandbox \"fe77bf57866eaa43339435676a9075cfdacb36e29bd542b645ff72738d4013cf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:03:26.824440 kubelet[2717]: I0813 01:03:26.823335 2717 kubelet.go:2306] "Pod admission denied" podUID="0fc47aaa-bfb8-4101-841a-9ed1241c94e1" pod="tigera-operator/tigera-operator-5bf8dfcb4-4x7t5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:26.826174 systemd[1]: run-netns-cni\x2d8e31effc\x2dbe84\x2d1c4e\x2d4061\x2da5a79c046097.mount: Deactivated successfully. 
Aug 13 01:03:26.827574 containerd[1575]: time="2025-08-13T01:03:26.826826007Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-84hvc,Uid:f2b74998-29fc-4213-8313-543c9154bc64,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe77bf57866eaa43339435676a9075cfdacb36e29bd542b645ff72738d4013cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:03:26.828775 kubelet[2717]: E0813 01:03:26.828738 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe77bf57866eaa43339435676a9075cfdacb36e29bd542b645ff72738d4013cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:03:26.829118 kubelet[2717]: E0813 01:03:26.828787 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe77bf57866eaa43339435676a9075cfdacb36e29bd542b645ff72738d4013cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-84hvc" Aug 13 01:03:26.829118 kubelet[2717]: E0813 01:03:26.828805 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe77bf57866eaa43339435676a9075cfdacb36e29bd542b645ff72738d4013cf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-84hvc" 
Aug 13 01:03:26.829118 kubelet[2717]: E0813 01:03:26.828853 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-84hvc_calico-system(f2b74998-29fc-4213-8313-543c9154bc64)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-84hvc_calico-system(f2b74998-29fc-4213-8313-543c9154bc64)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe77bf57866eaa43339435676a9075cfdacb36e29bd542b645ff72738d4013cf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-84hvc" podUID="f2b74998-29fc-4213-8313-543c9154bc64" Aug 13 01:03:26.920406 kubelet[2717]: I0813 01:03:26.920368 2717 kubelet.go:2306] "Pod admission denied" podUID="72bc0a81-0913-4168-a810-75c9bde59124" pod="tigera-operator/tigera-operator-5bf8dfcb4-lb9nj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:27.021283 kubelet[2717]: I0813 01:03:27.020925 2717 kubelet.go:2306] "Pod admission denied" podUID="8f723b3b-0976-4950-a2b6-224a10522f72" pod="tigera-operator/tigera-operator-5bf8dfcb4-vl7ph" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:27.222046 kubelet[2717]: I0813 01:03:27.221989 2717 kubelet.go:2306] "Pod admission denied" podUID="52a191be-259e-49f3-b34b-0a7159b23bf2" pod="tigera-operator/tigera-operator-5bf8dfcb4-p964h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:27.323993 kubelet[2717]: I0813 01:03:27.323948 2717 kubelet.go:2306] "Pod admission denied" podUID="ac105f9b-18d8-4038-88cc-924d7025c475" pod="tigera-operator/tigera-operator-5bf8dfcb4-cdxxk" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:27.371968 kubelet[2717]: I0813 01:03:27.371929 2717 kubelet.go:2306] "Pod admission denied" podUID="53763808-9e0f-4b3b-8ca7-5b42b14a53b0" pod="tigera-operator/tigera-operator-5bf8dfcb4-ljv62" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:27.471423 kubelet[2717]: I0813 01:03:27.471384 2717 kubelet.go:2306] "Pod admission denied" podUID="e9217d11-5288-4f34-810d-4bcf5f0d3d46" pod="tigera-operator/tigera-operator-5bf8dfcb4-8hh46" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:27.570414 kubelet[2717]: I0813 01:03:27.569681 2717 kubelet.go:2306] "Pod admission denied" podUID="de4322da-881a-4527-840f-90fb1935238b" pod="tigera-operator/tigera-operator-5bf8dfcb4-cwssw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:27.673911 kubelet[2717]: I0813 01:03:27.673855 2717 kubelet.go:2306] "Pod admission denied" podUID="8c82cd64-5838-4b27-ab88-fa41593834c8" pod="tigera-operator/tigera-operator-5bf8dfcb4-wfgs5" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:27.765386 kubelet[2717]: E0813 01:03:27.765342 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:03:27.767030 containerd[1575]: time="2025-08-13T01:03:27.766992670Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dk5p7,Uid:6787cc4f-56e4-4094-978c-958d0d7a35ba,Namespace:kube-system,Attempt:0,}" Aug 13 01:03:27.768169 containerd[1575]: time="2025-08-13T01:03:27.768081249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-564d8b8748-ps97n,Uid:a7e8405c-2c82-420c-bac7-a7277571f968,Namespace:calico-system,Attempt:0,}" Aug 13 01:03:27.795377 kubelet[2717]: I0813 01:03:27.794989 2717 kubelet.go:2306] "Pod admission denied" podUID="13063515-5586-4a51-bfeb-c6af0e191cae" pod="tigera-operator/tigera-operator-5bf8dfcb4-q7nll" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:27.856928 containerd[1575]: time="2025-08-13T01:03:27.856448777Z" level=error msg="Failed to destroy network for sandbox \"1556776cfdb620e051db08510944e65b86a8403cf3703e87278df6f074d69ee9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:03:27.858443 containerd[1575]: time="2025-08-13T01:03:27.858179846Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-564d8b8748-ps97n,Uid:a7e8405c-2c82-420c-bac7-a7277571f968,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1556776cfdb620e051db08510944e65b86a8403cf3703e87278df6f074d69ee9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:03:27.860059 systemd[1]: run-netns-cni\x2da5c7e0bc\x2da610\x2d4dc5\x2dccb3\x2d70e26128ccad.mount: Deactivated successfully. 
Aug 13 01:03:27.860954 kubelet[2717]: E0813 01:03:27.860440 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1556776cfdb620e051db08510944e65b86a8403cf3703e87278df6f074d69ee9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:03:27.860954 kubelet[2717]: E0813 01:03:27.860492 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1556776cfdb620e051db08510944e65b86a8403cf3703e87278df6f074d69ee9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" Aug 13 01:03:27.860954 kubelet[2717]: E0813 01:03:27.860530 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1556776cfdb620e051db08510944e65b86a8403cf3703e87278df6f074d69ee9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" Aug 13 01:03:27.860954 kubelet[2717]: E0813 01:03:27.860569 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-564d8b8748-ps97n_calico-system(a7e8405c-2c82-420c-bac7-a7277571f968)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-564d8b8748-ps97n_calico-system(a7e8405c-2c82-420c-bac7-a7277571f968)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"1556776cfdb620e051db08510944e65b86a8403cf3703e87278df6f074d69ee9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" podUID="a7e8405c-2c82-420c-bac7-a7277571f968" Aug 13 01:03:27.863264 containerd[1575]: time="2025-08-13T01:03:27.862475924Z" level=error msg="Failed to destroy network for sandbox \"3d3916db580c59970d0ed8e35fe5aba7dd854e882f38446eb9b68de1e0e4b940\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:03:27.865212 containerd[1575]: time="2025-08-13T01:03:27.864416753Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dk5p7,Uid:6787cc4f-56e4-4094-978c-958d0d7a35ba,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d3916db580c59970d0ed8e35fe5aba7dd854e882f38446eb9b68de1e0e4b940\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:03:27.865536 kubelet[2717]: E0813 01:03:27.865414 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d3916db580c59970d0ed8e35fe5aba7dd854e882f38446eb9b68de1e0e4b940\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:03:27.865536 kubelet[2717]: E0813 01:03:27.865447 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"3d3916db580c59970d0ed8e35fe5aba7dd854e882f38446eb9b68de1e0e4b940\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dk5p7" Aug 13 01:03:27.865536 kubelet[2717]: E0813 01:03:27.865463 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d3916db580c59970d0ed8e35fe5aba7dd854e882f38446eb9b68de1e0e4b940\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dk5p7" Aug 13 01:03:27.865536 kubelet[2717]: E0813 01:03:27.865491 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-dk5p7_kube-system(6787cc4f-56e4-4094-978c-958d0d7a35ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-dk5p7_kube-system(6787cc4f-56e4-4094-978c-958d0d7a35ba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d3916db580c59970d0ed8e35fe5aba7dd854e882f38446eb9b68de1e0e4b940\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-dk5p7" podUID="6787cc4f-56e4-4094-978c-958d0d7a35ba" Aug 13 01:03:27.865513 systemd[1]: run-netns-cni\x2df5d691fe\x2d5efb\x2d9c39\x2dd2e2\x2d7dcd175fcee9.mount: Deactivated successfully. Aug 13 01:03:27.877079 kubelet[2717]: I0813 01:03:27.877045 2717 kubelet.go:2306] "Pod admission denied" podUID="d0843238-219e-4a68-bf84-2afe902f4e8d" pod="tigera-operator/tigera-operator-5bf8dfcb4-8gm2s" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:28.071915 kubelet[2717]: I0813 01:03:28.071862 2717 kubelet.go:2306] "Pod admission denied" podUID="f10cad52-aa6f-4dd5-90da-4720beb657c1" pod="tigera-operator/tigera-operator-5bf8dfcb4-c52j7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:28.170909 kubelet[2717]: I0813 01:03:28.170050 2717 kubelet.go:2306] "Pod admission denied" podUID="429f069d-ec1d-4579-ae9a-6804983fd3d0" pod="tigera-operator/tigera-operator-5bf8dfcb4-vnf7w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:28.271634 kubelet[2717]: I0813 01:03:28.271598 2717 kubelet.go:2306] "Pod admission denied" podUID="43bade5d-2988-4ec8-a661-6b3672bd717d" pod="tigera-operator/tigera-operator-5bf8dfcb4-2txnk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:28.369743 kubelet[2717]: I0813 01:03:28.369705 2717 kubelet.go:2306] "Pod admission denied" podUID="cab6b155-1137-4bb6-ba5d-b431c6d5b08a" pod="tigera-operator/tigera-operator-5bf8dfcb4-lm8l8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:28.419715 kubelet[2717]: I0813 01:03:28.419685 2717 kubelet.go:2306] "Pod admission denied" podUID="a53c3734-07e3-4a70-ba6f-83add35ef9c2" pod="tigera-operator/tigera-operator-5bf8dfcb4-9pxb2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:28.522530 kubelet[2717]: I0813 01:03:28.521755 2717 kubelet.go:2306] "Pod admission denied" podUID="678729ec-81e1-450a-a360-855d8743443b" pod="tigera-operator/tigera-operator-5bf8dfcb4-5qcfc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:28.721296 kubelet[2717]: I0813 01:03:28.721245 2717 kubelet.go:2306] "Pod admission denied" podUID="6da232c2-bffe-48f0-9343-78035dee6ada" pod="tigera-operator/tigera-operator-5bf8dfcb4-svmtg" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:28.822056 kubelet[2717]: I0813 01:03:28.821381 2717 kubelet.go:2306] "Pod admission denied" podUID="205e5f2f-ff4a-468d-a3e7-e26876054e40" pod="tigera-operator/tigera-operator-5bf8dfcb4-42gnt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:28.921945 kubelet[2717]: I0813 01:03:28.921906 2717 kubelet.go:2306] "Pod admission denied" podUID="8210fd02-302e-40ad-ab89-a547cf1abfc6" pod="tigera-operator/tigera-operator-5bf8dfcb4-qkfwd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:29.021986 kubelet[2717]: I0813 01:03:29.021939 2717 kubelet.go:2306] "Pod admission denied" podUID="9968ef7b-f66c-4a30-a7ce-a3bec9085713" pod="tigera-operator/tigera-operator-5bf8dfcb4-797mf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:29.072299 kubelet[2717]: I0813 01:03:29.072013 2717 kubelet.go:2306] "Pod admission denied" podUID="ec30a4a7-c16c-4287-844d-db2962bb2da0" pod="tigera-operator/tigera-operator-5bf8dfcb4-7v428" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:29.170280 kubelet[2717]: I0813 01:03:29.170244 2717 kubelet.go:2306] "Pod admission denied" podUID="b5f23ffd-4913-4cd8-bf91-c0cdab3917a6" pod="tigera-operator/tigera-operator-5bf8dfcb4-k9srt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:29.371124 kubelet[2717]: I0813 01:03:29.371019 2717 kubelet.go:2306] "Pod admission denied" podUID="81f036d5-af58-4b2e-a878-7d37d72eb048" pod="tigera-operator/tigera-operator-5bf8dfcb4-c8fjd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:29.475892 kubelet[2717]: I0813 01:03:29.475825 2717 kubelet.go:2306] "Pod admission denied" podUID="45e164f5-ac23-4cea-819b-77ed3a5cf647" pod="tigera-operator/tigera-operator-5bf8dfcb4-hw98z" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:29.572811 kubelet[2717]: I0813 01:03:29.572760 2717 kubelet.go:2306] "Pod admission denied" podUID="50331a5f-409c-4891-8a9b-ac259d0b31f0" pod="tigera-operator/tigera-operator-5bf8dfcb4-qw4k8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:29.771633 kubelet[2717]: I0813 01:03:29.771585 2717 kubelet.go:2306] "Pod admission denied" podUID="8215e4dc-a53c-4faa-962d-693f9c81a6cb" pod="tigera-operator/tigera-operator-5bf8dfcb4-4kb7b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:29.872646 kubelet[2717]: I0813 01:03:29.872611 2717 kubelet.go:2306] "Pod admission denied" podUID="f86b67bd-d335-455f-8e44-b3b539a1a693" pod="tigera-operator/tigera-operator-5bf8dfcb4-lq94g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:29.970619 kubelet[2717]: I0813 01:03:29.970568 2717 kubelet.go:2306] "Pod admission denied" podUID="d6d2f887-4da0-4721-b236-289ceb94c464" pod="tigera-operator/tigera-operator-5bf8dfcb4-s952d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:30.172320 kubelet[2717]: I0813 01:03:30.172184 2717 kubelet.go:2306] "Pod admission denied" podUID="8ee11ec7-b9d7-4ca4-9740-4c6740f31d3a" pod="tigera-operator/tigera-operator-5bf8dfcb4-xf4dr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:30.277279 kubelet[2717]: I0813 01:03:30.277021 2717 kubelet.go:2306] "Pod admission denied" podUID="4e992f8f-1e4c-49e8-aaa9-bad75ec9c2e1" pod="tigera-operator/tigera-operator-5bf8dfcb4-k9k49" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:30.277508 kubelet[2717]: I0813 01:03:30.277444 2717 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:03:30.277715 kubelet[2717]: I0813 01:03:30.277567 2717 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:03:30.281203 kubelet[2717]: I0813 01:03:30.281176 2717 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:03:30.296103 kubelet[2717]: I0813 01:03:30.296083 2717 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:03:30.296212 kubelet[2717]: I0813 01:03:30.296173 2717 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7c65d6cfc9-dk5p7","calico-system/calico-kube-controllers-564d8b8748-ps97n","kube-system/coredns-7c65d6cfc9-tp469","calico-system/csi-node-driver-84hvc","calico-system/calico-node-7pdcs","calico-system/calico-typha-5fdd567c68-zgxjx","kube-system/kube-controller-manager-172-233-209-21","kube-system/kube-proxy-ff6qp","kube-system/kube-apiserver-172-233-209-21","kube-system/kube-scheduler-172-233-209-21"] Aug 13 01:03:30.296505 kubelet[2717]: E0813 01:03:30.296240 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dk5p7" Aug 13 01:03:30.296505 kubelet[2717]: E0813 01:03:30.296441 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" Aug 13 01:03:30.296505 kubelet[2717]: E0813 01:03:30.296448 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-tp469" Aug 13 01:03:30.296505 kubelet[2717]: E0813 01:03:30.296454 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-84hvc" Aug 13 01:03:30.296505 kubelet[2717]: E0813 01:03:30.296460 2717 eviction_manager.go:598] "Eviction 
manager: cannot evict a critical pod" pod="calico-system/calico-node-7pdcs" Aug 13 01:03:30.296505 kubelet[2717]: E0813 01:03:30.296470 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-5fdd567c68-zgxjx" Aug 13 01:03:30.296505 kubelet[2717]: E0813 01:03:30.296478 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-209-21" Aug 13 01:03:30.296505 kubelet[2717]: E0813 01:03:30.296486 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-ff6qp" Aug 13 01:03:30.296505 kubelet[2717]: E0813 01:03:30.296494 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-209-21" Aug 13 01:03:30.296505 kubelet[2717]: E0813 01:03:30.296502 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-209-21" Aug 13 01:03:30.296505 kubelet[2717]: I0813 01:03:30.296511 2717 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:03:30.322076 kubelet[2717]: I0813 01:03:30.322025 2717 kubelet.go:2306] "Pod admission denied" podUID="9fdbf566-c595-4b55-89a9-332b48d6f3cc" pod="tigera-operator/tigera-operator-5bf8dfcb4-p8khg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:30.421720 kubelet[2717]: I0813 01:03:30.421669 2717 kubelet.go:2306] "Pod admission denied" podUID="bc97c263-8a85-427a-85de-2f84a8d6c7eb" pod="tigera-operator/tigera-operator-5bf8dfcb4-mvgzh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:30.521018 kubelet[2717]: I0813 01:03:30.520964 2717 kubelet.go:2306] "Pod admission denied" podUID="5aac9c36-ab19-4395-b14d-1d1fe6e4c822" pod="tigera-operator/tigera-operator-5bf8dfcb4-7w7sb" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:30.621643 kubelet[2717]: I0813 01:03:30.621603 2717 kubelet.go:2306] "Pod admission denied" podUID="f62a32ef-6d11-49cf-830d-d4e2a50fba2d" pod="tigera-operator/tigera-operator-5bf8dfcb4-pwhx8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:30.764222 kubelet[2717]: E0813 01:03:30.763911 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:03:30.765584 containerd[1575]: time="2025-08-13T01:03:30.765324313Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tp469,Uid:c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af,Namespace:kube-system,Attempt:0,}" Aug 13 01:03:30.826661 kubelet[2717]: I0813 01:03:30.826545 2717 kubelet.go:2306] "Pod admission denied" podUID="05cebaa4-3a78-4ee9-bd59-5b6ee1be2423" pod="tigera-operator/tigera-operator-5bf8dfcb4-7mx7x" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:30.827800 containerd[1575]: time="2025-08-13T01:03:30.827625378Z" level=error msg="Failed to destroy network for sandbox \"b6cc9833b04872268e1e3eee710bdda1e4e32260a398da6e23f2c5d5f42c231a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:03:30.830689 systemd[1]: run-netns-cni\x2d76910faa\x2d73f0\x2dd9d5\x2dfbed\x2d71ea2607309d.mount: Deactivated successfully. 
Aug 13 01:03:30.832821 containerd[1575]: time="2025-08-13T01:03:30.831888956Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tp469,Uid:c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6cc9833b04872268e1e3eee710bdda1e4e32260a398da6e23f2c5d5f42c231a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:03:30.832916 kubelet[2717]: E0813 01:03:30.832406 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6cc9833b04872268e1e3eee710bdda1e4e32260a398da6e23f2c5d5f42c231a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:03:30.832916 kubelet[2717]: E0813 01:03:30.832495 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6cc9833b04872268e1e3eee710bdda1e4e32260a398da6e23f2c5d5f42c231a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-tp469"
Aug 13 01:03:30.832916 kubelet[2717]: E0813 01:03:30.832540 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b6cc9833b04872268e1e3eee710bdda1e4e32260a398da6e23f2c5d5f42c231a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-tp469"
Aug 13 01:03:30.833069 kubelet[2717]: E0813 01:03:30.833038 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-tp469_kube-system(c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-tp469_kube-system(c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b6cc9833b04872268e1e3eee710bdda1e4e32260a398da6e23f2c5d5f42c231a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-tp469" podUID="c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af"
Aug 13 01:03:30.922745 kubelet[2717]: I0813 01:03:30.922693 2717 kubelet.go:2306] "Pod admission denied" podUID="a853ec83-4194-4da0-b593-e7d3aae9823b" pod="tigera-operator/tigera-operator-5bf8dfcb4-kdsxd" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:31.021332 kubelet[2717]: I0813 01:03:31.021283 2717 kubelet.go:2306] "Pod admission denied" podUID="ae0c0ae1-f85f-4a50-9683-13d0964b7391" pod="tigera-operator/tigera-operator-5bf8dfcb4-5rt4m" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:31.222900 kubelet[2717]: I0813 01:03:31.222856 2717 kubelet.go:2306] "Pod admission denied" podUID="e068ffd8-fc5c-4bd8-aaf9-68cd04c6c72f" pod="tigera-operator/tigera-operator-5bf8dfcb4-6lm6g" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:31.321414 kubelet[2717]: I0813 01:03:31.321359 2717 kubelet.go:2306] "Pod admission denied" podUID="4aca5853-41e2-4187-85e9-f4e8a92b31d1" pod="tigera-operator/tigera-operator-5bf8dfcb4-6wfnm" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:31.425514 kubelet[2717]: I0813 01:03:31.425480 2717 kubelet.go:2306] "Pod admission denied" podUID="923f716e-b21c-4a69-8dc7-f2fe966597e5" pod="tigera-operator/tigera-operator-5bf8dfcb4-hm6gs" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:31.522046 kubelet[2717]: I0813 01:03:31.521538 2717 kubelet.go:2306] "Pod admission denied" podUID="f45643eb-d27c-421b-a67c-a47670b16fe5" pod="tigera-operator/tigera-operator-5bf8dfcb4-l2mp9" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:31.620851 kubelet[2717]: I0813 01:03:31.620815 2717 kubelet.go:2306] "Pod admission denied" podUID="55210368-b88a-48e3-9135-967b28231780" pod="tigera-operator/tigera-operator-5bf8dfcb4-nks6x" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:31.721294 kubelet[2717]: I0813 01:03:31.721241 2717 kubelet.go:2306] "Pod admission denied" podUID="cf89b8cf-337d-420d-bada-7243196d9e30" pod="tigera-operator/tigera-operator-5bf8dfcb4-gfkll" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:31.822336 kubelet[2717]: I0813 01:03:31.822235 2717 kubelet.go:2306] "Pod admission denied" podUID="d012d03d-bed7-4828-bc19-01f6cea23587" pod="tigera-operator/tigera-operator-5bf8dfcb4-2mqxx" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:31.919273 kubelet[2717]: I0813 01:03:31.919229 2717 kubelet.go:2306] "Pod admission denied" podUID="28a8215d-732a-4f7a-91c5-8b29d813833a" pod="tigera-operator/tigera-operator-5bf8dfcb4-lc4th" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:32.023775 kubelet[2717]: I0813 01:03:32.023740 2717 kubelet.go:2306] "Pod admission denied" podUID="5b1d8606-edec-4899-a532-be787802e941" pod="tigera-operator/tigera-operator-5bf8dfcb4-d46qf" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:32.220980 kubelet[2717]: I0813 01:03:32.220937 2717 kubelet.go:2306] "Pod admission denied" podUID="bd4ce5f4-dd02-49b3-84d6-26954a93286e" pod="tigera-operator/tigera-operator-5bf8dfcb4-hxv8n" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:32.321224 kubelet[2717]: I0813 01:03:32.321140 2717 kubelet.go:2306] "Pod admission denied" podUID="0246925b-e03c-49e2-aa43-fb6ffb595f7d" pod="tigera-operator/tigera-operator-5bf8dfcb4-vkj46" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:32.422229 kubelet[2717]: I0813 01:03:32.422167 2717 kubelet.go:2306] "Pod admission denied" podUID="91925455-483a-4000-808d-0899e3675d66" pod="tigera-operator/tigera-operator-5bf8dfcb4-nw2vw" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:32.523245 kubelet[2717]: I0813 01:03:32.523132 2717 kubelet.go:2306] "Pod admission denied" podUID="bfa76e28-a8b4-4b45-b48c-b3b7de3078b2" pod="tigera-operator/tigera-operator-5bf8dfcb4-xj6z2" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:32.619949 kubelet[2717]: I0813 01:03:32.619918 2717 kubelet.go:2306] "Pod admission denied" podUID="6ba7afff-faed-43d4-bd39-ac7fe4dc5220" pod="tigera-operator/tigera-operator-5bf8dfcb4-87xb4" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:32.722819 kubelet[2717]: I0813 01:03:32.722778 2717 kubelet.go:2306] "Pod admission denied" podUID="a36bac72-20e6-4d36-b5d8-9cc992b39c0e" pod="tigera-operator/tigera-operator-5bf8dfcb4-fhw79" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:32.784028 kubelet[2717]: I0813 01:03:32.783662 2717 kubelet.go:2306] "Pod admission denied" podUID="25ba8423-28e0-4c3e-a162-f4b34273c9bf" pod="tigera-operator/tigera-operator-5bf8dfcb4-j7wmr" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:32.872927 kubelet[2717]: I0813 01:03:32.872877 2717 kubelet.go:2306] "Pod admission denied" podUID="ead84813-a53b-4e06-8c87-71f0a3f4ec3a" pod="tigera-operator/tigera-operator-5bf8dfcb4-2l5lt" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:32.972333 kubelet[2717]: I0813 01:03:32.972277 2717 kubelet.go:2306] "Pod admission denied" podUID="f61b3ba1-27b1-4841-b073-473e84d4ab31" pod="tigera-operator/tigera-operator-5bf8dfcb4-pvbff" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:33.074517 kubelet[2717]: I0813 01:03:33.074330 2717 kubelet.go:2306] "Pod admission denied" podUID="2b54c79b-af68-4201-a350-995efc0b74fc" pod="tigera-operator/tigera-operator-5bf8dfcb4-kc5lp" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:33.172245 kubelet[2717]: I0813 01:03:33.172185 2717 kubelet.go:2306] "Pod admission denied" podUID="659d5213-910b-4ffa-a2c3-e23f2174c9f9" pod="tigera-operator/tigera-operator-5bf8dfcb4-lhw4x" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:33.271265 kubelet[2717]: I0813 01:03:33.271228 2717 kubelet.go:2306] "Pod admission denied" podUID="726afcd7-0b2e-4bb5-97e2-920be7d38053" pod="tigera-operator/tigera-operator-5bf8dfcb4-khzng" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:33.472592 kubelet[2717]: I0813 01:03:33.472536 2717 kubelet.go:2306] "Pod admission denied" podUID="5dc49f68-4a74-4fa3-b74a-5188122cca52" pod="tigera-operator/tigera-operator-5bf8dfcb4-jzskj" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:33.573371 kubelet[2717]: I0813 01:03:33.573317 2717 kubelet.go:2306] "Pod admission denied" podUID="0bf900fd-c632-443d-ac75-b77f28d7a402" pod="tigera-operator/tigera-operator-5bf8dfcb4-b2c8m" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:33.673587 kubelet[2717]: I0813 01:03:33.673536 2717 kubelet.go:2306] "Pod admission denied" podUID="fd980fca-9520-4400-b7ea-ddc082a53f25" pod="tigera-operator/tigera-operator-5bf8dfcb4-nzm72" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:33.766740 containerd[1575]: time="2025-08-13T01:03:33.766069291Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\""
Aug 13 01:03:33.775724 kubelet[2717]: I0813 01:03:33.775690 2717 kubelet.go:2306] "Pod admission denied" podUID="4024fe8e-fa19-40f3-ac27-ef4ac37609f1" pod="tigera-operator/tigera-operator-5bf8dfcb4-lkrhv" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:33.782885 kubelet[2717]: I0813 01:03:33.782852 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-lkrhv" podStartSLOduration=0.782837822 podStartE2EDuration="782.837822ms" podCreationTimestamp="2025-08-13 01:03:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:03:33.782415338 +0000 UTC m=+69.107233702" watchObservedRunningTime="2025-08-13 01:03:33.782837822 +0000 UTC m=+69.107656186"
Aug 13 01:03:33.875701 kubelet[2717]: I0813 01:03:33.875395 2717 kubelet.go:2306] "Pod admission denied" podUID="9cbdae23-8a47-4ca8-a89f-8cf8d484b058" pod="tigera-operator/tigera-operator-5bf8dfcb4-8tgcj" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:34.070277 kubelet[2717]: I0813 01:03:34.070012 2717 kubelet.go:2306] "Pod admission denied" podUID="33120f97-6020-44d8-a3b1-d7fcb2f84e0a" pod="tigera-operator/tigera-operator-5bf8dfcb4-pmh5d" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:34.170492 kubelet[2717]: I0813 01:03:34.170445 2717 kubelet.go:2306] "Pod admission denied" podUID="5b8d66f1-54b8-48b8-bafc-1121ec457ff5" pod="tigera-operator/tigera-operator-5bf8dfcb4-c7zr2" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:34.271441 kubelet[2717]: I0813 01:03:34.271401 2717 kubelet.go:2306] "Pod admission denied" podUID="db595c24-8924-44ac-a5b1-691ad605fa95" pod="tigera-operator/tigera-operator-5bf8dfcb4-dlrhv" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:34.374184 kubelet[2717]: I0813 01:03:34.373819 2717 kubelet.go:2306] "Pod admission denied" podUID="975d06ca-8c44-4d61-94b9-c29cd834b500" pod="tigera-operator/tigera-operator-5bf8dfcb4-kf7w9" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:34.472995 kubelet[2717]: I0813 01:03:34.472930 2717 kubelet.go:2306] "Pod admission denied" podUID="155716ad-9fe3-4cc1-90bc-53faaa1a458e" pod="tigera-operator/tigera-operator-5bf8dfcb4-xrhh7" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:34.575414 kubelet[2717]: I0813 01:03:34.574981 2717 kubelet.go:2306] "Pod admission denied" podUID="7cc77541-5fdb-403f-b3bd-52533364d280" pod="tigera-operator/tigera-operator-5bf8dfcb4-nmb4j" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:34.678421 kubelet[2717]: I0813 01:03:34.678301 2717 kubelet.go:2306] "Pod admission denied" podUID="31fd577e-7a0c-485c-abed-b324e4a6444a" pod="tigera-operator/tigera-operator-5bf8dfcb4-8vbxh" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:34.771888 kubelet[2717]: E0813 01:03:34.771847 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Aug 13 01:03:34.783152 kubelet[2717]: I0813 01:03:34.782870 2717 kubelet.go:2306] "Pod admission denied" podUID="1c0ad7e5-8232-4ebe-8a13-162c4e3c6b99" pod="tigera-operator/tigera-operator-5bf8dfcb4-dpwch" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:34.879934 kubelet[2717]: I0813 01:03:34.879904 2717 kubelet.go:2306] "Pod admission denied" podUID="904aa24e-ee35-4c08-9535-74162243270d" pod="tigera-operator/tigera-operator-5bf8dfcb4-hdvgq" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:34.978876 kubelet[2717]: I0813 01:03:34.978814 2717 kubelet.go:2306] "Pod admission denied" podUID="df7da27b-2dff-453e-9d78-4d9f8897dd84" pod="tigera-operator/tigera-operator-5bf8dfcb4-p56tv" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:35.047626 kubelet[2717]: I0813 01:03:35.047058 2717 kubelet.go:2306] "Pod admission denied" podUID="dd63fccf-9386-4128-b8d8-461a6c8d180d" pod="tigera-operator/tigera-operator-5bf8dfcb4-lhlb7" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:35.144661 kubelet[2717]: I0813 01:03:35.144604 2717 kubelet.go:2306] "Pod admission denied" podUID="e116221a-d0c6-4a8c-9e2d-f9173b4d5010" pod="tigera-operator/tigera-operator-5bf8dfcb4-fzx4c" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:35.237321 kubelet[2717]: I0813 01:03:35.236938 2717 kubelet.go:2306] "Pod admission denied" podUID="c6e1d070-2bec-4d7e-a0a3-e69d9758d299" pod="tigera-operator/tigera-operator-5bf8dfcb4-pwqkd" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:35.326465 kubelet[2717]: I0813 01:03:35.326288 2717 kubelet.go:2306] "Pod admission denied" podUID="b8259b91-4ac5-4a53-89e3-78e3af4c7164" pod="tigera-operator/tigera-operator-5bf8dfcb4-c8fgq" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:35.427540 kubelet[2717]: I0813 01:03:35.427491 2717 kubelet.go:2306] "Pod admission denied" podUID="18c0ac0e-e1ce-425a-ba6f-65ff966122f8" pod="tigera-operator/tigera-operator-5bf8dfcb4-j2v6v" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:35.545163 kubelet[2717]: I0813 01:03:35.544729 2717 kubelet.go:2306] "Pod admission denied" podUID="95ac56b4-355b-4e59-a740-1bfc56fba0f8" pod="tigera-operator/tigera-operator-5bf8dfcb4-n7jh4" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:35.630728 kubelet[2717]: I0813 01:03:35.630688 2717 kubelet.go:2306] "Pod admission denied" podUID="55d5fb6f-0e03-466a-b8c9-b37b600a3256" pod="tigera-operator/tigera-operator-5bf8dfcb4-s7p56" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:35.727326 kubelet[2717]: I0813 01:03:35.727273 2717 kubelet.go:2306] "Pod admission denied" podUID="b00bd3f0-3791-4913-a055-52f097ccf51f" pod="tigera-operator/tigera-operator-5bf8dfcb4-wg6t8" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:35.764456 kubelet[2717]: E0813 01:03:35.764410 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Aug 13 01:03:35.929676 kubelet[2717]: I0813 01:03:35.929298 2717 kubelet.go:2306] "Pod admission denied" podUID="2fa55e51-d7c3-4236-b7d9-7b89594ab780" pod="tigera-operator/tigera-operator-5bf8dfcb4-b677h" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:36.030943 kubelet[2717]: I0813 01:03:36.030901 2717 kubelet.go:2306] "Pod admission denied" podUID="962bfb13-628a-456a-89b8-f98dff703891" pod="tigera-operator/tigera-operator-5bf8dfcb4-4rqfp" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:36.129555 kubelet[2717]: I0813 01:03:36.129033 2717 kubelet.go:2306] "Pod admission denied" podUID="a9bea03f-c6d4-45ac-8056-dc1a1c78a00d" pod="tigera-operator/tigera-operator-5bf8dfcb4-n8nlw" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:36.229532 kubelet[2717]: I0813 01:03:36.229502 2717 kubelet.go:2306] "Pod admission denied" podUID="c8dbbe02-935e-44fb-b11d-d7cf721d8d96" pod="tigera-operator/tigera-operator-5bf8dfcb4-r74nn" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:36.328166 kubelet[2717]: I0813 01:03:36.328123 2717 kubelet.go:2306] "Pod admission denied" podUID="62ea9344-61c2-40bc-ade6-d949ee8c700d" pod="tigera-operator/tigera-operator-5bf8dfcb4-rhz2w" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:36.496855 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3625695015.mount: Deactivated successfully.
Aug 13 01:03:36.498875 containerd[1575]: time="2025-08-13T01:03:36.497710430Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3625695015: write /var/lib/containerd/tmpmounts/containerd-mount3625695015/usr/bin/calico-node: no space left on device"
Aug 13 01:03:36.498875 containerd[1575]: time="2025-08-13T01:03:36.498117217Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163"
Aug 13 01:03:36.499739 kubelet[2717]: E0813 01:03:36.499707 2717 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3625695015: write /var/lib/containerd/tmpmounts/containerd-mount3625695015/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2"
Aug 13 01:03:36.500372 kubelet[2717]: E0813 01:03:36.499903 2717 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3625695015: write /var/lib/containerd/tmpmounts/containerd-mount3625695015/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2"
Aug 13 01:03:36.501484 kubelet[2717]: E0813 01:03:36.501377 2717 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSGOLDMANESERVER,Value:goldmane.calico-system.svc:7443,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSFLUSHINTERVAL,Value:15,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:first-found,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-pmcp8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-7pdcs_calico-system(e6731efa-3c96-4227-b83c-f4c3adff36c6): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3625695015: write /var/lib/containerd/tmpmounts/containerd-mount3625695015/usr/bin/calico-node: no space left on device" logger="UnhandledError"
Aug 13 01:03:36.503051 kubelet[2717]: E0813 01:03:36.503019 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3625695015: write /var/lib/containerd/tmpmounts/containerd-mount3625695015/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-7pdcs" podUID="e6731efa-3c96-4227-b83c-f4c3adff36c6"
Aug 13 01:03:36.522210 kubelet[2717]: I0813 01:03:36.522152 2717 kubelet.go:2306] "Pod admission denied" podUID="b2815df6-149c-4afa-adc0-755be038d299" pod="tigera-operator/tigera-operator-5bf8dfcb4-x4lrh" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:36.624315 kubelet[2717]: I0813 01:03:36.624080 2717 kubelet.go:2306] "Pod admission denied" podUID="951f44da-6066-4780-971c-30fe4a7150f7" pod="tigera-operator/tigera-operator-5bf8dfcb4-2hzsp" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:36.719817 kubelet[2717]: I0813 01:03:36.719780 2717 kubelet.go:2306] "Pod admission denied" podUID="faf3d094-13ff-49e7-b0bc-f95cd905d19d" pod="tigera-operator/tigera-operator-5bf8dfcb4-cm852" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:36.921849 kubelet[2717]: I0813 01:03:36.921424 2717 kubelet.go:2306] "Pod admission denied" podUID="8a024bb5-1a77-4720-bfc9-417f1a9c4e0b" pod="tigera-operator/tigera-operator-5bf8dfcb4-r8mdr" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:37.024917 kubelet[2717]: I0813 01:03:37.024871 2717 kubelet.go:2306] "Pod admission denied" podUID="71c7d5ca-fc9c-4072-aeab-9a1bfa3f4339" pod="tigera-operator/tigera-operator-5bf8dfcb4-46xsm" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:37.122473 kubelet[2717]: I0813 01:03:37.122411 2717 kubelet.go:2306] "Pod admission denied" podUID="9e92092d-bf3d-44e5-af6a-2c2f0e9b1b94" pod="tigera-operator/tigera-operator-5bf8dfcb4-l7wwx" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:37.222518 kubelet[2717]: I0813 01:03:37.222482 2717 kubelet.go:2306] "Pod admission denied" podUID="a2f5a7f2-4951-4b43-bffd-c144797e5532" pod="tigera-operator/tigera-operator-5bf8dfcb4-q9697" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:37.321980 kubelet[2717]: I0813 01:03:37.321921 2717 kubelet.go:2306] "Pod admission denied" podUID="0c4d701a-1506-46b1-b400-65466dc5c1e5" pod="tigera-operator/tigera-operator-5bf8dfcb4-pzrt9" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:37.526975 kubelet[2717]: I0813 01:03:37.525779 2717 kubelet.go:2306] "Pod admission denied" podUID="a81c4b31-d1f8-4974-b9f8-4f50167c8f3e" pod="tigera-operator/tigera-operator-5bf8dfcb4-ldnhd" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:37.623472 kubelet[2717]: I0813 01:03:37.623420 2717 kubelet.go:2306] "Pod admission denied" podUID="57f8a156-a647-41b7-958f-8d268ffab5db" pod="tigera-operator/tigera-operator-5bf8dfcb4-fckr2" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:37.721730 kubelet[2717]: I0813 01:03:37.721693 2717 kubelet.go:2306] "Pod admission denied" podUID="1833a083-b88a-48f9-9b48-a248b5dec9b4" pod="tigera-operator/tigera-operator-5bf8dfcb4-n78zp" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:37.924584 kubelet[2717]: I0813 01:03:37.924240 2717 kubelet.go:2306] "Pod admission denied" podUID="b719040f-60d4-4ef5-8aa9-9b504da5ab8d" pod="tigera-operator/tigera-operator-5bf8dfcb4-tjmd8" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:38.025076 kubelet[2717]: I0813 01:03:38.025030 2717 kubelet.go:2306] "Pod admission denied" podUID="3efc2a8f-1767-484b-9789-74a8afba8694" pod="tigera-operator/tigera-operator-5bf8dfcb4-k9w6h" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:38.073419 kubelet[2717]: I0813 01:03:38.073359 2717 kubelet.go:2306] "Pod admission denied" podUID="13e704cd-a7d1-48f3-94a4-7a0f56887e2c" pod="tigera-operator/tigera-operator-5bf8dfcb4-prmsb" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:38.174176 kubelet[2717]: I0813 01:03:38.173923 2717 kubelet.go:2306] "Pod admission denied" podUID="0c39a76c-4180-4deb-9dae-c60a7d948712" pod="tigera-operator/tigera-operator-5bf8dfcb4-556bs" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:38.270622 kubelet[2717]: I0813 01:03:38.270580 2717 kubelet.go:2306] "Pod admission denied" podUID="2f8b88ca-1626-4339-beb5-ffcfbe4e6d92" pod="tigera-operator/tigera-operator-5bf8dfcb4-bwhp2" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:38.374729 kubelet[2717]: I0813 01:03:38.374683 2717 kubelet.go:2306] "Pod admission denied" podUID="da00c95c-6621-48e5-a222-25bfd7613640" pod="tigera-operator/tigera-operator-5bf8dfcb4-vktnc" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:38.574997 kubelet[2717]: I0813 01:03:38.574652 2717 kubelet.go:2306] "Pod admission denied" podUID="8a83cd96-6911-4e9c-868e-17c4efe54262" pod="tigera-operator/tigera-operator-5bf8dfcb4-m6rjz" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:38.672211 kubelet[2717]: I0813 01:03:38.672045 2717 kubelet.go:2306] "Pod admission denied" podUID="29791e6b-5579-4e3a-9cbd-84240a14befe" pod="tigera-operator/tigera-operator-5bf8dfcb4-66mmd" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:38.766365 kubelet[2717]: E0813 01:03:38.766327 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Aug 13 01:03:38.767604 kubelet[2717]: E0813 01:03:38.767388 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Aug 13 01:03:38.768263 containerd[1575]: time="2025-08-13T01:03:38.768221394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dk5p7,Uid:6787cc4f-56e4-4094-978c-958d0d7a35ba,Namespace:kube-system,Attempt:0,}"
Aug 13 01:03:38.786178 kubelet[2717]: I0813 01:03:38.786125 2717 kubelet.go:2306] "Pod admission denied" podUID="181e05e2-3644-495b-8bbc-a57f2302d6cd" pod="tigera-operator/tigera-operator-5bf8dfcb4-s9wwv" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:03:38.842807 containerd[1575]: time="2025-08-13T01:03:38.840758924Z" level=error msg="Failed to destroy network for sandbox \"91ea750052258e5dfa1eda631cf6235d1171ebf9f643c56b6a7523d5decd2360\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:03:38.842684 systemd[1]: run-netns-cni\x2d873cbc59\x2dad8e\x2dea85\x2de2fd\x2df54a5a930bc5.mount: Deactivated successfully.
Aug 13 01:03:38.844395 containerd[1575]: time="2025-08-13T01:03:38.844349381Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dk5p7,Uid:6787cc4f-56e4-4094-978c-958d0d7a35ba,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"91ea750052258e5dfa1eda631cf6235d1171ebf9f643c56b6a7523d5decd2360\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:03:38.845122 kubelet[2717]: E0813 01:03:38.844918 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91ea750052258e5dfa1eda631cf6235d1171ebf9f643c56b6a7523d5decd2360\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:03:38.845122 kubelet[2717]: E0813 01:03:38.844979 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91ea750052258e5dfa1eda631cf6235d1171ebf9f643c56b6a7523d5decd2360\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dk5p7" Aug 13 01:03:38.845122 kubelet[2717]: E0813 01:03:38.844997 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"91ea750052258e5dfa1eda631cf6235d1171ebf9f643c56b6a7523d5decd2360\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7c65d6cfc9-dk5p7" Aug 13 01:03:38.845122 kubelet[2717]: E0813 01:03:38.845044 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-dk5p7_kube-system(6787cc4f-56e4-4094-978c-958d0d7a35ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-dk5p7_kube-system(6787cc4f-56e4-4094-978c-958d0d7a35ba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"91ea750052258e5dfa1eda631cf6235d1171ebf9f643c56b6a7523d5decd2360\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-dk5p7" podUID="6787cc4f-56e4-4094-978c-958d0d7a35ba" Aug 13 01:03:38.875274 kubelet[2717]: I0813 01:03:38.875227 2717 kubelet.go:2306] "Pod admission denied" podUID="470ab0f2-af4f-4ee2-bcdf-1d907d32873a" pod="tigera-operator/tigera-operator-5bf8dfcb4-6ttfn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:38.921084 kubelet[2717]: I0813 01:03:38.921041 2717 kubelet.go:2306] "Pod admission denied" podUID="5e26f6fa-a29c-44f7-9f14-8b2ea3c67e20" pod="tigera-operator/tigera-operator-5bf8dfcb4-2b2nn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:39.023201 kubelet[2717]: I0813 01:03:39.023129 2717 kubelet.go:2306] "Pod admission denied" podUID="cd5eb146-8c97-4c73-b797-4bdc0f499b45" pod="tigera-operator/tigera-operator-5bf8dfcb4-v6vx7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:39.122640 kubelet[2717]: I0813 01:03:39.122520 2717 kubelet.go:2306] "Pod admission denied" podUID="79ea6e0f-fe9e-4b61-9d4a-77f822dcc920" pod="tigera-operator/tigera-operator-5bf8dfcb4-v4gbv" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:39.224163 kubelet[2717]: I0813 01:03:39.224113 2717 kubelet.go:2306] "Pod admission denied" podUID="5cea3865-e405-493f-8ac2-185952c97332" pod="tigera-operator/tigera-operator-5bf8dfcb4-pnx24" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:39.424097 kubelet[2717]: I0813 01:03:39.423553 2717 kubelet.go:2306] "Pod admission denied" podUID="3742a8f4-5f96-4f6f-a3ba-e8c79f39a110" pod="tigera-operator/tigera-operator-5bf8dfcb4-z7vmd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:39.524144 kubelet[2717]: I0813 01:03:39.524103 2717 kubelet.go:2306] "Pod admission denied" podUID="f3849856-9140-43fe-ae61-e8510e39ffc9" pod="tigera-operator/tigera-operator-5bf8dfcb4-nz659" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:39.572853 kubelet[2717]: I0813 01:03:39.572801 2717 kubelet.go:2306] "Pod admission denied" podUID="07518d09-f9cd-4d37-b2af-8e44011b99a6" pod="tigera-operator/tigera-operator-5bf8dfcb4-zwxdf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:39.671491 kubelet[2717]: I0813 01:03:39.671432 2717 kubelet.go:2306] "Pod admission denied" podUID="fc5857ef-6224-4e85-afe6-6559ad87cbed" pod="tigera-operator/tigera-operator-5bf8dfcb4-zk8q7" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:39.764807 containerd[1575]: time="2025-08-13T01:03:39.764763094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-564d8b8748-ps97n,Uid:a7e8405c-2c82-420c-bac7-a7277571f968,Namespace:calico-system,Attempt:0,}" Aug 13 01:03:39.818653 containerd[1575]: time="2025-08-13T01:03:39.818609340Z" level=error msg="Failed to destroy network for sandbox \"23a255ea542fc8b868f49afde1f471773146f2696e9b6cc4f90cfad9434b6b0c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:03:39.820919 systemd[1]: run-netns-cni\x2d4c277bae\x2d8af6\x2dcbff\x2dcd20\x2d3508baefc6d0.mount: Deactivated successfully. Aug 13 01:03:39.821498 containerd[1575]: time="2025-08-13T01:03:39.821445292Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-564d8b8748-ps97n,Uid:a7e8405c-2c82-420c-bac7-a7277571f968,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"23a255ea542fc8b868f49afde1f471773146f2696e9b6cc4f90cfad9434b6b0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:03:39.821736 kubelet[2717]: E0813 01:03:39.821695 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23a255ea542fc8b868f49afde1f471773146f2696e9b6cc4f90cfad9434b6b0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:03:39.821795 kubelet[2717]: E0813 01:03:39.821746 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to 
setup network for sandbox \"23a255ea542fc8b868f49afde1f471773146f2696e9b6cc4f90cfad9434b6b0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" Aug 13 01:03:39.821795 kubelet[2717]: E0813 01:03:39.821767 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23a255ea542fc8b868f49afde1f471773146f2696e9b6cc4f90cfad9434b6b0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" Aug 13 01:03:39.821846 kubelet[2717]: E0813 01:03:39.821805 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-564d8b8748-ps97n_calico-system(a7e8405c-2c82-420c-bac7-a7277571f968)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-564d8b8748-ps97n_calico-system(a7e8405c-2c82-420c-bac7-a7277571f968)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"23a255ea542fc8b868f49afde1f471773146f2696e9b6cc4f90cfad9434b6b0c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" podUID="a7e8405c-2c82-420c-bac7-a7277571f968" Aug 13 01:03:39.873093 kubelet[2717]: I0813 01:03:39.873050 2717 kubelet.go:2306] "Pod admission denied" podUID="7dbbb10f-c101-4646-b87b-c9c14e83fb22" pod="tigera-operator/tigera-operator-5bf8dfcb4-8ngfm" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:39.973531 kubelet[2717]: I0813 01:03:39.973481 2717 kubelet.go:2306] "Pod admission denied" podUID="957257b2-0d25-4985-a8a6-ca91a395405a" pod="tigera-operator/tigera-operator-5bf8dfcb4-kn7w5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:40.072370 kubelet[2717]: I0813 01:03:40.072259 2717 kubelet.go:2306] "Pod admission denied" podUID="6630eaf3-3fbd-40c2-98e5-16397c17584c" pod="tigera-operator/tigera-operator-5bf8dfcb4-8fgb9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:40.279259 kubelet[2717]: I0813 01:03:40.279207 2717 kubelet.go:2306] "Pod admission denied" podUID="df8a74bb-2836-4e50-81c8-80f3ded537e6" pod="tigera-operator/tigera-operator-5bf8dfcb4-chb4p" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:40.310106 kubelet[2717]: I0813 01:03:40.310051 2717 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:03:40.310106 kubelet[2717]: I0813 01:03:40.310111 2717 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:03:40.313473 kubelet[2717]: I0813 01:03:40.313399 2717 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:03:40.328012 kubelet[2717]: I0813 01:03:40.327812 2717 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:03:40.328012 kubelet[2717]: I0813 01:03:40.327879 2717 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7c65d6cfc9-tp469","kube-system/coredns-7c65d6cfc9-dk5p7","calico-system/calico-kube-controllers-564d8b8748-ps97n","calico-system/csi-node-driver-84hvc","calico-system/calico-node-7pdcs","calico-system/calico-typha-5fdd567c68-zgxjx","kube-system/kube-controller-manager-172-233-209-21","kube-system/kube-proxy-ff6qp","kube-system/kube-apiserver-172-233-209-21","kube-system/kube-scheduler-172-233-209-21"] Aug 13 
01:03:40.328012 kubelet[2717]: E0813 01:03:40.327905 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-tp469" Aug 13 01:03:40.328012 kubelet[2717]: E0813 01:03:40.327915 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dk5p7" Aug 13 01:03:40.328012 kubelet[2717]: E0813 01:03:40.327921 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" Aug 13 01:03:40.328012 kubelet[2717]: E0813 01:03:40.327927 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-84hvc" Aug 13 01:03:40.328012 kubelet[2717]: E0813 01:03:40.327933 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-7pdcs" Aug 13 01:03:40.328012 kubelet[2717]: E0813 01:03:40.327944 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-5fdd567c68-zgxjx" Aug 13 01:03:40.328012 kubelet[2717]: E0813 01:03:40.327952 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-209-21" Aug 13 01:03:40.328012 kubelet[2717]: E0813 01:03:40.327960 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-ff6qp" Aug 13 01:03:40.328012 kubelet[2717]: E0813 01:03:40.327967 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-209-21" Aug 13 01:03:40.328012 kubelet[2717]: E0813 01:03:40.327975 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-209-21" Aug 13 01:03:40.328012 kubelet[2717]: I0813 01:03:40.327984 2717 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 
01:03:40.373815 kubelet[2717]: I0813 01:03:40.373762 2717 kubelet.go:2306] "Pod admission denied" podUID="83c4a41f-2040-4898-9b8d-33a3cfe82fd9" pod="tigera-operator/tigera-operator-5bf8dfcb4-cmlqb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:40.472524 kubelet[2717]: I0813 01:03:40.472471 2717 kubelet.go:2306] "Pod admission denied" podUID="432d5e5b-5bd4-4702-82fb-24b2e88d454d" pod="tigera-operator/tigera-operator-5bf8dfcb4-82vp9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:40.674447 kubelet[2717]: I0813 01:03:40.674015 2717 kubelet.go:2306] "Pod admission denied" podUID="400457bd-124f-4dcd-b659-9d4e72626249" pod="tigera-operator/tigera-operator-5bf8dfcb4-26rqg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:40.765322 containerd[1575]: time="2025-08-13T01:03:40.764893895Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-84hvc,Uid:f2b74998-29fc-4213-8313-543c9154bc64,Namespace:calico-system,Attempt:0,}" Aug 13 01:03:40.797594 kubelet[2717]: I0813 01:03:40.796717 2717 kubelet.go:2306] "Pod admission denied" podUID="3118ba21-6a62-4781-a9fb-b3f7c86267d6" pod="tigera-operator/tigera-operator-5bf8dfcb4-bcbwl" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:40.845401 containerd[1575]: time="2025-08-13T01:03:40.845359691Z" level=error msg="Failed to destroy network for sandbox \"cfb6ac5bd1e18e4886be5b94290cac7751734cb21b015a21edc025116e944a1e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:03:40.848585 containerd[1575]: time="2025-08-13T01:03:40.847416196Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-84hvc,Uid:f2b74998-29fc-4213-8313-543c9154bc64,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfb6ac5bd1e18e4886be5b94290cac7751734cb21b015a21edc025116e944a1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:03:40.847856 systemd[1]: run-netns-cni\x2d683348ae\x2d2fe0\x2dc5ad\x2dfcf5\x2dbdbe594edf9e.mount: Deactivated successfully. 
Aug 13 01:03:40.849184 kubelet[2717]: E0813 01:03:40.849038 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfb6ac5bd1e18e4886be5b94290cac7751734cb21b015a21edc025116e944a1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:03:40.849184 kubelet[2717]: E0813 01:03:40.849124 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfb6ac5bd1e18e4886be5b94290cac7751734cb21b015a21edc025116e944a1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-84hvc" Aug 13 01:03:40.849184 kubelet[2717]: E0813 01:03:40.849145 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cfb6ac5bd1e18e4886be5b94290cac7751734cb21b015a21edc025116e944a1e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-84hvc" Aug 13 01:03:40.849413 kubelet[2717]: E0813 01:03:40.849388 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-84hvc_calico-system(f2b74998-29fc-4213-8313-543c9154bc64)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-84hvc_calico-system(f2b74998-29fc-4213-8313-543c9154bc64)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cfb6ac5bd1e18e4886be5b94290cac7751734cb21b015a21edc025116e944a1e\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-84hvc" podUID="f2b74998-29fc-4213-8313-543c9154bc64" Aug 13 01:03:40.875735 kubelet[2717]: I0813 01:03:40.875688 2717 kubelet.go:2306] "Pod admission denied" podUID="cc93acaf-6a15-4b5e-83e7-15ab6a26ffbb" pod="tigera-operator/tigera-operator-5bf8dfcb4-2gchn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:40.975587 kubelet[2717]: I0813 01:03:40.975538 2717 kubelet.go:2306] "Pod admission denied" podUID="3320287e-b26e-4ce2-b28c-7d3c9edf19ad" pod="tigera-operator/tigera-operator-5bf8dfcb4-f5rsg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:41.073152 kubelet[2717]: I0813 01:03:41.073100 2717 kubelet.go:2306] "Pod admission denied" podUID="3edc5111-2d78-40df-929a-2f4ebc7ae171" pod="tigera-operator/tigera-operator-5bf8dfcb4-nvpm7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:41.172999 kubelet[2717]: I0813 01:03:41.172946 2717 kubelet.go:2306] "Pod admission denied" podUID="e6ad6f30-bd29-4aad-9584-6d37e52129af" pod="tigera-operator/tigera-operator-5bf8dfcb4-94vw2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:41.274056 kubelet[2717]: I0813 01:03:41.273935 2717 kubelet.go:2306] "Pod admission denied" podUID="7fc2a561-a184-4928-bb9e-104ea43c15fb" pod="tigera-operator/tigera-operator-5bf8dfcb4-vcpcx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:41.473768 kubelet[2717]: I0813 01:03:41.473708 2717 kubelet.go:2306] "Pod admission denied" podUID="1b10628d-5fc2-4b22-90ab-5c8e6748d556" pod="tigera-operator/tigera-operator-5bf8dfcb4-t2pzh" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:41.573398 kubelet[2717]: I0813 01:03:41.573162 2717 kubelet.go:2306] "Pod admission denied" podUID="0ecf08af-6a36-40ce-af2f-cf314f7443a1" pod="tigera-operator/tigera-operator-5bf8dfcb4-r7spq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:41.673206 kubelet[2717]: I0813 01:03:41.673161 2717 kubelet.go:2306] "Pod admission denied" podUID="25e51e0a-7ef1-466e-97ef-73aec19e4f46" pod="tigera-operator/tigera-operator-5bf8dfcb4-k89tj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:41.778211 kubelet[2717]: I0813 01:03:41.777327 2717 kubelet.go:2306] "Pod admission denied" podUID="d7626e84-caac-40b7-affe-9a0c54567ddf" pod="tigera-operator/tigera-operator-5bf8dfcb4-n8rkq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:41.874105 kubelet[2717]: I0813 01:03:41.873849 2717 kubelet.go:2306] "Pod admission denied" podUID="7b07baf2-478b-4b0f-91e8-9c5c719f8cfe" pod="tigera-operator/tigera-operator-5bf8dfcb4-tjqjx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:42.073168 kubelet[2717]: I0813 01:03:42.073105 2717 kubelet.go:2306] "Pod admission denied" podUID="8e237d97-8d96-407e-9af8-9f9f8f323e11" pod="tigera-operator/tigera-operator-5bf8dfcb4-wx4gl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:42.174119 kubelet[2717]: I0813 01:03:42.173347 2717 kubelet.go:2306] "Pod admission denied" podUID="4725e9e2-26fa-4fa8-9cec-fef0d9cf266e" pod="tigera-operator/tigera-operator-5bf8dfcb4-kvz2p" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:42.276076 kubelet[2717]: I0813 01:03:42.275852 2717 kubelet.go:2306] "Pod admission denied" podUID="99aa456b-8d3b-49f0-9352-7d0e0b0ce2c4" pod="tigera-operator/tigera-operator-5bf8dfcb4-thzth" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:42.476212 kubelet[2717]: I0813 01:03:42.476151 2717 kubelet.go:2306] "Pod admission denied" podUID="fdf341c7-8137-4464-9d54-083368dd17ae" pod="tigera-operator/tigera-operator-5bf8dfcb4-6mmkp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:42.576918 kubelet[2717]: I0813 01:03:42.576863 2717 kubelet.go:2306] "Pod admission denied" podUID="af01444c-d21b-41d7-9c0f-c33bb6e3b44a" pod="tigera-operator/tigera-operator-5bf8dfcb4-8bdtc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:42.674649 kubelet[2717]: I0813 01:03:42.674603 2717 kubelet.go:2306] "Pod admission denied" podUID="b963a515-28e5-42b0-bb04-f07bbfde4706" pod="tigera-operator/tigera-operator-5bf8dfcb4-5d4fs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:42.775604 kubelet[2717]: I0813 01:03:42.775472 2717 kubelet.go:2306] "Pod admission denied" podUID="1529dfa1-a0c7-4d60-a88a-65847406cd18" pod="tigera-operator/tigera-operator-5bf8dfcb4-7gnpr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:42.882637 kubelet[2717]: I0813 01:03:42.882589 2717 kubelet.go:2306] "Pod admission denied" podUID="1f9296a6-e0fc-4a6d-bcc8-e2af32035da1" pod="tigera-operator/tigera-operator-5bf8dfcb4-748zl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:42.972617 kubelet[2717]: I0813 01:03:42.972574 2717 kubelet.go:2306] "Pod admission denied" podUID="c8d94629-235b-451a-b5ca-10025d228419" pod="tigera-operator/tigera-operator-5bf8dfcb4-cjtmc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:43.074798 kubelet[2717]: I0813 01:03:43.074691 2717 kubelet.go:2306] "Pod admission denied" podUID="2d080967-f4dc-4d58-80da-b21a6bb6c9aa" pod="tigera-operator/tigera-operator-5bf8dfcb4-dn66p" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:43.185285 kubelet[2717]: I0813 01:03:43.185240 2717 kubelet.go:2306] "Pod admission denied" podUID="eace8faa-3942-424f-bf69-63d66a6e489a" pod="tigera-operator/tigera-operator-5bf8dfcb4-9bd26" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:43.232946 kubelet[2717]: I0813 01:03:43.232897 2717 kubelet.go:2306] "Pod admission denied" podUID="57cc8622-10ed-4187-a5f9-b60c2e4b9d95" pod="tigera-operator/tigera-operator-5bf8dfcb4-nv8q4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:43.323177 kubelet[2717]: I0813 01:03:43.323130 2717 kubelet.go:2306] "Pod admission denied" podUID="6331aa4d-8c6a-4039-920c-c0fc6e0e5cfc" pod="tigera-operator/tigera-operator-5bf8dfcb4-sd9vv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:43.421700 kubelet[2717]: I0813 01:03:43.421576 2717 kubelet.go:2306] "Pod admission denied" podUID="5f14f8b9-4788-41c3-884b-9a956f44fc10" pod="tigera-operator/tigera-operator-5bf8dfcb4-kdbzb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:43.525086 kubelet[2717]: I0813 01:03:43.525028 2717 kubelet.go:2306] "Pod admission denied" podUID="9b9a6a78-252f-4828-bb53-49b6504940d5" pod="tigera-operator/tigera-operator-5bf8dfcb4-q4hbm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:43.631013 kubelet[2717]: I0813 01:03:43.630759 2717 kubelet.go:2306] "Pod admission denied" podUID="049cd1a6-b2b0-4dac-8d2d-07a9a1c43ecd" pod="tigera-operator/tigera-operator-5bf8dfcb4-p54mp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:43.735721 kubelet[2717]: I0813 01:03:43.735177 2717 kubelet.go:2306] "Pod admission denied" podUID="4158a6f5-8607-499e-bc1a-7bed9cd62753" pod="tigera-operator/tigera-operator-5bf8dfcb4-llsk7" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:43.823834 kubelet[2717]: I0813 01:03:43.823782 2717 kubelet.go:2306] "Pod admission denied" podUID="6eb6fc4d-67ee-4788-b88c-a96d8eff5aeb" pod="tigera-operator/tigera-operator-5bf8dfcb4-756xz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:43.924957 kubelet[2717]: I0813 01:03:43.924907 2717 kubelet.go:2306] "Pod admission denied" podUID="749b3de8-897c-416e-a98c-443de9b58445" pod="tigera-operator/tigera-operator-5bf8dfcb4-tzh69" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:44.122903 kubelet[2717]: I0813 01:03:44.122798 2717 kubelet.go:2306] "Pod admission denied" podUID="c0d184cc-c2ee-41d6-b1d1-ee90cd0b69d4" pod="tigera-operator/tigera-operator-5bf8dfcb4-tswdl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:44.224762 kubelet[2717]: I0813 01:03:44.224719 2717 kubelet.go:2306] "Pod admission denied" podUID="eb889567-8ac9-476a-9a93-37e8627cf51c" pod="tigera-operator/tigera-operator-5bf8dfcb4-z2j8z" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:44.321935 kubelet[2717]: I0813 01:03:44.321884 2717 kubelet.go:2306] "Pod admission denied" podUID="4175a0f7-98e8-4ff9-a101-56d8a7d21c20" pod="tigera-operator/tigera-operator-5bf8dfcb4-ttd9n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:44.425441 kubelet[2717]: I0813 01:03:44.424497 2717 kubelet.go:2306] "Pod admission denied" podUID="51995cc2-b0c7-4575-ac02-4c5ac63d54f0" pod="tigera-operator/tigera-operator-5bf8dfcb4-dsdz2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:44.522058 kubelet[2717]: I0813 01:03:44.522023 2717 kubelet.go:2306] "Pod admission denied" podUID="acecf7ac-1884-43ba-9680-15a6e21356de" pod="tigera-operator/tigera-operator-5bf8dfcb4-vdwfg" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:44.624048 kubelet[2717]: I0813 01:03:44.624006 2717 kubelet.go:2306] "Pod admission denied" podUID="29dd821c-c5d3-4848-9322-fd5b1930a1ab" pod="tigera-operator/tigera-operator-5bf8dfcb4-sbfmx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:44.730278 kubelet[2717]: I0813 01:03:44.730136 2717 kubelet.go:2306] "Pod admission denied" podUID="78a61451-5481-4bad-a398-4f89be2f1132" pod="tigera-operator/tigera-operator-5bf8dfcb4-5drm4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:44.765883 kubelet[2717]: E0813 01:03:44.765865 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:03:44.766799 containerd[1575]: time="2025-08-13T01:03:44.766607389Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tp469,Uid:c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af,Namespace:kube-system,Attempt:0,}" Aug 13 01:03:44.813208 containerd[1575]: time="2025-08-13T01:03:44.811520365Z" level=error msg="Failed to destroy network for sandbox \"1bc718477952df17748c8cadd77fdebe15496f29b2d4921a9fb5f48118d1cffb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:03:44.814331 containerd[1575]: time="2025-08-13T01:03:44.814294428Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tp469,Uid:c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bc718477952df17748c8cadd77fdebe15496f29b2d4921a9fb5f48118d1cffb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" Aug 13 01:03:44.814969 systemd[1]: run-netns-cni\x2dcaffa75d\x2dd9f1\x2dfa41\x2dc385\x2d1f3a89aa355f.mount: Deactivated successfully. Aug 13 01:03:44.815474 kubelet[2717]: E0813 01:03:44.815436 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bc718477952df17748c8cadd77fdebe15496f29b2d4921a9fb5f48118d1cffb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:03:44.815518 kubelet[2717]: E0813 01:03:44.815484 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bc718477952df17748c8cadd77fdebe15496f29b2d4921a9fb5f48118d1cffb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-tp469" Aug 13 01:03:44.815518 kubelet[2717]: E0813 01:03:44.815502 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1bc718477952df17748c8cadd77fdebe15496f29b2d4921a9fb5f48118d1cffb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-tp469" Aug 13 01:03:44.815567 kubelet[2717]: E0813 01:03:44.815536 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-tp469_kube-system(c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-tp469_kube-system(c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af)\\\": rpc error: code = Unknown desc = failed 
to setup network for sandbox \\\"1bc718477952df17748c8cadd77fdebe15496f29b2d4921a9fb5f48118d1cffb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-tp469" podUID="c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af" Aug 13 01:03:44.924205 kubelet[2717]: I0813 01:03:44.924163 2717 kubelet.go:2306] "Pod admission denied" podUID="fa816699-479b-429a-84a4-594cc78ee55c" pod="tigera-operator/tigera-operator-5bf8dfcb4-g775j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:45.024513 kubelet[2717]: I0813 01:03:45.024402 2717 kubelet.go:2306] "Pod admission denied" podUID="a38c6c9f-c198-4a0a-a913-4ec5fb9e0f10" pod="tigera-operator/tigera-operator-5bf8dfcb4-ckbq6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:45.122229 kubelet[2717]: I0813 01:03:45.122156 2717 kubelet.go:2306] "Pod admission denied" podUID="17778639-4d3e-47da-ae2a-a97173d57a54" pod="tigera-operator/tigera-operator-5bf8dfcb4-c2pjh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:45.325431 kubelet[2717]: I0813 01:03:45.325329 2717 kubelet.go:2306] "Pod admission denied" podUID="991357e9-7d48-4bfb-8cc1-efbefa34a51c" pod="tigera-operator/tigera-operator-5bf8dfcb4-xzb6k" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:45.423153 kubelet[2717]: I0813 01:03:45.423111 2717 kubelet.go:2306] "Pod admission denied" podUID="93367687-73a1-4549-89e3-9caab4e151de" pod="tigera-operator/tigera-operator-5bf8dfcb4-sdxd2" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:45.525036 kubelet[2717]: I0813 01:03:45.524982 2717 kubelet.go:2306] "Pod admission denied" podUID="a62db538-feca-4041-93bd-8ce7da62a4e4" pod="tigera-operator/tigera-operator-5bf8dfcb4-brvnf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:45.623594 kubelet[2717]: I0813 01:03:45.623469 2717 kubelet.go:2306] "Pod admission denied" podUID="7e6740b7-5ce8-444d-b96b-12efbc92b916" pod="tigera-operator/tigera-operator-5bf8dfcb4-kv9tw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:45.724267 kubelet[2717]: I0813 01:03:45.724222 2717 kubelet.go:2306] "Pod admission denied" podUID="4aed07d3-6149-40b4-941d-09fbe8b87445" pod="tigera-operator/tigera-operator-5bf8dfcb4-mlvwj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:45.923170 kubelet[2717]: I0813 01:03:45.923054 2717 kubelet.go:2306] "Pod admission denied" podUID="4ff17dc9-e912-4a56-accf-b73808db934e" pod="tigera-operator/tigera-operator-5bf8dfcb4-r8cg5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:46.024104 kubelet[2717]: I0813 01:03:46.024041 2717 kubelet.go:2306] "Pod admission denied" podUID="ffa17858-2d65-4bac-96d7-cf6e0f469f07" pod="tigera-operator/tigera-operator-5bf8dfcb4-tdrml" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:46.122795 kubelet[2717]: I0813 01:03:46.122762 2717 kubelet.go:2306] "Pod admission denied" podUID="d05d4fca-8aaa-490e-9c0e-b2a4bd218d97" pod="tigera-operator/tigera-operator-5bf8dfcb4-jb975" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:46.325453 kubelet[2717]: I0813 01:03:46.325382 2717 kubelet.go:2306] "Pod admission denied" podUID="3bc5738a-c604-459e-a560-bfaa6dad38d7" pod="tigera-operator/tigera-operator-5bf8dfcb4-p55cp" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:46.424857 kubelet[2717]: I0813 01:03:46.424608 2717 kubelet.go:2306] "Pod admission denied" podUID="4c9c9edd-d1ac-4c7f-86bd-644c10e14846" pod="tigera-operator/tigera-operator-5bf8dfcb4-bdtrz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:46.524225 kubelet[2717]: I0813 01:03:46.524170 2717 kubelet.go:2306] "Pod admission denied" podUID="04c297cb-1ecd-4dfc-9c50-68d13c6d0e87" pod="tigera-operator/tigera-operator-5bf8dfcb4-8p775" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:46.625797 kubelet[2717]: I0813 01:03:46.625254 2717 kubelet.go:2306] "Pod admission denied" podUID="2b1f04b5-af0d-4a2a-8c0e-c1c623563c20" pod="tigera-operator/tigera-operator-5bf8dfcb4-mrd54" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:46.724587 kubelet[2717]: I0813 01:03:46.724538 2717 kubelet.go:2306] "Pod admission denied" podUID="2719f423-e94f-4fa0-892a-f551070ccc0f" pod="tigera-operator/tigera-operator-5bf8dfcb4-84rph" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:46.823443 kubelet[2717]: I0813 01:03:46.823392 2717 kubelet.go:2306] "Pod admission denied" podUID="36c67265-d170-486b-88f0-e33de9261b83" pod="tigera-operator/tigera-operator-5bf8dfcb4-cdc8h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:46.922837 kubelet[2717]: I0813 01:03:46.922715 2717 kubelet.go:2306] "Pod admission denied" podUID="645723d9-58fd-427e-9ec1-29aa55d425fa" pod="tigera-operator/tigera-operator-5bf8dfcb4-nmbcv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:47.026799 kubelet[2717]: I0813 01:03:47.026742 2717 kubelet.go:2306] "Pod admission denied" podUID="ca7d53f4-4507-4fef-9ae5-7122b6836c92" pod="tigera-operator/tigera-operator-5bf8dfcb4-6f4h6" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:47.125290 kubelet[2717]: I0813 01:03:47.125232 2717 kubelet.go:2306] "Pod admission denied" podUID="be2b0752-29ea-452a-bd7a-b6c093925286" pod="tigera-operator/tigera-operator-5bf8dfcb4-pkx4q" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:47.224015 kubelet[2717]: I0813 01:03:47.223966 2717 kubelet.go:2306] "Pod admission denied" podUID="ca0968cf-f0f5-4e2b-8f3c-dfae20ee7b00" pod="tigera-operator/tigera-operator-5bf8dfcb4-xgfg8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:47.276141 kubelet[2717]: I0813 01:03:47.276081 2717 kubelet.go:2306] "Pod admission denied" podUID="161a0ef6-a90d-4e1d-97cf-8f024db08e33" pod="tigera-operator/tigera-operator-5bf8dfcb4-hr7sm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:47.376000 kubelet[2717]: I0813 01:03:47.375940 2717 kubelet.go:2306] "Pod admission denied" podUID="69ef2dfc-b7e1-4f04-a8c4-0593e3ff6aa0" pod="tigera-operator/tigera-operator-5bf8dfcb4-kh64n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:47.473844 kubelet[2717]: I0813 01:03:47.473799 2717 kubelet.go:2306] "Pod admission denied" podUID="7072a77c-3d76-4a9a-9d7a-bd0f57e0f719" pod="tigera-operator/tigera-operator-5bf8dfcb4-cs54v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:47.578404 kubelet[2717]: I0813 01:03:47.578290 2717 kubelet.go:2306] "Pod admission denied" podUID="a4708c4d-0753-464c-8cdf-378540e61f16" pod="tigera-operator/tigera-operator-5bf8dfcb4-tzh95" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:47.676955 kubelet[2717]: I0813 01:03:47.676911 2717 kubelet.go:2306] "Pod admission denied" podUID="7fa0c9cc-ff3d-411f-ae63-5242280fe934" pod="tigera-operator/tigera-operator-5bf8dfcb4-mcsls" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:47.772091 kubelet[2717]: I0813 01:03:47.772047 2717 kubelet.go:2306] "Pod admission denied" podUID="82690687-e14d-4c29-b584-3f357c32b77a" pod="tigera-operator/tigera-operator-5bf8dfcb4-nfcfs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:47.872310 kubelet[2717]: I0813 01:03:47.871895 2717 kubelet.go:2306] "Pod admission denied" podUID="c84dcad9-7f51-4a19-a0d5-6d5d365dc72f" pod="tigera-operator/tigera-operator-5bf8dfcb4-w6rtp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:47.972739 kubelet[2717]: I0813 01:03:47.972707 2717 kubelet.go:2306] "Pod admission denied" podUID="1d3e0d98-29a9-4987-aedf-cf5779c61a65" pod="tigera-operator/tigera-operator-5bf8dfcb4-hgsjm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:48.073150 kubelet[2717]: I0813 01:03:48.073110 2717 kubelet.go:2306] "Pod admission denied" podUID="9750c440-71a8-404b-9707-59f42c67b023" pod="tigera-operator/tigera-operator-5bf8dfcb4-bxxgk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:48.121230 kubelet[2717]: I0813 01:03:48.120133 2717 kubelet.go:2306] "Pod admission denied" podUID="b0e2cda7-2939-423c-90c3-78af831ad939" pod="tigera-operator/tigera-operator-5bf8dfcb4-fcxfg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:48.225660 kubelet[2717]: I0813 01:03:48.225617 2717 kubelet.go:2306] "Pod admission denied" podUID="0d959ad8-86ad-49ed-a350-6fd18dd51145" pod="tigera-operator/tigera-operator-5bf8dfcb4-j4ls8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:48.423299 kubelet[2717]: I0813 01:03:48.423252 2717 kubelet.go:2306] "Pod admission denied" podUID="c135b754-6201-4e4a-a6ff-e4eaa367a25e" pod="tigera-operator/tigera-operator-5bf8dfcb4-sdpfj" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:48.522181 kubelet[2717]: I0813 01:03:48.521840 2717 kubelet.go:2306] "Pod admission denied" podUID="b6d4ca5f-4728-4e25-a64f-d4a6560cc728" pod="tigera-operator/tigera-operator-5bf8dfcb4-ztbtx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:48.623800 kubelet[2717]: I0813 01:03:48.623759 2717 kubelet.go:2306] "Pod admission denied" podUID="91f0cda9-edbc-43dc-a93f-cd1fbd60afc5" pod="tigera-operator/tigera-operator-5bf8dfcb4-wjjg9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:48.725904 kubelet[2717]: I0813 01:03:48.725849 2717 kubelet.go:2306] "Pod admission denied" podUID="6f8b0861-7fe2-4abb-acf4-cf74d9d13727" pod="tigera-operator/tigera-operator-5bf8dfcb4-xbxkc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:48.824158 kubelet[2717]: I0813 01:03:48.823922 2717 kubelet.go:2306] "Pod admission denied" podUID="e5366fc7-4ff6-4e24-bedf-dcff0105661b" pod="tigera-operator/tigera-operator-5bf8dfcb4-5czbg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:48.922615 kubelet[2717]: I0813 01:03:48.922579 2717 kubelet.go:2306] "Pod admission denied" podUID="c7dbd0e9-bc7f-423d-abea-0ec3f881f6b7" pod="tigera-operator/tigera-operator-5bf8dfcb4-2z2cc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:49.021809 kubelet[2717]: I0813 01:03:49.021773 2717 kubelet.go:2306] "Pod admission denied" podUID="3d114565-a12d-4ded-8edb-15b38c328a5b" pod="tigera-operator/tigera-operator-5bf8dfcb4-5rdnk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:49.224938 kubelet[2717]: I0813 01:03:49.224890 2717 kubelet.go:2306] "Pod admission denied" podUID="ebbbd19e-69d7-4503-83de-b3afd2e06ce1" pod="tigera-operator/tigera-operator-5bf8dfcb4-l8qkg" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:49.325540 kubelet[2717]: I0813 01:03:49.325495 2717 kubelet.go:2306] "Pod admission denied" podUID="e0b3b643-462c-4b98-be37-36f8ac742187" pod="tigera-operator/tigera-operator-5bf8dfcb4-h528c" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:49.425930 kubelet[2717]: I0813 01:03:49.425882 2717 kubelet.go:2306] "Pod admission denied" podUID="f59b92f4-3921-4520-9673-4344fe53a10e" pod="tigera-operator/tigera-operator-5bf8dfcb4-nxbcs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:49.525199 kubelet[2717]: I0813 01:03:49.525079 2717 kubelet.go:2306] "Pod admission denied" podUID="1363898e-d6f4-4c62-bc21-ff19455d3e8a" pod="tigera-operator/tigera-operator-5bf8dfcb4-jglm5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:49.625929 kubelet[2717]: I0813 01:03:49.625882 2717 kubelet.go:2306] "Pod admission denied" podUID="834bd2f8-d724-4382-b4e9-3951d3fbbd0c" pod="tigera-operator/tigera-operator-5bf8dfcb4-thhbz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:49.725577 kubelet[2717]: I0813 01:03:49.725533 2717 kubelet.go:2306] "Pod admission denied" podUID="26111d09-b87e-4923-ae12-34731611f951" pod="tigera-operator/tigera-operator-5bf8dfcb4-jtjnt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:49.826168 kubelet[2717]: I0813 01:03:49.826053 2717 kubelet.go:2306] "Pod admission denied" podUID="44bd5d6b-0632-4484-95fb-d60866e7c222" pod="tigera-operator/tigera-operator-5bf8dfcb4-9qzdv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:49.924976 kubelet[2717]: I0813 01:03:49.924930 2717 kubelet.go:2306] "Pod admission denied" podUID="daef1517-2723-4a30-8698-867153c5f4f1" pod="tigera-operator/tigera-operator-5bf8dfcb4-gl877" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:50.029173 kubelet[2717]: I0813 01:03:50.029120 2717 kubelet.go:2306] "Pod admission denied" podUID="9caf9d37-e901-437f-b2c5-09e56c2ce0a5" pod="tigera-operator/tigera-operator-5bf8dfcb4-zgqqr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:50.126108 kubelet[2717]: I0813 01:03:50.125492 2717 kubelet.go:2306] "Pod admission denied" podUID="4342d042-414e-4dff-ace9-a3de0467084f" pod="tigera-operator/tigera-operator-5bf8dfcb4-df6l4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:50.226478 kubelet[2717]: I0813 01:03:50.226426 2717 kubelet.go:2306] "Pod admission denied" podUID="b082d773-6ecd-4c62-ad16-430e85b4463d" pod="tigera-operator/tigera-operator-5bf8dfcb4-slk26" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:50.323307 kubelet[2717]: I0813 01:03:50.323268 2717 kubelet.go:2306] "Pod admission denied" podUID="83e754c2-5151-4c0d-a857-3fe7df412953" pod="tigera-operator/tigera-operator-5bf8dfcb4-kqbz9" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:50.354628 kubelet[2717]: I0813 01:03:50.354594 2717 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:03:50.354628 kubelet[2717]: I0813 01:03:50.354631 2717 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:03:50.357209 kubelet[2717]: I0813 01:03:50.356733 2717 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:03:50.370397 kubelet[2717]: I0813 01:03:50.370379 2717 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:03:50.370612 kubelet[2717]: I0813 01:03:50.370591 2717 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7c65d6cfc9-dk5p7","calico-system/calico-kube-controllers-564d8b8748-ps97n","kube-system/coredns-7c65d6cfc9-tp469","calico-system/csi-node-driver-84hvc","calico-system/calico-node-7pdcs","calico-system/calico-typha-5fdd567c68-zgxjx","kube-system/kube-controller-manager-172-233-209-21","kube-system/kube-proxy-ff6qp","kube-system/kube-apiserver-172-233-209-21","kube-system/kube-scheduler-172-233-209-21"] Aug 13 01:03:50.370672 kubelet[2717]: E0813 01:03:50.370612 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dk5p7" Aug 13 01:03:50.370672 kubelet[2717]: E0813 01:03:50.370621 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" Aug 13 01:03:50.370672 kubelet[2717]: E0813 01:03:50.370627 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-tp469" Aug 13 01:03:50.370672 kubelet[2717]: E0813 01:03:50.370634 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-84hvc" Aug 13 01:03:50.370672 kubelet[2717]: E0813 01:03:50.370640 2717 eviction_manager.go:598] "Eviction 
manager: cannot evict a critical pod" pod="calico-system/calico-node-7pdcs" Aug 13 01:03:50.370672 kubelet[2717]: E0813 01:03:50.370649 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-5fdd567c68-zgxjx" Aug 13 01:03:50.370672 kubelet[2717]: E0813 01:03:50.370657 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-209-21" Aug 13 01:03:50.370672 kubelet[2717]: E0813 01:03:50.370665 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-ff6qp" Aug 13 01:03:50.370672 kubelet[2717]: E0813 01:03:50.370673 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-209-21" Aug 13 01:03:50.370880 kubelet[2717]: E0813 01:03:50.370681 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-209-21" Aug 13 01:03:50.370880 kubelet[2717]: I0813 01:03:50.370689 2717 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:03:50.423532 kubelet[2717]: I0813 01:03:50.423438 2717 kubelet.go:2306] "Pod admission denied" podUID="4d811445-f08b-43f3-a015-586e4e2a25d7" pod="tigera-operator/tigera-operator-5bf8dfcb4-4h829" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:50.523814 kubelet[2717]: I0813 01:03:50.523770 2717 kubelet.go:2306] "Pod admission denied" podUID="6e90a09b-ee71-44e1-a2be-cf8c65e6b5b3" pod="tigera-operator/tigera-operator-5bf8dfcb4-kdjkl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:50.575047 kubelet[2717]: I0813 01:03:50.575002 2717 kubelet.go:2306] "Pod admission denied" podUID="33260f57-4e63-4863-9c52-aad3997aeab9" pod="tigera-operator/tigera-operator-5bf8dfcb4-b25mv" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:50.676241 kubelet[2717]: I0813 01:03:50.676108 2717 kubelet.go:2306] "Pod admission denied" podUID="d6b29865-25af-4970-99f7-feffed53c532" pod="tigera-operator/tigera-operator-5bf8dfcb4-2j2r2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:50.765919 kubelet[2717]: E0813 01:03:50.765654 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\"\"" pod="calico-system/calico-node-7pdcs" podUID="e6731efa-3c96-4227-b83c-f4c3adff36c6" Aug 13 01:03:50.785318 kubelet[2717]: I0813 01:03:50.785280 2717 kubelet.go:2306] "Pod admission denied" podUID="6bd22c7c-95bc-40d1-b117-3473e74824f2" pod="tigera-operator/tigera-operator-5bf8dfcb4-fgvh5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:50.877880 kubelet[2717]: I0813 01:03:50.877833 2717 kubelet.go:2306] "Pod admission denied" podUID="54469d00-5974-46bb-baee-ae01a156383e" pod="tigera-operator/tigera-operator-5bf8dfcb4-rlsfm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:50.976178 kubelet[2717]: I0813 01:03:50.976131 2717 kubelet.go:2306] "Pod admission denied" podUID="81444000-a3d1-43de-acca-b327786ee336" pod="tigera-operator/tigera-operator-5bf8dfcb4-24l2g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:51.075827 kubelet[2717]: I0813 01:03:51.075772 2717 kubelet.go:2306] "Pod admission denied" podUID="a701fed9-75a3-44b3-93d3-d8bebd1f9c4e" pod="tigera-operator/tigera-operator-5bf8dfcb4-cqkh7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:51.176552 kubelet[2717]: I0813 01:03:51.176503 2717 kubelet.go:2306] "Pod admission denied" podUID="46b79c4e-13da-4458-9e00-b783ff46c8ed" pod="tigera-operator/tigera-operator-5bf8dfcb4-mtqvx" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:51.226594 kubelet[2717]: I0813 01:03:51.226240 2717 kubelet.go:2306] "Pod admission denied" podUID="67ef736a-c5f3-4fb5-aed6-44213e5ab7e5" pod="tigera-operator/tigera-operator-5bf8dfcb4-9khrj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:51.327212 kubelet[2717]: I0813 01:03:51.326280 2717 kubelet.go:2306] "Pod admission denied" podUID="a39f27a7-8ba9-4307-b433-a118725a6a6f" pod="tigera-operator/tigera-operator-5bf8dfcb4-95rvr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:51.430649 kubelet[2717]: I0813 01:03:51.430396 2717 kubelet.go:2306] "Pod admission denied" podUID="365bcc1c-128d-4503-b559-51f67f0eb7b8" pod="tigera-operator/tigera-operator-5bf8dfcb4-tv527" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:51.528219 kubelet[2717]: I0813 01:03:51.527357 2717 kubelet.go:2306] "Pod admission denied" podUID="624880c3-6d01-40bc-8323-54dccc7f4d53" pod="tigera-operator/tigera-operator-5bf8dfcb4-c5cbl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:51.725296 kubelet[2717]: I0813 01:03:51.725246 2717 kubelet.go:2306] "Pod admission denied" podUID="64b2d6f9-8504-4dcc-bcfe-17e007571ba2" pod="tigera-operator/tigera-operator-5bf8dfcb4-f9zkn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:51.825679 kubelet[2717]: I0813 01:03:51.825162 2717 kubelet.go:2306] "Pod admission denied" podUID="af6c75d3-f231-4684-8c28-1e073ffa8309" pod="tigera-operator/tigera-operator-5bf8dfcb4-5jkcd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:51.927828 kubelet[2717]: I0813 01:03:51.927776 2717 kubelet.go:2306] "Pod admission denied" podUID="53c2b846-4762-40d3-9d0b-924d7a68df03" pod="tigera-operator/tigera-operator-5bf8dfcb4-7nlmm" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:52.126007 kubelet[2717]: I0813 01:03:52.125866 2717 kubelet.go:2306] "Pod admission denied" podUID="ac51594b-ce7b-4f15-bb30-f14307668cad" pod="tigera-operator/tigera-operator-5bf8dfcb4-8djrs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:52.231302 kubelet[2717]: I0813 01:03:52.230369 2717 kubelet.go:2306] "Pod admission denied" podUID="42708527-11f8-441e-82ad-aa6a0ef7c83c" pod="tigera-operator/tigera-operator-5bf8dfcb4-47lzr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:52.274996 kubelet[2717]: I0813 01:03:52.274946 2717 kubelet.go:2306] "Pod admission denied" podUID="7cd01997-dbab-4266-be2f-04ff9e198310" pod="tigera-operator/tigera-operator-5bf8dfcb4-qrhkf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:52.374207 kubelet[2717]: I0813 01:03:52.374146 2717 kubelet.go:2306] "Pod admission denied" podUID="78b3ba56-7c02-4cc7-98c1-15ff88074f09" pod="tigera-operator/tigera-operator-5bf8dfcb4-nlmxn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:52.477183 kubelet[2717]: I0813 01:03:52.477127 2717 kubelet.go:2306] "Pod admission denied" podUID="2a007724-33fd-4cce-8432-da4165091c90" pod="tigera-operator/tigera-operator-5bf8dfcb4-dr8fm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:52.577921 kubelet[2717]: I0813 01:03:52.577875 2717 kubelet.go:2306] "Pod admission denied" podUID="38ee1652-988b-4f84-ba1a-49b7fe473d58" pod="tigera-operator/tigera-operator-5bf8dfcb4-7pnvz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:52.677118 kubelet[2717]: I0813 01:03:52.677065 2717 kubelet.go:2306] "Pod admission denied" podUID="2a3c7101-84eb-46f6-8554-3e0c00eebaa1" pod="tigera-operator/tigera-operator-5bf8dfcb4-dhwjc" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:52.777888 kubelet[2717]: I0813 01:03:52.777517 2717 kubelet.go:2306] "Pod admission denied" podUID="3840f422-9102-4c1c-a8a7-13219dd86caf" pod="tigera-operator/tigera-operator-5bf8dfcb4-p7thz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:52.986012 kubelet[2717]: I0813 01:03:52.985966 2717 kubelet.go:2306] "Pod admission denied" podUID="c8f1a245-5f3f-4d37-9ad1-b0623c8f82ac" pod="tigera-operator/tigera-operator-5bf8dfcb4-pjndc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:53.078091 kubelet[2717]: I0813 01:03:53.077968 2717 kubelet.go:2306] "Pod admission denied" podUID="7814fcc6-ad7d-4e8d-89a1-5bfcacea73ce" pod="tigera-operator/tigera-operator-5bf8dfcb4-z8skr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:53.175643 kubelet[2717]: I0813 01:03:53.175599 2717 kubelet.go:2306] "Pod admission denied" podUID="d9bd2275-6171-4e4d-a9df-44f7042277a4" pod="tigera-operator/tigera-operator-5bf8dfcb4-qsfcq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:53.277005 kubelet[2717]: I0813 01:03:53.276958 2717 kubelet.go:2306] "Pod admission denied" podUID="af561d1c-6390-445c-9b53-a57939214bfe" pod="tigera-operator/tigera-operator-5bf8dfcb4-scwwz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:53.323665 kubelet[2717]: I0813 01:03:53.323622 2717 kubelet.go:2306] "Pod admission denied" podUID="43c78734-e72b-43ab-b855-3b4ff9018fcc" pod="tigera-operator/tigera-operator-5bf8dfcb4-fncx7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:53.427548 kubelet[2717]: I0813 01:03:53.427434 2717 kubelet.go:2306] "Pod admission denied" podUID="bab43a30-71eb-499f-9cd3-b506e2f07649" pod="tigera-operator/tigera-operator-5bf8dfcb4-bbvv7" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:53.524499 kubelet[2717]: I0813 01:03:53.524451 2717 kubelet.go:2306] "Pod admission denied" podUID="1f801aed-10eb-41c9-b48d-1644aeac78d1" pod="tigera-operator/tigera-operator-5bf8dfcb4-fvb2t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:53.634213 kubelet[2717]: I0813 01:03:53.633847 2717 kubelet.go:2306] "Pod admission denied" podUID="8843d211-bde3-4aff-8ffd-a1e0cf0759d5" pod="tigera-operator/tigera-operator-5bf8dfcb4-b82zc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:53.725518 kubelet[2717]: I0813 01:03:53.725471 2717 kubelet.go:2306] "Pod admission denied" podUID="3b06c801-620d-457e-b6e4-35dec933929a" pod="tigera-operator/tigera-operator-5bf8dfcb4-74ftf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:53.765563 containerd[1575]: time="2025-08-13T01:03:53.765484233Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-564d8b8748-ps97n,Uid:a7e8405c-2c82-420c-bac7-a7277571f968,Namespace:calico-system,Attempt:0,}" Aug 13 01:03:53.787246 kubelet[2717]: I0813 01:03:53.786615 2717 kubelet.go:2306] "Pod admission denied" podUID="d7a03f0e-5740-40cc-ba79-345f69809bfa" pod="tigera-operator/tigera-operator-5bf8dfcb4-h6q2z" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:53.836817 containerd[1575]: time="2025-08-13T01:03:53.834652177Z" level=error msg="Failed to destroy network for sandbox \"294e258ac0aedea0c298a7f488122c6c59f78187379a4e3c486274165ff29204\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:03:53.836859 systemd[1]: run-netns-cni\x2d3da0aa7f\x2d01a3\x2d033c\x2dbbf1\x2d816b6c6007d8.mount: Deactivated successfully. 
Aug 13 01:03:53.838489 containerd[1575]: time="2025-08-13T01:03:53.838421041Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-564d8b8748-ps97n,Uid:a7e8405c-2c82-420c-bac7-a7277571f968,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"294e258ac0aedea0c298a7f488122c6c59f78187379a4e3c486274165ff29204\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:03:53.839644 kubelet[2717]: E0813 01:03:53.838626 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"294e258ac0aedea0c298a7f488122c6c59f78187379a4e3c486274165ff29204\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:03:53.839644 kubelet[2717]: E0813 01:03:53.838677 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"294e258ac0aedea0c298a7f488122c6c59f78187379a4e3c486274165ff29204\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" Aug 13 01:03:53.839644 kubelet[2717]: E0813 01:03:53.838697 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"294e258ac0aedea0c298a7f488122c6c59f78187379a4e3c486274165ff29204\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" Aug 13 01:03:53.839644 kubelet[2717]: E0813 01:03:53.838734 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-564d8b8748-ps97n_calico-system(a7e8405c-2c82-420c-bac7-a7277571f968)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-564d8b8748-ps97n_calico-system(a7e8405c-2c82-420c-bac7-a7277571f968)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"294e258ac0aedea0c298a7f488122c6c59f78187379a4e3c486274165ff29204\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" podUID="a7e8405c-2c82-420c-bac7-a7277571f968" Aug 13 01:03:53.883810 kubelet[2717]: I0813 01:03:53.883748 2717 kubelet.go:2306] "Pod admission denied" podUID="49b25c3a-18dd-4ea5-b9cd-6a1384a5889f" pod="tigera-operator/tigera-operator-5bf8dfcb4-7mwxj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:53.976260 kubelet[2717]: I0813 01:03:53.976137 2717 kubelet.go:2306] "Pod admission denied" podUID="22111206-b853-4b37-b20a-81bdcbc9dc0a" pod="tigera-operator/tigera-operator-5bf8dfcb4-wdnsd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:54.075800 kubelet[2717]: I0813 01:03:54.075759 2717 kubelet.go:2306] "Pod admission denied" podUID="9ef88695-8ae2-4973-b918-9997af23ccf8" pod="tigera-operator/tigera-operator-5bf8dfcb4-7tzcc" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:54.178012 kubelet[2717]: I0813 01:03:54.177962 2717 kubelet.go:2306] "Pod admission denied" podUID="1e6a82aa-2c15-49e1-ba89-2a52a014f0f5" pod="tigera-operator/tigera-operator-5bf8dfcb4-zx7gm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:54.283228 kubelet[2717]: I0813 01:03:54.282201 2717 kubelet.go:2306] "Pod admission denied" podUID="c39343fa-cbd0-41e9-b21e-03f49764c23b" pod="tigera-operator/tigera-operator-5bf8dfcb4-bq549" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:54.376137 kubelet[2717]: I0813 01:03:54.376092 2717 kubelet.go:2306] "Pod admission denied" podUID="3d6fafcf-ab16-4ff5-95f9-d9636586db81" pod="tigera-operator/tigera-operator-5bf8dfcb4-gg2nr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:54.424650 kubelet[2717]: I0813 01:03:54.424607 2717 kubelet.go:2306] "Pod admission denied" podUID="9b545510-bda6-492a-8064-817454719238" pod="tigera-operator/tigera-operator-5bf8dfcb4-f5jqd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:54.525973 kubelet[2717]: I0813 01:03:54.525940 2717 kubelet.go:2306] "Pod admission denied" podUID="21bbc6b3-4609-49f0-b345-1aaa52c44d97" pod="tigera-operator/tigera-operator-5bf8dfcb4-rqndz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:54.729172 kubelet[2717]: I0813 01:03:54.729122 2717 kubelet.go:2306] "Pod admission denied" podUID="bac515fd-1ee0-4537-9f05-c4b34a97156a" pod="tigera-operator/tigera-operator-5bf8dfcb4-tvnpm" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:54.769111 kubelet[2717]: E0813 01:03:54.768274 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:03:54.769111 kubelet[2717]: E0813 01:03:54.768727 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:03:54.769941 containerd[1575]: time="2025-08-13T01:03:54.769916768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dk5p7,Uid:6787cc4f-56e4-4094-978c-958d0d7a35ba,Namespace:kube-system,Attempt:0,}" Aug 13 01:03:54.770475 containerd[1575]: time="2025-08-13T01:03:54.770458299Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-84hvc,Uid:f2b74998-29fc-4213-8313-543c9154bc64,Namespace:calico-system,Attempt:0,}" Aug 13 01:03:54.834995 kubelet[2717]: I0813 01:03:54.834942 2717 kubelet.go:2306] "Pod admission denied" podUID="11b4d092-2300-4565-a258-0c8a2f493432" pod="tigera-operator/tigera-operator-5bf8dfcb4-mblbf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:54.859788 containerd[1575]: time="2025-08-13T01:03:54.859107591Z" level=error msg="Failed to destroy network for sandbox \"d382da7dbf623cb79916089a625096cb310c9f543b20e7603378a3a49d1ea0f5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:03:54.862065 systemd[1]: run-netns-cni\x2dc5ddacd9\x2df4b8\x2d537f\x2d3362\x2d605eb80715cd.mount: Deactivated successfully. 
Aug 13 01:03:54.865795 containerd[1575]: time="2025-08-13T01:03:54.865379720Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dk5p7,Uid:6787cc4f-56e4-4094-978c-958d0d7a35ba,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d382da7dbf623cb79916089a625096cb310c9f543b20e7603378a3a49d1ea0f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:03:54.867288 kubelet[2717]: E0813 01:03:54.865590 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d382da7dbf623cb79916089a625096cb310c9f543b20e7603378a3a49d1ea0f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:03:54.867288 kubelet[2717]: E0813 01:03:54.865648 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d382da7dbf623cb79916089a625096cb310c9f543b20e7603378a3a49d1ea0f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dk5p7" Aug 13 01:03:54.867288 kubelet[2717]: E0813 01:03:54.865668 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d382da7dbf623cb79916089a625096cb310c9f543b20e7603378a3a49d1ea0f5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7c65d6cfc9-dk5p7" Aug 13 01:03:54.867288 kubelet[2717]: E0813 01:03:54.865753 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-dk5p7_kube-system(6787cc4f-56e4-4094-978c-958d0d7a35ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-dk5p7_kube-system(6787cc4f-56e4-4094-978c-958d0d7a35ba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d382da7dbf623cb79916089a625096cb310c9f543b20e7603378a3a49d1ea0f5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-dk5p7" podUID="6787cc4f-56e4-4094-978c-958d0d7a35ba" Aug 13 01:03:54.888956 containerd[1575]: time="2025-08-13T01:03:54.888913883Z" level=error msg="Failed to destroy network for sandbox \"519b66950d834ec2311c2b98b86370608e14fa365531491d1ccb03b0bd5a0abc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:03:54.892534 systemd[1]: run-netns-cni\x2d171dae2e\x2d3b13\x2df4f7\x2da971\x2df4eba0312677.mount: Deactivated successfully. 
Aug 13 01:03:54.893804 containerd[1575]: time="2025-08-13T01:03:54.893763532Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-84hvc,Uid:f2b74998-29fc-4213-8313-543c9154bc64,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"519b66950d834ec2311c2b98b86370608e14fa365531491d1ccb03b0bd5a0abc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:03:54.894031 kubelet[2717]: E0813 01:03:54.893983 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"519b66950d834ec2311c2b98b86370608e14fa365531491d1ccb03b0bd5a0abc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:03:54.894078 kubelet[2717]: E0813 01:03:54.894037 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"519b66950d834ec2311c2b98b86370608e14fa365531491d1ccb03b0bd5a0abc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-84hvc" Aug 13 01:03:54.894078 kubelet[2717]: E0813 01:03:54.894055 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"519b66950d834ec2311c2b98b86370608e14fa365531491d1ccb03b0bd5a0abc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-84hvc" 
Aug 13 01:03:54.894133 kubelet[2717]: E0813 01:03:54.894087 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-84hvc_calico-system(f2b74998-29fc-4213-8313-543c9154bc64)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-84hvc_calico-system(f2b74998-29fc-4213-8313-543c9154bc64)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"519b66950d834ec2311c2b98b86370608e14fa365531491d1ccb03b0bd5a0abc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-84hvc" podUID="f2b74998-29fc-4213-8313-543c9154bc64" Aug 13 01:03:54.925145 kubelet[2717]: I0813 01:03:54.925080 2717 kubelet.go:2306] "Pod admission denied" podUID="9560a000-67fd-4f93-a649-213833a96d8f" pod="tigera-operator/tigera-operator-5bf8dfcb4-wvxch" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:55.026692 kubelet[2717]: I0813 01:03:55.026250 2717 kubelet.go:2306] "Pod admission denied" podUID="bb3678b7-5b6e-4272-b835-6053a9a2d4da" pod="tigera-operator/tigera-operator-5bf8dfcb4-928f2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:55.128151 kubelet[2717]: I0813 01:03:55.128100 2717 kubelet.go:2306] "Pod admission denied" podUID="5baec67b-4204-4814-adcb-1f8a7240bd14" pod="tigera-operator/tigera-operator-5bf8dfcb4-47zdg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:55.327314 kubelet[2717]: I0813 01:03:55.326634 2717 kubelet.go:2306] "Pod admission denied" podUID="766436a3-953f-48e3-a6e3-a3fbbeb3ced5" pod="tigera-operator/tigera-operator-5bf8dfcb4-2z8hv" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:55.426107 kubelet[2717]: I0813 01:03:55.426051 2717 kubelet.go:2306] "Pod admission denied" podUID="cf7477cb-2859-4ac7-80a3-d41b63671ecf" pod="tigera-operator/tigera-operator-5bf8dfcb4-w527p" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:55.526579 kubelet[2717]: I0813 01:03:55.526526 2717 kubelet.go:2306] "Pod admission denied" podUID="01dc07ff-80ea-44dd-840b-07f2495e0715" pod="tigera-operator/tigera-operator-5bf8dfcb4-844dq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:55.629023 kubelet[2717]: I0813 01:03:55.628492 2717 kubelet.go:2306] "Pod admission denied" podUID="56686faa-b040-4326-aa84-053b325b5743" pod="tigera-operator/tigera-operator-5bf8dfcb4-8jf7x" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:55.725910 kubelet[2717]: I0813 01:03:55.725855 2717 kubelet.go:2306] "Pod admission denied" podUID="b0a195dd-b506-4a4c-8f11-7436019dd3ff" pod="tigera-operator/tigera-operator-5bf8dfcb4-hrrwt" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:55.763932 kubelet[2717]: E0813 01:03:55.763902 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:03:55.764648 containerd[1575]: time="2025-08-13T01:03:55.764616195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tp469,Uid:c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af,Namespace:kube-system,Attempt:0,}" Aug 13 01:03:55.813426 containerd[1575]: time="2025-08-13T01:03:55.809702139Z" level=error msg="Failed to destroy network for sandbox \"196e983e9c6719e7f3fa18fc37aee53f1c0b34a092c003f0d3f880380984d434\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:03:55.813125 systemd[1]: run-netns-cni\x2d3c2ac03b\x2db5f6\x2d7fef\x2dafc9\x2d8945f9c483e9.mount: Deactivated successfully. 
Aug 13 01:03:55.815324 containerd[1575]: time="2025-08-13T01:03:55.815250519Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tp469,Uid:c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"196e983e9c6719e7f3fa18fc37aee53f1c0b34a092c003f0d3f880380984d434\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:03:55.815803 kubelet[2717]: E0813 01:03:55.815734 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"196e983e9c6719e7f3fa18fc37aee53f1c0b34a092c003f0d3f880380984d434\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:03:55.815928 kubelet[2717]: E0813 01:03:55.815889 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"196e983e9c6719e7f3fa18fc37aee53f1c0b34a092c003f0d3f880380984d434\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-tp469" Aug 13 01:03:55.815928 kubelet[2717]: E0813 01:03:55.815924 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"196e983e9c6719e7f3fa18fc37aee53f1c0b34a092c003f0d3f880380984d434\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7c65d6cfc9-tp469" Aug 13 01:03:55.816020 kubelet[2717]: E0813 01:03:55.815991 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-tp469_kube-system(c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-tp469_kube-system(c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"196e983e9c6719e7f3fa18fc37aee53f1c0b34a092c003f0d3f880380984d434\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-tp469" podUID="c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af" Aug 13 01:03:55.928092 kubelet[2717]: I0813 01:03:55.927656 2717 kubelet.go:2306] "Pod admission denied" podUID="3ea2ba15-f0ef-4159-9975-a3f85edaf51d" pod="tigera-operator/tigera-operator-5bf8dfcb4-dqv8b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:56.026254 kubelet[2717]: I0813 01:03:56.026212 2717 kubelet.go:2306] "Pod admission denied" podUID="1bbd7bd7-6956-40c1-94aa-43459b1d96b0" pod="tigera-operator/tigera-operator-5bf8dfcb4-jrc2v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:56.134667 kubelet[2717]: I0813 01:03:56.133948 2717 kubelet.go:2306] "Pod admission denied" podUID="bf6583d4-1d41-4da6-b075-f47b5c986418" pod="tigera-operator/tigera-operator-5bf8dfcb4-6mnzb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:56.224567 kubelet[2717]: I0813 01:03:56.224517 2717 kubelet.go:2306] "Pod admission denied" podUID="e71f39f5-2754-43fc-8292-371d7c54d1c0" pod="tigera-operator/tigera-operator-5bf8dfcb4-qvzvh" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:56.325299 kubelet[2717]: I0813 01:03:56.325266 2717 kubelet.go:2306] "Pod admission denied" podUID="bd47bb17-0de5-4cda-a5a2-f8d6199ddace" pod="tigera-operator/tigera-operator-5bf8dfcb4-ddfrc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:56.427717 kubelet[2717]: I0813 01:03:56.427247 2717 kubelet.go:2306] "Pod admission denied" podUID="44674cda-3b70-4e0c-803f-360ba86655e4" pod="tigera-operator/tigera-operator-5bf8dfcb4-slbsh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:56.527791 kubelet[2717]: I0813 01:03:56.527338 2717 kubelet.go:2306] "Pod admission denied" podUID="a87029ae-f3a7-4151-9b8a-49404952e15e" pod="tigera-operator/tigera-operator-5bf8dfcb4-2hqch" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:56.626088 kubelet[2717]: I0813 01:03:56.626043 2717 kubelet.go:2306] "Pod admission denied" podUID="83361482-f798-4c37-b09a-c8cc90ef085b" pod="tigera-operator/tigera-operator-5bf8dfcb4-qp9nn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:56.731369 kubelet[2717]: I0813 01:03:56.731320 2717 kubelet.go:2306] "Pod admission denied" podUID="b1ebf966-5a4d-4c9b-b816-af69f7d41f07" pod="tigera-operator/tigera-operator-5bf8dfcb4-2wjk7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:56.827247 kubelet[2717]: I0813 01:03:56.827121 2717 kubelet.go:2306] "Pod admission denied" podUID="1b86ac78-e46f-487a-b0c1-e46d59ac088e" pod="tigera-operator/tigera-operator-5bf8dfcb4-pwg2l" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:56.926687 kubelet[2717]: I0813 01:03:56.926650 2717 kubelet.go:2306] "Pod admission denied" podUID="5b91cfc2-fd2e-480e-b8df-f0d87ed1e98d" pod="tigera-operator/tigera-operator-5bf8dfcb4-lq549" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:57.132073 kubelet[2717]: I0813 01:03:57.131945 2717 kubelet.go:2306] "Pod admission denied" podUID="d87cf486-1457-44b4-a951-bc329fcb6b2c" pod="tigera-operator/tigera-operator-5bf8dfcb4-dh2tp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:57.227742 kubelet[2717]: I0813 01:03:57.227696 2717 kubelet.go:2306] "Pod admission denied" podUID="518e1dc4-15a7-4b29-990c-1f528a8ee2c5" pod="tigera-operator/tigera-operator-5bf8dfcb4-p77sr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:57.328275 kubelet[2717]: I0813 01:03:57.328226 2717 kubelet.go:2306] "Pod admission denied" podUID="132c1af0-fbc3-414b-9c09-b72bb3a4d425" pod="tigera-operator/tigera-operator-5bf8dfcb4-6kbdj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:57.526792 kubelet[2717]: I0813 01:03:57.526744 2717 kubelet.go:2306] "Pod admission denied" podUID="b607f7f2-6ecd-48b9-98a8-0e4ab4a4c468" pod="tigera-operator/tigera-operator-5bf8dfcb4-cspc2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:57.626448 kubelet[2717]: I0813 01:03:57.626393 2717 kubelet.go:2306] "Pod admission denied" podUID="7e756f1f-3598-4bb0-8787-35be423de773" pod="tigera-operator/tigera-operator-5bf8dfcb4-7xd5h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:57.728325 kubelet[2717]: I0813 01:03:57.728270 2717 kubelet.go:2306] "Pod admission denied" podUID="bb9b5e85-cd99-40cf-8fc6-a00b8d6d4d26" pod="tigera-operator/tigera-operator-5bf8dfcb4-dwnvl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:57.834220 kubelet[2717]: I0813 01:03:57.834080 2717 kubelet.go:2306] "Pod admission denied" podUID="9b65248e-9ab3-4111-8244-afecb8a121fa" pod="tigera-operator/tigera-operator-5bf8dfcb4-vqvnp" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:57.926210 kubelet[2717]: I0813 01:03:57.926156 2717 kubelet.go:2306] "Pod admission denied" podUID="432f635d-037b-49e2-82b3-916bfa25ac02" pod="tigera-operator/tigera-operator-5bf8dfcb4-ljlkj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:58.126473 kubelet[2717]: I0813 01:03:58.126240 2717 kubelet.go:2306] "Pod admission denied" podUID="7a26dd9d-9bea-4f41-b2ce-d0954968484b" pod="tigera-operator/tigera-operator-5bf8dfcb4-ksv9n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:58.225326 kubelet[2717]: I0813 01:03:58.225282 2717 kubelet.go:2306] "Pod admission denied" podUID="1c559cbd-6101-4d7c-9faa-db996e04b59d" pod="tigera-operator/tigera-operator-5bf8dfcb4-2f47p" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:58.331224 kubelet[2717]: I0813 01:03:58.330916 2717 kubelet.go:2306] "Pod admission denied" podUID="2e232ad9-8b5d-40e1-b293-f49578973245" pod="tigera-operator/tigera-operator-5bf8dfcb4-8n5fm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:58.427225 kubelet[2717]: I0813 01:03:58.426446 2717 kubelet.go:2306] "Pod admission denied" podUID="73f7b648-a1b8-4098-a08b-c6df68b32967" pod="tigera-operator/tigera-operator-5bf8dfcb4-24v9l" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:58.526978 kubelet[2717]: I0813 01:03:58.526931 2717 kubelet.go:2306] "Pod admission denied" podUID="6419e470-7879-4c83-90f9-f816680dd446" pod="tigera-operator/tigera-operator-5bf8dfcb4-v79qc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:58.726718 kubelet[2717]: I0813 01:03:58.726674 2717 kubelet.go:2306] "Pod admission denied" podUID="58334404-1030-476b-9716-07da08020796" pod="tigera-operator/tigera-operator-5bf8dfcb4-zq6l4" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:58.831370 kubelet[2717]: I0813 01:03:58.831066 2717 kubelet.go:2306] "Pod admission denied" podUID="0a534fae-fad7-4b68-b6b2-537fe8c6fbad" pod="tigera-operator/tigera-operator-5bf8dfcb4-qpq7j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:58.925824 kubelet[2717]: I0813 01:03:58.925776 2717 kubelet.go:2306] "Pod admission denied" podUID="16368c8f-6517-41a1-89a7-153767ca2076" pod="tigera-operator/tigera-operator-5bf8dfcb4-dl9rx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:59.025405 kubelet[2717]: I0813 01:03:59.025304 2717 kubelet.go:2306] "Pod admission denied" podUID="9cf6ed37-2272-49e3-89d9-0c32875c2739" pod="tigera-operator/tigera-operator-5bf8dfcb4-zj6g9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:59.130182 kubelet[2717]: I0813 01:03:59.130126 2717 kubelet.go:2306] "Pod admission denied" podUID="616e49e7-f2ed-48a5-8f7a-e8ec12ae6202" pod="tigera-operator/tigera-operator-5bf8dfcb4-928wx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:59.225510 kubelet[2717]: I0813 01:03:59.225453 2717 kubelet.go:2306] "Pod admission denied" podUID="2689709a-e92a-48d6-a3ef-0686fbce5ee2" pod="tigera-operator/tigera-operator-5bf8dfcb4-f2nlk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:59.325917 kubelet[2717]: I0813 01:03:59.325797 2717 kubelet.go:2306] "Pod admission denied" podUID="cbcf63c8-0e08-450d-a1e3-49c70c19cab4" pod="tigera-operator/tigera-operator-5bf8dfcb4-xgt8k" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:59.526112 kubelet[2717]: I0813 01:03:59.526063 2717 kubelet.go:2306] "Pod admission denied" podUID="0344df7c-e243-4444-9ba0-51565490df03" pod="tigera-operator/tigera-operator-5bf8dfcb4-bv8s8" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:03:59.627665 kubelet[2717]: I0813 01:03:59.626858 2717 kubelet.go:2306] "Pod admission denied" podUID="bf5d4e53-2809-4c4e-acbb-3f441b66dcf7" pod="tigera-operator/tigera-operator-5bf8dfcb4-z22kz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:59.674594 kubelet[2717]: I0813 01:03:59.674545 2717 kubelet.go:2306] "Pod admission denied" podUID="f1b8874f-88dd-4c89-b0b2-30469a25f5b2" pod="tigera-operator/tigera-operator-5bf8dfcb4-6xw8s" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:59.775294 kubelet[2717]: I0813 01:03:59.775240 2717 kubelet.go:2306] "Pod admission denied" podUID="123f17c0-14f9-4a59-a740-b092ab9c4b6d" pod="tigera-operator/tigera-operator-5bf8dfcb4-4t788" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:59.882955 kubelet[2717]: I0813 01:03:59.882383 2717 kubelet.go:2306] "Pod admission denied" podUID="fbb90c56-38b8-4869-b1e4-fc92663edb28" pod="tigera-operator/tigera-operator-5bf8dfcb4-975zt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:03:59.926243 kubelet[2717]: I0813 01:03:59.926204 2717 kubelet.go:2306] "Pod admission denied" podUID="346f8655-4aef-4b5f-9287-b6af747a7ba6" pod="tigera-operator/tigera-operator-5bf8dfcb4-d6tm8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:00.024802 kubelet[2717]: I0813 01:04:00.024763 2717 kubelet.go:2306] "Pod admission denied" podUID="a5aaea70-a629-425f-a544-416440066026" pod="tigera-operator/tigera-operator-5bf8dfcb4-46v88" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:00.229209 kubelet[2717]: I0813 01:04:00.228312 2717 kubelet.go:2306] "Pod admission denied" podUID="be9b500a-2a82-43cf-bf77-a55e81fd2b6d" pod="tigera-operator/tigera-operator-5bf8dfcb4-8t6wl" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:00.324574 kubelet[2717]: I0813 01:04:00.324526 2717 kubelet.go:2306] "Pod admission denied" podUID="15137ec5-7b21-420d-84a3-d0c7fdbcb71e" pod="tigera-operator/tigera-operator-5bf8dfcb4-8pxz5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:00.384125 kubelet[2717]: I0813 01:04:00.384085 2717 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:04:00.384125 kubelet[2717]: I0813 01:04:00.384110 2717 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:04:00.386151 kubelet[2717]: I0813 01:04:00.386136 2717 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:04:00.394611 kubelet[2717]: I0813 01:04:00.394585 2717 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:04:00.394782 kubelet[2717]: I0813 01:04:00.394680 2717 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7c65d6cfc9-tp469","kube-system/coredns-7c65d6cfc9-dk5p7","calico-system/calico-kube-controllers-564d8b8748-ps97n","calico-system/calico-node-7pdcs","calico-system/csi-node-driver-84hvc","calico-system/calico-typha-5fdd567c68-zgxjx","kube-system/kube-controller-manager-172-233-209-21","kube-system/kube-proxy-ff6qp","kube-system/kube-apiserver-172-233-209-21","kube-system/kube-scheduler-172-233-209-21"] Aug 13 01:04:00.394782 kubelet[2717]: E0813 01:04:00.394708 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-tp469" Aug 13 01:04:00.394782 kubelet[2717]: E0813 01:04:00.394718 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dk5p7" Aug 13 01:04:00.394782 kubelet[2717]: E0813 01:04:00.394726 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" Aug 13 
01:04:00.394782 kubelet[2717]: E0813 01:04:00.394731 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-7pdcs" Aug 13 01:04:00.394782 kubelet[2717]: E0813 01:04:00.394737 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-84hvc" Aug 13 01:04:00.394782 kubelet[2717]: E0813 01:04:00.394748 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-5fdd567c68-zgxjx" Aug 13 01:04:00.394782 kubelet[2717]: E0813 01:04:00.394757 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-209-21" Aug 13 01:04:00.394782 kubelet[2717]: E0813 01:04:00.394764 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-ff6qp" Aug 13 01:04:00.394782 kubelet[2717]: E0813 01:04:00.394772 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-209-21" Aug 13 01:04:00.394782 kubelet[2717]: E0813 01:04:00.394780 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-209-21" Aug 13 01:04:00.394782 kubelet[2717]: I0813 01:04:00.394789 2717 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:04:00.425922 kubelet[2717]: I0813 01:04:00.425874 2717 kubelet.go:2306] "Pod admission denied" podUID="30ff7d96-5582-4e8d-927b-7ac7edef69dc" pod="tigera-operator/tigera-operator-5bf8dfcb4-dvp5j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:00.529785 kubelet[2717]: I0813 01:04:00.529668 2717 kubelet.go:2306] "Pod admission denied" podUID="5a601dfe-cc3c-4070-98e6-de71304a2fac" pod="tigera-operator/tigera-operator-5bf8dfcb4-mz57z" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:00.576011 kubelet[2717]: I0813 01:04:00.575974 2717 kubelet.go:2306] "Pod admission denied" podUID="097366db-27ea-4b87-a48e-b0d061e1d4ce" pod="tigera-operator/tigera-operator-5bf8dfcb4-bptmw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:00.677498 kubelet[2717]: I0813 01:04:00.677452 2717 kubelet.go:2306] "Pod admission denied" podUID="a29a3cf4-3885-4ac7-8094-9a0bc95b02ba" pod="tigera-operator/tigera-operator-5bf8dfcb4-gtx4w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:00.777528 kubelet[2717]: I0813 01:04:00.777477 2717 kubelet.go:2306] "Pod admission denied" podUID="584ea0cd-b5e8-4a7f-a96f-618ec78ceac7" pod="tigera-operator/tigera-operator-5bf8dfcb4-z4d5d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:00.875824 kubelet[2717]: I0813 01:04:00.875720 2717 kubelet.go:2306] "Pod admission denied" podUID="c4661c38-0a15-4df3-b0cc-4dcfe07cc1c3" pod="tigera-operator/tigera-operator-5bf8dfcb4-2v6kn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:00.975663 kubelet[2717]: I0813 01:04:00.975611 2717 kubelet.go:2306] "Pod admission denied" podUID="b600c1bd-9aea-467c-8464-fc8633bc8b88" pod="tigera-operator/tigera-operator-5bf8dfcb4-jl4nt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:01.079812 kubelet[2717]: I0813 01:04:01.079767 2717 kubelet.go:2306] "Pod admission denied" podUID="94312b49-0923-4fa7-af4c-569c9322a981" pod="tigera-operator/tigera-operator-5bf8dfcb4-czjhv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:01.277110 kubelet[2717]: I0813 01:04:01.277056 2717 kubelet.go:2306] "Pod admission denied" podUID="47c7dafe-cd6c-4388-990e-e1b5a98ea449" pod="tigera-operator/tigera-operator-5bf8dfcb4-nvk94" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:01.380357 kubelet[2717]: I0813 01:04:01.380316 2717 kubelet.go:2306] "Pod admission denied" podUID="b2e17d6b-f1b8-4b10-a193-492d2b412023" pod="tigera-operator/tigera-operator-5bf8dfcb4-248bg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:01.427604 kubelet[2717]: I0813 01:04:01.427546 2717 kubelet.go:2306] "Pod admission denied" podUID="43df7bbc-417d-4424-a4dc-8ed149037e28" pod="tigera-operator/tigera-operator-5bf8dfcb4-tfj48" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:01.528225 kubelet[2717]: I0813 01:04:01.528075 2717 kubelet.go:2306] "Pod admission denied" podUID="50888d5f-249b-4abc-aa73-e0c8aff6aff8" pod="tigera-operator/tigera-operator-5bf8dfcb4-qbjfr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:01.626816 kubelet[2717]: I0813 01:04:01.626764 2717 kubelet.go:2306] "Pod admission denied" podUID="6ea1f3ee-9d8f-4679-9dbb-660e96f3389c" pod="tigera-operator/tigera-operator-5bf8dfcb4-rn99r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:01.685096 kubelet[2717]: I0813 01:04:01.683528 2717 kubelet.go:2306] "Pod admission denied" podUID="93be7eb6-d643-492d-83db-a7d6a3376091" pod="tigera-operator/tigera-operator-5bf8dfcb4-dlgwj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:01.778843 kubelet[2717]: I0813 01:04:01.778722 2717 kubelet.go:2306] "Pod admission denied" podUID="53984008-57e8-41aa-ada9-44d99ed8dece" pod="tigera-operator/tigera-operator-5bf8dfcb4-g5l4c" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:01.878660 kubelet[2717]: I0813 01:04:01.878612 2717 kubelet.go:2306] "Pod admission denied" podUID="6b9ea9aa-a09a-466f-8c2f-4ae1b41ce8b4" pod="tigera-operator/tigera-operator-5bf8dfcb4-htr5z" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:01.978518 kubelet[2717]: I0813 01:04:01.978280 2717 kubelet.go:2306] "Pod admission denied" podUID="38f3bcac-c893-42f7-9803-bc954e186b78" pod="tigera-operator/tigera-operator-5bf8dfcb4-jpghd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:02.081561 kubelet[2717]: I0813 01:04:02.081439 2717 kubelet.go:2306] "Pod admission denied" podUID="a89349cf-6099-49e6-a2a6-4dfb23bae7c4" pod="tigera-operator/tigera-operator-5bf8dfcb4-292zn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:02.182591 kubelet[2717]: I0813 01:04:02.182539 2717 kubelet.go:2306] "Pod admission denied" podUID="f7444e94-1b57-4651-b5f5-cef4ee2d4c80" pod="tigera-operator/tigera-operator-5bf8dfcb4-kfmlk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:02.281080 kubelet[2717]: I0813 01:04:02.281025 2717 kubelet.go:2306] "Pod admission denied" podUID="ed773a3c-4d0b-4ff9-b52a-7711b9c3def9" pod="tigera-operator/tigera-operator-5bf8dfcb4-bzs7k" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:02.337806 kubelet[2717]: I0813 01:04:02.337097 2717 kubelet.go:2306] "Pod admission denied" podUID="9377b7c6-c4db-45b4-b30a-61af8a18aecc" pod="tigera-operator/tigera-operator-5bf8dfcb4-jvjr6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:02.430023 kubelet[2717]: I0813 01:04:02.429972 2717 kubelet.go:2306] "Pod admission denied" podUID="e24cc963-cdfc-492b-bfbe-772928d2ec31" pod="tigera-operator/tigera-operator-5bf8dfcb4-5jmj4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:02.530704 kubelet[2717]: I0813 01:04:02.530652 2717 kubelet.go:2306] "Pod admission denied" podUID="70715964-0c04-4bdf-9c1d-b421d6a6d194" pod="tigera-operator/tigera-operator-5bf8dfcb4-hc8n2" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:02.583240 kubelet[2717]: I0813 01:04:02.582927 2717 kubelet.go:2306] "Pod admission denied" podUID="ab6f67a5-8447-4617-9267-2f34fe00efd6" pod="tigera-operator/tigera-operator-5bf8dfcb4-8dt4z" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:02.678920 kubelet[2717]: I0813 01:04:02.678684 2717 kubelet.go:2306] "Pod admission denied" podUID="07f094cb-3b51-4ca2-ae18-3eab66c6a4c1" pod="tigera-operator/tigera-operator-5bf8dfcb4-xtlln" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:02.766720 kubelet[2717]: E0813 01:04:02.766557 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\"\"" pod="calico-system/calico-node-7pdcs" podUID="e6731efa-3c96-4227-b83c-f4c3adff36c6" Aug 13 01:04:02.780316 kubelet[2717]: I0813 01:04:02.780281 2717 kubelet.go:2306] "Pod admission denied" podUID="6d32b2ee-7f50-414c-9bdf-4884094efe17" pod="tigera-operator/tigera-operator-5bf8dfcb4-8sr77" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:02.876347 kubelet[2717]: I0813 01:04:02.876287 2717 kubelet.go:2306] "Pod admission denied" podUID="2487488a-b8ad-45da-b9a1-d0966604d4af" pod="tigera-operator/tigera-operator-5bf8dfcb4-wrwqs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:02.980808 kubelet[2717]: I0813 01:04:02.980763 2717 kubelet.go:2306] "Pod admission denied" podUID="c320eeef-ea10-41a5-ba9d-841de8d35e57" pod="tigera-operator/tigera-operator-5bf8dfcb4-2dql6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:03.079896 kubelet[2717]: I0813 01:04:03.079863 2717 kubelet.go:2306] "Pod admission denied" podUID="4b6fc852-e3e4-4af7-8314-2a526f632cb4" pod="tigera-operator/tigera-operator-5bf8dfcb4-vxls5" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:03.179688 kubelet[2717]: I0813 01:04:03.179646 2717 kubelet.go:2306] "Pod admission denied" podUID="02d676c0-adea-4ac3-ab89-0892f9f75975" pod="tigera-operator/tigera-operator-5bf8dfcb4-l9zhd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:03.276789 kubelet[2717]: I0813 01:04:03.276687 2717 kubelet.go:2306] "Pod admission denied" podUID="caf2f2d8-470e-4538-bed7-1fb5da4c25a0" pod="tigera-operator/tigera-operator-5bf8dfcb4-s8kpw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:03.476739 kubelet[2717]: I0813 01:04:03.476682 2717 kubelet.go:2306] "Pod admission denied" podUID="7f20ab2e-1c12-4d07-aa01-6bfc1e88384d" pod="tigera-operator/tigera-operator-5bf8dfcb4-qc9zn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:03.577652 kubelet[2717]: I0813 01:04:03.577145 2717 kubelet.go:2306] "Pod admission denied" podUID="767b4a00-3778-4b16-b1cb-a0f5a67a691c" pod="tigera-operator/tigera-operator-5bf8dfcb4-lhhgw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:03.628211 kubelet[2717]: I0813 01:04:03.628155 2717 kubelet.go:2306] "Pod admission denied" podUID="ec36df37-2633-43a0-b05e-e9a6a6f74c7c" pod="tigera-operator/tigera-operator-5bf8dfcb4-l6lbf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:03.728906 kubelet[2717]: I0813 01:04:03.728858 2717 kubelet.go:2306] "Pod admission denied" podUID="9b75023f-940c-4e73-80cf-7fbb16a83dd3" pod="tigera-operator/tigera-operator-5bf8dfcb4-zx8cr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:03.829404 kubelet[2717]: I0813 01:04:03.829280 2717 kubelet.go:2306] "Pod admission denied" podUID="b7400126-4353-46ba-92e4-9ccfb017e226" pod="tigera-operator/tigera-operator-5bf8dfcb4-hpd9c" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:03.878683 kubelet[2717]: I0813 01:04:03.878634 2717 kubelet.go:2306] "Pod admission denied" podUID="8a60a145-0688-4f35-a864-1f1f42721e59" pod="tigera-operator/tigera-operator-5bf8dfcb4-6g5cb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:03.977882 kubelet[2717]: I0813 01:04:03.977830 2717 kubelet.go:2306] "Pod admission denied" podUID="299fd254-716c-4017-9082-e6113060db18" pod="tigera-operator/tigera-operator-5bf8dfcb4-lcjzk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:04.084804 kubelet[2717]: I0813 01:04:04.083399 2717 kubelet.go:2306] "Pod admission denied" podUID="2430bacd-d844-4a45-8eba-cd39ecf9f657" pod="tigera-operator/tigera-operator-5bf8dfcb4-xzc2k" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:04.130564 kubelet[2717]: I0813 01:04:04.130495 2717 kubelet.go:2306] "Pod admission denied" podUID="52ad56a0-8c66-4d53-bf11-bd7777daab76" pod="tigera-operator/tigera-operator-5bf8dfcb4-6xsdm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:04.227149 kubelet[2717]: I0813 01:04:04.227092 2717 kubelet.go:2306] "Pod admission denied" podUID="57ad1b46-6db4-4479-ba01-090648a4a492" pod="tigera-operator/tigera-operator-5bf8dfcb4-bkxxs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:04.328657 kubelet[2717]: I0813 01:04:04.328600 2717 kubelet.go:2306] "Pod admission denied" podUID="31eca15c-c163-482d-920d-2c133d9b252e" pod="tigera-operator/tigera-operator-5bf8dfcb4-6zqzw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:04.426911 kubelet[2717]: I0813 01:04:04.426795 2717 kubelet.go:2306] "Pod admission denied" podUID="3c610b0d-9dcc-4758-95be-1203042faaa6" pod="tigera-operator/tigera-operator-5bf8dfcb4-kkhbc" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:04.527056 kubelet[2717]: I0813 01:04:04.526985 2717 kubelet.go:2306] "Pod admission denied" podUID="c8d4849c-a1c8-4190-838b-de70ec165c97" pod="tigera-operator/tigera-operator-5bf8dfcb4-x7d2s" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:04.630714 kubelet[2717]: I0813 01:04:04.630645 2717 kubelet.go:2306] "Pod admission denied" podUID="305fc7f6-8e9f-4659-8f1c-038ae81bd006" pod="tigera-operator/tigera-operator-5bf8dfcb4-sjcqr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:04.829434 kubelet[2717]: I0813 01:04:04.829377 2717 kubelet.go:2306] "Pod admission denied" podUID="20b1718f-5b88-4748-be09-02ecfcad9369" pod="tigera-operator/tigera-operator-5bf8dfcb4-hpxn7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:04.935318 kubelet[2717]: I0813 01:04:04.934722 2717 kubelet.go:2306] "Pod admission denied" podUID="fdd18e35-9a3f-4764-85b6-01338f980730" pod="tigera-operator/tigera-operator-5bf8dfcb4-24wrz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:05.027934 kubelet[2717]: I0813 01:04:05.027877 2717 kubelet.go:2306] "Pod admission denied" podUID="36522cc6-b416-44b3-8567-d0a9ac544687" pod="tigera-operator/tigera-operator-5bf8dfcb4-72nq6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:05.127721 kubelet[2717]: I0813 01:04:05.127077 2717 kubelet.go:2306] "Pod admission denied" podUID="efd2e36b-fc80-4853-98d1-fdd433d0c1ab" pod="tigera-operator/tigera-operator-5bf8dfcb4-pnm6q" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:05.231227 kubelet[2717]: I0813 01:04:05.230850 2717 kubelet.go:2306] "Pod admission denied" podUID="cafffbc5-c5ef-44fd-a276-b3ee4235f52c" pod="tigera-operator/tigera-operator-5bf8dfcb4-284gx" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:05.326860 kubelet[2717]: I0813 01:04:05.326822 2717 kubelet.go:2306] "Pod admission denied" podUID="bbf6fbf7-9016-4052-ab48-df11ceba360c" pod="tigera-operator/tigera-operator-5bf8dfcb4-mwwl2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:05.425776 kubelet[2717]: I0813 01:04:05.425672 2717 kubelet.go:2306] "Pod admission denied" podUID="0797b988-1af7-4367-822b-0ccc83c0620b" pod="tigera-operator/tigera-operator-5bf8dfcb4-z5qlk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:05.530867 kubelet[2717]: I0813 01:04:05.530668 2717 kubelet.go:2306] "Pod admission denied" podUID="62b9aee0-f038-4a21-96b2-8e308dcdba1a" pod="tigera-operator/tigera-operator-5bf8dfcb4-2kvg9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:05.627988 kubelet[2717]: I0813 01:04:05.627948 2717 kubelet.go:2306] "Pod admission denied" podUID="a34a0162-0ceb-43bb-9936-913b2a629170" pod="tigera-operator/tigera-operator-5bf8dfcb4-9b664" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:05.726335 kubelet[2717]: I0813 01:04:05.726300 2717 kubelet.go:2306] "Pod admission denied" podUID="34b89098-43cf-4539-a855-b8bd155316a0" pod="tigera-operator/tigera-operator-5bf8dfcb4-xk7m7" reason="Evicted" message="The node had condition: [DiskPressure]. 
"
Aug 13 01:04:05.764753 containerd[1575]: time="2025-08-13T01:04:05.764489927Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-564d8b8748-ps97n,Uid:a7e8405c-2c82-420c-bac7-a7277571f968,Namespace:calico-system,Attempt:0,}"
Aug 13 01:04:05.815762 containerd[1575]: time="2025-08-13T01:04:05.815637794Z" level=error msg="Failed to destroy network for sandbox \"f31884b0e894932e2c567d96d7bbb31f938e94a0e62f3e9595b854cbe5431ac1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:04:05.817606 containerd[1575]: time="2025-08-13T01:04:05.817252741Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-564d8b8748-ps97n,Uid:a7e8405c-2c82-420c-bac7-a7277571f968,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f31884b0e894932e2c567d96d7bbb31f938e94a0e62f3e9595b854cbe5431ac1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:04:05.818374 kubelet[2717]: E0813 01:04:05.817906 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f31884b0e894932e2c567d96d7bbb31f938e94a0e62f3e9595b854cbe5431ac1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:04:05.818374 kubelet[2717]: E0813 01:04:05.818067 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f31884b0e894932e2c567d96d7bbb31f938e94a0e62f3e9595b854cbe5431ac1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n"
Aug 13 01:04:05.818374 kubelet[2717]: E0813 01:04:05.818088 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f31884b0e894932e2c567d96d7bbb31f938e94a0e62f3e9595b854cbe5431ac1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n"
Aug 13 01:04:05.818374 kubelet[2717]: E0813 01:04:05.818131 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-564d8b8748-ps97n_calico-system(a7e8405c-2c82-420c-bac7-a7277571f968)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-564d8b8748-ps97n_calico-system(a7e8405c-2c82-420c-bac7-a7277571f968)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f31884b0e894932e2c567d96d7bbb31f938e94a0e62f3e9595b854cbe5431ac1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" podUID="a7e8405c-2c82-420c-bac7-a7277571f968"
Aug 13 01:04:05.819681 systemd[1]: run-netns-cni\x2dadd6708a\x2dd5fc\x2da0f8\x2da0fe\x2d90e680762979.mount: Deactivated successfully.
Aug 13 01:04:05.837241 kubelet[2717]: I0813 01:04:05.836522 2717 kubelet.go:2306] "Pod admission denied" podUID="f73df90a-e0fc-46d0-9f3d-5be8982c29ad" pod="tigera-operator/tigera-operator-5bf8dfcb4-rqrf4" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:05.931457 kubelet[2717]: I0813 01:04:05.931400 2717 kubelet.go:2306] "Pod admission denied" podUID="81e9870f-7f18-434f-b420-81124afb810d" pod="tigera-operator/tigera-operator-5bf8dfcb4-hgvdh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:06.027658 kubelet[2717]: I0813 01:04:06.027546 2717 kubelet.go:2306] "Pod admission denied" podUID="66678c84-7d2c-4108-9fad-b75d4fdf7ece" pod="tigera-operator/tigera-operator-5bf8dfcb4-5sq4j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:06.232218 kubelet[2717]: I0813 01:04:06.231765 2717 kubelet.go:2306] "Pod admission denied" podUID="e5dce1c2-b7ce-42a9-9446-35201ac629a1" pod="tigera-operator/tigera-operator-5bf8dfcb4-wsg8p" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:06.328284 kubelet[2717]: I0813 01:04:06.327840 2717 kubelet.go:2306] "Pod admission denied" podUID="f1f3d1d0-5885-4e84-94cb-69cf26424077" pod="tigera-operator/tigera-operator-5bf8dfcb4-2ntds" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:06.429184 kubelet[2717]: I0813 01:04:06.429129 2717 kubelet.go:2306] "Pod admission denied" podUID="9fb31549-3d19-4c92-bdd3-8af08619f96b" pod="tigera-operator/tigera-operator-5bf8dfcb4-tm9p7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:06.534837 kubelet[2717]: I0813 01:04:06.533712 2717 kubelet.go:2306] "Pod admission denied" podUID="240b9ba7-4cfe-4a95-bda7-3cf2d6f6dbc0" pod="tigera-operator/tigera-operator-5bf8dfcb4-fnc9f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:06.627988 kubelet[2717]: I0813 01:04:06.627887 2717 kubelet.go:2306] "Pod admission denied" podUID="5ef2af4f-82e6-4d1d-90fa-1c69b928bb24" pod="tigera-operator/tigera-operator-5bf8dfcb4-94vfh" reason="Evicted" message="The node had condition: [DiskPressure]. 
"
Aug 13 01:04:06.743260 kubelet[2717]: I0813 01:04:06.743228 2717 kubelet.go:2306] "Pod admission denied" podUID="c5299011-0135-40f6-ae0b-4b6ca9dd3552" pod="tigera-operator/tigera-operator-5bf8dfcb4-dfzlg" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:06.764573 kubelet[2717]: E0813 01:04:06.764549 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Aug 13 01:04:06.765538 containerd[1575]: time="2025-08-13T01:04:06.765507890Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dk5p7,Uid:6787cc4f-56e4-4094-978c-958d0d7a35ba,Namespace:kube-system,Attempt:0,}"
Aug 13 01:04:06.832845 containerd[1575]: time="2025-08-13T01:04:06.832770767Z" level=error msg="Failed to destroy network for sandbox \"86fc4b8e624efebcbd8347d55c1db06139ad3686fc69e90f09fc95a517b17b50\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:04:06.836072 containerd[1575]: time="2025-08-13T01:04:06.835994363Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dk5p7,Uid:6787cc4f-56e4-4094-978c-958d0d7a35ba,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"86fc4b8e624efebcbd8347d55c1db06139ad3686fc69e90f09fc95a517b17b50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:04:06.836884 systemd[1]: run-netns-cni\x2dc9af96f6\x2dcd30\x2d1f80\x2d38bf\x2d5cd50d497932.mount: Deactivated successfully.
Aug 13 01:04:06.844091 kubelet[2717]: E0813 01:04:06.844053 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86fc4b8e624efebcbd8347d55c1db06139ad3686fc69e90f09fc95a517b17b50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:04:06.845395 kubelet[2717]: E0813 01:04:06.844107 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86fc4b8e624efebcbd8347d55c1db06139ad3686fc69e90f09fc95a517b17b50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dk5p7"
Aug 13 01:04:06.845395 kubelet[2717]: E0813 01:04:06.844125 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86fc4b8e624efebcbd8347d55c1db06139ad3686fc69e90f09fc95a517b17b50\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dk5p7"
Aug 13 01:04:06.845395 kubelet[2717]: E0813 01:04:06.844162 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-dk5p7_kube-system(6787cc4f-56e4-4094-978c-958d0d7a35ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-dk5p7_kube-system(6787cc4f-56e4-4094-978c-958d0d7a35ba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"86fc4b8e624efebcbd8347d55c1db06139ad3686fc69e90f09fc95a517b17b50\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-dk5p7" podUID="6787cc4f-56e4-4094-978c-958d0d7a35ba"
Aug 13 01:04:06.845507 kubelet[2717]: I0813 01:04:06.845483 2717 kubelet.go:2306] "Pod admission denied" podUID="d0e01d7f-7a35-481f-b149-f1195bfd7f35" pod="tigera-operator/tigera-operator-5bf8dfcb4-bxmf2" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:07.027013 kubelet[2717]: I0813 01:04:07.026965 2717 kubelet.go:2306] "Pod admission denied" podUID="ef12d772-d004-4ebf-ae51-6bf2c856e670" pod="tigera-operator/tigera-operator-5bf8dfcb4-lmzv4" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:07.130919 kubelet[2717]: I0813 01:04:07.130609 2717 kubelet.go:2306] "Pod admission denied" podUID="a295403d-47a3-4949-85b9-742e5c23e54b" pod="tigera-operator/tigera-operator-5bf8dfcb4-h5msd" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:07.230650 kubelet[2717]: I0813 01:04:07.230616 2717 kubelet.go:2306] "Pod admission denied" podUID="1893bf8c-acd9-410a-9045-0083b70c62e8" pod="tigera-operator/tigera-operator-5bf8dfcb4-86kcb" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:07.330324 kubelet[2717]: I0813 01:04:07.330220 2717 kubelet.go:2306] "Pod admission denied" podUID="7b765963-ca35-4655-aa91-ec8d6587bc81" pod="tigera-operator/tigera-operator-5bf8dfcb4-zbwrm" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:07.377052 kubelet[2717]: I0813 01:04:07.377011 2717 kubelet.go:2306] "Pod admission denied" podUID="f1941a12-e879-4081-9433-5fcd513215fb" pod="tigera-operator/tigera-operator-5bf8dfcb4-qpjzr" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:07.483591 kubelet[2717]: I0813 01:04:07.482549 2717 kubelet.go:2306] "Pod admission denied" podUID="f5d70c42-3f33-4c63-8fee-8ffdca0a063c" pod="tigera-operator/tigera-operator-5bf8dfcb4-dqdb4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:07.579980 kubelet[2717]: I0813 01:04:07.579924 2717 kubelet.go:2306] "Pod admission denied" podUID="9a281657-b3e4-4fe0-8377-48b53dd1c446" pod="tigera-operator/tigera-operator-5bf8dfcb4-sxx9v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:07.679997 kubelet[2717]: I0813 01:04:07.679709 2717 kubelet.go:2306] "Pod admission denied" podUID="66cf4154-cad2-47e1-bd59-6d2f16f2a5be" pod="tigera-operator/tigera-operator-5bf8dfcb4-6ks7h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:07.778763 kubelet[2717]: I0813 01:04:07.778714 2717 kubelet.go:2306] "Pod admission denied" podUID="fed43f71-033e-4cbb-a921-242986c92b83" pod="tigera-operator/tigera-operator-5bf8dfcb4-5kd8d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:07.888013 kubelet[2717]: I0813 01:04:07.887944 2717 kubelet.go:2306] "Pod admission denied" podUID="311747d7-c422-4f37-9fd6-817a81db38fe" pod="tigera-operator/tigera-operator-5bf8dfcb4-gc4zn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:07.979367 kubelet[2717]: I0813 01:04:07.979324 2717 kubelet.go:2306] "Pod admission denied" podUID="fda08aec-fe39-46bf-9170-8c35a4d35b54" pod="tigera-operator/tigera-operator-5bf8dfcb4-xz8tq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:08.087448 kubelet[2717]: I0813 01:04:08.087400 2717 kubelet.go:2306] "Pod admission denied" podUID="9c085fa5-f6a1-4d5f-ab6f-48998d7ae2f0" pod="tigera-operator/tigera-operator-5bf8dfcb4-qtw7s" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:08.185486 kubelet[2717]: I0813 01:04:08.185428 2717 kubelet.go:2306] "Pod admission denied" podUID="fbf0432d-0892-424a-b50b-b9cd891f4ff3" pod="tigera-operator/tigera-operator-5bf8dfcb4-nnzkw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:08.280320 kubelet[2717]: I0813 01:04:08.279936 2717 kubelet.go:2306] "Pod admission denied" podUID="c3bfd815-2e90-49ff-b03b-494adc5f61cb" pod="tigera-operator/tigera-operator-5bf8dfcb4-bsmwh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:08.380736 kubelet[2717]: I0813 01:04:08.380686 2717 kubelet.go:2306] "Pod admission denied" podUID="394a4cd1-2f4f-4df8-8e17-48e93b5729b2" pod="tigera-operator/tigera-operator-5bf8dfcb4-sv4rc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:08.481021 kubelet[2717]: I0813 01:04:08.480947 2717 kubelet.go:2306] "Pod admission denied" podUID="4be67f9b-890b-447c-b995-53931efb3b2a" pod="tigera-operator/tigera-operator-5bf8dfcb4-scg4t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:08.579746 kubelet[2717]: I0813 01:04:08.579620 2717 kubelet.go:2306] "Pod admission denied" podUID="d23cd1ee-a4e7-4a3e-89d2-9f077b6ca581" pod="tigera-operator/tigera-operator-5bf8dfcb4-2nlb2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:08.679643 kubelet[2717]: I0813 01:04:08.679592 2717 kubelet.go:2306] "Pod admission denied" podUID="fc0a2e3c-9734-4c28-8323-ebd5d5c50650" pod="tigera-operator/tigera-operator-5bf8dfcb4-6g4q7" reason="Evicted" message="The node had condition: [DiskPressure]. 
"
Aug 13 01:04:08.764166 kubelet[2717]: E0813 01:04:08.763730 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Aug 13 01:04:08.765097 containerd[1575]: time="2025-08-13T01:04:08.765046530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tp469,Uid:c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af,Namespace:kube-system,Attempt:0,}"
Aug 13 01:04:08.825753 containerd[1575]: time="2025-08-13T01:04:08.825705762Z" level=error msg="Failed to destroy network for sandbox \"43a569a7459968be8a8819c1baf64a1e3d32b0f6e3213f21d6d8560cdf9b046b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:04:08.828471 containerd[1575]: time="2025-08-13T01:04:08.828430364Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tp469,Uid:c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"43a569a7459968be8a8819c1baf64a1e3d32b0f6e3213f21d6d8560cdf9b046b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:04:08.829455 kubelet[2717]: E0813 01:04:08.829171 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43a569a7459968be8a8819c1baf64a1e3d32b0f6e3213f21d6d8560cdf9b046b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:04:08.829617 kubelet[2717]: E0813 01:04:08.829465 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43a569a7459968be8a8819c1baf64a1e3d32b0f6e3213f21d6d8560cdf9b046b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-tp469"
Aug 13 01:04:08.829617 kubelet[2717]: E0813 01:04:08.829485 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43a569a7459968be8a8819c1baf64a1e3d32b0f6e3213f21d6d8560cdf9b046b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-tp469"
Aug 13 01:04:08.829617 kubelet[2717]: E0813 01:04:08.829544 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-tp469_kube-system(c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-tp469_kube-system(c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"43a569a7459968be8a8819c1baf64a1e3d32b0f6e3213f21d6d8560cdf9b046b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-tp469" podUID="c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af"
Aug 13 01:04:08.831133 systemd[1]: run-netns-cni\x2dc100523e\x2d545f\x2da984\x2d6e8a\x2d4a0b725cb16a.mount: Deactivated successfully. 
Aug 13 01:04:08.881869 kubelet[2717]: I0813 01:04:08.881805 2717 kubelet.go:2306] "Pod admission denied" podUID="55e868ff-f564-4d21-b058-377176aa14e8" pod="tigera-operator/tigera-operator-5bf8dfcb4-f6c6z" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:08.979728 kubelet[2717]: I0813 01:04:08.979689 2717 kubelet.go:2306] "Pod admission denied" podUID="43dfb1de-d76d-4a5a-a512-ae5af9e3b281" pod="tigera-operator/tigera-operator-5bf8dfcb4-q4cjr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:09.029295 kubelet[2717]: I0813 01:04:09.029235 2717 kubelet.go:2306] "Pod admission denied" podUID="c9a9f8aa-dedf-46ce-8133-85a6519f1b9e" pod="tigera-operator/tigera-operator-5bf8dfcb4-8d58f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:09.133561 kubelet[2717]: I0813 01:04:09.133444 2717 kubelet.go:2306] "Pod admission denied" podUID="86c72e55-cba1-4946-86b6-5b2a8ca2b37a" pod="tigera-operator/tigera-operator-5bf8dfcb4-mwtw5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:09.230548 kubelet[2717]: I0813 01:04:09.230494 2717 kubelet.go:2306] "Pod admission denied" podUID="f195c9f7-76ad-437f-b779-67052ee90a2a" pod="tigera-operator/tigera-operator-5bf8dfcb4-n8ml5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:09.328940 kubelet[2717]: I0813 01:04:09.328888 2717 kubelet.go:2306] "Pod admission denied" podUID="ea8b56d0-dc73-48c3-ab5c-ff2213748401" pod="tigera-operator/tigera-operator-5bf8dfcb4-tq765" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:09.531675 kubelet[2717]: I0813 01:04:09.531619 2717 kubelet.go:2306] "Pod admission denied" podUID="bc7b914c-0066-414d-8839-25bf8b25645e" pod="tigera-operator/tigera-operator-5bf8dfcb4-8n6lm" reason="Evicted" message="The node had condition: [DiskPressure]. 
"
Aug 13 01:04:09.630121 kubelet[2717]: I0813 01:04:09.630068 2717 kubelet.go:2306] "Pod admission denied" podUID="f80958fb-085a-4516-a031-25c40b2a6949" pod="tigera-operator/tigera-operator-5bf8dfcb4-85zn2" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:09.729550 kubelet[2717]: I0813 01:04:09.729264 2717 kubelet.go:2306] "Pod admission denied" podUID="b2770fbf-0f59-4d8a-b616-ef8193d04eae" pod="tigera-operator/tigera-operator-5bf8dfcb4-q7ldg" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:09.766365 containerd[1575]: time="2025-08-13T01:04:09.766324868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-84hvc,Uid:f2b74998-29fc-4213-8313-543c9154bc64,Namespace:calico-system,Attempt:0,}"
Aug 13 01:04:09.828284 containerd[1575]: time="2025-08-13T01:04:09.825862722Z" level=error msg="Failed to destroy network for sandbox \"0d63b049a8dbd10f195f716b7e398621b3cb9532b56f5cb5e166c6545421af2e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:04:09.829554 containerd[1575]: time="2025-08-13T01:04:09.829482033Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-84hvc,Uid:f2b74998-29fc-4213-8313-543c9154bc64,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d63b049a8dbd10f195f716b7e398621b3cb9532b56f5cb5e166c6545421af2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:04:09.829826 kubelet[2717]: E0813 01:04:09.829794 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d63b049a8dbd10f195f716b7e398621b3cb9532b56f5cb5e166c6545421af2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:04:09.829936 kubelet[2717]: E0813 01:04:09.829918 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d63b049a8dbd10f195f716b7e398621b3cb9532b56f5cb5e166c6545421af2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-84hvc"
Aug 13 01:04:09.829993 kubelet[2717]: E0813 01:04:09.829980 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0d63b049a8dbd10f195f716b7e398621b3cb9532b56f5cb5e166c6545421af2e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-84hvc"
Aug 13 01:04:09.830081 kubelet[2717]: E0813 01:04:09.830060 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-84hvc_calico-system(f2b74998-29fc-4213-8313-543c9154bc64)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-84hvc_calico-system(f2b74998-29fc-4213-8313-543c9154bc64)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0d63b049a8dbd10f195f716b7e398621b3cb9532b56f5cb5e166c6545421af2e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-84hvc" podUID="f2b74998-29fc-4213-8313-543c9154bc64"
Aug 13 01:04:09.830553 systemd[1]: run-netns-cni\x2dba17c98a\x2d07e5\x2d89d4\x2d914c\x2d8686af126888.mount: Deactivated successfully.
Aug 13 01:04:09.837686 kubelet[2717]: I0813 01:04:09.837655 2717 kubelet.go:2306] "Pod admission denied" podUID="dac02a2e-0a2f-4abd-ae37-12fa56c31074" pod="tigera-operator/tigera-operator-5bf8dfcb4-zqvxn" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:09.930108 kubelet[2717]: I0813 01:04:09.930065 2717 kubelet.go:2306] "Pod admission denied" podUID="9af50b53-50a6-4ef6-85b2-91ddb8d39d0f" pod="tigera-operator/tigera-operator-5bf8dfcb4-78z84" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:10.132224 kubelet[2717]: I0813 01:04:10.131158 2717 kubelet.go:2306] "Pod admission denied" podUID="ad639c31-c7de-4f07-b217-d5552af0f9e7" pod="tigera-operator/tigera-operator-5bf8dfcb4-wk97j" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:10.231124 kubelet[2717]: I0813 01:04:10.231079 2717 kubelet.go:2306] "Pod admission denied" podUID="ddb0c6d9-8380-4f9b-bac2-3caead0df723" pod="tigera-operator/tigera-operator-5bf8dfcb4-4nnh2" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:10.331184 kubelet[2717]: I0813 01:04:10.331147 2717 kubelet.go:2306] "Pod admission denied" podUID="64d0ecfa-b595-4b62-b1db-1e540f3b91b8" pod="tigera-operator/tigera-operator-5bf8dfcb4-bkd9n" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:10.410083 kubelet[2717]: I0813 01:04:10.409612 2717 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:04:10.410083 kubelet[2717]: I0813 01:04:10.409641 2717 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:04:10.411947 kubelet[2717]: I0813 01:04:10.411898 2717 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:04:10.422595 kubelet[2717]: I0813 01:04:10.422396 2717 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:04:10.422595 kubelet[2717]: I0813 01:04:10.422453 2717 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7c65d6cfc9-dk5p7","kube-system/coredns-7c65d6cfc9-tp469","calico-system/calico-kube-controllers-564d8b8748-ps97n","calico-system/calico-node-7pdcs","calico-system/csi-node-driver-84hvc","calico-system/calico-typha-5fdd567c68-zgxjx","kube-system/kube-controller-manager-172-233-209-21","kube-system/kube-proxy-ff6qp","kube-system/kube-apiserver-172-233-209-21","kube-system/kube-scheduler-172-233-209-21"] Aug 13 01:04:10.422595 kubelet[2717]: E0813 01:04:10.422483 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dk5p7" Aug 13 01:04:10.422595 kubelet[2717]: E0813 01:04:10.422492 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-tp469" Aug 13 01:04:10.422595 kubelet[2717]: E0813 01:04:10.422499 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" Aug 13 01:04:10.422595 kubelet[2717]: E0813 01:04:10.422507 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-7pdcs" Aug 13 01:04:10.422595 kubelet[2717]: E0813 01:04:10.422513 2717 eviction_manager.go:598] "Eviction 
manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-84hvc" Aug 13 01:04:10.422595 kubelet[2717]: E0813 01:04:10.422524 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-5fdd567c68-zgxjx" Aug 13 01:04:10.422595 kubelet[2717]: E0813 01:04:10.422533 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-209-21" Aug 13 01:04:10.422595 kubelet[2717]: E0813 01:04:10.422541 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-ff6qp" Aug 13 01:04:10.422595 kubelet[2717]: E0813 01:04:10.422549 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-209-21" Aug 13 01:04:10.422595 kubelet[2717]: E0813 01:04:10.422558 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-209-21" Aug 13 01:04:10.422595 kubelet[2717]: I0813 01:04:10.422566 2717 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:04:10.436552 kubelet[2717]: I0813 01:04:10.436511 2717 kubelet.go:2306] "Pod admission denied" podUID="e0a4ec35-091a-412f-b597-d4bf1544af14" pod="tigera-operator/tigera-operator-5bf8dfcb4-hqh6q" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:10.529842 kubelet[2717]: I0813 01:04:10.529796 2717 kubelet.go:2306] "Pod admission denied" podUID="c6d92c79-c11a-445f-8c9a-3c51c5118c4f" pod="tigera-operator/tigera-operator-5bf8dfcb4-hqpnx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:10.626968 kubelet[2717]: I0813 01:04:10.626920 2717 kubelet.go:2306] "Pod admission denied" podUID="e96b98b8-b6c0-439e-9186-d46f045bf755" pod="tigera-operator/tigera-operator-5bf8dfcb4-gdrz4" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:10.727500 kubelet[2717]: I0813 01:04:10.727453 2717 kubelet.go:2306] "Pod admission denied" podUID="5f77e289-dbdf-4d02-ae14-5f0cddbb626e" pod="tigera-operator/tigera-operator-5bf8dfcb4-86492" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:10.826935 kubelet[2717]: I0813 01:04:10.826894 2717 kubelet.go:2306] "Pod admission denied" podUID="f6bc3a2d-e793-48a8-b460-9e02de35cc4b" pod="tigera-operator/tigera-operator-5bf8dfcb4-kgw4s" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:10.881989 kubelet[2717]: I0813 01:04:10.881947 2717 kubelet.go:2306] "Pod admission denied" podUID="46b2a714-c161-413c-96ad-561864854098" pod="tigera-operator/tigera-operator-5bf8dfcb4-rm99r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:10.978844 kubelet[2717]: I0813 01:04:10.978736 2717 kubelet.go:2306] "Pod admission denied" podUID="f2e6fa15-2450-4add-9e87-45ec7ef8c54e" pod="tigera-operator/tigera-operator-5bf8dfcb4-p494q" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:11.078597 kubelet[2717]: I0813 01:04:11.078547 2717 kubelet.go:2306] "Pod admission denied" podUID="d2999b16-c897-48fb-b522-a13dd372a0d3" pod="tigera-operator/tigera-operator-5bf8dfcb4-xngft" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:11.178909 kubelet[2717]: I0813 01:04:11.178860 2717 kubelet.go:2306] "Pod admission denied" podUID="f35fd0a6-c961-4eee-b5dd-b4220d224001" pod="tigera-operator/tigera-operator-5bf8dfcb4-fz95n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:11.383156 kubelet[2717]: I0813 01:04:11.382516 2717 kubelet.go:2306] "Pod admission denied" podUID="323c05fb-3d8a-4144-ac38-b943c9bab593" pod="tigera-operator/tigera-operator-5bf8dfcb4-b8nrt" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:11.476787 kubelet[2717]: I0813 01:04:11.476748 2717 kubelet.go:2306] "Pod admission denied" podUID="793de37a-4e7c-4682-8c22-e076fa5cb71d" pod="tigera-operator/tigera-operator-5bf8dfcb4-q5f4m" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:11.580347 kubelet[2717]: I0813 01:04:11.580300 2717 kubelet.go:2306] "Pod admission denied" podUID="d17611fa-cc15-4496-817f-ffa2326d9011" pod="tigera-operator/tigera-operator-5bf8dfcb4-bhssk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:11.680103 kubelet[2717]: I0813 01:04:11.679988 2717 kubelet.go:2306] "Pod admission denied" podUID="96fdbe79-f7a9-4292-9e67-801cf105f7b0" pod="tigera-operator/tigera-operator-5bf8dfcb4-vsld6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:11.778862 kubelet[2717]: I0813 01:04:11.778812 2717 kubelet.go:2306] "Pod admission denied" podUID="0df8cce9-9727-42a7-af8d-fbef8a79628c" pod="tigera-operator/tigera-operator-5bf8dfcb4-dflcl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:11.878424 kubelet[2717]: I0813 01:04:11.878380 2717 kubelet.go:2306] "Pod admission denied" podUID="ac0fbaeb-6e76-4eb0-a804-46d7c54d93c5" pod="tigera-operator/tigera-operator-5bf8dfcb4-gv2mh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:11.981155 kubelet[2717]: I0813 01:04:11.981118 2717 kubelet.go:2306] "Pod admission denied" podUID="6a90aac2-aa91-4fe2-835b-cdcf8e102963" pod="tigera-operator/tigera-operator-5bf8dfcb4-kjrwk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:12.078797 kubelet[2717]: I0813 01:04:12.078748 2717 kubelet.go:2306] "Pod admission denied" podUID="b1ee91ab-f73c-4f68-ab23-169c92f2a934" pod="tigera-operator/tigera-operator-5bf8dfcb4-8mqqv" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:12.178816 kubelet[2717]: I0813 01:04:12.178766 2717 kubelet.go:2306] "Pod admission denied" podUID="fe455784-e499-4e73-af26-9ea9d83e4ef7" pod="tigera-operator/tigera-operator-5bf8dfcb4-298lb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:12.279501 kubelet[2717]: I0813 01:04:12.279390 2717 kubelet.go:2306] "Pod admission denied" podUID="863cef88-0028-46b8-8394-ea03164737fc" pod="tigera-operator/tigera-operator-5bf8dfcb4-hbdrk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:12.333390 kubelet[2717]: I0813 01:04:12.333333 2717 kubelet.go:2306] "Pod admission denied" podUID="447e6d84-07dc-4b4a-b0ff-8691ba7c3978" pod="tigera-operator/tigera-operator-5bf8dfcb4-wlrd7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:12.433704 kubelet[2717]: I0813 01:04:12.433650 2717 kubelet.go:2306] "Pod admission denied" podUID="d8226390-b17c-4be3-b272-c945a7ae25b3" pod="tigera-operator/tigera-operator-5bf8dfcb4-sprlg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:12.531480 kubelet[2717]: I0813 01:04:12.531163 2717 kubelet.go:2306] "Pod admission denied" podUID="d8ecae4c-f608-410e-8b91-853dff15b9ac" pod="tigera-operator/tigera-operator-5bf8dfcb4-b4z4t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:12.628216 kubelet[2717]: I0813 01:04:12.628152 2717 kubelet.go:2306] "Pod admission denied" podUID="b63b984e-efd8-435b-b1e7-9f0a5769abb2" pod="tigera-operator/tigera-operator-5bf8dfcb4-hsv4x" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:12.833059 kubelet[2717]: I0813 01:04:12.832587 2717 kubelet.go:2306] "Pod admission denied" podUID="a07950fb-c9c7-446c-b234-bd0c464928f0" pod="tigera-operator/tigera-operator-5bf8dfcb4-5vdpf" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:12.937869 kubelet[2717]: I0813 01:04:12.936456 2717 kubelet.go:2306] "Pod admission denied" podUID="0b027b7e-7cbb-4124-bdfb-81d7aba84ff0" pod="tigera-operator/tigera-operator-5bf8dfcb4-hhvsf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:12.979213 kubelet[2717]: I0813 01:04:12.979145 2717 kubelet.go:2306] "Pod admission denied" podUID="927ac332-bd4d-4247-a5be-6ce67e085c56" pod="tigera-operator/tigera-operator-5bf8dfcb4-8nhbz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:13.081411 kubelet[2717]: I0813 01:04:13.081342 2717 kubelet.go:2306] "Pod admission denied" podUID="33c113dc-55b1-4b76-b797-75f88fd8fb1e" pod="tigera-operator/tigera-operator-5bf8dfcb4-xrj5j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:13.177619 kubelet[2717]: I0813 01:04:13.177500 2717 kubelet.go:2306] "Pod admission denied" podUID="ab5a0cf7-897c-4031-bf4b-4b0feedce38e" pod="tigera-operator/tigera-operator-5bf8dfcb4-s5ltl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:13.277310 kubelet[2717]: I0813 01:04:13.277263 2717 kubelet.go:2306] "Pod admission denied" podUID="3d20fe76-b96a-4f0e-9521-164e908eaf3b" pod="tigera-operator/tigera-operator-5bf8dfcb4-kvt5r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:13.482400 kubelet[2717]: I0813 01:04:13.481569 2717 kubelet.go:2306] "Pod admission denied" podUID="978c62cb-b3b6-462e-8236-d004acb3d91d" pod="tigera-operator/tigera-operator-5bf8dfcb4-p7nws" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:13.576617 kubelet[2717]: I0813 01:04:13.576567 2717 kubelet.go:2306] "Pod admission denied" podUID="8770e65f-16a4-4db0-af38-251c3ded84c3" pod="tigera-operator/tigera-operator-5bf8dfcb4-phzgb" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:13.679643 kubelet[2717]: I0813 01:04:13.679600 2717 kubelet.go:2306] "Pod admission denied" podUID="abb543d7-9fd5-45cf-bf7a-19d5fddcd7b3" pod="tigera-operator/tigera-operator-5bf8dfcb4-57x89" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:13.781440 kubelet[2717]: I0813 01:04:13.781318 2717 kubelet.go:2306] "Pod admission denied" podUID="d0e10441-73fa-4353-a506-4e86f4632ae6" pod="tigera-operator/tigera-operator-5bf8dfcb4-wsnmw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:13.880999 kubelet[2717]: I0813 01:04:13.880941 2717 kubelet.go:2306] "Pod admission denied" podUID="26374f37-625b-4abb-8925-174acd99585f" pod="tigera-operator/tigera-operator-5bf8dfcb4-pms5g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:13.981395 kubelet[2717]: I0813 01:04:13.981324 2717 kubelet.go:2306] "Pod admission denied" podUID="6f207b0b-3176-4ae6-8003-c66f13e1dcb6" pod="tigera-operator/tigera-operator-5bf8dfcb4-t4t9r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:14.081795 kubelet[2717]: I0813 01:04:14.081661 2717 kubelet.go:2306] "Pod admission denied" podUID="b460d648-d146-46ab-9cf2-1ede697693d5" pod="tigera-operator/tigera-operator-5bf8dfcb4-fbttm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:14.179321 kubelet[2717]: I0813 01:04:14.179270 2717 kubelet.go:2306] "Pod admission denied" podUID="c425ebe3-aff0-4ee7-b834-c16275794647" pod="tigera-operator/tigera-operator-5bf8dfcb4-z6dqn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:14.229547 kubelet[2717]: I0813 01:04:14.229511 2717 kubelet.go:2306] "Pod admission denied" podUID="10cb987e-a0f3-49f5-9ffc-c91c29673e81" pod="tigera-operator/tigera-operator-5bf8dfcb4-kpv48" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:14.332066 kubelet[2717]: I0813 01:04:14.331951 2717 kubelet.go:2306] "Pod admission denied" podUID="b520e6ff-780e-4ac6-a77e-4fba216d8abf" pod="tigera-operator/tigera-operator-5bf8dfcb4-qxwzw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:14.430008 kubelet[2717]: I0813 01:04:14.429957 2717 kubelet.go:2306] "Pod admission denied" podUID="5fbaf9e4-0f6b-4060-b16b-961a3dade2d4" pod="tigera-operator/tigera-operator-5bf8dfcb4-9kq8n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:14.476404 kubelet[2717]: I0813 01:04:14.476360 2717 kubelet.go:2306] "Pod admission denied" podUID="07e3f8fb-79ff-4c69-91b1-0b256d366b12" pod="tigera-operator/tigera-operator-5bf8dfcb4-9hkf9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:14.578418 kubelet[2717]: I0813 01:04:14.578370 2717 kubelet.go:2306] "Pod admission denied" podUID="e3d46f55-d944-4849-b1f6-f71db220dce1" pod="tigera-operator/tigera-operator-5bf8dfcb4-x6blb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:14.680960 kubelet[2717]: I0813 01:04:14.680464 2717 kubelet.go:2306] "Pod admission denied" podUID="9f2c87ec-6559-438c-bfcf-9b09df91c3b4" pod="tigera-operator/tigera-operator-5bf8dfcb4-4tjv8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:14.785146 kubelet[2717]: I0813 01:04:14.784909 2717 kubelet.go:2306] "Pod admission denied" podUID="bb4a56da-68ae-4e17-9de2-bebffbf4482d" pod="tigera-operator/tigera-operator-5bf8dfcb4-z5522" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:14.880988 kubelet[2717]: I0813 01:04:14.880939 2717 kubelet.go:2306] "Pod admission denied" podUID="7f085238-75bc-464d-a9b3-e552bda7c2e1" pod="tigera-operator/tigera-operator-5bf8dfcb4-ld6k9" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:14.936984 kubelet[2717]: I0813 01:04:14.936874 2717 kubelet.go:2306] "Pod admission denied" podUID="f567520e-961a-404c-a6db-6b8e883e6b35" pod="tigera-operator/tigera-operator-5bf8dfcb4-sstbf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:15.032375 kubelet[2717]: I0813 01:04:15.032320 2717 kubelet.go:2306] "Pod admission denied" podUID="0e4d17d6-03f4-40d7-8761-49ed5829c3e6" pod="tigera-operator/tigera-operator-5bf8dfcb4-w7x8r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:15.228080 kubelet[2717]: I0813 01:04:15.227867 2717 kubelet.go:2306] "Pod admission denied" podUID="04b4b864-8a45-4816-8058-1b2ed434428d" pod="tigera-operator/tigera-operator-5bf8dfcb4-rbxpb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:15.334473 kubelet[2717]: I0813 01:04:15.333452 2717 kubelet.go:2306] "Pod admission denied" podUID="9ef2220b-d8a8-4256-98de-0ce1dc160276" pod="tigera-operator/tigera-operator-5bf8dfcb4-rlwc8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:15.428338 kubelet[2717]: I0813 01:04:15.428294 2717 kubelet.go:2306] "Pod admission denied" podUID="671afb67-aef3-4c72-860c-73e9f8cc6bc9" pod="tigera-operator/tigera-operator-5bf8dfcb4-q7wwh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:15.528401 kubelet[2717]: I0813 01:04:15.528290 2717 kubelet.go:2306] "Pod admission denied" podUID="7a9f3ad4-0eaa-47d6-97da-776fe5a7491b" pod="tigera-operator/tigera-operator-5bf8dfcb4-kvbsp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:15.629956 kubelet[2717]: I0813 01:04:15.629911 2717 kubelet.go:2306] "Pod admission denied" podUID="fd9e528f-3093-49e7-bffa-c7ae154556a5" pod="tigera-operator/tigera-operator-5bf8dfcb4-m4c8m" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:15.729203 kubelet[2717]: I0813 01:04:15.729146 2717 kubelet.go:2306] "Pod admission denied" podUID="3f219f82-223d-40b7-ad3e-92e3ad165d15" pod="tigera-operator/tigera-operator-5bf8dfcb4-rp4xg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:15.765106 kubelet[2717]: E0813 01:04:15.764842 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\"\"" pod="calico-system/calico-node-7pdcs" podUID="e6731efa-3c96-4227-b83c-f4c3adff36c6" Aug 13 01:04:15.830942 kubelet[2717]: I0813 01:04:15.830853 2717 kubelet.go:2306] "Pod admission denied" podUID="64c5bf05-2f50-4e59-bc22-a3f8773843da" pod="tigera-operator/tigera-operator-5bf8dfcb4-ptgng" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:15.929105 kubelet[2717]: I0813 01:04:15.929057 2717 kubelet.go:2306] "Pod admission denied" podUID="8d6647fb-43ac-475f-9950-ef8e3249ca93" pod="tigera-operator/tigera-operator-5bf8dfcb4-9wk9b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:16.027687 kubelet[2717]: I0813 01:04:16.027640 2717 kubelet.go:2306] "Pod admission denied" podUID="fda01117-f848-4e7e-a64d-ad83e42e3ea6" pod="tigera-operator/tigera-operator-5bf8dfcb4-6ps8j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:16.140294 kubelet[2717]: I0813 01:04:16.139812 2717 kubelet.go:2306] "Pod admission denied" podUID="2fd45447-31eb-4c4a-9801-804c03559ffd" pod="tigera-operator/tigera-operator-5bf8dfcb4-2pkbc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:16.229249 kubelet[2717]: I0813 01:04:16.229172 2717 kubelet.go:2306] "Pod admission denied" podUID="8973b00c-0486-4982-9333-653e9bb153b1" pod="tigera-operator/tigera-operator-5bf8dfcb4-b8q8l" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:16.432060 kubelet[2717]: I0813 01:04:16.432010 2717 kubelet.go:2306] "Pod admission denied" podUID="758d5058-cad4-45f8-b445-27d3fb9bfbf9" pod="tigera-operator/tigera-operator-5bf8dfcb4-dnrrq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:16.540214 kubelet[2717]: I0813 01:04:16.539547 2717 kubelet.go:2306] "Pod admission denied" podUID="e1a62b39-04c6-4f4a-b20a-ec57acd0bef5" pod="tigera-operator/tigera-operator-5bf8dfcb4-27428" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:16.631353 kubelet[2717]: I0813 01:04:16.631312 2717 kubelet.go:2306] "Pod admission denied" podUID="83a58562-db08-4cbb-86ff-0216b6bfa00f" pod="tigera-operator/tigera-operator-5bf8dfcb4-8lgbc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:16.831404 kubelet[2717]: I0813 01:04:16.830854 2717 kubelet.go:2306] "Pod admission denied" podUID="5a3e111c-e869-4299-a43e-c84c8d1afb4d" pod="tigera-operator/tigera-operator-5bf8dfcb4-zkgk5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:16.943894 kubelet[2717]: I0813 01:04:16.943649 2717 kubelet.go:2306] "Pod admission denied" podUID="be8c57d7-d07e-4b85-ae75-059adc6da196" pod="tigera-operator/tigera-operator-5bf8dfcb4-7kt8w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:17.030576 kubelet[2717]: I0813 01:04:17.030521 2717 kubelet.go:2306] "Pod admission denied" podUID="7f6cdcef-adcd-4224-8251-de0cb0f89e67" pod="tigera-operator/tigera-operator-5bf8dfcb4-fr9xj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:17.133774 kubelet[2717]: I0813 01:04:17.133231 2717 kubelet.go:2306] "Pod admission denied" podUID="02402586-cbed-4cae-8f96-50a50d87f1ed" pod="tigera-operator/tigera-operator-5bf8dfcb4-5tx54" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:17.244994 kubelet[2717]: I0813 01:04:17.244921 2717 kubelet.go:2306] "Pod admission denied" podUID="c2e3465d-cb77-4284-b527-fc2c2db963e5" pod="tigera-operator/tigera-operator-5bf8dfcb4-88ln9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:17.432545 kubelet[2717]: I0813 01:04:17.432508 2717 kubelet.go:2306] "Pod admission denied" podUID="450a38b1-30e5-4ae6-a535-7b3ef95dd55f" pod="tigera-operator/tigera-operator-5bf8dfcb4-d4bth" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:17.532779 kubelet[2717]: I0813 01:04:17.532726 2717 kubelet.go:2306] "Pod admission denied" podUID="47276373-75fe-41e7-b85c-1e0fd505f1b0" pod="tigera-operator/tigera-operator-5bf8dfcb4-4bx7r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:17.629408 kubelet[2717]: I0813 01:04:17.629353 2717 kubelet.go:2306] "Pod admission denied" podUID="743a2a79-fd99-48c4-a20a-99634cb22fed" pod="tigera-operator/tigera-operator-5bf8dfcb4-czxgz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:17.730723 kubelet[2717]: I0813 01:04:17.730594 2717 kubelet.go:2306] "Pod admission denied" podUID="aad65b1d-f60b-441d-a107-7f552cac4e7c" pod="tigera-operator/tigera-operator-5bf8dfcb4-gqjgd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:17.831336 kubelet[2717]: I0813 01:04:17.831292 2717 kubelet.go:2306] "Pod admission denied" podUID="fd39e30c-cdff-45f2-868b-637776b6b1e8" pod="tigera-operator/tigera-operator-5bf8dfcb4-mjg4b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:17.930599 kubelet[2717]: I0813 01:04:17.930539 2717 kubelet.go:2306] "Pod admission denied" podUID="ac782ddf-b836-499a-9d13-69bb55f8c551" pod="tigera-operator/tigera-operator-5bf8dfcb4-l5gqh" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:18.029403 kubelet[2717]: I0813 01:04:18.029047 2717 kubelet.go:2306] "Pod admission denied" podUID="9da0431a-b81e-4dd8-a321-3561dddd9017" pod="tigera-operator/tigera-operator-5bf8dfcb4-wwm5q" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:18.128412 kubelet[2717]: I0813 01:04:18.128369 2717 kubelet.go:2306] "Pod admission denied" podUID="c375a652-a6f0-4f7d-ba6a-cf8cc8d53c56" pod="tigera-operator/tigera-operator-5bf8dfcb4-wr4np" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:18.234355 kubelet[2717]: I0813 01:04:18.234296 2717 kubelet.go:2306] "Pod admission denied" podUID="61c5993b-1eaf-4cc6-b84e-eee3d7c6c87d" pod="tigera-operator/tigera-operator-5bf8dfcb4-tb5vv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:18.332732 kubelet[2717]: I0813 01:04:18.331710 2717 kubelet.go:2306] "Pod admission denied" podUID="c652c24d-8801-4f08-9bc5-3f241bb6444c" pod="tigera-operator/tigera-operator-5bf8dfcb4-l4kqm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:18.428287 kubelet[2717]: I0813 01:04:18.428231 2717 kubelet.go:2306] "Pod admission denied" podUID="7c658fa1-baa7-4e2f-8d10-5d3490d23c29" pod="tigera-operator/tigera-operator-5bf8dfcb4-7xsnb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:18.631321 kubelet[2717]: I0813 01:04:18.630975 2717 kubelet.go:2306] "Pod admission denied" podUID="b37f741a-b76c-416a-b25e-1d9e33c2c658" pod="tigera-operator/tigera-operator-5bf8dfcb4-dnq84" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:18.743684 kubelet[2717]: I0813 01:04:18.743125 2717 kubelet.go:2306] "Pod admission denied" podUID="6a9c818a-b251-4859-a24b-5bbbaa487f92" pod="tigera-operator/tigera-operator-5bf8dfcb4-7c6fp" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:18.764339 kubelet[2717]: E0813 01:04:18.764036 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:04:18.764715 containerd[1575]: time="2025-08-13T01:04:18.764627680Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dk5p7,Uid:6787cc4f-56e4-4094-978c-958d0d7a35ba,Namespace:kube-system,Attempt:0,}" Aug 13 01:04:18.790073 kubelet[2717]: I0813 01:04:18.788505 2717 kubelet.go:2306] "Pod admission denied" podUID="980d1465-2e52-43b3-a4e3-69c29c6489a8" pod="tigera-operator/tigera-operator-5bf8dfcb4-hsbm9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:18.837248 containerd[1575]: time="2025-08-13T01:04:18.837159953Z" level=error msg="Failed to destroy network for sandbox \"f2af28a76db67939f7a6af97f3df653cefff219373a71c5d739f67d7559aa6a7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:04:18.839131 containerd[1575]: time="2025-08-13T01:04:18.839088024Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dk5p7,Uid:6787cc4f-56e4-4094-978c-958d0d7a35ba,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2af28a76db67939f7a6af97f3df653cefff219373a71c5d739f67d7559aa6a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:04:18.839574 kubelet[2717]: E0813 01:04:18.839326 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2af28a76db67939f7a6af97f3df653cefff219373a71c5d739f67d7559aa6a7\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:04:18.839574 kubelet[2717]: E0813 01:04:18.839371 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2af28a76db67939f7a6af97f3df653cefff219373a71c5d739f67d7559aa6a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dk5p7"
Aug 13 01:04:18.839574 kubelet[2717]: E0813 01:04:18.839389 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2af28a76db67939f7a6af97f3df653cefff219373a71c5d739f67d7559aa6a7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dk5p7"
Aug 13 01:04:18.839574 kubelet[2717]: E0813 01:04:18.839435 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-dk5p7_kube-system(6787cc4f-56e4-4094-978c-958d0d7a35ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-dk5p7_kube-system(6787cc4f-56e4-4094-978c-958d0d7a35ba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f2af28a76db67939f7a6af97f3df653cefff219373a71c5d739f67d7559aa6a7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-dk5p7" podUID="6787cc4f-56e4-4094-978c-958d0d7a35ba"
Aug 13 01:04:18.840040 systemd[1]: run-netns-cni\x2d2a6387fa\x2d9f2b\x2d6a65\x2d3d58\x2dff62bdec677b.mount: Deactivated successfully.
Aug 13 01:04:18.882378 kubelet[2717]: I0813 01:04:18.882267 2717 kubelet.go:2306] "Pod admission denied" podUID="320378f5-1aae-4256-8573-d2eed965c4cc" pod="tigera-operator/tigera-operator-5bf8dfcb4-sq7x5" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:18.983595 kubelet[2717]: I0813 01:04:18.983499 2717 kubelet.go:2306] "Pod admission denied" podUID="214f7e79-9a60-4d0f-aa35-d24586a1055d" pod="tigera-operator/tigera-operator-5bf8dfcb4-4dt65" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:19.083209 kubelet[2717]: I0813 01:04:19.083160 2717 kubelet.go:2306] "Pod admission denied" podUID="01e2ce68-577b-41e4-9daf-30bf3d3aee9f" pod="tigera-operator/tigera-operator-5bf8dfcb4-b9vt7" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:19.188118 kubelet[2717]: I0813 01:04:19.188087 2717 kubelet.go:2306] "Pod admission denied" podUID="e729362b-5b67-4311-9d61-88f747572c5b" pod="tigera-operator/tigera-operator-5bf8dfcb4-qhs79" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:19.279657 kubelet[2717]: I0813 01:04:19.279609 2717 kubelet.go:2306] "Pod admission denied" podUID="db4ea5cc-9e5a-4235-bcb6-3874da4c68e5" pod="tigera-operator/tigera-operator-5bf8dfcb4-bmdlt" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:19.382088 kubelet[2717]: I0813 01:04:19.381865 2717 kubelet.go:2306] "Pod admission denied" podUID="ce832e1c-61cc-4a3b-95f8-e93412a32cd2" pod="tigera-operator/tigera-operator-5bf8dfcb4-hq4v7" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:19.490791 kubelet[2717]: I0813 01:04:19.490360 2717 kubelet.go:2306] "Pod admission denied" podUID="4ece51d5-5e1f-40e6-a16d-6a1a08285874" pod="tigera-operator/tigera-operator-5bf8dfcb4-8j9fd" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:19.582655 kubelet[2717]: I0813 01:04:19.582602 2717 kubelet.go:2306] "Pod admission denied" podUID="4b1b9811-c01c-4c73-974d-50dcbc795ff6" pod="tigera-operator/tigera-operator-5bf8dfcb4-vwccs" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:19.686088 kubelet[2717]: I0813 01:04:19.684790 2717 kubelet.go:2306] "Pod admission denied" podUID="9c60581a-7695-486d-8ce2-ef6f62b50d2e" pod="tigera-operator/tigera-operator-5bf8dfcb4-lsg2d" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:19.779701 kubelet[2717]: I0813 01:04:19.779574 2717 kubelet.go:2306] "Pod admission denied" podUID="c1046b7d-1406-4975-bab2-c2275bbf750f" pod="tigera-operator/tigera-operator-5bf8dfcb4-54sjj" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:19.887362 kubelet[2717]: I0813 01:04:19.887314 2717 kubelet.go:2306] "Pod admission denied" podUID="e7be16ae-1b90-44df-9ac5-1ccf54eeb1f1" pod="tigera-operator/tigera-operator-5bf8dfcb4-tflq8" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:19.984338 kubelet[2717]: I0813 01:04:19.983328 2717 kubelet.go:2306] "Pod admission denied" podUID="7f7d329b-4351-4fa4-8d24-1c1b001b6658" pod="tigera-operator/tigera-operator-5bf8dfcb4-2fbs6" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:20.081365 kubelet[2717]: I0813 01:04:20.080411 2717 kubelet.go:2306] "Pod admission denied" podUID="8230f46f-3f5c-4471-9c66-30c580e4195d" pod="tigera-operator/tigera-operator-5bf8dfcb4-w2j6d" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:20.283015 kubelet[2717]: I0813 01:04:20.282955 2717 kubelet.go:2306] "Pod admission denied" podUID="a71f5811-9fdb-4610-b022-ab5e00465ffd" pod="tigera-operator/tigera-operator-5bf8dfcb4-czm9l" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:20.383576 kubelet[2717]: I0813 01:04:20.383443 2717 kubelet.go:2306] "Pod admission denied" podUID="aa28b7dd-0d1a-4110-997f-719640e5b49b" pod="tigera-operator/tigera-operator-5bf8dfcb4-zhw2c" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:20.437167 kubelet[2717]: I0813 01:04:20.437133 2717 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:04:20.437167 kubelet[2717]: I0813 01:04:20.437162 2717 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:04:20.439482 kubelet[2717]: I0813 01:04:20.439448 2717 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:04:20.449451 kubelet[2717]: I0813 01:04:20.449272 2717 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:04:20.449451 kubelet[2717]: I0813 01:04:20.449339 2717 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7c65d6cfc9-dk5p7","kube-system/coredns-7c65d6cfc9-tp469","calico-system/calico-kube-controllers-564d8b8748-ps97n","calico-system/csi-node-driver-84hvc","calico-system/calico-node-7pdcs","calico-system/calico-typha-5fdd567c68-zgxjx","kube-system/kube-controller-manager-172-233-209-21","kube-system/kube-proxy-ff6qp","kube-system/kube-apiserver-172-233-209-21","kube-system/kube-scheduler-172-233-209-21"]
Aug 13 01:04:20.449451 kubelet[2717]: E0813 01:04:20.449361 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dk5p7"
Aug 13 01:04:20.449451 kubelet[2717]: E0813 01:04:20.449370 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-tp469"
Aug 13 01:04:20.449451 kubelet[2717]: E0813 01:04:20.449376 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n"
Aug 13 01:04:20.449451 kubelet[2717]: E0813 01:04:20.449383 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-84hvc"
Aug 13 01:04:20.449451 kubelet[2717]: E0813 01:04:20.449389 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-7pdcs"
Aug 13 01:04:20.449451 kubelet[2717]: E0813 01:04:20.449399 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-5fdd567c68-zgxjx"
Aug 13 01:04:20.449451 kubelet[2717]: E0813 01:04:20.449408 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-209-21"
Aug 13 01:04:20.449451 kubelet[2717]: E0813 01:04:20.449415 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-ff6qp"
Aug 13 01:04:20.449451 kubelet[2717]: E0813 01:04:20.449423 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-209-21"
Aug 13 01:04:20.449451 kubelet[2717]: E0813 01:04:20.449430 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-209-21"
Aug 13 01:04:20.449451 kubelet[2717]: I0813 01:04:20.449439 2717 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:04:20.481017 kubelet[2717]: I0813 01:04:20.480982 2717 kubelet.go:2306] "Pod admission denied" podUID="91b434c4-dcb3-4924-937a-7743aa954e52" pod="tigera-operator/tigera-operator-5bf8dfcb4-pv6cl" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:20.583209 kubelet[2717]: I0813 01:04:20.582340 2717 kubelet.go:2306] "Pod admission denied" podUID="945a1458-825d-4d10-8746-11cea427938d" pod="tigera-operator/tigera-operator-5bf8dfcb4-t5j8c" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:20.681903 kubelet[2717]: I0813 01:04:20.681871 2717 kubelet.go:2306] "Pod admission denied" podUID="d1fa36f2-695f-4409-92bf-fee5a9a296ad" pod="tigera-operator/tigera-operator-5bf8dfcb4-zvjrv" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:20.765411 containerd[1575]: time="2025-08-13T01:04:20.765035485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-564d8b8748-ps97n,Uid:a7e8405c-2c82-420c-bac7-a7277571f968,Namespace:calico-system,Attempt:0,}"
Aug 13 01:04:20.793637 kubelet[2717]: I0813 01:04:20.793593 2717 kubelet.go:2306] "Pod admission denied" podUID="c2ebef40-ef30-44f0-a125-9d223a30ce3e" pod="tigera-operator/tigera-operator-5bf8dfcb4-zwzwf" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:20.845853 containerd[1575]: time="2025-08-13T01:04:20.844026462Z" level=error msg="Failed to destroy network for sandbox \"f9846f87eac026ecc98ce8d5a89a800299cf2dc671f76d41170b19ef80513651\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:04:20.846935 systemd[1]: run-netns-cni\x2d95a64ddb\x2dd8fa\x2dd304\x2de675\x2d32e5796e41a9.mount: Deactivated successfully.
Aug 13 01:04:20.847311 containerd[1575]: time="2025-08-13T01:04:20.847049234Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-564d8b8748-ps97n,Uid:a7e8405c-2c82-420c-bac7-a7277571f968,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9846f87eac026ecc98ce8d5a89a800299cf2dc671f76d41170b19ef80513651\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:04:20.847987 kubelet[2717]: E0813 01:04:20.847939 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9846f87eac026ecc98ce8d5a89a800299cf2dc671f76d41170b19ef80513651\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:04:20.848056 kubelet[2717]: E0813 01:04:20.848001 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9846f87eac026ecc98ce8d5a89a800299cf2dc671f76d41170b19ef80513651\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n"
Aug 13 01:04:20.848056 kubelet[2717]: E0813 01:04:20.848020 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9846f87eac026ecc98ce8d5a89a800299cf2dc671f76d41170b19ef80513651\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n"
Aug 13 01:04:20.848106 kubelet[2717]: E0813 01:04:20.848059 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-564d8b8748-ps97n_calico-system(a7e8405c-2c82-420c-bac7-a7277571f968)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-564d8b8748-ps97n_calico-system(a7e8405c-2c82-420c-bac7-a7277571f968)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f9846f87eac026ecc98ce8d5a89a800299cf2dc671f76d41170b19ef80513651\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" podUID="a7e8405c-2c82-420c-bac7-a7277571f968"
Aug 13 01:04:20.881064 kubelet[2717]: I0813 01:04:20.881026 2717 kubelet.go:2306] "Pod admission denied" podUID="6012e48d-14a3-45b0-98d1-3d8ded1eb63b" pod="tigera-operator/tigera-operator-5bf8dfcb4-nfpff" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:21.083705 kubelet[2717]: I0813 01:04:21.083575 2717 kubelet.go:2306] "Pod admission denied" podUID="b3e70d77-12f8-4f79-8b0d-7aa0167fbda9" pod="tigera-operator/tigera-operator-5bf8dfcb4-2g42b" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:21.188212 kubelet[2717]: I0813 01:04:21.187797 2717 kubelet.go:2306] "Pod admission denied" podUID="77accfec-6ea6-4bf5-a278-ea5674572880" pod="tigera-operator/tigera-operator-5bf8dfcb4-ndfxb" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:21.283857 kubelet[2717]: I0813 01:04:21.283799 2717 kubelet.go:2306] "Pod admission denied" podUID="9e17ab02-1257-4263-90dd-925e1d7fbc8e" pod="tigera-operator/tigera-operator-5bf8dfcb4-cbk6n" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:21.382351 kubelet[2717]: I0813 01:04:21.381751 2717 kubelet.go:2306] "Pod admission denied" podUID="36fa55eb-fa91-4689-b243-81d37b27ecab" pod="tigera-operator/tigera-operator-5bf8dfcb4-xxwhf" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:21.493232 kubelet[2717]: I0813 01:04:21.492966 2717 kubelet.go:2306] "Pod admission denied" podUID="f17977a2-9165-4a5a-b07e-0a4bfe75f5cd" pod="tigera-operator/tigera-operator-5bf8dfcb4-vmnmx" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:21.584168 kubelet[2717]: I0813 01:04:21.584113 2717 kubelet.go:2306] "Pod admission denied" podUID="8990c952-ad39-4d7d-b192-cb54d2818aa9" pod="tigera-operator/tigera-operator-5bf8dfcb4-4gtff" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:21.633712 kubelet[2717]: I0813 01:04:21.633600 2717 kubelet.go:2306] "Pod admission denied" podUID="4fd2c64e-d0b2-4f5c-a3c9-f81ee4b7b116" pod="tigera-operator/tigera-operator-5bf8dfcb4-hpcmz" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:21.733912 kubelet[2717]: I0813 01:04:21.733858 2717 kubelet.go:2306] "Pod admission denied" podUID="a7cb3e3e-3c43-44ae-b7ac-266ee25557f5" pod="tigera-operator/tigera-operator-5bf8dfcb4-d7lrg" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:21.764672 kubelet[2717]: E0813 01:04:21.764585 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Aug 13 01:04:21.766388 containerd[1575]: time="2025-08-13T01:04:21.766319859Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-84hvc,Uid:f2b74998-29fc-4213-8313-543c9154bc64,Namespace:calico-system,Attempt:0,}"
Aug 13 01:04:21.767217 containerd[1575]: time="2025-08-13T01:04:21.766477276Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tp469,Uid:c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af,Namespace:kube-system,Attempt:0,}"
Aug 13 01:04:21.841447 kubelet[2717]: I0813 01:04:21.841399 2717 kubelet.go:2306] "Pod admission denied" podUID="180451a7-31a7-4fc0-9838-287721d7e44b" pod="tigera-operator/tigera-operator-5bf8dfcb4-cv4rj" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:21.846256 containerd[1575]: time="2025-08-13T01:04:21.845435704Z" level=error msg="Failed to destroy network for sandbox \"abd40d2c92666c8baa2476df00024a407c26edc41f06ceda0e246c292885c351\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:04:21.852590 systemd[1]: run-netns-cni\x2d95ee540d\x2deea2\x2d8453\x2d0f56\x2dfe07828c2c46.mount: Deactivated successfully.
Aug 13 01:04:21.853925 containerd[1575]: time="2025-08-13T01:04:21.853897895Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tp469,Uid:c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"abd40d2c92666c8baa2476df00024a407c26edc41f06ceda0e246c292885c351\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:04:21.855222 kubelet[2717]: E0813 01:04:21.854700 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abd40d2c92666c8baa2476df00024a407c26edc41f06ceda0e246c292885c351\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:04:21.855222 kubelet[2717]: E0813 01:04:21.854738 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abd40d2c92666c8baa2476df00024a407c26edc41f06ceda0e246c292885c351\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-tp469"
Aug 13 01:04:21.855222 kubelet[2717]: E0813 01:04:21.854754 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"abd40d2c92666c8baa2476df00024a407c26edc41f06ceda0e246c292885c351\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-tp469"
Aug 13 01:04:21.855222 kubelet[2717]: E0813 01:04:21.854783 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-tp469_kube-system(c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-tp469_kube-system(c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"abd40d2c92666c8baa2476df00024a407c26edc41f06ceda0e246c292885c351\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-tp469" podUID="c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af"
Aug 13 01:04:21.861817 containerd[1575]: time="2025-08-13T01:04:21.860312763Z" level=error msg="Failed to destroy network for sandbox \"720a31a95fb2f9ee68f4c70a1d0da87f2ed35bf675f99d6e74eecbe219c0c43b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:04:21.864218 containerd[1575]: time="2025-08-13T01:04:21.863278017Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-84hvc,Uid:f2b74998-29fc-4213-8313-543c9154bc64,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"720a31a95fb2f9ee68f4c70a1d0da87f2ed35bf675f99d6e74eecbe219c0c43b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:04:21.864148 systemd[1]: run-netns-cni\x2d30ac1975\x2dc020\x2d2288\x2ddf90\x2d2407ecc6afee.mount: Deactivated successfully.
Aug 13 01:04:21.864608 kubelet[2717]: E0813 01:04:21.864542 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"720a31a95fb2f9ee68f4c70a1d0da87f2ed35bf675f99d6e74eecbe219c0c43b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Aug 13 01:04:21.865446 kubelet[2717]: E0813 01:04:21.864691 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"720a31a95fb2f9ee68f4c70a1d0da87f2ed35bf675f99d6e74eecbe219c0c43b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-84hvc"
Aug 13 01:04:21.865446 kubelet[2717]: E0813 01:04:21.864711 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"720a31a95fb2f9ee68f4c70a1d0da87f2ed35bf675f99d6e74eecbe219c0c43b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-84hvc"
Aug 13 01:04:21.865446 kubelet[2717]: E0813 01:04:21.864738 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-84hvc_calico-system(f2b74998-29fc-4213-8313-543c9154bc64)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-84hvc_calico-system(f2b74998-29fc-4213-8313-543c9154bc64)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"720a31a95fb2f9ee68f4c70a1d0da87f2ed35bf675f99d6e74eecbe219c0c43b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-84hvc" podUID="f2b74998-29fc-4213-8313-543c9154bc64"
Aug 13 01:04:21.904055 kubelet[2717]: I0813 01:04:21.903019 2717 kubelet.go:2306] "Pod admission denied" podUID="8849d2a0-ae75-4e92-8f14-537926361c6a" pod="tigera-operator/tigera-operator-5bf8dfcb4-ft9db" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:21.981878 kubelet[2717]: I0813 01:04:21.981829 2717 kubelet.go:2306] "Pod admission denied" podUID="6be06ac0-bcc1-466b-9c87-c7a0b5952832" pod="tigera-operator/tigera-operator-5bf8dfcb4-q6jvx" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:22.082756 kubelet[2717]: I0813 01:04:22.082706 2717 kubelet.go:2306] "Pod admission denied" podUID="db595102-684c-4554-bca9-2b1eaea878a4" pod="tigera-operator/tigera-operator-5bf8dfcb4-4874m" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:22.181848 kubelet[2717]: I0813 01:04:22.181808 2717 kubelet.go:2306] "Pod admission denied" podUID="6c809859-782f-4f57-b1e3-140c6e1d0b9f" pod="tigera-operator/tigera-operator-5bf8dfcb4-85xp6" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:22.280968 kubelet[2717]: I0813 01:04:22.280928 2717 kubelet.go:2306] "Pod admission denied" podUID="50bbc711-720d-4776-bcd1-c6960a6d46b3" pod="tigera-operator/tigera-operator-5bf8dfcb4-7cl6p" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:22.389293 kubelet[2717]: I0813 01:04:22.388616 2717 kubelet.go:2306] "Pod admission denied" podUID="1305caa5-4c92-4122-9387-fd131562530f" pod="tigera-operator/tigera-operator-5bf8dfcb4-vrf4m" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:22.480760 kubelet[2717]: I0813 01:04:22.480647 2717 kubelet.go:2306] "Pod admission denied" podUID="98622736-0aa2-4472-b77a-5e1242863714" pod="tigera-operator/tigera-operator-5bf8dfcb4-2jmg9" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:22.528082 kubelet[2717]: I0813 01:04:22.527845 2717 kubelet.go:2306] "Pod admission denied" podUID="27faa7ff-3323-45c7-8455-e98532dd0f9c" pod="tigera-operator/tigera-operator-5bf8dfcb4-4n855" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:22.630956 kubelet[2717]: I0813 01:04:22.630894 2717 kubelet.go:2306] "Pod admission denied" podUID="70db545a-d305-4af9-96c9-01d83f5976aa" pod="tigera-operator/tigera-operator-5bf8dfcb4-f7l8b" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:22.829672 kubelet[2717]: I0813 01:04:22.829564 2717 kubelet.go:2306] "Pod admission denied" podUID="7c6508a2-5a44-4df1-a905-90f055985faf" pod="tigera-operator/tigera-operator-5bf8dfcb4-6jffz" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:22.940002 kubelet[2717]: I0813 01:04:22.939245 2717 kubelet.go:2306] "Pod admission denied" podUID="020bd1ff-bba5-46fb-8b39-86e6f62905c0" pod="tigera-operator/tigera-operator-5bf8dfcb4-tv24q" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:23.033519 kubelet[2717]: I0813 01:04:23.033462 2717 kubelet.go:2306] "Pod admission denied" podUID="cbfd72f7-3226-40b9-a9b9-e22e54fef447" pod="tigera-operator/tigera-operator-5bf8dfcb4-qq7cp" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:23.232468 kubelet[2717]: I0813 01:04:23.232419 2717 kubelet.go:2306] "Pod admission denied" podUID="bfc71c64-7309-40b9-a504-92d255ccb3e4" pod="tigera-operator/tigera-operator-5bf8dfcb4-swdn7" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:23.333628 kubelet[2717]: I0813 01:04:23.333572 2717 kubelet.go:2306] "Pod admission denied" podUID="272fc702-6ee1-4a84-aa4f-1703f42b9a5d" pod="tigera-operator/tigera-operator-5bf8dfcb4-hdnnv" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:23.432174 kubelet[2717]: I0813 01:04:23.432101 2717 kubelet.go:2306] "Pod admission denied" podUID="b486167e-34e1-4118-b01c-6593d50f0673" pod="tigera-operator/tigera-operator-5bf8dfcb4-jzh86" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:23.564368 kubelet[2717]: I0813 01:04:23.564243 2717 kubelet.go:2306] "Pod admission denied" podUID="78942682-601c-4880-83ed-334b88d3845a" pod="tigera-operator/tigera-operator-5bf8dfcb4-4tq48" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:23.618809 kubelet[2717]: I0813 01:04:23.618445 2717 kubelet.go:2306] "Pod admission denied" podUID="5bae4eb0-9ba1-49c2-b05d-9dab9bf60a96" pod="tigera-operator/tigera-operator-5bf8dfcb4-bptwt" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:23.730740 kubelet[2717]: I0813 01:04:23.730693 2717 kubelet.go:2306] "Pod admission denied" podUID="19d958b9-4e7a-46ec-8eeb-3d7064bf0806" pod="tigera-operator/tigera-operator-5bf8dfcb4-8w5fm" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:23.829784 kubelet[2717]: I0813 01:04:23.829658 2717 kubelet.go:2306] "Pod admission denied" podUID="1575d06d-dfe9-4ca7-849b-df0e54def0a1" pod="tigera-operator/tigera-operator-5bf8dfcb4-hmjsm" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:23.933136 kubelet[2717]: I0813 01:04:23.933081 2717 kubelet.go:2306] "Pod admission denied" podUID="fed15f83-5321-495b-8f5c-7736bb805047" pod="tigera-operator/tigera-operator-5bf8dfcb4-htf2p" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:24.040213 kubelet[2717]: I0813 01:04:24.039401 2717 kubelet.go:2306] "Pod admission denied" podUID="0660a746-0e74-47d1-99a7-e80bf5280e11" pod="tigera-operator/tigera-operator-5bf8dfcb4-q8jj2" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:24.131104 kubelet[2717]: I0813 01:04:24.130990 2717 kubelet.go:2306] "Pod admission denied" podUID="52f9553a-c38b-4c8e-abf7-f3e9f8978555" pod="tigera-operator/tigera-operator-5bf8dfcb4-xf82b" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:24.231878 kubelet[2717]: I0813 01:04:24.231822 2717 kubelet.go:2306] "Pod admission denied" podUID="ec18ab9c-5898-47f2-9bb7-4f44df9abd62" pod="tigera-operator/tigera-operator-5bf8dfcb4-6mcb6" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:24.331295 kubelet[2717]: I0813 01:04:24.331248 2717 kubelet.go:2306] "Pod admission denied" podUID="0eff8b3e-395b-4660-a231-e3313ee21c51" pod="tigera-operator/tigera-operator-5bf8dfcb4-fthm4" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:24.428363 kubelet[2717]: I0813 01:04:24.428246 2717 kubelet.go:2306] "Pod admission denied" podUID="168d4102-b78a-49c4-a1bc-8ece68d3e442" pod="tigera-operator/tigera-operator-5bf8dfcb4-wfpls" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:24.535142 kubelet[2717]: I0813 01:04:24.535102 2717 kubelet.go:2306] "Pod admission denied" podUID="341bd1b7-199b-47d0-8a12-90ca5d2e7ced" pod="tigera-operator/tigera-operator-5bf8dfcb4-gzrrm" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:24.631221 kubelet[2717]: I0813 01:04:24.630964 2717 kubelet.go:2306] "Pod admission denied" podUID="a4179764-50bc-475c-b4cb-4d6dbdeb521a" pod="tigera-operator/tigera-operator-5bf8dfcb4-hfnqc" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:24.832496 kubelet[2717]: I0813 01:04:24.832444 2717 kubelet.go:2306] "Pod admission denied" podUID="979bb778-e534-423f-8f75-de2856653cae" pod="tigera-operator/tigera-operator-5bf8dfcb4-pqq7l" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:24.929840 kubelet[2717]: I0813 01:04:24.929778 2717 kubelet.go:2306] "Pod admission denied" podUID="ff45ed70-3306-4737-8171-bb8908e60993" pod="tigera-operator/tigera-operator-5bf8dfcb4-lmmt7" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:25.032425 kubelet[2717]: I0813 01:04:25.032372 2717 kubelet.go:2306] "Pod admission denied" podUID="e024a923-592a-4e4e-af40-1696a3194e0c" pod="tigera-operator/tigera-operator-5bf8dfcb4-7m49b" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:25.133314 kubelet[2717]: I0813 01:04:25.132466 2717 kubelet.go:2306] "Pod admission denied" podUID="e7a5a598-800e-4871-8e17-48fdbb1819ba" pod="tigera-operator/tigera-operator-5bf8dfcb4-b92x2" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:25.230429 kubelet[2717]: I0813 01:04:25.230383 2717 kubelet.go:2306] "Pod admission denied" podUID="2a4c647e-dfc0-4b06-990f-12a59845535a" pod="tigera-operator/tigera-operator-5bf8dfcb4-drs6l" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:25.445433 kubelet[2717]: I0813 01:04:25.445401 2717 kubelet.go:2306] "Pod admission denied" podUID="daa93888-5e52-4aa8-8ba0-c8311c10d072" pod="tigera-operator/tigera-operator-5bf8dfcb4-8fkxb" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:25.529329 kubelet[2717]: I0813 01:04:25.529278 2717 kubelet.go:2306] "Pod admission denied" podUID="882c053f-1a25-4066-831e-b4a960ae5800" pod="tigera-operator/tigera-operator-5bf8dfcb4-jk4jz" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:25.638511 kubelet[2717]: I0813 01:04:25.638460 2717 kubelet.go:2306] "Pod admission denied" podUID="2e93f9b7-8a12-42f2-a244-1c3798eed0d6" pod="tigera-operator/tigera-operator-5bf8dfcb4-nvjgh" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:25.734913 kubelet[2717]: I0813 01:04:25.734426 2717 kubelet.go:2306] "Pod admission denied" podUID="33455901-c460-4b57-9297-1a50b5bcc8b4" pod="tigera-operator/tigera-operator-5bf8dfcb4-2rxmp" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:25.833445 kubelet[2717]: I0813 01:04:25.833403 2717 kubelet.go:2306] "Pod admission denied" podUID="b936ed52-07a8-4785-b381-265a4a4cbedc" pod="tigera-operator/tigera-operator-5bf8dfcb4-ctd5d" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:26.038463 kubelet[2717]: I0813 01:04:26.037255 2717 kubelet.go:2306] "Pod admission denied" podUID="5999876b-7fac-4f88-8ca0-925dcd6cd860" pod="tigera-operator/tigera-operator-5bf8dfcb4-65vkm" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:26.132676 kubelet[2717]: I0813 01:04:26.132629 2717 kubelet.go:2306] "Pod admission denied" podUID="f5a819c3-e1f7-4edd-87b4-be6a6a123fff" pod="tigera-operator/tigera-operator-5bf8dfcb4-97dx7" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:26.229364 kubelet[2717]: I0813 01:04:26.229317 2717 kubelet.go:2306] "Pod admission denied" podUID="0100bdc0-fb0d-45eb-a94b-1f0f6603b8f0" pod="tigera-operator/tigera-operator-5bf8dfcb4-rg42q" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:26.331355 kubelet[2717]: I0813 01:04:26.330652 2717 kubelet.go:2306] "Pod admission denied" podUID="cf96fc6b-31db-4e8d-90fc-cd2d8963e989" pod="tigera-operator/tigera-operator-5bf8dfcb4-8btfl" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:26.435506 kubelet[2717]: I0813 01:04:26.435467 2717 kubelet.go:2306] "Pod admission denied" podUID="4c9d05c5-2d17-455b-b476-20c2c7e1e9ee" pod="tigera-operator/tigera-operator-5bf8dfcb4-sjlmw" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:26.634584 kubelet[2717]: I0813 01:04:26.633719 2717 kubelet.go:2306] "Pod admission denied" podUID="58d136f0-94e8-4933-a626-757762367a60" pod="tigera-operator/tigera-operator-5bf8dfcb4-qtzbl" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:26.733681 kubelet[2717]: I0813 01:04:26.733633 2717 kubelet.go:2306] "Pod admission denied" podUID="aa330af7-e19c-47ec-8a09-232840924d07" pod="tigera-operator/tigera-operator-5bf8dfcb4-j4bl4" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:26.832658 kubelet[2717]: I0813 01:04:26.832607 2717 kubelet.go:2306] "Pod admission denied" podUID="0f7514ba-b8bb-4e86-9cfa-7bef82310ab6" pod="tigera-operator/tigera-operator-5bf8dfcb4-jwg2x" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:26.934810 kubelet[2717]: I0813 01:04:26.934765 2717 kubelet.go:2306] "Pod admission denied" podUID="73301a5c-b986-4a47-8fbe-cc0a8b0cd3b7" pod="tigera-operator/tigera-operator-5bf8dfcb4-9mznz" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:27.033571 kubelet[2717]: I0813 01:04:27.033520 2717 kubelet.go:2306] "Pod admission denied" podUID="c6dd9a68-2755-47b7-a87b-78fe9d498644" pod="tigera-operator/tigera-operator-5bf8dfcb4-9w9m4" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:27.137210 kubelet[2717]: I0813 01:04:27.136727 2717 kubelet.go:2306] "Pod admission denied" podUID="0e7f8a71-eb1e-4fe7-8179-f0b5cddd038d" pod="tigera-operator/tigera-operator-5bf8dfcb4-npmq9" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:27.179729 kubelet[2717]: I0813 01:04:27.179678 2717 kubelet.go:2306] "Pod admission denied" podUID="ea8224f3-b920-4012-9a04-93fb15cf9010" pod="tigera-operator/tigera-operator-5bf8dfcb4-vdkh6" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:27.284906 kubelet[2717]: I0813 01:04:27.284787 2717 kubelet.go:2306] "Pod admission denied" podUID="8b74c4e4-2b77-4713-9cd0-02c696d49081" pod="tigera-operator/tigera-operator-5bf8dfcb4-nstmt" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:27.382568 kubelet[2717]: I0813 01:04:27.382512 2717 kubelet.go:2306] "Pod admission denied" podUID="3b6360c1-29b2-48cc-b6d6-f5c8942ed4cf" pod="tigera-operator/tigera-operator-5bf8dfcb4-m89v5" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:27.483614 kubelet[2717]: I0813 01:04:27.483568 2717 kubelet.go:2306] "Pod admission denied" podUID="85a6e3b1-b245-40ae-9a53-e5a2502d921a" pod="tigera-operator/tigera-operator-5bf8dfcb4-5zv2g" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:27.582335 kubelet[2717]: I0813 01:04:27.581630 2717 kubelet.go:2306] "Pod admission denied" podUID="c75f8b50-acc7-474a-b844-daf09bc3df4c" pod="tigera-operator/tigera-operator-5bf8dfcb4-pgdbq" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:27.682506 kubelet[2717]: I0813 01:04:27.682458 2717 kubelet.go:2306] "Pod admission denied" podUID="7974bfa1-ccae-4166-85f8-2b18d088b0c6" pod="tigera-operator/tigera-operator-5bf8dfcb4-vpld7" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:27.779592 kubelet[2717]: I0813 01:04:27.779542 2717 kubelet.go:2306] "Pod admission denied" podUID="f41e349a-a647-4b45-9bb7-127f2b2669db" pod="tigera-operator/tigera-operator-5bf8dfcb4-h6mkv" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:27.889342 kubelet[2717]: I0813 01:04:27.888683 2717 kubelet.go:2306] "Pod admission denied" podUID="9a3863ad-e080-4ced-9dc1-124a7473c883" pod="tigera-operator/tigera-operator-5bf8dfcb4-lfbzd" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:27.978614 kubelet[2717]: I0813 01:04:27.978575 2717 kubelet.go:2306] "Pod admission denied" podUID="cea6bcd7-5b3b-412f-99f3-4e21226fb8ba" pod="tigera-operator/tigera-operator-5bf8dfcb4-6jh2n" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:28.080644 kubelet[2717]: I0813 01:04:28.080597 2717 kubelet.go:2306] "Pod admission denied" podUID="8efee2d3-10e1-49a9-95fb-34a977c38c56" pod="tigera-operator/tigera-operator-5bf8dfcb4-rmb2x" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:28.187385 kubelet[2717]: I0813 01:04:28.187350 2717 kubelet.go:2306] "Pod admission denied" podUID="0785e71c-44a2-45f8-a24d-0a744b99395f" pod="tigera-operator/tigera-operator-5bf8dfcb4-zkbf8" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:28.285515 kubelet[2717]: I0813 01:04:28.285464 2717 kubelet.go:2306] "Pod admission denied" podUID="e79247b0-d57e-4100-a848-d0490555ccbd" pod="tigera-operator/tigera-operator-5bf8dfcb4-dv9vr" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:28.384637 kubelet[2717]: I0813 01:04:28.384591 2717 kubelet.go:2306] "Pod admission denied" podUID="ccce0f8b-d504-40f2-812b-7554f5102734" pod="tigera-operator/tigera-operator-5bf8dfcb4-hp4dj" reason="Evicted" message="The node had condition: [DiskPressure]. "
Aug 13 01:04:28.437168 kubelet[2717]: I0813 01:04:28.437123 2717 kubelet.go:2306] "Pod admission denied" podUID="dc23f9e9-3f9e-477d-b196-23264846231a" pod="tigera-operator/tigera-operator-5bf8dfcb4-m4jtn" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:28.533721 kubelet[2717]: I0813 01:04:28.533598 2717 kubelet.go:2306] "Pod admission denied" podUID="8ba7899e-ca50-4b42-bb0a-c5d4b17ab4d9" pod="tigera-operator/tigera-operator-5bf8dfcb4-7vc4g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:28.632536 kubelet[2717]: I0813 01:04:28.632492 2717 kubelet.go:2306] "Pod admission denied" podUID="782cdf95-1170-4255-91aa-a6251ff9c850" pod="tigera-operator/tigera-operator-5bf8dfcb4-8gh7k" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:28.743278 kubelet[2717]: I0813 01:04:28.743242 2717 kubelet.go:2306] "Pod admission denied" podUID="59e47341-ca6d-4ad6-93be-26c469cf35a3" pod="tigera-operator/tigera-operator-5bf8dfcb4-6msws" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:28.767951 containerd[1575]: time="2025-08-13T01:04:28.767698522Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 01:04:28.941215 kubelet[2717]: I0813 01:04:28.941158 2717 kubelet.go:2306] "Pod admission denied" podUID="82f32f56-7e99-4203-9d2d-922588effeb1" pod="tigera-operator/tigera-operator-5bf8dfcb4-6qfvn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:29.031960 kubelet[2717]: I0813 01:04:29.031907 2717 kubelet.go:2306] "Pod admission denied" podUID="f9db1889-f3a4-4efe-b7a5-382b1d9680a8" pod="tigera-operator/tigera-operator-5bf8dfcb4-n6ggz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:29.149392 kubelet[2717]: I0813 01:04:29.149337 2717 kubelet.go:2306] "Pod admission denied" podUID="9eb9b4ca-f401-484d-8ce6-df85b74a0db4" pod="tigera-operator/tigera-operator-5bf8dfcb4-69tlc" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:29.337899 kubelet[2717]: I0813 01:04:29.337772 2717 kubelet.go:2306] "Pod admission denied" podUID="c4d2223a-805b-4952-b7f1-6f4143e7f874" pod="tigera-operator/tigera-operator-5bf8dfcb4-vtvtc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:29.439497 kubelet[2717]: I0813 01:04:29.439438 2717 kubelet.go:2306] "Pod admission denied" podUID="b5d218e0-42da-448d-a013-05ca9f9a3110" pod="tigera-operator/tigera-operator-5bf8dfcb4-z2kbr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:29.560159 kubelet[2717]: I0813 01:04:29.559508 2717 kubelet.go:2306] "Pod admission denied" podUID="3c7781da-ce34-4abe-8fad-4c63ef0bbdf8" pod="tigera-operator/tigera-operator-5bf8dfcb4-9ttnz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:29.645020 kubelet[2717]: I0813 01:04:29.644710 2717 kubelet.go:2306] "Pod admission denied" podUID="ab150d97-6e7e-4912-ac0f-5a82efd27725" pod="tigera-operator/tigera-operator-5bf8dfcb4-6q2s7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:29.734753 kubelet[2717]: I0813 01:04:29.734708 2717 kubelet.go:2306] "Pod admission denied" podUID="52088152-9136-4121-a019-c4d468d931e7" pod="tigera-operator/tigera-operator-5bf8dfcb4-cc6jb" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:29.768422 kubelet[2717]: E0813 01:04:29.768389 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:04:29.770948 containerd[1575]: time="2025-08-13T01:04:29.770919712Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dk5p7,Uid:6787cc4f-56e4-4094-978c-958d0d7a35ba,Namespace:kube-system,Attempt:0,}" Aug 13 01:04:29.856105 containerd[1575]: time="2025-08-13T01:04:29.852542280Z" level=error msg="Failed to destroy network for sandbox \"bbcb330d3b6e8d1218897288ccd65ae0967e557209a60829d2a5ac8cab837d17\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:04:29.855890 systemd[1]: run-netns-cni\x2df24aeb14\x2d039f\x2df11c\x2d6f4f\x2da59f2f1b920e.mount: Deactivated successfully. 
Aug 13 01:04:29.859150 containerd[1575]: time="2025-08-13T01:04:29.859057724Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dk5p7,Uid:6787cc4f-56e4-4094-978c-958d0d7a35ba,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbcb330d3b6e8d1218897288ccd65ae0967e557209a60829d2a5ac8cab837d17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:04:29.859644 kubelet[2717]: E0813 01:04:29.859352 2717 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbcb330d3b6e8d1218897288ccd65ae0967e557209a60829d2a5ac8cab837d17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:04:29.859644 kubelet[2717]: E0813 01:04:29.859404 2717 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbcb330d3b6e8d1218897288ccd65ae0967e557209a60829d2a5ac8cab837d17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-dk5p7" Aug 13 01:04:29.859644 kubelet[2717]: E0813 01:04:29.859424 2717 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bbcb330d3b6e8d1218897288ccd65ae0967e557209a60829d2a5ac8cab837d17\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7c65d6cfc9-dk5p7" Aug 13 01:04:29.859644 kubelet[2717]: E0813 01:04:29.859477 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-dk5p7_kube-system(6787cc4f-56e4-4094-978c-958d0d7a35ba)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-dk5p7_kube-system(6787cc4f-56e4-4094-978c-958d0d7a35ba)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bbcb330d3b6e8d1218897288ccd65ae0967e557209a60829d2a5ac8cab837d17\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-dk5p7" podUID="6787cc4f-56e4-4094-978c-958d0d7a35ba" Aug 13 01:04:29.945215 kubelet[2717]: I0813 01:04:29.943591 2717 kubelet.go:2306] "Pod admission denied" podUID="73b6747d-2221-4304-89f2-8a2abbc64c95" pod="tigera-operator/tigera-operator-5bf8dfcb4-nw4cs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:30.033585 kubelet[2717]: I0813 01:04:30.033540 2717 kubelet.go:2306] "Pod admission denied" podUID="42f7f044-4d79-4aa7-9e76-59320202446f" pod="tigera-operator/tigera-operator-5bf8dfcb4-xgmqw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:30.157405 kubelet[2717]: I0813 01:04:30.156578 2717 kubelet.go:2306] "Pod admission denied" podUID="5318e622-145e-4cb3-b70e-96a8e88464d7" pod="tigera-operator/tigera-operator-5bf8dfcb4-4ssng" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:30.236663 kubelet[2717]: I0813 01:04:30.236560 2717 kubelet.go:2306] "Pod admission denied" podUID="226d4547-7bf3-4d57-bcd2-e119a3aa5412" pod="tigera-operator/tigera-operator-5bf8dfcb4-z5bv5" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:30.342345 kubelet[2717]: I0813 01:04:30.342303 2717 kubelet.go:2306] "Pod admission denied" podUID="3ebcc6e7-d4e4-429b-b63a-ca94fff7b9e9" pod="tigera-operator/tigera-operator-5bf8dfcb4-z4j6q" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:30.436379 kubelet[2717]: I0813 01:04:30.436316 2717 kubelet.go:2306] "Pod admission denied" podUID="2d2ca5d3-0dfe-4ee1-b913-e82dd5b750b6" pod="tigera-operator/tigera-operator-5bf8dfcb4-db4hd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:30.486143 kubelet[2717]: I0813 01:04:30.486075 2717 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:04:30.486143 kubelet[2717]: I0813 01:04:30.486113 2717 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:04:30.489369 kubelet[2717]: I0813 01:04:30.489159 2717 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:04:30.491843 kubelet[2717]: I0813 01:04:30.491808 2717 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" size=18562039 runtimeHandler="" Aug 13 01:04:30.492288 containerd[1575]: time="2025-08-13T01:04:30.492235181Z" level=info msg="RemoveImage \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 01:04:30.493525 containerd[1575]: time="2025-08-13T01:04:30.493466391Z" level=info msg="ImageDelete event name:\"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 01:04:30.494111 containerd[1575]: time="2025-08-13T01:04:30.494094681Z" level=info msg="ImageDelete event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\"" Aug 13 01:04:30.494698 containerd[1575]: time="2025-08-13T01:04:30.494682002Z" level=info msg="RemoveImage \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" returns successfully" Aug 13 
01:04:30.495224 containerd[1575]: time="2025-08-13T01:04:30.494732001Z" level=info msg="ImageDelete event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 01:04:30.495313 kubelet[2717]: I0813 01:04:30.495274 2717 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" size=56909194 runtimeHandler="" Aug 13 01:04:30.495447 containerd[1575]: time="2025-08-13T01:04:30.495429580Z" level=info msg="RemoveImage \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Aug 13 01:04:30.496687 containerd[1575]: time="2025-08-13T01:04:30.496657370Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd:3.5.15-0\"" Aug 13 01:04:30.497579 containerd[1575]: time="2025-08-13T01:04:30.497509377Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\"" Aug 13 01:04:30.498528 containerd[1575]: time="2025-08-13T01:04:30.498417732Z" level=info msg="RemoveImage \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" returns successfully" Aug 13 01:04:30.498528 containerd[1575]: time="2025-08-13T01:04:30.498493771Z" level=info msg="ImageDelete event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Aug 13 01:04:30.512967 kubelet[2717]: I0813 01:04:30.512937 2717 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:04:30.513263 kubelet[2717]: I0813 01:04:30.513245 2717 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" 
pods=["kube-system/coredns-7c65d6cfc9-dk5p7","kube-system/coredns-7c65d6cfc9-tp469","calico-system/calico-kube-controllers-564d8b8748-ps97n","calico-system/calico-node-7pdcs","calico-system/csi-node-driver-84hvc","calico-system/calico-typha-5fdd567c68-zgxjx","kube-system/kube-controller-manager-172-233-209-21","kube-system/kube-proxy-ff6qp","kube-system/kube-apiserver-172-233-209-21","kube-system/kube-scheduler-172-233-209-21"] Aug 13 01:04:30.513431 kubelet[2717]: E0813 01:04:30.513341 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dk5p7" Aug 13 01:04:30.513431 kubelet[2717]: E0813 01:04:30.513353 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-tp469" Aug 13 01:04:30.513431 kubelet[2717]: E0813 01:04:30.513360 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" Aug 13 01:04:30.513431 kubelet[2717]: E0813 01:04:30.513368 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-7pdcs" Aug 13 01:04:30.513431 kubelet[2717]: E0813 01:04:30.513374 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-84hvc" Aug 13 01:04:30.513757 kubelet[2717]: E0813 01:04:30.513571 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-5fdd567c68-zgxjx" Aug 13 01:04:30.513757 kubelet[2717]: E0813 01:04:30.513692 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-209-21" Aug 13 01:04:30.513757 kubelet[2717]: E0813 01:04:30.513703 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-ff6qp" Aug 13 01:04:30.513757 kubelet[2717]: E0813 01:04:30.513711 2717 eviction_manager.go:598] 
"Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-209-21" Aug 13 01:04:30.513757 kubelet[2717]: E0813 01:04:30.513720 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-209-21" Aug 13 01:04:30.513757 kubelet[2717]: I0813 01:04:30.513730 2717 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:04:30.545031 kubelet[2717]: I0813 01:04:30.544939 2717 kubelet.go:2306] "Pod admission denied" podUID="c7929fa6-617c-4930-a2b8-918bde6a51f7" pod="tigera-operator/tigera-operator-5bf8dfcb4-cl4xf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:30.641935 kubelet[2717]: I0813 01:04:30.641876 2717 kubelet.go:2306] "Pod admission denied" podUID="f84c4a3a-bbd1-42aa-9b98-9d01a1bbda3e" pod="tigera-operator/tigera-operator-5bf8dfcb4-nb5fh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:30.741569 kubelet[2717]: I0813 01:04:30.741460 2717 kubelet.go:2306] "Pod admission denied" podUID="013e2257-5ca0-4eaf-9f4e-6091b646d659" pod="tigera-operator/tigera-operator-5bf8dfcb4-xl7mv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:30.873944 kubelet[2717]: I0813 01:04:30.873892 2717 kubelet.go:2306] "Pod admission denied" podUID="c1e8c925-91f5-4b2d-97bc-189ecc9f8161" pod="tigera-operator/tigera-operator-5bf8dfcb4-f6pjr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:30.993323 kubelet[2717]: I0813 01:04:30.993213 2717 kubelet.go:2306] "Pod admission denied" podUID="a16db7ef-3f3d-4876-8733-4d42f2a9fdf7" pod="tigera-operator/tigera-operator-5bf8dfcb4-552cs" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:31.094561 kubelet[2717]: I0813 01:04:31.094503 2717 kubelet.go:2306] "Pod admission denied" podUID="501a493b-209b-409c-90a2-d7c328baf74f" pod="tigera-operator/tigera-operator-5bf8dfcb4-zfdf4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:31.200501 kubelet[2717]: I0813 01:04:31.200311 2717 kubelet.go:2306] "Pod admission denied" podUID="69c8fb7a-2a03-4aa5-a849-f7a42f0b33d5" pod="tigera-operator/tigera-operator-5bf8dfcb4-jsbtk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:31.292042 kubelet[2717]: I0813 01:04:31.291703 2717 kubelet.go:2306] "Pod admission denied" podUID="5c58a8c8-52c2-4689-a6e7-55e25b52fd9d" pod="tigera-operator/tigera-operator-5bf8dfcb4-6qbbf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:31.433637 kubelet[2717]: I0813 01:04:31.433587 2717 kubelet.go:2306] "Pod admission denied" podUID="73a80ddc-18a8-45ed-80cc-19bde07b08ff" pod="tigera-operator/tigera-operator-5bf8dfcb4-prz7l" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:31.541842 kubelet[2717]: I0813 01:04:31.541793 2717 kubelet.go:2306] "Pod admission denied" podUID="197fac41-2158-4114-a701-ceee1b34a350" pod="tigera-operator/tigera-operator-5bf8dfcb4-zr55f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:31.590312 kubelet[2717]: I0813 01:04:31.590222 2717 kubelet.go:2306] "Pod admission denied" podUID="01578a5e-6a75-464b-a839-c296328dbcb9" pod="tigera-operator/tigera-operator-5bf8dfcb4-pfcw2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:31.694314 kubelet[2717]: I0813 01:04:31.694254 2717 kubelet.go:2306] "Pod admission denied" podUID="cc63ce53-126d-4aae-af31-7303ec1b2cfa" pod="tigera-operator/tigera-operator-5bf8dfcb4-x5zcv" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:31.794257 kubelet[2717]: I0813 01:04:31.794008 2717 kubelet.go:2306] "Pod admission denied" podUID="c952d087-9d08-4d4b-a9c5-c065130c89b2" pod="tigera-operator/tigera-operator-5bf8dfcb4-pnvwg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:31.927127 kubelet[2717]: I0813 01:04:31.925084 2717 kubelet.go:2306] "Pod admission denied" podUID="ec2f4c81-dc62-45ae-a3f2-2f700b74cd42" pod="tigera-operator/tigera-operator-5bf8dfcb4-62cjw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:31.993298 kubelet[2717]: I0813 01:04:31.993263 2717 kubelet.go:2306] "Pod admission denied" podUID="a97df1dc-ae6d-4839-899e-24b810ee124d" pod="tigera-operator/tigera-operator-5bf8dfcb4-gqv7c" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:32.090527 kubelet[2717]: I0813 01:04:32.090467 2717 kubelet.go:2306] "Pod admission denied" podUID="9743cd12-9391-4139-b04c-32639bd5de4e" pod="tigera-operator/tigera-operator-5bf8dfcb4-x27dx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:32.318766 kubelet[2717]: I0813 01:04:32.318719 2717 kubelet.go:2306] "Pod admission denied" podUID="808417f2-6e60-470c-a92c-cb8db6748da7" pod="tigera-operator/tigera-operator-5bf8dfcb4-9dtcp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:32.388760 kubelet[2717]: I0813 01:04:32.388725 2717 kubelet.go:2306] "Pod admission denied" podUID="eb826ce0-c5e8-4719-88c5-b361f94ec9e0" pod="tigera-operator/tigera-operator-5bf8dfcb4-676wp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:32.508086 kubelet[2717]: I0813 01:04:32.508043 2717 kubelet.go:2306] "Pod admission denied" podUID="aeacebb6-0c90-4540-9527-4cb4330146a6" pod="tigera-operator/tigera-operator-5bf8dfcb4-br42r" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:32.750368 kubelet[2717]: I0813 01:04:32.750315 2717 kubelet.go:2306] "Pod admission denied" podUID="958d7854-d777-4792-8e5c-cf6eef39e2f2" pod="tigera-operator/tigera-operator-5bf8dfcb4-kwt7b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:32.851013 kubelet[2717]: I0813 01:04:32.850824 2717 kubelet.go:2306] "Pod admission denied" podUID="755af5da-4348-4fc4-a913-5ca1c3feabaf" pod="tigera-operator/tigera-operator-5bf8dfcb4-wrzkk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:33.049367 kubelet[2717]: I0813 01:04:33.049266 2717 kubelet.go:2306] "Pod admission denied" podUID="7b3ae47e-6f94-4066-aeb9-77a39ef9b084" pod="tigera-operator/tigera-operator-5bf8dfcb4-4kn2v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:33.141456 kubelet[2717]: I0813 01:04:33.141416 2717 kubelet.go:2306] "Pod admission denied" podUID="f65de91a-5a21-49c8-a121-cee059f91861" pod="tigera-operator/tigera-operator-5bf8dfcb4-d57jz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:33.257637 kubelet[2717]: I0813 01:04:33.256673 2717 kubelet.go:2306] "Pod admission denied" podUID="bbdcf0d7-97d3-453e-a888-44b1662e150a" pod="tigera-operator/tigera-operator-5bf8dfcb4-fnvzz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:33.505142 kubelet[2717]: I0813 01:04:33.504894 2717 kubelet.go:2306] "Pod admission denied" podUID="4b4dd8f0-84c2-4f8e-8f79-424011676c74" pod="tigera-operator/tigera-operator-5bf8dfcb4-t6nl5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:33.596354 kubelet[2717]: I0813 01:04:33.596286 2717 kubelet.go:2306] "Pod admission denied" podUID="43b7ca6e-227b-4a34-853e-1cd67053aa9b" pod="tigera-operator/tigera-operator-5bf8dfcb4-fx2cj" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:33.726824 kubelet[2717]: I0813 01:04:33.726562 2717 kubelet.go:2306] "Pod admission denied" podUID="835fee5c-edfd-4cdc-ba82-9142d02f2c0f" pod="tigera-operator/tigera-operator-5bf8dfcb4-dc9lp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:33.841627 kubelet[2717]: I0813 01:04:33.841301 2717 kubelet.go:2306] "Pod admission denied" podUID="db84cf35-c802-401c-87e0-dbc0448041c3" pod="tigera-operator/tigera-operator-5bf8dfcb4-8p5dm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:33.951659 kubelet[2717]: I0813 01:04:33.951608 2717 kubelet.go:2306] "Pod admission denied" podUID="e3f20f27-d011-4f3f-a5d3-951e703c71f2" pod="tigera-operator/tigera-operator-5bf8dfcb4-nnl5z" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:34.067069 kubelet[2717]: I0813 01:04:34.067004 2717 kubelet.go:2306] "Pod admission denied" podUID="0f6c320f-4ee5-46f4-9ef1-dc719ca3db60" pod="tigera-operator/tigera-operator-5bf8dfcb4-cfq87" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:34.080784 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2190886061.mount: Deactivated successfully. Aug 13 01:04:34.137293 kubelet[2717]: I0813 01:04:34.136794 2717 kubelet.go:2306] "Pod admission denied" podUID="b076d512-830f-4cfd-b067-92f959c475ff" pod="tigera-operator/tigera-operator-5bf8dfcb4-rgrnz" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:34.137399 containerd[1575]: time="2025-08-13T01:04:34.137250841Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:04:34.138913 containerd[1575]: time="2025-08-13T01:04:34.138605521Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 01:04:34.139593 containerd[1575]: time="2025-08-13T01:04:34.139146113Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:04:34.142219 containerd[1575]: time="2025-08-13T01:04:34.141571587Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:04:34.142564 containerd[1575]: time="2025-08-13T01:04:34.142516643Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 5.374784492s" Aug 13 01:04:34.142564 containerd[1575]: time="2025-08-13T01:04:34.142548563Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 13 01:04:34.167716 containerd[1575]: time="2025-08-13T01:04:34.167683340Z" level=info msg="CreateContainer within sandbox \"d12cf5aacc597b7c6167326efb88e368e58730ffb459492ad3a89d6754a56a6d\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 01:04:34.179650 containerd[1575]: time="2025-08-13T01:04:34.176389341Z" level=info msg="Container 
6353c1e40a0c53412135ac4fbfd177615839c7b54973b0c0cad5c4d48fa83c96: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:04:34.192211 containerd[1575]: time="2025-08-13T01:04:34.192165877Z" level=info msg="CreateContainer within sandbox \"d12cf5aacc597b7c6167326efb88e368e58730ffb459492ad3a89d6754a56a6d\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6353c1e40a0c53412135ac4fbfd177615839c7b54973b0c0cad5c4d48fa83c96\"" Aug 13 01:04:34.192730 containerd[1575]: time="2025-08-13T01:04:34.192712379Z" level=info msg="StartContainer for \"6353c1e40a0c53412135ac4fbfd177615839c7b54973b0c0cad5c4d48fa83c96\"" Aug 13 01:04:34.194041 containerd[1575]: time="2025-08-13T01:04:34.194019219Z" level=info msg="connecting to shim 6353c1e40a0c53412135ac4fbfd177615839c7b54973b0c0cad5c4d48fa83c96" address="unix:///run/containerd/s/ea3ca65a49e80f52abda37f00293fb706217e18eb71f9546cee6ff146935258b" protocol=ttrpc version=3 Aug 13 01:04:34.216394 systemd[1]: Started cri-containerd-6353c1e40a0c53412135ac4fbfd177615839c7b54973b0c0cad5c4d48fa83c96.scope - libcontainer container 6353c1e40a0c53412135ac4fbfd177615839c7b54973b0c0cad5c4d48fa83c96. Aug 13 01:04:34.245707 kubelet[2717]: I0813 01:04:34.245642 2717 kubelet.go:2306] "Pod admission denied" podUID="becf8c5c-ba3e-4069-be62-e189748fc6d9" pod="tigera-operator/tigera-operator-5bf8dfcb4-plfdb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:34.320021 containerd[1575]: time="2025-08-13T01:04:34.319982131Z" level=info msg="StartContainer for \"6353c1e40a0c53412135ac4fbfd177615839c7b54973b0c0cad5c4d48fa83c96\" returns successfully" Aug 13 01:04:34.343675 kubelet[2717]: I0813 01:04:34.343623 2717 kubelet.go:2306] "Pod admission denied" podUID="2a4e693c-cbaf-4936-a677-1ba71bdf8114" pod="tigera-operator/tigera-operator-5bf8dfcb4-5hh7g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:34.417356 kernel: wireguard: WireGuard 1.0.0 loaded. 
See www.wireguard.com for information. Aug 13 01:04:34.417448 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Aug 13 01:04:34.444642 kubelet[2717]: I0813 01:04:34.444577 2717 kubelet.go:2306] "Pod admission denied" podUID="c716ccd4-98f3-4b13-8f15-95e3851a6b47" pod="tigera-operator/tigera-operator-5bf8dfcb4-ndxs4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:34.540442 kubelet[2717]: I0813 01:04:34.540395 2717 kubelet.go:2306] "Pod admission denied" podUID="b0ca45fa-f848-4ae8-8d0f-bafbdca2c333" pod="tigera-operator/tigera-operator-5bf8dfcb4-gjlkm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:34.642136 kubelet[2717]: I0813 01:04:34.641945 2717 kubelet.go:2306] "Pod admission denied" podUID="0dc82ac5-a6e9-43ef-a62c-ee3db3c712ca" pod="tigera-operator/tigera-operator-5bf8dfcb4-g8j4p" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:34.691227 kubelet[2717]: I0813 01:04:34.691141 2717 kubelet.go:2306] "Pod admission denied" podUID="13e45a48-6b0b-4b6e-8467-f42748f271d0" pod="tigera-operator/tigera-operator-5bf8dfcb4-twrhm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:34.772493 containerd[1575]: time="2025-08-13T01:04:34.771841539Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-84hvc,Uid:f2b74998-29fc-4213-8313-543c9154bc64,Namespace:calico-system,Attempt:0,}" Aug 13 01:04:34.819398 kubelet[2717]: I0813 01:04:34.818848 2717 kubelet.go:2306] "Pod admission denied" podUID="e2ab5211-bf9d-43f4-a858-747d901904ab" pod="tigera-operator/tigera-operator-5bf8dfcb4-5lrnj" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:34.909015 systemd-networkd[1460]: calif772b7758e2: Link UP Aug 13 01:04:34.909590 systemd-networkd[1460]: calif772b7758e2: Gained carrier Aug 13 01:04:34.928971 containerd[1575]: 2025-08-13 01:04:34.818 [INFO][4743] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 01:04:34.928971 containerd[1575]: 2025-08-13 01:04:34.834 [INFO][4743] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--233--209--21-k8s-csi--node--driver--84hvc-eth0 csi-node-driver- calico-system f2b74998-29fc-4213-8313-543c9154bc64 749 0 2025-08-13 01:02:43 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-233-209-21 csi-node-driver-84hvc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calif772b7758e2 [] [] }} ContainerID="058dd54c20b62da2d5f6420f43f5baf16f3bdef618a5b31abe2f72a3b493653c" Namespace="calico-system" Pod="csi-node-driver-84hvc" WorkloadEndpoint="172--233--209--21-k8s-csi--node--driver--84hvc-" Aug 13 01:04:34.928971 containerd[1575]: 2025-08-13 01:04:34.835 [INFO][4743] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="058dd54c20b62da2d5f6420f43f5baf16f3bdef618a5b31abe2f72a3b493653c" Namespace="calico-system" Pod="csi-node-driver-84hvc" WorkloadEndpoint="172--233--209--21-k8s-csi--node--driver--84hvc-eth0" Aug 13 01:04:34.928971 containerd[1575]: 2025-08-13 01:04:34.865 [INFO][4756] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="058dd54c20b62da2d5f6420f43f5baf16f3bdef618a5b31abe2f72a3b493653c" HandleID="k8s-pod-network.058dd54c20b62da2d5f6420f43f5baf16f3bdef618a5b31abe2f72a3b493653c" Workload="172--233--209--21-k8s-csi--node--driver--84hvc-eth0" 
Aug 13 01:04:34.929358 containerd[1575]: 2025-08-13 01:04:34.865 [INFO][4756] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="058dd54c20b62da2d5f6420f43f5baf16f3bdef618a5b31abe2f72a3b493653c" HandleID="k8s-pod-network.058dd54c20b62da2d5f6420f43f5baf16f3bdef618a5b31abe2f72a3b493653c" Workload="172--233--209--21-k8s-csi--node--driver--84hvc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024fbe0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-233-209-21", "pod":"csi-node-driver-84hvc", "timestamp":"2025-08-13 01:04:34.865585939 +0000 UTC"}, Hostname:"172-233-209-21", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:04:34.929358 containerd[1575]: 2025-08-13 01:04:34.865 [INFO][4756] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:04:34.929358 containerd[1575]: 2025-08-13 01:04:34.865 [INFO][4756] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 01:04:34.929358 containerd[1575]: 2025-08-13 01:04:34.865 [INFO][4756] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-233-209-21' Aug 13 01:04:34.929358 containerd[1575]: 2025-08-13 01:04:34.872 [INFO][4756] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.058dd54c20b62da2d5f6420f43f5baf16f3bdef618a5b31abe2f72a3b493653c" host="172-233-209-21" Aug 13 01:04:34.929358 containerd[1575]: 2025-08-13 01:04:34.877 [INFO][4756] ipam/ipam.go 394: Looking up existing affinities for host host="172-233-209-21" Aug 13 01:04:34.929358 containerd[1575]: 2025-08-13 01:04:34.881 [INFO][4756] ipam/ipam.go 511: Trying affinity for 192.168.107.64/26 host="172-233-209-21" Aug 13 01:04:34.929358 containerd[1575]: 2025-08-13 01:04:34.883 [INFO][4756] ipam/ipam.go 158: Attempting to load block cidr=192.168.107.64/26 host="172-233-209-21" Aug 13 01:04:34.929358 containerd[1575]: 2025-08-13 01:04:34.885 [INFO][4756] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.107.64/26 host="172-233-209-21" Aug 13 01:04:34.929358 containerd[1575]: 2025-08-13 01:04:34.885 [INFO][4756] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.107.64/26 handle="k8s-pod-network.058dd54c20b62da2d5f6420f43f5baf16f3bdef618a5b31abe2f72a3b493653c" host="172-233-209-21" Aug 13 01:04:34.929560 containerd[1575]: 2025-08-13 01:04:34.887 [INFO][4756] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.058dd54c20b62da2d5f6420f43f5baf16f3bdef618a5b31abe2f72a3b493653c Aug 13 01:04:34.929560 containerd[1575]: 2025-08-13 01:04:34.891 [INFO][4756] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.107.64/26 handle="k8s-pod-network.058dd54c20b62da2d5f6420f43f5baf16f3bdef618a5b31abe2f72a3b493653c" host="172-233-209-21" Aug 13 01:04:34.929560 containerd[1575]: 2025-08-13 01:04:34.896 [INFO][4756] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.107.65/26] block=192.168.107.64/26 
handle="k8s-pod-network.058dd54c20b62da2d5f6420f43f5baf16f3bdef618a5b31abe2f72a3b493653c" host="172-233-209-21" Aug 13 01:04:34.929560 containerd[1575]: 2025-08-13 01:04:34.896 [INFO][4756] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.107.65/26] handle="k8s-pod-network.058dd54c20b62da2d5f6420f43f5baf16f3bdef618a5b31abe2f72a3b493653c" host="172-233-209-21" Aug 13 01:04:34.929560 containerd[1575]: 2025-08-13 01:04:34.896 [INFO][4756] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:04:34.929560 containerd[1575]: 2025-08-13 01:04:34.896 [INFO][4756] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.65/26] IPv6=[] ContainerID="058dd54c20b62da2d5f6420f43f5baf16f3bdef618a5b31abe2f72a3b493653c" HandleID="k8s-pod-network.058dd54c20b62da2d5f6420f43f5baf16f3bdef618a5b31abe2f72a3b493653c" Workload="172--233--209--21-k8s-csi--node--driver--84hvc-eth0" Aug 13 01:04:34.929675 containerd[1575]: 2025-08-13 01:04:34.900 [INFO][4743] cni-plugin/k8s.go 418: Populated endpoint ContainerID="058dd54c20b62da2d5f6420f43f5baf16f3bdef618a5b31abe2f72a3b493653c" Namespace="calico-system" Pod="csi-node-driver-84hvc" WorkloadEndpoint="172--233--209--21-k8s-csi--node--driver--84hvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--209--21-k8s-csi--node--driver--84hvc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f2b74998-29fc-4213-8313-543c9154bc64", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 2, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-209-21", ContainerID:"", Pod:"csi-node-driver-84hvc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.107.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif772b7758e2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:04:34.929724 containerd[1575]: 2025-08-13 01:04:34.901 [INFO][4743] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.107.65/32] ContainerID="058dd54c20b62da2d5f6420f43f5baf16f3bdef618a5b31abe2f72a3b493653c" Namespace="calico-system" Pod="csi-node-driver-84hvc" WorkloadEndpoint="172--233--209--21-k8s-csi--node--driver--84hvc-eth0" Aug 13 01:04:34.929724 containerd[1575]: 2025-08-13 01:04:34.901 [INFO][4743] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calif772b7758e2 ContainerID="058dd54c20b62da2d5f6420f43f5baf16f3bdef618a5b31abe2f72a3b493653c" Namespace="calico-system" Pod="csi-node-driver-84hvc" WorkloadEndpoint="172--233--209--21-k8s-csi--node--driver--84hvc-eth0" Aug 13 01:04:34.929724 containerd[1575]: 2025-08-13 01:04:34.908 [INFO][4743] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="058dd54c20b62da2d5f6420f43f5baf16f3bdef618a5b31abe2f72a3b493653c" Namespace="calico-system" Pod="csi-node-driver-84hvc" WorkloadEndpoint="172--233--209--21-k8s-csi--node--driver--84hvc-eth0" Aug 13 01:04:34.929790 containerd[1575]: 2025-08-13 01:04:34.908 [INFO][4743] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="058dd54c20b62da2d5f6420f43f5baf16f3bdef618a5b31abe2f72a3b493653c" Namespace="calico-system" Pod="csi-node-driver-84hvc" WorkloadEndpoint="172--233--209--21-k8s-csi--node--driver--84hvc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--209--21-k8s-csi--node--driver--84hvc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"f2b74998-29fc-4213-8313-543c9154bc64", ResourceVersion:"749", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 2, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-209-21", ContainerID:"058dd54c20b62da2d5f6420f43f5baf16f3bdef618a5b31abe2f72a3b493653c", Pod:"csi-node-driver-84hvc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.107.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calif772b7758e2", MAC:"c2:d9:ad:9e:dd:88", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:04:34.929837 containerd[1575]: 2025-08-13 01:04:34.918 [INFO][4743] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="058dd54c20b62da2d5f6420f43f5baf16f3bdef618a5b31abe2f72a3b493653c" 
Namespace="calico-system" Pod="csi-node-driver-84hvc" WorkloadEndpoint="172--233--209--21-k8s-csi--node--driver--84hvc-eth0" Aug 13 01:04:34.948212 containerd[1575]: time="2025-08-13T01:04:34.946333811Z" level=info msg="connecting to shim 058dd54c20b62da2d5f6420f43f5baf16f3bdef618a5b31abe2f72a3b493653c" address="unix:///run/containerd/s/bceed9ed9bbc51c805803b267507ad0b3b25796d10a6ddf285d784eb7529b2aa" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:04:34.973390 systemd[1]: Started cri-containerd-058dd54c20b62da2d5f6420f43f5baf16f3bdef618a5b31abe2f72a3b493653c.scope - libcontainer container 058dd54c20b62da2d5f6420f43f5baf16f3bdef618a5b31abe2f72a3b493653c. Aug 13 01:04:34.999541 kubelet[2717]: I0813 01:04:34.999493 2717 kubelet.go:2306] "Pod admission denied" podUID="279e786c-bef7-439a-8823-df14e8b193ef" pod="tigera-operator/tigera-operator-5bf8dfcb4-4zs99" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:35.025419 containerd[1575]: time="2025-08-13T01:04:35.025172778Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-84hvc,Uid:f2b74998-29fc-4213-8313-543c9154bc64,Namespace:calico-system,Attempt:0,} returns sandbox id \"058dd54c20b62da2d5f6420f43f5baf16f3bdef618a5b31abe2f72a3b493653c\"" Aug 13 01:04:35.031222 containerd[1575]: time="2025-08-13T01:04:35.030320303Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 01:04:35.099641 kubelet[2717]: I0813 01:04:35.099598 2717 kubelet.go:2306] "Pod admission denied" podUID="1efe0f20-527d-4429-a769-ce2bd3cf3952" pod="tigera-operator/tigera-operator-5bf8dfcb4-6pnmc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:35.187683 kubelet[2717]: I0813 01:04:35.187629 2717 kubelet.go:2306] "Pod admission denied" podUID="d209412c-0c03-4bde-b9d6-e81e37772eea" pod="tigera-operator/tigera-operator-5bf8dfcb4-54c67" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:35.250585 kubelet[2717]: I0813 01:04:35.250517 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-7pdcs" podStartSLOduration=1.560606163 podStartE2EDuration="1m52.250501862s" podCreationTimestamp="2025-08-13 01:02:43 +0000 UTC" firstStartedPulling="2025-08-13 01:02:43.45361038 +0000 UTC m=+18.778428744" lastFinishedPulling="2025-08-13 01:04:34.143506079 +0000 UTC m=+129.468324443" observedRunningTime="2025-08-13 01:04:35.231668567 +0000 UTC m=+130.556486931" watchObservedRunningTime="2025-08-13 01:04:35.250501862 +0000 UTC m=+130.575320226" Aug 13 01:04:35.304404 containerd[1575]: time="2025-08-13T01:04:35.304226429Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6353c1e40a0c53412135ac4fbfd177615839c7b54973b0c0cad5c4d48fa83c96\" id:\"e6dfc56065c9fb0a45bbdd8563866e3291295b8d017462da2ce7c5e4fe8fe17c\" pid:4830 exit_status:1 exited_at:{seconds:1755047075 nanos:303514179}" Aug 13 01:04:35.401210 kubelet[2717]: I0813 01:04:35.401047 2717 kubelet.go:2306] "Pod admission denied" podUID="8b45f0be-f0a7-48a7-b186-accf7c6cb0eb" pod="tigera-operator/tigera-operator-5bf8dfcb4-947t4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:35.486453 kubelet[2717]: I0813 01:04:35.486404 2717 kubelet.go:2306] "Pod admission denied" podUID="11c02f42-aeb0-4c75-8379-40eeca8cba3e" pod="tigera-operator/tigera-operator-5bf8dfcb4-w7bmw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:35.538918 kubelet[2717]: I0813 01:04:35.538878 2717 kubelet.go:2306] "Pod admission denied" podUID="bdfd6370-7c41-4e2b-b9ba-927f7983f019" pod="tigera-operator/tigera-operator-5bf8dfcb4-mtwfr" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:35.647908 kubelet[2717]: I0813 01:04:35.647796 2717 kubelet.go:2306] "Pod admission denied" podUID="4a5444f8-6348-44fc-bb0b-e063e4a7a486" pod="tigera-operator/tigera-operator-5bf8dfcb4-bg7hj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:35.753183 kubelet[2717]: I0813 01:04:35.753135 2717 kubelet.go:2306] "Pod admission denied" podUID="e28f9d25-72ff-49eb-af82-c6639f1e67f3" pod="tigera-operator/tigera-operator-5bf8dfcb4-kzd97" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:35.765430 containerd[1575]: time="2025-08-13T01:04:35.765397153Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-564d8b8748-ps97n,Uid:a7e8405c-2c82-420c-bac7-a7277571f968,Namespace:calico-system,Attempt:0,}" Aug 13 01:04:35.904430 kubelet[2717]: I0813 01:04:35.903466 2717 kubelet.go:2306] "Pod admission denied" podUID="97f043a8-c9f9-4cc3-988d-bc7256aab0a9" pod="tigera-operator/tigera-operator-5bf8dfcb4-sdj9w" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:36.001403 containerd[1575]: time="2025-08-13T01:04:36.001280943Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:04:36.003386 containerd[1575]: time="2025-08-13T01:04:36.003355624Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Aug 13 01:04:36.004253 containerd[1575]: time="2025-08-13T01:04:36.004223481Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:04:36.008073 containerd[1575]: time="2025-08-13T01:04:36.008030397Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:04:36.009967 containerd[1575]: time="2025-08-13T01:04:36.009939819Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 979.595726ms" Aug 13 01:04:36.010119 containerd[1575]: time="2025-08-13T01:04:36.009967969Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Aug 13 01:04:36.013856 containerd[1575]: time="2025-08-13T01:04:36.013776874Z" level=info msg="CreateContainer within sandbox \"058dd54c20b62da2d5f6420f43f5baf16f3bdef618a5b31abe2f72a3b493653c\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 01:04:36.027208 containerd[1575]: time="2025-08-13T01:04:36.027161942Z" level=info msg="Container 
37d73f4c36a95159b71863adcc733f515191434c71d57584d4991182c1a2c6f5: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:04:36.041339 containerd[1575]: time="2025-08-13T01:04:36.040380193Z" level=info msg="CreateContainer within sandbox \"058dd54c20b62da2d5f6420f43f5baf16f3bdef618a5b31abe2f72a3b493653c\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"37d73f4c36a95159b71863adcc733f515191434c71d57584d4991182c1a2c6f5\"" Aug 13 01:04:36.041339 containerd[1575]: time="2025-08-13T01:04:36.041281590Z" level=info msg="StartContainer for \"37d73f4c36a95159b71863adcc733f515191434c71d57584d4991182c1a2c6f5\"" Aug 13 01:04:36.044263 containerd[1575]: time="2025-08-13T01:04:36.044163768Z" level=info msg="connecting to shim 37d73f4c36a95159b71863adcc733f515191434c71d57584d4991182c1a2c6f5" address="unix:///run/containerd/s/bceed9ed9bbc51c805803b267507ad0b3b25796d10a6ddf285d784eb7529b2aa" protocol=ttrpc version=3 Aug 13 01:04:36.079322 systemd[1]: Started cri-containerd-37d73f4c36a95159b71863adcc733f515191434c71d57584d4991182c1a2c6f5.scope - libcontainer container 37d73f4c36a95159b71863adcc733f515191434c71d57584d4991182c1a2c6f5. 
Aug 13 01:04:36.088393 systemd-networkd[1460]: cali81338d0161c: Link UP Aug 13 01:04:36.090549 systemd-networkd[1460]: cali81338d0161c: Gained carrier Aug 13 01:04:36.118838 containerd[1575]: 2025-08-13 01:04:35.822 [INFO][4864] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 01:04:36.118838 containerd[1575]: 2025-08-13 01:04:35.843 [INFO][4864] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--233--209--21-k8s-calico--kube--controllers--564d8b8748--ps97n-eth0 calico-kube-controllers-564d8b8748- calico-system a7e8405c-2c82-420c-bac7-a7277571f968 845 0 2025-08-13 01:02:43 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:564d8b8748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-233-209-21 calico-kube-controllers-564d8b8748-ps97n eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali81338d0161c [] [] }} ContainerID="f5c8de6adaf6b78a369fef8561cb9f0c3c84d6164b6b4ea1370456a01127d30a" Namespace="calico-system" Pod="calico-kube-controllers-564d8b8748-ps97n" WorkloadEndpoint="172--233--209--21-k8s-calico--kube--controllers--564d8b8748--ps97n-" Aug 13 01:04:36.118838 containerd[1575]: 2025-08-13 01:04:35.843 [INFO][4864] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f5c8de6adaf6b78a369fef8561cb9f0c3c84d6164b6b4ea1370456a01127d30a" Namespace="calico-system" Pod="calico-kube-controllers-564d8b8748-ps97n" WorkloadEndpoint="172--233--209--21-k8s-calico--kube--controllers--564d8b8748--ps97n-eth0" Aug 13 01:04:36.118838 containerd[1575]: 2025-08-13 01:04:35.956 [INFO][4893] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f5c8de6adaf6b78a369fef8561cb9f0c3c84d6164b6b4ea1370456a01127d30a" 
HandleID="k8s-pod-network.f5c8de6adaf6b78a369fef8561cb9f0c3c84d6164b6b4ea1370456a01127d30a" Workload="172--233--209--21-k8s-calico--kube--controllers--564d8b8748--ps97n-eth0" Aug 13 01:04:36.119135 containerd[1575]: 2025-08-13 01:04:35.956 [INFO][4893] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="f5c8de6adaf6b78a369fef8561cb9f0c3c84d6164b6b4ea1370456a01127d30a" HandleID="k8s-pod-network.f5c8de6adaf6b78a369fef8561cb9f0c3c84d6164b6b4ea1370456a01127d30a" Workload="172--233--209--21-k8s-calico--kube--controllers--564d8b8748--ps97n-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0003d29a0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-233-209-21", "pod":"calico-kube-controllers-564d8b8748-ps97n", "timestamp":"2025-08-13 01:04:35.95283241 +0000 UTC"}, Hostname:"172-233-209-21", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:04:36.119135 containerd[1575]: 2025-08-13 01:04:35.960 [INFO][4893] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:04:36.119135 containerd[1575]: 2025-08-13 01:04:35.960 [INFO][4893] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 01:04:36.119135 containerd[1575]: 2025-08-13 01:04:35.960 [INFO][4893] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-233-209-21' Aug 13 01:04:36.119135 containerd[1575]: 2025-08-13 01:04:35.970 [INFO][4893] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f5c8de6adaf6b78a369fef8561cb9f0c3c84d6164b6b4ea1370456a01127d30a" host="172-233-209-21" Aug 13 01:04:36.119135 containerd[1575]: 2025-08-13 01:04:35.977 [INFO][4893] ipam/ipam.go 394: Looking up existing affinities for host host="172-233-209-21" Aug 13 01:04:36.119135 containerd[1575]: 2025-08-13 01:04:35.985 [INFO][4893] ipam/ipam.go 511: Trying affinity for 192.168.107.64/26 host="172-233-209-21" Aug 13 01:04:36.119135 containerd[1575]: 2025-08-13 01:04:35.987 [INFO][4893] ipam/ipam.go 158: Attempting to load block cidr=192.168.107.64/26 host="172-233-209-21" Aug 13 01:04:36.119135 containerd[1575]: 2025-08-13 01:04:35.991 [INFO][4893] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.107.64/26 host="172-233-209-21" Aug 13 01:04:36.119436 containerd[1575]: 2025-08-13 01:04:35.992 [INFO][4893] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.107.64/26 handle="k8s-pod-network.f5c8de6adaf6b78a369fef8561cb9f0c3c84d6164b6b4ea1370456a01127d30a" host="172-233-209-21" Aug 13 01:04:36.119436 containerd[1575]: 2025-08-13 01:04:35.994 [INFO][4893] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.f5c8de6adaf6b78a369fef8561cb9f0c3c84d6164b6b4ea1370456a01127d30a Aug 13 01:04:36.119436 containerd[1575]: 2025-08-13 01:04:36.001 [INFO][4893] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.107.64/26 handle="k8s-pod-network.f5c8de6adaf6b78a369fef8561cb9f0c3c84d6164b6b4ea1370456a01127d30a" host="172-233-209-21" Aug 13 01:04:36.119436 containerd[1575]: 2025-08-13 01:04:36.077 [INFO][4893] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.107.66/26] block=192.168.107.64/26 
handle="k8s-pod-network.f5c8de6adaf6b78a369fef8561cb9f0c3c84d6164b6b4ea1370456a01127d30a" host="172-233-209-21" Aug 13 01:04:36.119436 containerd[1575]: 2025-08-13 01:04:36.077 [INFO][4893] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.107.66/26] handle="k8s-pod-network.f5c8de6adaf6b78a369fef8561cb9f0c3c84d6164b6b4ea1370456a01127d30a" host="172-233-209-21" Aug 13 01:04:36.119436 containerd[1575]: 2025-08-13 01:04:36.077 [INFO][4893] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:04:36.119436 containerd[1575]: 2025-08-13 01:04:36.077 [INFO][4893] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.66/26] IPv6=[] ContainerID="f5c8de6adaf6b78a369fef8561cb9f0c3c84d6164b6b4ea1370456a01127d30a" HandleID="k8s-pod-network.f5c8de6adaf6b78a369fef8561cb9f0c3c84d6164b6b4ea1370456a01127d30a" Workload="172--233--209--21-k8s-calico--kube--controllers--564d8b8748--ps97n-eth0" Aug 13 01:04:36.119567 containerd[1575]: 2025-08-13 01:04:36.081 [INFO][4864] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f5c8de6adaf6b78a369fef8561cb9f0c3c84d6164b6b4ea1370456a01127d30a" Namespace="calico-system" Pod="calico-kube-controllers-564d8b8748-ps97n" WorkloadEndpoint="172--233--209--21-k8s-calico--kube--controllers--564d8b8748--ps97n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--209--21-k8s-calico--kube--controllers--564d8b8748--ps97n-eth0", GenerateName:"calico-kube-controllers-564d8b8748-", Namespace:"calico-system", SelfLink:"", UID:"a7e8405c-2c82-420c-bac7-a7277571f968", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 2, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"564d8b8748", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-209-21", ContainerID:"", Pod:"calico-kube-controllers-564d8b8748-ps97n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.107.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali81338d0161c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:04:36.119621 containerd[1575]: 2025-08-13 01:04:36.082 [INFO][4864] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.107.66/32] ContainerID="f5c8de6adaf6b78a369fef8561cb9f0c3c84d6164b6b4ea1370456a01127d30a" Namespace="calico-system" Pod="calico-kube-controllers-564d8b8748-ps97n" WorkloadEndpoint="172--233--209--21-k8s-calico--kube--controllers--564d8b8748--ps97n-eth0" Aug 13 01:04:36.119621 containerd[1575]: 2025-08-13 01:04:36.082 [INFO][4864] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali81338d0161c ContainerID="f5c8de6adaf6b78a369fef8561cb9f0c3c84d6164b6b4ea1370456a01127d30a" Namespace="calico-system" Pod="calico-kube-controllers-564d8b8748-ps97n" WorkloadEndpoint="172--233--209--21-k8s-calico--kube--controllers--564d8b8748--ps97n-eth0" Aug 13 01:04:36.119621 containerd[1575]: 2025-08-13 01:04:36.088 [INFO][4864] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f5c8de6adaf6b78a369fef8561cb9f0c3c84d6164b6b4ea1370456a01127d30a" Namespace="calico-system" Pod="calico-kube-controllers-564d8b8748-ps97n" 
WorkloadEndpoint="172--233--209--21-k8s-calico--kube--controllers--564d8b8748--ps97n-eth0" Aug 13 01:04:36.119683 containerd[1575]: 2025-08-13 01:04:36.088 [INFO][4864] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f5c8de6adaf6b78a369fef8561cb9f0c3c84d6164b6b4ea1370456a01127d30a" Namespace="calico-system" Pod="calico-kube-controllers-564d8b8748-ps97n" WorkloadEndpoint="172--233--209--21-k8s-calico--kube--controllers--564d8b8748--ps97n-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--209--21-k8s-calico--kube--controllers--564d8b8748--ps97n-eth0", GenerateName:"calico-kube-controllers-564d8b8748-", Namespace:"calico-system", SelfLink:"", UID:"a7e8405c-2c82-420c-bac7-a7277571f968", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 2, 43, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"564d8b8748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-209-21", ContainerID:"f5c8de6adaf6b78a369fef8561cb9f0c3c84d6164b6b4ea1370456a01127d30a", Pod:"calico-kube-controllers-564d8b8748-ps97n", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.107.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali81338d0161c", MAC:"c6:64:89:98:85:ec", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:04:36.119731 containerd[1575]: 2025-08-13 01:04:36.109 [INFO][4864] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f5c8de6adaf6b78a369fef8561cb9f0c3c84d6164b6b4ea1370456a01127d30a" Namespace="calico-system" Pod="calico-kube-controllers-564d8b8748-ps97n" WorkloadEndpoint="172--233--209--21-k8s-calico--kube--controllers--564d8b8748--ps97n-eth0" Aug 13 01:04:36.146947 kubelet[2717]: I0813 01:04:36.146492 2717 kubelet.go:2306] "Pod admission denied" podUID="edb7edfa-e307-4076-a8cc-0c16570be2e2" pod="tigera-operator/tigera-operator-5bf8dfcb4-x6lh7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:36.171225 containerd[1575]: time="2025-08-13T01:04:36.170762323Z" level=info msg="connecting to shim f5c8de6adaf6b78a369fef8561cb9f0c3c84d6164b6b4ea1370456a01127d30a" address="unix:///run/containerd/s/52b2b75805b84fd503872cd52598855e7777aec980f2a9cb32953a1503c686e6" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:04:36.175120 kubelet[2717]: I0813 01:04:36.174471 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-x6lh7" podStartSLOduration=0.17445242 podStartE2EDuration="174.45242ms" podCreationTimestamp="2025-08-13 01:04:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:04:36.164160137 +0000 UTC m=+131.488978501" watchObservedRunningTime="2025-08-13 01:04:36.17445242 +0000 UTC m=+131.499270784" Aug 13 01:04:36.262746 systemd[1]: Started cri-containerd-f5c8de6adaf6b78a369fef8561cb9f0c3c84d6164b6b4ea1370456a01127d30a.scope - libcontainer container f5c8de6adaf6b78a369fef8561cb9f0c3c84d6164b6b4ea1370456a01127d30a. 
Aug 13 01:04:36.377387 systemd-networkd[1460]: calif772b7758e2: Gained IPv6LL Aug 13 01:04:36.384818 containerd[1575]: time="2025-08-13T01:04:36.384774093Z" level=info msg="StartContainer for \"37d73f4c36a95159b71863adcc733f515191434c71d57584d4991182c1a2c6f5\" returns successfully" Aug 13 01:04:36.398860 kubelet[2717]: I0813 01:04:36.398763 2717 kubelet.go:2306] "Pod admission denied" podUID="8321d1de-ecbc-47d2-8b73-058d8da6bf51" pod="tigera-operator/tigera-operator-5bf8dfcb4-dvz2f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:36.402133 containerd[1575]: time="2025-08-13T01:04:36.402094945Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 01:04:36.477371 containerd[1575]: time="2025-08-13T01:04:36.477297456Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-564d8b8748-ps97n,Uid:a7e8405c-2c82-420c-bac7-a7277571f968,Namespace:calico-system,Attempt:0,} returns sandbox id \"f5c8de6adaf6b78a369fef8561cb9f0c3c84d6164b6b4ea1370456a01127d30a\"" Aug 13 01:04:36.491824 kubelet[2717]: I0813 01:04:36.491782 2717 kubelet.go:2306] "Pod admission denied" podUID="d11990e7-98ec-4a27-aa57-bec6fce73146" pod="tigera-operator/tigera-operator-5bf8dfcb4-5fwwp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:36.539963 kubelet[2717]: I0813 01:04:36.539722 2717 kubelet.go:2306] "Pod admission denied" podUID="666e5e4a-4537-48c6-92b4-534cd46046d5" pod="tigera-operator/tigera-operator-5bf8dfcb4-mhgql" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:36.696249 kubelet[2717]: I0813 01:04:36.696181 2717 kubelet.go:2306] "Pod admission denied" podUID="b06e3d4d-930a-4151-a6b9-893769bd2a90" pod="tigera-operator/tigera-operator-5bf8dfcb4-922t5" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:36.754397 kubelet[2717]: I0813 01:04:36.753876 2717 kubelet.go:2306] "Pod admission denied" podUID="fe80181a-c8f7-40f3-aa8a-cc91e449187e" pod="tigera-operator/tigera-operator-5bf8dfcb4-tvmd2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:36.767050 kubelet[2717]: E0813 01:04:36.763703 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:04:36.769810 containerd[1575]: time="2025-08-13T01:04:36.769730632Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tp469,Uid:c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af,Namespace:kube-system,Attempt:0,}" Aug 13 01:04:36.858104 kubelet[2717]: I0813 01:04:36.857863 2717 kubelet.go:2306] "Pod admission denied" podUID="22b2f472-ea26-4c02-8438-8ad007e49ec5" pod="tigera-operator/tigera-operator-5bf8dfcb4-9jkmh" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:36.864144 containerd[1575]: time="2025-08-13T01:04:36.864079199Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6353c1e40a0c53412135ac4fbfd177615839c7b54973b0c0cad5c4d48fa83c96\" id:\"d8125e9ea9c2870c38efaa3c50b3793407a35521c222e0cc2d1c4b701dbf7ba8\" pid:5030 exit_status:1 exited_at:{seconds:1755047076 nanos:863826462}" Aug 13 01:04:36.980809 systemd-networkd[1460]: cali21a5fdffdf1: Link UP Aug 13 01:04:36.982377 systemd-networkd[1460]: cali21a5fdffdf1: Gained carrier Aug 13 01:04:36.999292 containerd[1575]: 2025-08-13 01:04:36.868 [INFO][5075] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--233--209--21-k8s-coredns--7c65d6cfc9--tp469-eth0 coredns-7c65d6cfc9- kube-system c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af 847 0 2025-08-13 01:02:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-233-209-21 coredns-7c65d6cfc9-tp469 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali21a5fdffdf1 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="0e3b0f12834db56e2801274ed047feeaa7c6cdec282b440690421382e33aa6b2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tp469" WorkloadEndpoint="172--233--209--21-k8s-coredns--7c65d6cfc9--tp469-" Aug 13 01:04:36.999292 containerd[1575]: 2025-08-13 01:04:36.869 [INFO][5075] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0e3b0f12834db56e2801274ed047feeaa7c6cdec282b440690421382e33aa6b2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tp469" WorkloadEndpoint="172--233--209--21-k8s-coredns--7c65d6cfc9--tp469-eth0" Aug 13 01:04:36.999292 containerd[1575]: 2025-08-13 01:04:36.935 [INFO][5108] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0e3b0f12834db56e2801274ed047feeaa7c6cdec282b440690421382e33aa6b2" 
HandleID="k8s-pod-network.0e3b0f12834db56e2801274ed047feeaa7c6cdec282b440690421382e33aa6b2" Workload="172--233--209--21-k8s-coredns--7c65d6cfc9--tp469-eth0" Aug 13 01:04:36.999476 containerd[1575]: 2025-08-13 01:04:36.936 [INFO][5108] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="0e3b0f12834db56e2801274ed047feeaa7c6cdec282b440690421382e33aa6b2" HandleID="k8s-pod-network.0e3b0f12834db56e2801274ed047feeaa7c6cdec282b440690421382e33aa6b2" Workload="172--233--209--21-k8s-coredns--7c65d6cfc9--tp469-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5020), Attrs:map[string]string{"namespace":"kube-system", "node":"172-233-209-21", "pod":"coredns-7c65d6cfc9-tp469", "timestamp":"2025-08-13 01:04:36.935045891 +0000 UTC"}, Hostname:"172-233-209-21", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:04:36.999476 containerd[1575]: 2025-08-13 01:04:36.936 [INFO][5108] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:04:36.999476 containerd[1575]: 2025-08-13 01:04:36.936 [INFO][5108] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 01:04:36.999476 containerd[1575]: 2025-08-13 01:04:36.936 [INFO][5108] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-233-209-21' Aug 13 01:04:36.999476 containerd[1575]: 2025-08-13 01:04:36.942 [INFO][5108] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0e3b0f12834db56e2801274ed047feeaa7c6cdec282b440690421382e33aa6b2" host="172-233-209-21" Aug 13 01:04:36.999476 containerd[1575]: 2025-08-13 01:04:36.949 [INFO][5108] ipam/ipam.go 394: Looking up existing affinities for host host="172-233-209-21" Aug 13 01:04:36.999476 containerd[1575]: 2025-08-13 01:04:36.954 [INFO][5108] ipam/ipam.go 511: Trying affinity for 192.168.107.64/26 host="172-233-209-21" Aug 13 01:04:36.999476 containerd[1575]: 2025-08-13 01:04:36.956 [INFO][5108] ipam/ipam.go 158: Attempting to load block cidr=192.168.107.64/26 host="172-233-209-21" Aug 13 01:04:36.999476 containerd[1575]: 2025-08-13 01:04:36.958 [INFO][5108] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.107.64/26 host="172-233-209-21" Aug 13 01:04:36.999476 containerd[1575]: 2025-08-13 01:04:36.958 [INFO][5108] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.107.64/26 handle="k8s-pod-network.0e3b0f12834db56e2801274ed047feeaa7c6cdec282b440690421382e33aa6b2" host="172-233-209-21" Aug 13 01:04:36.999684 containerd[1575]: 2025-08-13 01:04:36.960 [INFO][5108] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.0e3b0f12834db56e2801274ed047feeaa7c6cdec282b440690421382e33aa6b2 Aug 13 01:04:36.999684 containerd[1575]: 2025-08-13 01:04:36.965 [INFO][5108] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.107.64/26 handle="k8s-pod-network.0e3b0f12834db56e2801274ed047feeaa7c6cdec282b440690421382e33aa6b2" host="172-233-209-21" Aug 13 01:04:36.999684 containerd[1575]: 2025-08-13 01:04:36.971 [INFO][5108] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.107.67/26] block=192.168.107.64/26 
handle="k8s-pod-network.0e3b0f12834db56e2801274ed047feeaa7c6cdec282b440690421382e33aa6b2" host="172-233-209-21" Aug 13 01:04:36.999684 containerd[1575]: 2025-08-13 01:04:36.971 [INFO][5108] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.107.67/26] handle="k8s-pod-network.0e3b0f12834db56e2801274ed047feeaa7c6cdec282b440690421382e33aa6b2" host="172-233-209-21" Aug 13 01:04:36.999684 containerd[1575]: 2025-08-13 01:04:36.971 [INFO][5108] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:04:36.999684 containerd[1575]: 2025-08-13 01:04:36.971 [INFO][5108] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.67/26] IPv6=[] ContainerID="0e3b0f12834db56e2801274ed047feeaa7c6cdec282b440690421382e33aa6b2" HandleID="k8s-pod-network.0e3b0f12834db56e2801274ed047feeaa7c6cdec282b440690421382e33aa6b2" Workload="172--233--209--21-k8s-coredns--7c65d6cfc9--tp469-eth0" Aug 13 01:04:36.999800 containerd[1575]: 2025-08-13 01:04:36.974 [INFO][5075] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0e3b0f12834db56e2801274ed047feeaa7c6cdec282b440690421382e33aa6b2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tp469" WorkloadEndpoint="172--233--209--21-k8s-coredns--7c65d6cfc9--tp469-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--209--21-k8s-coredns--7c65d6cfc9--tp469-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 2, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-209-21", ContainerID:"", Pod:"coredns-7c65d6cfc9-tp469", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali21a5fdffdf1", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:04:36.999800 containerd[1575]: 2025-08-13 01:04:36.974 [INFO][5075] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.107.67/32] ContainerID="0e3b0f12834db56e2801274ed047feeaa7c6cdec282b440690421382e33aa6b2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tp469" WorkloadEndpoint="172--233--209--21-k8s-coredns--7c65d6cfc9--tp469-eth0" Aug 13 01:04:36.999800 containerd[1575]: 2025-08-13 01:04:36.974 [INFO][5075] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali21a5fdffdf1 ContainerID="0e3b0f12834db56e2801274ed047feeaa7c6cdec282b440690421382e33aa6b2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tp469" WorkloadEndpoint="172--233--209--21-k8s-coredns--7c65d6cfc9--tp469-eth0" Aug 13 01:04:36.999800 containerd[1575]: 2025-08-13 01:04:36.977 [INFO][5075] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0e3b0f12834db56e2801274ed047feeaa7c6cdec282b440690421382e33aa6b2" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-tp469" WorkloadEndpoint="172--233--209--21-k8s-coredns--7c65d6cfc9--tp469-eth0" Aug 13 01:04:36.999800 containerd[1575]: 2025-08-13 01:04:36.978 [INFO][5075] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0e3b0f12834db56e2801274ed047feeaa7c6cdec282b440690421382e33aa6b2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tp469" WorkloadEndpoint="172--233--209--21-k8s-coredns--7c65d6cfc9--tp469-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--209--21-k8s-coredns--7c65d6cfc9--tp469-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 2, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-209-21", ContainerID:"0e3b0f12834db56e2801274ed047feeaa7c6cdec282b440690421382e33aa6b2", Pod:"coredns-7c65d6cfc9-tp469", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali21a5fdffdf1", MAC:"82:1c:1d:9c:1c:70", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:04:36.999800 containerd[1575]: 2025-08-13 01:04:36.990 [INFO][5075] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0e3b0f12834db56e2801274ed047feeaa7c6cdec282b440690421382e33aa6b2" Namespace="kube-system" Pod="coredns-7c65d6cfc9-tp469" WorkloadEndpoint="172--233--209--21-k8s-coredns--7c65d6cfc9--tp469-eth0" Aug 13 01:04:37.075280 containerd[1575]: time="2025-08-13T01:04:37.073468882Z" level=info msg="connecting to shim 0e3b0f12834db56e2801274ed047feeaa7c6cdec282b440690421382e33aa6b2" address="unix:///run/containerd/s/9fb9b048aae8053f046aa6dc8253b8816ea1e9ddf618b49a59bf71799f11fb76" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:04:37.086253 kubelet[2717]: I0813 01:04:37.083020 2717 kubelet.go:2306] "Pod admission denied" podUID="6ad7a8cf-1c35-4d15-8fa8-c8c91cdca463" pod="tigera-operator/tigera-operator-5bf8dfcb4-t6h4r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:37.116318 systemd[1]: Started cri-containerd-0e3b0f12834db56e2801274ed047feeaa7c6cdec282b440690421382e33aa6b2.scope - libcontainer container 0e3b0f12834db56e2801274ed047feeaa7c6cdec282b440690421382e33aa6b2. Aug 13 01:04:37.201864 kubelet[2717]: I0813 01:04:37.201805 2717 kubelet.go:2306] "Pod admission denied" podUID="618d7b59-ddb4-4c5a-ac80-dacd6c87b6f5" pod="tigera-operator/tigera-operator-5bf8dfcb4-bnztq" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:37.231312 containerd[1575]: time="2025-08-13T01:04:37.231274506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-tp469,Uid:c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e3b0f12834db56e2801274ed047feeaa7c6cdec282b440690421382e33aa6b2\"" Aug 13 01:04:37.233458 kubelet[2717]: E0813 01:04:37.233425 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:04:37.333583 systemd-networkd[1460]: vxlan.calico: Link UP Aug 13 01:04:37.333591 systemd-networkd[1460]: vxlan.calico: Gained carrier Aug 13 01:04:37.402310 kubelet[2717]: I0813 01:04:37.401792 2717 kubelet.go:2306] "Pod admission denied" podUID="dfa05d58-d6d0-438c-9222-5882924c7667" pod="tigera-operator/tigera-operator-5bf8dfcb4-zqmtw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:37.482421 kubelet[2717]: I0813 01:04:37.482392 2717 kubelet.go:2306] "Pod admission denied" podUID="4fe281ca-d213-4107-869a-e2fb93f5d56c" pod="tigera-operator/tigera-operator-5bf8dfcb4-9cmhd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:37.594956 kubelet[2717]: I0813 01:04:37.594658 2717 kubelet.go:2306] "Pod admission denied" podUID="38f7700e-2818-497b-afb9-31c545a545db" pod="tigera-operator/tigera-operator-5bf8dfcb4-q7dqx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:37.695347 kubelet[2717]: I0813 01:04:37.695313 2717 kubelet.go:2306] "Pod admission denied" podUID="2e62f011-56fc-4cc3-b78b-2f56b6a7eb50" pod="tigera-operator/tigera-operator-5bf8dfcb4-xwwhm" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:37.794961 kubelet[2717]: I0813 01:04:37.794927 2717 kubelet.go:2306] "Pod admission denied" podUID="e2e11930-c6f9-437e-8789-a140990a52ea" pod="tigera-operator/tigera-operator-5bf8dfcb4-f86p2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:37.942981 kubelet[2717]: I0813 01:04:37.941444 2717 kubelet.go:2306] "Pod admission denied" podUID="0644ce0e-9738-4171-9c1a-bf4d76b69e61" pod="tigera-operator/tigera-operator-5bf8dfcb4-n9j92" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:37.977728 systemd-networkd[1460]: cali81338d0161c: Gained IPv6LL Aug 13 01:04:38.005478 kubelet[2717]: I0813 01:04:38.005421 2717 kubelet.go:2306] "Pod admission denied" podUID="8e365bf7-e2d6-45af-a299-41160c5d7af9" pod="tigera-operator/tigera-operator-5bf8dfcb4-fttdw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:38.150128 kubelet[2717]: I0813 01:04:38.150060 2717 kubelet.go:2306] "Pod admission denied" podUID="69f3bd4c-8cd6-4dd8-beee-fc7d505fa3d4" pod="tigera-operator/tigera-operator-5bf8dfcb4-tqmjv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:38.257764 kubelet[2717]: I0813 01:04:38.257420 2717 kubelet.go:2306] "Pod admission denied" podUID="055409a8-1b9d-47d2-b078-654461f08350" pod="tigera-operator/tigera-operator-5bf8dfcb4-mbtkn" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:38.303643 containerd[1575]: time="2025-08-13T01:04:38.303039722Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:04:38.303643 containerd[1575]: time="2025-08-13T01:04:38.303616304Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Aug 13 01:04:38.304251 containerd[1575]: time="2025-08-13T01:04:38.304232385Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:04:38.305556 containerd[1575]: time="2025-08-13T01:04:38.305513547Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:04:38.306321 containerd[1575]: time="2025-08-13T01:04:38.306295277Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 1.904163583s" Aug 13 01:04:38.306392 containerd[1575]: time="2025-08-13T01:04:38.306378555Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Aug 13 01:04:38.308219 containerd[1575]: time="2025-08-13T01:04:38.308173710Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 01:04:38.310640 containerd[1575]: time="2025-08-13T01:04:38.310591327Z" 
level=info msg="CreateContainer within sandbox \"058dd54c20b62da2d5f6420f43f5baf16f3bdef618a5b31abe2f72a3b493653c\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 13 01:04:38.319617 containerd[1575]: time="2025-08-13T01:04:38.319597372Z" level=info msg="Container 06982775cfd103af600d9ea364770265436d909558238ea8828b7a5a540c645d: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:04:38.323560 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount829933619.mount: Deactivated successfully. Aug 13 01:04:38.334556 containerd[1575]: time="2025-08-13T01:04:38.334520265Z" level=info msg="CreateContainer within sandbox \"058dd54c20b62da2d5f6420f43f5baf16f3bdef618a5b31abe2f72a3b493653c\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"06982775cfd103af600d9ea364770265436d909558238ea8828b7a5a540c645d\"" Aug 13 01:04:38.335078 containerd[1575]: time="2025-08-13T01:04:38.335009188Z" level=info msg="StartContainer for \"06982775cfd103af600d9ea364770265436d909558238ea8828b7a5a540c645d\"" Aug 13 01:04:38.336372 containerd[1575]: time="2025-08-13T01:04:38.336325970Z" level=info msg="connecting to shim 06982775cfd103af600d9ea364770265436d909558238ea8828b7a5a540c645d" address="unix:///run/containerd/s/bceed9ed9bbc51c805803b267507ad0b3b25796d10a6ddf285d784eb7529b2aa" protocol=ttrpc version=3 Aug 13 01:04:38.365328 systemd[1]: Started cri-containerd-06982775cfd103af600d9ea364770265436d909558238ea8828b7a5a540c645d.scope - libcontainer container 06982775cfd103af600d9ea364770265436d909558238ea8828b7a5a540c645d. 
Aug 13 01:04:38.423124 containerd[1575]: time="2025-08-13T01:04:38.423095135Z" level=info msg="StartContainer for \"06982775cfd103af600d9ea364770265436d909558238ea8828b7a5a540c645d\" returns successfully" Aug 13 01:04:38.489544 kubelet[2717]: I0813 01:04:38.489490 2717 kubelet.go:2306] "Pod admission denied" podUID="fa0edbb6-79a7-4919-bdf1-6fa510073f40" pod="tigera-operator/tigera-operator-5bf8dfcb4-47d4v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:38.606076 kubelet[2717]: I0813 01:04:38.604534 2717 kubelet.go:2306] "Pod admission denied" podUID="85a35e7d-0068-43a6-b28c-6f44351c1244" pod="tigera-operator/tigera-operator-5bf8dfcb4-dwqcp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:38.617391 systemd-networkd[1460]: cali21a5fdffdf1: Gained IPv6LL Aug 13 01:04:38.690213 kubelet[2717]: I0813 01:04:38.690118 2717 kubelet.go:2306] "Pod admission denied" podUID="9ced70cc-1e2d-4b43-9029-0c319cf03f89" pod="tigera-operator/tigera-operator-5bf8dfcb4-fnn9b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:38.765695 kubelet[2717]: E0813 01:04:38.765637 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:04:38.795627 kubelet[2717]: I0813 01:04:38.795582 2717 kubelet.go:2306] "Pod admission denied" podUID="b4f51ec2-7f5a-48b5-a70f-3a7ad248461b" pod="tigera-operator/tigera-operator-5bf8dfcb4-4mnzz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:38.890029 kubelet[2717]: I0813 01:04:38.888809 2717 kubelet.go:2306] "Pod admission denied" podUID="881ddd97-0118-4960-aa65-092972cafc60" pod="tigera-operator/tigera-operator-5bf8dfcb4-cltxw" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:38.942740 kubelet[2717]: I0813 01:04:38.942720 2717 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 13 01:04:38.943068 kubelet[2717]: I0813 01:04:38.943051 2717 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 13 01:04:38.965412 containerd[1575]: time="2025-08-13T01:04:38.965002941Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" Aug 13 01:04:38.965623 containerd[1575]: time="2025-08-13T01:04:38.965528594Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=16781546" Aug 13 01:04:38.965859 kubelet[2717]: E0813 01:04:38.965836 2717 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 01:04:38.966004 kubelet[2717]: E0813 01:04:38.965919 2717 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 01:04:38.967167 kubelet[2717]: E0813 01:04:38.966674 2717 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vqfjp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-564d8b8748-ps97n_calico-system(a7e8405c-2c82-420c-bac7-a7277571f968): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" logger="UnhandledError" Aug 13 01:04:38.967717 containerd[1575]: time="2025-08-13T01:04:38.966857635Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 01:04:38.967880 kubelet[2717]: E0813 01:04:38.967841 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device\"" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" podUID="a7e8405c-2c82-420c-bac7-a7277571f968" Aug 13 01:04:39.003865 kubelet[2717]: 
I0813 01:04:39.003804 2717 kubelet.go:2306] "Pod admission denied" podUID="c11e9b53-ccf5-4446-84cd-5c0471ffaee8" pod="tigera-operator/tigera-operator-5bf8dfcb4-zhgzm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:39.094675 kubelet[2717]: I0813 01:04:39.094626 2717 kubelet.go:2306] "Pod admission denied" podUID="1d94845b-6f6c-49c6-a6ab-8cef8371dd21" pod="tigera-operator/tigera-operator-5bf8dfcb4-b2vrg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:39.192493 kubelet[2717]: I0813 01:04:39.192430 2717 kubelet.go:2306] "Pod admission denied" podUID="cefac206-c0a7-4a5a-af6a-537fcc789c20" pod="tigera-operator/tigera-operator-5bf8dfcb4-k6gwc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:39.293449 kubelet[2717]: I0813 01:04:39.293390 2717 kubelet.go:2306] "Pod admission denied" podUID="97d5b4a6-52f1-4284-8ab7-02f5787dff7d" pod="tigera-operator/tigera-operator-5bf8dfcb4-llcwd" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:39.307943 kubelet[2717]: E0813 01:04:39.306715 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\"\"" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" podUID="a7e8405c-2c82-420c-bac7-a7277571f968" Aug 13 01:04:39.389308 systemd-networkd[1460]: vxlan.calico: Gained IPv6LL Aug 13 01:04:39.415335 kubelet[2717]: I0813 01:04:39.415284 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/csi-node-driver-84hvc" podStartSLOduration=113.138026084 podStartE2EDuration="1m56.41526694s" podCreationTimestamp="2025-08-13 01:02:43 +0000 UTC" firstStartedPulling="2025-08-13 01:04:35.030095996 +0000 UTC m=+130.354914360" lastFinishedPulling="2025-08-13 01:04:38.307336852 +0000 UTC m=+133.632155216" observedRunningTime="2025-08-13 01:04:39.362649789 +0000 UTC m=+134.687468153" watchObservedRunningTime="2025-08-13 01:04:39.41526694 +0000 UTC m=+134.740085314" Aug 13 01:04:39.451471 kubelet[2717]: I0813 01:04:39.451256 2717 kubelet.go:2306] "Pod admission denied" podUID="829c05b0-a066-40d1-a2db-77a1006ad9f3" pod="tigera-operator/tigera-operator-5bf8dfcb4-569zp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:39.502567 kubelet[2717]: I0813 01:04:39.502533 2717 kubelet.go:2306] "Pod admission denied" podUID="b185b37d-dc25-448b-9577-4e6363c02027" pod="tigera-operator/tigera-operator-5bf8dfcb4-w242f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:39.594322 kubelet[2717]: I0813 01:04:39.594265 2717 kubelet.go:2306] "Pod admission denied" podUID="26408872-397b-47ad-a11c-07049d75d4ce" pod="tigera-operator/tigera-operator-5bf8dfcb4-57lb7" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:39.687545 kubelet[2717]: I0813 01:04:39.687482 2717 kubelet.go:2306] "Pod admission denied" podUID="6832b654-d145-43e6-b231-1b6ea7a74930" pod="tigera-operator/tigera-operator-5bf8dfcb4-bg5ld" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:39.758423 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1891613308.mount: Deactivated successfully. Aug 13 01:04:39.820311 kubelet[2717]: I0813 01:04:39.820105 2717 kubelet.go:2306] "Pod admission denied" podUID="e5840a73-2e50-449d-a13e-43818e2610b8" pod="tigera-operator/tigera-operator-5bf8dfcb4-cbftq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:39.895557 kubelet[2717]: I0813 01:04:39.894655 2717 kubelet.go:2306] "Pod admission denied" podUID="d2aa0439-bc30-4766-baa6-fb65ee5318f6" pod="tigera-operator/tigera-operator-5bf8dfcb4-9lqc5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:39.990099 kubelet[2717]: I0813 01:04:39.990058 2717 kubelet.go:2306] "Pod admission denied" podUID="050eba7f-97ca-4e1e-bfe3-3855c4d78266" pod="tigera-operator/tigera-operator-5bf8dfcb4-qcz89" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:40.146281 kubelet[2717]: I0813 01:04:40.145712 2717 kubelet.go:2306] "Pod admission denied" podUID="2a0a83aa-ddd1-41ca-9f64-846407158832" pod="tigera-operator/tigera-operator-5bf8dfcb4-zjx2q" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:40.164222 containerd[1575]: time="2025-08-13T01:04:40.163523939Z" level=error msg="failed to cleanup \"extract-781238139-paT_ sha256:49b6bf99f305d7e81f47f1eb8e0a263326d8b3e0485e2b74bade257d300f6a00\"" error="write /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db: no space left on device" Aug 13 01:04:40.166354 containerd[1575]: time="2025-08-13T01:04:40.166311162Z" level=error msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.11.3\": failed to extract layer sha256:9ed498e122b248a801130d052c25418381ee7bf215cdf7990965bae0dc37dcc2: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/83/fs/usr/share/zoneinfo/Pacific/Gambier: no space left on device" Aug 13 01:04:40.166404 containerd[1575]: time="2025-08-13T01:04:40.166366961Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=14626829" Aug 13 01:04:40.166713 kubelet[2717]: E0813 01:04:40.166662 2717 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.11.3\": failed to extract layer sha256:9ed498e122b248a801130d052c25418381ee7bf215cdf7990965bae0dc37dcc2: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/83/fs/usr/share/zoneinfo/Pacific/Gambier: no space left on device" image="registry.k8s.io/coredns/coredns:v1.11.3" Aug 13 01:04:40.166758 kubelet[2717]: E0813 01:04:40.166741 2717 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.11.3\": failed to extract layer sha256:9ed498e122b248a801130d052c25418381ee7bf215cdf7990965bae0dc37dcc2: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/83/fs/usr/share/zoneinfo/Pacific/Gambier: no space left on device" 
image="registry.k8s.io/coredns/coredns:v1.11.3" Aug 13 01:04:40.168247 kubelet[2717]: E0813 01:04:40.167080 2717 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:coredns,Image:registry.k8s.io/coredns/coredns:v1.11.3,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{73400320 0} {} 70Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gth48,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod coredns-7c65d6cfc9-tp469_kube-system(c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af): ErrImagePull: failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.11.3\": failed to extract layer sha256:9ed498e122b248a801130d052c25418381ee7bf215cdf7990965bae0dc37dcc2: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/83/fs/usr/share/zoneinfo/Pacific/Gambier: no space left on device" logger="UnhandledError" Aug 13 01:04:40.168892 kubelet[2717]: E0813 01:04:40.168501 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with ErrImagePull: \"failed to pull and unpack image \\\"registry.k8s.io/coredns/coredns:v1.11.3\\\": failed to extract layer sha256:9ed498e122b248a801130d052c25418381ee7bf215cdf7990965bae0dc37dcc2: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/83/fs/usr/share/zoneinfo/Pacific/Gambier: no space left on device\"" pod="kube-system/coredns-7c65d6cfc9-tp469" podUID="c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af" Aug 13 01:04:40.230477 kubelet[2717]: I0813 01:04:40.230430 2717 kubelet.go:2306] "Pod admission denied" 
podUID="74e50a68-320c-478f-9682-be5b67a4efc9" pod="tigera-operator/tigera-operator-5bf8dfcb4-gh6vx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:40.309735 kubelet[2717]: E0813 01:04:40.309709 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:04:40.312711 kubelet[2717]: E0813 01:04:40.311846 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/coredns/coredns:v1.11.3\\\"\"" pod="kube-system/coredns-7c65d6cfc9-tp469" podUID="c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af" Aug 13 01:04:40.328603 kubelet[2717]: I0813 01:04:40.328569 2717 kubelet.go:2306] "Pod admission denied" podUID="10869e72-00f5-432b-acb7-b4a51b0d20ca" pod="tigera-operator/tigera-operator-5bf8dfcb4-rqkc9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:40.436131 kubelet[2717]: I0813 01:04:40.436090 2717 kubelet.go:2306] "Pod admission denied" podUID="d8befd5b-d745-453e-9219-900046cdd31f" pod="tigera-operator/tigera-operator-5bf8dfcb4-vdqdk" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:40.539222 kubelet[2717]: I0813 01:04:40.538986 2717 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:04:40.539222 kubelet[2717]: I0813 01:04:40.539016 2717 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:04:40.543683 kubelet[2717]: I0813 01:04:40.543663 2717 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:04:40.552708 kubelet[2717]: I0813 01:04:40.552680 2717 kubelet.go:2306] "Pod admission denied" podUID="6f05396b-68ed-48a2-9eaf-709fb9b905a3" pod="tigera-operator/tigera-operator-5bf8dfcb4-ccwv2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:40.576957 kubelet[2717]: I0813 01:04:40.576927 2717 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:04:40.577075 kubelet[2717]: I0813 01:04:40.577024 2717 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7c65d6cfc9-dk5p7","kube-system/coredns-7c65d6cfc9-tp469","calico-system/calico-kube-controllers-564d8b8748-ps97n","calico-system/calico-typha-5fdd567c68-zgxjx","calico-system/calico-node-7pdcs","kube-system/kube-controller-manager-172-233-209-21","kube-system/kube-proxy-ff6qp","kube-system/kube-apiserver-172-233-209-21","kube-system/kube-scheduler-172-233-209-21","calico-system/csi-node-driver-84hvc"] Aug 13 01:04:40.577075 kubelet[2717]: E0813 01:04:40.577048 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dk5p7" Aug 13 01:04:40.577075 kubelet[2717]: E0813 01:04:40.577057 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-tp469" Aug 13 01:04:40.577075 kubelet[2717]: E0813 01:04:40.577064 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" Aug 13 
01:04:40.577075 kubelet[2717]: E0813 01:04:40.577076 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-5fdd567c68-zgxjx" Aug 13 01:04:40.578343 kubelet[2717]: E0813 01:04:40.577085 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-7pdcs" Aug 13 01:04:40.578343 kubelet[2717]: E0813 01:04:40.577093 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-209-21" Aug 13 01:04:40.578343 kubelet[2717]: E0813 01:04:40.577101 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-ff6qp" Aug 13 01:04:40.578343 kubelet[2717]: E0813 01:04:40.577108 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-209-21" Aug 13 01:04:40.578343 kubelet[2717]: E0813 01:04:40.577116 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-209-21" Aug 13 01:04:40.578343 kubelet[2717]: E0813 01:04:40.577125 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-84hvc" Aug 13 01:04:40.578343 kubelet[2717]: I0813 01:04:40.577133 2717 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:04:40.736633 kubelet[2717]: I0813 01:04:40.736508 2717 kubelet.go:2306] "Pod admission denied" podUID="63aff005-bf19-4c3d-bdb4-a8e09a806e7f" pod="tigera-operator/tigera-operator-5bf8dfcb4-24ps8" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:40.766107 kubelet[2717]: E0813 01:04:40.764643 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:04:40.766314 containerd[1575]: time="2025-08-13T01:04:40.766280810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dk5p7,Uid:6787cc4f-56e4-4094-978c-958d0d7a35ba,Namespace:kube-system,Attempt:0,}" Aug 13 01:04:40.840631 kubelet[2717]: I0813 01:04:40.840580 2717 kubelet.go:2306] "Pod admission denied" podUID="a32d0207-c572-45f5-93e2-66b964a6e851" pod="tigera-operator/tigera-operator-5bf8dfcb4-q9jhd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:40.904861 systemd-networkd[1460]: califaa9c02237c: Link UP Aug 13 01:04:40.905848 systemd-networkd[1460]: califaa9c02237c: Gained carrier Aug 13 01:04:40.948903 containerd[1575]: 2025-08-13 01:04:40.808 [INFO][5293] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--233--209--21-k8s-coredns--7c65d6cfc9--dk5p7-eth0 coredns-7c65d6cfc9- kube-system 6787cc4f-56e4-4094-978c-958d0d7a35ba 841 0 2025-08-13 01:02:31 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-233-209-21 coredns-7c65d6cfc9-dk5p7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] califaa9c02237c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c5060359f7a17ff5ea4654bf82868d418a492074e5300fc6c9800aef477572ac" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dk5p7" WorkloadEndpoint="172--233--209--21-k8s-coredns--7c65d6cfc9--dk5p7-" Aug 13 01:04:40.948903 containerd[1575]: 2025-08-13 01:04:40.808 [INFO][5293] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="c5060359f7a17ff5ea4654bf82868d418a492074e5300fc6c9800aef477572ac" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dk5p7" WorkloadEndpoint="172--233--209--21-k8s-coredns--7c65d6cfc9--dk5p7-eth0" Aug 13 01:04:40.948903 containerd[1575]: 2025-08-13 01:04:40.841 [INFO][5305] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c5060359f7a17ff5ea4654bf82868d418a492074e5300fc6c9800aef477572ac" HandleID="k8s-pod-network.c5060359f7a17ff5ea4654bf82868d418a492074e5300fc6c9800aef477572ac" Workload="172--233--209--21-k8s-coredns--7c65d6cfc9--dk5p7-eth0" Aug 13 01:04:40.948903 containerd[1575]: 2025-08-13 01:04:40.841 [INFO][5305] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="c5060359f7a17ff5ea4654bf82868d418a492074e5300fc6c9800aef477572ac" HandleID="k8s-pod-network.c5060359f7a17ff5ea4654bf82868d418a492074e5300fc6c9800aef477572ac" Workload="172--233--209--21-k8s-coredns--7c65d6cfc9--dk5p7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000333050), Attrs:map[string]string{"namespace":"kube-system", "node":"172-233-209-21", "pod":"coredns-7c65d6cfc9-dk5p7", "timestamp":"2025-08-13 01:04:40.841580697 +0000 UTC"}, Hostname:"172-233-209-21", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:04:40.948903 containerd[1575]: 2025-08-13 01:04:40.842 [INFO][5305] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:04:40.948903 containerd[1575]: 2025-08-13 01:04:40.842 [INFO][5305] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 01:04:40.948903 containerd[1575]: 2025-08-13 01:04:40.842 [INFO][5305] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-233-209-21' Aug 13 01:04:40.948903 containerd[1575]: 2025-08-13 01:04:40.847 [INFO][5305] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c5060359f7a17ff5ea4654bf82868d418a492074e5300fc6c9800aef477572ac" host="172-233-209-21" Aug 13 01:04:40.948903 containerd[1575]: 2025-08-13 01:04:40.855 [INFO][5305] ipam/ipam.go 394: Looking up existing affinities for host host="172-233-209-21" Aug 13 01:04:40.948903 containerd[1575]: 2025-08-13 01:04:40.861 [INFO][5305] ipam/ipam.go 511: Trying affinity for 192.168.107.64/26 host="172-233-209-21" Aug 13 01:04:40.948903 containerd[1575]: 2025-08-13 01:04:40.864 [INFO][5305] ipam/ipam.go 158: Attempting to load block cidr=192.168.107.64/26 host="172-233-209-21" Aug 13 01:04:40.948903 containerd[1575]: 2025-08-13 01:04:40.867 [INFO][5305] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.107.64/26 host="172-233-209-21" Aug 13 01:04:40.948903 containerd[1575]: 2025-08-13 01:04:40.868 [INFO][5305] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.107.64/26 handle="k8s-pod-network.c5060359f7a17ff5ea4654bf82868d418a492074e5300fc6c9800aef477572ac" host="172-233-209-21" Aug 13 01:04:40.948903 containerd[1575]: 2025-08-13 01:04:40.876 [INFO][5305] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.c5060359f7a17ff5ea4654bf82868d418a492074e5300fc6c9800aef477572ac Aug 13 01:04:40.948903 containerd[1575]: 2025-08-13 01:04:40.880 [INFO][5305] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.107.64/26 handle="k8s-pod-network.c5060359f7a17ff5ea4654bf82868d418a492074e5300fc6c9800aef477572ac" host="172-233-209-21" Aug 13 01:04:40.948903 containerd[1575]: 2025-08-13 01:04:40.886 [INFO][5305] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.107.68/26] block=192.168.107.64/26 
handle="k8s-pod-network.c5060359f7a17ff5ea4654bf82868d418a492074e5300fc6c9800aef477572ac" host="172-233-209-21" Aug 13 01:04:40.948903 containerd[1575]: 2025-08-13 01:04:40.886 [INFO][5305] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.107.68/26] handle="k8s-pod-network.c5060359f7a17ff5ea4654bf82868d418a492074e5300fc6c9800aef477572ac" host="172-233-209-21" Aug 13 01:04:40.948903 containerd[1575]: 2025-08-13 01:04:40.886 [INFO][5305] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:04:40.948903 containerd[1575]: 2025-08-13 01:04:40.887 [INFO][5305] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.107.68/26] IPv6=[] ContainerID="c5060359f7a17ff5ea4654bf82868d418a492074e5300fc6c9800aef477572ac" HandleID="k8s-pod-network.c5060359f7a17ff5ea4654bf82868d418a492074e5300fc6c9800aef477572ac" Workload="172--233--209--21-k8s-coredns--7c65d6cfc9--dk5p7-eth0" Aug 13 01:04:40.949462 containerd[1575]: 2025-08-13 01:04:40.894 [INFO][5293] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c5060359f7a17ff5ea4654bf82868d418a492074e5300fc6c9800aef477572ac" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dk5p7" WorkloadEndpoint="172--233--209--21-k8s-coredns--7c65d6cfc9--dk5p7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--209--21-k8s-coredns--7c65d6cfc9--dk5p7-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"6787cc4f-56e4-4094-978c-958d0d7a35ba", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 2, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-209-21", ContainerID:"", Pod:"coredns-7c65d6cfc9-dk5p7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califaa9c02237c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:04:40.949462 containerd[1575]: 2025-08-13 01:04:40.894 [INFO][5293] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.107.68/32] ContainerID="c5060359f7a17ff5ea4654bf82868d418a492074e5300fc6c9800aef477572ac" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dk5p7" WorkloadEndpoint="172--233--209--21-k8s-coredns--7c65d6cfc9--dk5p7-eth0" Aug 13 01:04:40.949462 containerd[1575]: 2025-08-13 01:04:40.894 [INFO][5293] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califaa9c02237c ContainerID="c5060359f7a17ff5ea4654bf82868d418a492074e5300fc6c9800aef477572ac" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dk5p7" WorkloadEndpoint="172--233--209--21-k8s-coredns--7c65d6cfc9--dk5p7-eth0" Aug 13 01:04:40.949462 containerd[1575]: 2025-08-13 01:04:40.907 [INFO][5293] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c5060359f7a17ff5ea4654bf82868d418a492074e5300fc6c9800aef477572ac" Namespace="kube-system" 
Pod="coredns-7c65d6cfc9-dk5p7" WorkloadEndpoint="172--233--209--21-k8s-coredns--7c65d6cfc9--dk5p7-eth0" Aug 13 01:04:40.949462 containerd[1575]: 2025-08-13 01:04:40.907 [INFO][5293] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c5060359f7a17ff5ea4654bf82868d418a492074e5300fc6c9800aef477572ac" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dk5p7" WorkloadEndpoint="172--233--209--21-k8s-coredns--7c65d6cfc9--dk5p7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--233--209--21-k8s-coredns--7c65d6cfc9--dk5p7-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"6787cc4f-56e4-4094-978c-958d0d7a35ba", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 2, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-233-209-21", ContainerID:"c5060359f7a17ff5ea4654bf82868d418a492074e5300fc6c9800aef477572ac", Pod:"coredns-7c65d6cfc9-dk5p7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.107.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"califaa9c02237c", MAC:"9e:58:e2:d6:c6:60", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, 
v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:04:40.949462 containerd[1575]: 2025-08-13 01:04:40.935 [INFO][5293] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c5060359f7a17ff5ea4654bf82868d418a492074e5300fc6c9800aef477572ac" Namespace="kube-system" Pod="coredns-7c65d6cfc9-dk5p7" WorkloadEndpoint="172--233--209--21-k8s-coredns--7c65d6cfc9--dk5p7-eth0" Aug 13 01:04:40.958756 kubelet[2717]: I0813 01:04:40.958466 2717 kubelet.go:2306] "Pod admission denied" podUID="8a0d2aba-196e-421f-b71c-40b05235d653" pod="tigera-operator/tigera-operator-5bf8dfcb4-c9ttf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:40.991247 containerd[1575]: time="2025-08-13T01:04:40.991139245Z" level=info msg="connecting to shim c5060359f7a17ff5ea4654bf82868d418a492074e5300fc6c9800aef477572ac" address="unix:///run/containerd/s/f7b6b12b169f69ce51c068c40eb1f8ba95bf3e7b0b356d36755d506778db01d9" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:04:41.033404 systemd[1]: Started cri-containerd-c5060359f7a17ff5ea4654bf82868d418a492074e5300fc6c9800aef477572ac.scope - libcontainer container c5060359f7a17ff5ea4654bf82868d418a492074e5300fc6c9800aef477572ac. Aug 13 01:04:41.050485 kubelet[2717]: I0813 01:04:41.050453 2717 kubelet.go:2306] "Pod admission denied" podUID="2c4b7fe2-e647-458f-8ed6-be3576c431ab" pod="tigera-operator/tigera-operator-5bf8dfcb4-k587f" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:41.093500 containerd[1575]: time="2025-08-13T01:04:41.093458067Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-dk5p7,Uid:6787cc4f-56e4-4094-978c-958d0d7a35ba,Namespace:kube-system,Attempt:0,} returns sandbox id \"c5060359f7a17ff5ea4654bf82868d418a492074e5300fc6c9800aef477572ac\"" Aug 13 01:04:41.094141 kubelet[2717]: E0813 01:04:41.094033 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:04:41.095357 containerd[1575]: time="2025-08-13T01:04:41.095290773Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 01:04:41.147447 kubelet[2717]: I0813 01:04:41.147240 2717 kubelet.go:2306] "Pod admission denied" podUID="ffc2a295-8481-4a05-93e9-303b5ac56ace" pod="tigera-operator/tigera-operator-5bf8dfcb4-sfhtz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:41.347749 kubelet[2717]: I0813 01:04:41.347651 2717 kubelet.go:2306] "Pod admission denied" podUID="fd178cba-3dc5-4ecc-85a4-8e40a0d8dc90" pod="tigera-operator/tigera-operator-5bf8dfcb4-b78mk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:41.437296 kubelet[2717]: I0813 01:04:41.437248 2717 kubelet.go:2306] "Pod admission denied" podUID="5e126fc2-5108-4433-bc24-b4b9d77a3d21" pod="tigera-operator/tigera-operator-5bf8dfcb4-7qspk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:41.492940 kubelet[2717]: I0813 01:04:41.492700 2717 kubelet.go:2306] "Pod admission denied" podUID="2a0000c1-ea63-4873-bdf5-793837ff9e00" pod="tigera-operator/tigera-operator-5bf8dfcb4-zdhp5" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:41.586501 kubelet[2717]: I0813 01:04:41.586452 2717 kubelet.go:2306] "Pod admission denied" podUID="263bcf5a-342b-4d43-8f0b-4969b437606f" pod="tigera-operator/tigera-operator-5bf8dfcb4-fjdkd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:41.684737 kubelet[2717]: I0813 01:04:41.684704 2717 kubelet.go:2306] "Pod admission denied" podUID="343e52e7-b58b-482e-8f40-d8b5c8deae75" pod="tigera-operator/tigera-operator-5bf8dfcb4-g7kqk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:41.796618 kubelet[2717]: I0813 01:04:41.796518 2717 kubelet.go:2306] "Pod admission denied" podUID="b39a8875-8cc1-4a16-9141-cac500f5231a" pod="tigera-operator/tigera-operator-5bf8dfcb4-rbn8j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:41.804665 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1704826146.mount: Deactivated successfully. Aug 13 01:04:41.901579 kubelet[2717]: I0813 01:04:41.901530 2717 kubelet.go:2306] "Pod admission denied" podUID="547fc78c-ad5e-4eb9-a681-285f5cb334e5" pod="tigera-operator/tigera-operator-5bf8dfcb4-5tqb8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:42.035679 kubelet[2717]: I0813 01:04:42.035404 2717 kubelet.go:2306] "Pod admission denied" podUID="666a6637-d89e-4089-bcac-e163c0f50927" pod="tigera-operator/tigera-operator-5bf8dfcb4-kdj28" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:42.137331 systemd-networkd[1460]: califaa9c02237c: Gained IPv6LL Aug 13 01:04:42.245128 kubelet[2717]: I0813 01:04:42.245093 2717 kubelet.go:2306] "Pod admission denied" podUID="1d0cd896-ecde-4588-b741-14af6c3c1410" pod="tigera-operator/tigera-operator-5bf8dfcb4-w66z2" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:42.337074 kubelet[2717]: I0813 01:04:42.336730 2717 kubelet.go:2306] "Pod admission denied" podUID="c04fba6f-0120-4492-bbd2-037050d916ac" pod="tigera-operator/tigera-operator-5bf8dfcb4-2d8xr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:42.433216 kubelet[2717]: I0813 01:04:42.433153 2717 kubelet.go:2306] "Pod admission denied" podUID="a50dd549-f832-45cc-905c-10affbd9d891" pod="tigera-operator/tigera-operator-5bf8dfcb4-mljtn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:42.534619 kubelet[2717]: I0813 01:04:42.534580 2717 kubelet.go:2306] "Pod admission denied" podUID="b1b76142-bf79-4bf2-a5e2-33d42810392f" pod="tigera-operator/tigera-operator-5bf8dfcb4-j2vbr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:42.650217 kubelet[2717]: I0813 01:04:42.647817 2717 kubelet.go:2306] "Pod admission denied" podUID="08076ebf-a661-4354-989e-99480fc131a1" pod="tigera-operator/tigera-operator-5bf8dfcb4-5pc5b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:42.695032 kubelet[2717]: I0813 01:04:42.694991 2717 kubelet.go:2306] "Pod admission denied" podUID="579d72f0-9ef5-466a-b6bb-c33289128fdd" pod="tigera-operator/tigera-operator-5bf8dfcb4-z7mq9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:42.764102 kubelet[2717]: E0813 01:04:42.763866 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:04:42.786101 kubelet[2717]: I0813 01:04:42.786064 2717 kubelet.go:2306] "Pod admission denied" podUID="e8e07cdf-4ba3-4db3-8235-ba2231b8f3cd" pod="tigera-operator/tigera-operator-5bf8dfcb4-vrz65" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:42.887099 kubelet[2717]: I0813 01:04:42.887052 2717 kubelet.go:2306] "Pod admission denied" podUID="391677cb-04ff-49d4-b697-eb4f6782a7f8" pod="tigera-operator/tigera-operator-5bf8dfcb4-8n2jj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:42.985535 kubelet[2717]: I0813 01:04:42.985473 2717 kubelet.go:2306] "Pod admission denied" podUID="625fcbcb-4e45-4734-9466-d804024b0c4d" pod="tigera-operator/tigera-operator-5bf8dfcb4-4276d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:43.094210 kubelet[2717]: I0813 01:04:43.093209 2717 kubelet.go:2306] "Pod admission denied" podUID="17369e21-1629-4133-82c8-37db720c6388" pod="tigera-operator/tigera-operator-5bf8dfcb4-txqp7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:43.187514 kubelet[2717]: I0813 01:04:43.187464 2717 kubelet.go:2306] "Pod admission denied" podUID="90f36873-ce43-4574-9a83-2d82f9d9a9b2" pod="tigera-operator/tigera-operator-5bf8dfcb4-6ndvx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:43.285238 kubelet[2717]: I0813 01:04:43.285120 2717 kubelet.go:2306] "Pod admission denied" podUID="3806cc71-ee87-498f-98ac-fbe015997e19" pod="tigera-operator/tigera-operator-5bf8dfcb4-f7jzn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:43.388636 kubelet[2717]: I0813 01:04:43.388585 2717 kubelet.go:2306] "Pod admission denied" podUID="63cc4ff6-42b0-424f-9008-8fb44ec4663c" pod="tigera-operator/tigera-operator-5bf8dfcb4-fvfwc" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:43.453259 containerd[1575]: time="2025-08-13T01:04:43.453217259Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=9383949" Aug 13 01:04:43.453259 containerd[1575]: time="2025-08-13T01:04:43.453219489Z" level=error msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.11.3\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/6011f96271ffadff62656bf4e3a8f2c59939bc7b25ea1496ca93cf8521f4ba6b/data: no space left on device" Aug 13 01:04:43.454002 kubelet[2717]: E0813 01:04:43.453425 2717 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.11.3\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/6011f96271ffadff62656bf4e3a8f2c59939bc7b25ea1496ca93cf8521f4ba6b/data: no space left on device" image="registry.k8s.io/coredns/coredns:v1.11.3" Aug 13 01:04:43.454002 kubelet[2717]: E0813 01:04:43.453463 2717 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.11.3\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/6011f96271ffadff62656bf4e3a8f2c59939bc7b25ea1496ca93cf8521f4ba6b/data: no space left on device" image="registry.k8s.io/coredns/coredns:v1.11.3" Aug 13 01:04:43.454002 kubelet[2717]: E0813 01:04:43.453605 2717 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:coredns,Image:registry.k8s.io/coredns/coredns:v1.11.3,Command:[],Args:[-conf 
/etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{73400320 0} {} 70Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-25dl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod coredns-7c65d6cfc9-dk5p7_kube-system(6787cc4f-56e4-4094-978c-958d0d7a35ba): ErrImagePull: failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.11.3\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/6011f96271ffadff62656bf4e3a8f2c59939bc7b25ea1496ca93cf8521f4ba6b/data: no space left on device" logger="UnhandledError" Aug 13 01:04:43.455313 kubelet[2717]: E0813 01:04:43.455142 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with ErrImagePull: \"failed to pull and unpack image \\\"registry.k8s.io/coredns/coredns:v1.11.3\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/6011f96271ffadff62656bf4e3a8f2c59939bc7b25ea1496ca93cf8521f4ba6b/data: no space left on device\"" pod="kube-system/coredns-7c65d6cfc9-dk5p7" podUID="6787cc4f-56e4-4094-978c-958d0d7a35ba" Aug 13 01:04:43.593605 kubelet[2717]: I0813 01:04:43.593216 2717 kubelet.go:2306] "Pod admission denied" podUID="671c4b10-f57d-43a7-893f-f41b16d41c7f" pod="tigera-operator/tigera-operator-5bf8dfcb4-8n77k" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:04:43.694693 kubelet[2717]: I0813 01:04:43.693563 2717 kubelet.go:2306] "Pod admission denied" podUID="d098e564-318b-4f20-ad85-a4aa65cd34b3" pod="tigera-operator/tigera-operator-5bf8dfcb4-qwt8k" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:43.758166 kubelet[2717]: I0813 01:04:43.758130 2717 kubelet.go:2306] "Pod admission denied" podUID="f90a38d2-a742-48c6-a8e5-bea59d22cc9a" pod="tigera-operator/tigera-operator-5bf8dfcb4-hhkkx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:43.834106 kubelet[2717]: I0813 01:04:43.834060 2717 kubelet.go:2306] "Pod admission denied" podUID="228439d2-fe0f-4beb-ac63-6cb77948de6b" pod="tigera-operator/tigera-operator-5bf8dfcb4-qpt4m" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:04:44.069451 systemd[1]: Started sshd@7-172.233.209.21:22-147.75.109.163:41250.service - OpenSSH per-connection server daemon (147.75.109.163:41250). Aug 13 01:04:44.324618 kubelet[2717]: E0813 01:04:44.323455 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:04:44.325417 kubelet[2717]: E0813 01:04:44.325399 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/coredns/coredns:v1.11.3\\\"\"" pod="kube-system/coredns-7c65d6cfc9-dk5p7" podUID="6787cc4f-56e4-4094-978c-958d0d7a35ba" Aug 13 01:04:44.414167 sshd[5427]: Accepted publickey for core from 147.75.109.163 port 41250 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:04:44.416220 sshd-session[5427]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:04:44.422318 systemd-logind[1550]: New session 8 of user core. 
Aug 13 01:04:44.429343 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 13 01:04:44.727423 sshd[5431]: Connection closed by 147.75.109.163 port 41250 Aug 13 01:04:44.727969 sshd-session[5427]: pam_unix(sshd:session): session closed for user core Aug 13 01:04:44.732203 systemd[1]: sshd@7-172.233.209.21:22-147.75.109.163:41250.service: Deactivated successfully. Aug 13 01:04:44.734356 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 01:04:44.735528 systemd-logind[1550]: Session 8 logged out. Waiting for processes to exit. Aug 13 01:04:44.737098 systemd-logind[1550]: Removed session 8. Aug 13 01:04:46.246544 containerd[1575]: time="2025-08-13T01:04:46.246508354Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6353c1e40a0c53412135ac4fbfd177615839c7b54973b0c0cad5c4d48fa83c96\" id:\"12cf08ff30f4db6bdbc743435f2eaa2ebce2964675886a322cbd10d8114781f4\" pid:5458 exited_at:{seconds:1755047086 nanos:246113189}" Aug 13 01:04:49.764447 kubelet[2717]: E0813 01:04:49.764417 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:04:49.796122 systemd[1]: Started sshd@8-172.233.209.21:22-147.75.109.163:33332.service - OpenSSH per-connection server daemon (147.75.109.163:33332). Aug 13 01:04:50.152528 sshd[5477]: Accepted publickey for core from 147.75.109.163 port 33332 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:04:50.154174 sshd-session[5477]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:04:50.160169 systemd-logind[1550]: New session 9 of user core. Aug 13 01:04:50.167321 systemd[1]: Started session-9.scope - Session 9 of User core. 
Aug 13 01:04:50.471216 sshd[5479]: Connection closed by 147.75.109.163 port 33332 Aug 13 01:04:50.472010 sshd-session[5477]: pam_unix(sshd:session): session closed for user core Aug 13 01:04:50.476415 systemd-logind[1550]: Session 9 logged out. Waiting for processes to exit. Aug 13 01:04:50.477046 systemd[1]: sshd@8-172.233.209.21:22-147.75.109.163:33332.service: Deactivated successfully. Aug 13 01:04:50.483234 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 01:04:50.485537 systemd-logind[1550]: Removed session 9. Aug 13 01:04:50.594874 kubelet[2717]: I0813 01:04:50.594839 2717 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:04:50.594874 kubelet[2717]: I0813 01:04:50.594877 2717 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:04:50.597489 kubelet[2717]: I0813 01:04:50.597474 2717 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:04:50.609252 kubelet[2717]: I0813 01:04:50.609232 2717 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:04:50.609318 kubelet[2717]: I0813 01:04:50.609309 2717 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7c65d6cfc9-dk5p7","kube-system/coredns-7c65d6cfc9-tp469","calico-system/calico-kube-controllers-564d8b8748-ps97n","calico-system/calico-typha-5fdd567c68-zgxjx","calico-system/calico-node-7pdcs","kube-system/kube-controller-manager-172-233-209-21","kube-system/kube-proxy-ff6qp","calico-system/csi-node-driver-84hvc","kube-system/kube-apiserver-172-233-209-21","kube-system/kube-scheduler-172-233-209-21"] Aug 13 01:04:50.609386 kubelet[2717]: E0813 01:04:50.609332 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dk5p7" Aug 13 01:04:50.609386 kubelet[2717]: E0813 01:04:50.609341 2717 eviction_manager.go:598] "Eviction manager: cannot evict a 
critical pod" pod="kube-system/coredns-7c65d6cfc9-tp469" Aug 13 01:04:50.609386 kubelet[2717]: E0813 01:04:50.609348 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" Aug 13 01:04:50.609386 kubelet[2717]: E0813 01:04:50.609359 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-5fdd567c68-zgxjx" Aug 13 01:04:50.609386 kubelet[2717]: E0813 01:04:50.609366 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-7pdcs" Aug 13 01:04:50.609386 kubelet[2717]: E0813 01:04:50.609374 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-209-21" Aug 13 01:04:50.609386 kubelet[2717]: E0813 01:04:50.609383 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-ff6qp" Aug 13 01:04:50.609542 kubelet[2717]: E0813 01:04:50.609392 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-84hvc" Aug 13 01:04:50.609542 kubelet[2717]: E0813 01:04:50.609400 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-209-21" Aug 13 01:04:50.609542 kubelet[2717]: E0813 01:04:50.609407 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-209-21" Aug 13 01:04:50.609542 kubelet[2717]: I0813 01:04:50.609416 2717 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:04:50.764297 kubelet[2717]: E0813 01:04:50.763879 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:04:50.767532 containerd[1575]: 
time="2025-08-13T01:04:50.767482523Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 01:04:51.428392 containerd[1575]: time="2025-08-13T01:04:51.428329306Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" Aug 13 01:04:51.428577 containerd[1575]: time="2025-08-13T01:04:51.428415795Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=16781546" Aug 13 01:04:51.428614 kubelet[2717]: E0813 01:04:51.428556 2717 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 01:04:51.428922 kubelet[2717]: E0813 01:04:51.428611 2717 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 01:04:51.428922 kubelet[2717]: E0813 01:04:51.428824 2717 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vqfjp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-564d8b8748-ps97n_calico-system(a7e8405c-2c82-420c-bac7-a7277571f968): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" logger="UnhandledError" Aug 13 01:04:51.429394 containerd[1575]: time="2025-08-13T01:04:51.429361494Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 01:04:51.430864 kubelet[2717]: E0813 01:04:51.430707 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device\"" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" podUID="a7e8405c-2c82-420c-bac7-a7277571f968" Aug 13 01:04:52.132692 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount4109307314.mount: Deactivated successfully. Aug 13 01:04:52.409667 containerd[1575]: time="2025-08-13T01:04:52.409507523Z" level=error msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.11.3\": failed to extract layer sha256:9ed498e122b248a801130d052c25418381ee7bf215cdf7990965bae0dc37dcc2: mkdir /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/101/fs/var: no space left on device" Aug 13 01:04:52.411099 containerd[1575]: time="2025-08-13T01:04:52.410308514Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=10432525" Aug 13 01:04:52.411492 kubelet[2717]: E0813 01:04:52.410551 2717 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.11.3\": failed to extract layer sha256:9ed498e122b248a801130d052c25418381ee7bf215cdf7990965bae0dc37dcc2: mkdir /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/101/fs/var: no space left on device" image="registry.k8s.io/coredns/coredns:v1.11.3" Aug 13 01:04:52.411492 kubelet[2717]: E0813 01:04:52.410624 2717 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.11.3\": failed to extract layer sha256:9ed498e122b248a801130d052c25418381ee7bf215cdf7990965bae0dc37dcc2: mkdir /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/101/fs/var: no space left on device" image="registry.k8s.io/coredns/coredns:v1.11.3" Aug 13 01:04:52.411492 kubelet[2717]: E0813 01:04:52.410974 2717 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:coredns,Image:registry.k8s.io/coredns/coredns:v1.11.3,Command:[],Args:[-conf 
/etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{73400320 0} {} 70Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gth48,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod coredns-7c65d6cfc9-tp469_kube-system(c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af): ErrImagePull: failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.11.3\": failed to extract layer sha256:9ed498e122b248a801130d052c25418381ee7bf215cdf7990965bae0dc37dcc2: mkdir /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/101/fs/var: no space left on device" logger="UnhandledError" Aug 13 01:04:52.412331 kubelet[2717]: E0813 01:04:52.412292 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with ErrImagePull: \"failed to pull and unpack image \\\"registry.k8s.io/coredns/coredns:v1.11.3\\\": failed to extract layer sha256:9ed498e122b248a801130d052c25418381ee7bf215cdf7990965bae0dc37dcc2: mkdir /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/101/fs/var: no space left on device\"" pod="kube-system/coredns-7c65d6cfc9-tp469" podUID="c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af" Aug 13 01:04:55.536482 systemd[1]: Started sshd@9-172.233.209.21:22-147.75.109.163:33348.service - OpenSSH per-connection server daemon (147.75.109.163:33348). 
Aug 13 01:04:55.764685 kubelet[2717]: E0813 01:04:55.764336 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:04:55.870632 sshd[5507]: Accepted publickey for core from 147.75.109.163 port 33348 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:04:55.872182 sshd-session[5507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:04:55.876956 systemd-logind[1550]: New session 10 of user core. Aug 13 01:04:55.884347 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 01:04:56.170794 sshd[5509]: Connection closed by 147.75.109.163 port 33348 Aug 13 01:04:56.171079 sshd-session[5507]: pam_unix(sshd:session): session closed for user core Aug 13 01:04:56.174641 systemd[1]: sshd@9-172.233.209.21:22-147.75.109.163:33348.service: Deactivated successfully. Aug 13 01:04:56.176721 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 01:04:56.178399 systemd-logind[1550]: Session 10 logged out. Waiting for processes to exit. Aug 13 01:04:56.180050 systemd-logind[1550]: Removed session 10. Aug 13 01:04:56.233716 systemd[1]: Started sshd@10-172.233.209.21:22-147.75.109.163:33354.service - OpenSSH per-connection server daemon (147.75.109.163:33354). Aug 13 01:04:56.574563 sshd[5522]: Accepted publickey for core from 147.75.109.163 port 33354 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:04:56.576078 sshd-session[5522]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:04:56.582034 systemd-logind[1550]: New session 11 of user core. Aug 13 01:04:56.588327 systemd[1]: Started session-11.scope - Session 11 of User core. 
Aug 13 01:04:56.918571 sshd[5524]: Connection closed by 147.75.109.163 port 33354 Aug 13 01:04:56.919503 sshd-session[5522]: pam_unix(sshd:session): session closed for user core Aug 13 01:04:56.924406 systemd[1]: sshd@10-172.233.209.21:22-147.75.109.163:33354.service: Deactivated successfully. Aug 13 01:04:56.926999 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 01:04:56.928523 systemd-logind[1550]: Session 11 logged out. Waiting for processes to exit. Aug 13 01:04:56.930664 systemd-logind[1550]: Removed session 11. Aug 13 01:04:56.983108 systemd[1]: Started sshd@11-172.233.209.21:22-147.75.109.163:33364.service - OpenSSH per-connection server daemon (147.75.109.163:33364). Aug 13 01:04:57.332573 sshd[5534]: Accepted publickey for core from 147.75.109.163 port 33364 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:04:57.334708 sshd-session[5534]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:04:57.340860 systemd-logind[1550]: New session 12 of user core. Aug 13 01:04:57.346631 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 13 01:04:57.649665 sshd[5536]: Connection closed by 147.75.109.163 port 33364 Aug 13 01:04:57.650140 sshd-session[5534]: pam_unix(sshd:session): session closed for user core Aug 13 01:04:57.655654 systemd-logind[1550]: Session 12 logged out. Waiting for processes to exit. Aug 13 01:04:57.656639 systemd[1]: sshd@11-172.233.209.21:22-147.75.109.163:33364.service: Deactivated successfully. Aug 13 01:04:57.659508 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 01:04:57.661854 systemd-logind[1550]: Removed session 12. 
Aug 13 01:04:58.764220 kubelet[2717]: E0813 01:04:58.764156 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:04:58.766059 containerd[1575]: time="2025-08-13T01:04:58.765733915Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 01:04:59.497740 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2389070226.mount: Deactivated successfully. Aug 13 01:04:59.753284 containerd[1575]: time="2025-08-13T01:04:59.752368508Z" level=error msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.11.3\": failed to extract layer sha256:9ed498e122b248a801130d052c25418381ee7bf215cdf7990965bae0dc37dcc2: mkdir /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/104/fs/var: no space left on device" Aug 13 01:04:59.753284 containerd[1575]: time="2025-08-13T01:04:59.752586996Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=10432525" Aug 13 01:04:59.753475 kubelet[2717]: E0813 01:04:59.752729 2717 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.11.3\": failed to extract layer sha256:9ed498e122b248a801130d052c25418381ee7bf215cdf7990965bae0dc37dcc2: mkdir /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/104/fs/var: no space left on device" image="registry.k8s.io/coredns/coredns:v1.11.3" Aug 13 01:04:59.753475 kubelet[2717]: E0813 01:04:59.752771 2717 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.11.3\": failed to extract layer sha256:9ed498e122b248a801130d052c25418381ee7bf215cdf7990965bae0dc37dcc2: mkdir 
/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/104/fs/var: no space left on device" image="registry.k8s.io/coredns/coredns:v1.11.3" Aug 13 01:04:59.753475 kubelet[2717]: E0813 01:04:59.752893 2717 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:coredns,Image:registry.k8s.io/coredns/coredns:v1.11.3,Command:[],Args:[-conf /etc/coredns/Corefile],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:dns,HostPort:0,ContainerPort:53,Protocol:UDP,HostIP:,},ContainerPort{Name:dns-tcp,HostPort:0,ContainerPort:53,Protocol:TCP,HostIP:,},ContainerPort{Name:metrics,HostPort:0,ContainerPort:9153,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{memory: {{178257920 0} {} 170Mi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{73400320 0} {} 70Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:true,MountPath:/etc/coredns,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-25dl4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/health,Port:{0 8080 },Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:5,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/ready,Port:{0 8181 
},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[NET_BIND_SERVICE],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod coredns-7c65d6cfc9-dk5p7_kube-system(6787cc4f-56e4-4094-978c-958d0d7a35ba): ErrImagePull: failed to pull and unpack image \"registry.k8s.io/coredns/coredns:v1.11.3\": failed to extract layer sha256:9ed498e122b248a801130d052c25418381ee7bf215cdf7990965bae0dc37dcc2: mkdir /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/104/fs/var: no space left on device" logger="UnhandledError" Aug 13 01:04:59.754229 kubelet[2717]: E0813 01:04:59.754161 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with ErrImagePull: \"failed to pull and unpack image \\\"registry.k8s.io/coredns/coredns:v1.11.3\\\": failed to extract layer sha256:9ed498e122b248a801130d052c25418381ee7bf215cdf7990965bae0dc37dcc2: mkdir /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/104/fs/var: no space left on device\"" pod="kube-system/coredns-7c65d6cfc9-dk5p7" podUID="6787cc4f-56e4-4094-978c-958d0d7a35ba" Aug 13 01:05:00.627735 kubelet[2717]: I0813 01:05:00.627704 2717 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:05:00.627735 kubelet[2717]: I0813 
01:05:00.627739 2717 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:05:00.630035 kubelet[2717]: I0813 01:05:00.630012 2717 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:05:00.631588 kubelet[2717]: I0813 01:05:00.631554 2717 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93" size=25052538 runtimeHandler="" Aug 13 01:05:00.631817 containerd[1575]: time="2025-08-13T01:05:00.631781024Z" level=info msg="RemoveImage \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 01:05:00.632954 containerd[1575]: time="2025-08-13T01:05:00.632933562Z" level=info msg="ImageDelete event name:\"quay.io/tigera/operator:v1.38.3\"" Aug 13 01:05:00.633358 containerd[1575]: time="2025-08-13T01:05:00.633329588Z" level=info msg="ImageDelete event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\"" Aug 13 01:05:00.633742 containerd[1575]: time="2025-08-13T01:05:00.633724013Z" level=info msg="RemoveImage \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" returns successfully" Aug 13 01:05:00.633813 containerd[1575]: time="2025-08-13T01:05:00.633782573Z" level=info msg="ImageDelete event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 01:05:00.644103 kubelet[2717]: I0813 01:05:00.644079 2717 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:05:00.644226 kubelet[2717]: I0813 01:05:00.644203 2717 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" 
pods=["kube-system/coredns-7c65d6cfc9-dk5p7","kube-system/coredns-7c65d6cfc9-tp469","calico-system/calico-kube-controllers-564d8b8748-ps97n","calico-system/calico-typha-5fdd567c68-zgxjx","calico-system/calico-node-7pdcs","kube-system/kube-controller-manager-172-233-209-21","kube-system/kube-proxy-ff6qp","calico-system/csi-node-driver-84hvc","kube-system/kube-apiserver-172-233-209-21","kube-system/kube-scheduler-172-233-209-21"] Aug 13 01:05:00.644226 kubelet[2717]: E0813 01:05:00.644234 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dk5p7" Aug 13 01:05:00.644226 kubelet[2717]: E0813 01:05:00.644243 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-tp469" Aug 13 01:05:00.644226 kubelet[2717]: E0813 01:05:00.644249 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" Aug 13 01:05:00.644378 kubelet[2717]: E0813 01:05:00.644259 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-5fdd567c68-zgxjx" Aug 13 01:05:00.644378 kubelet[2717]: E0813 01:05:00.644267 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-7pdcs" Aug 13 01:05:00.644378 kubelet[2717]: E0813 01:05:00.644275 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-209-21" Aug 13 01:05:00.644378 kubelet[2717]: E0813 01:05:00.644283 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-ff6qp" Aug 13 01:05:00.644378 kubelet[2717]: E0813 01:05:00.644293 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-84hvc" Aug 13 01:05:00.644378 kubelet[2717]: E0813 01:05:00.644300 2717 eviction_manager.go:598] 
"Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-209-21" Aug 13 01:05:00.644378 kubelet[2717]: E0813 01:05:00.644307 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-209-21" Aug 13 01:05:00.644378 kubelet[2717]: I0813 01:05:00.644315 2717 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:05:02.715158 systemd[1]: Started sshd@12-172.233.209.21:22-147.75.109.163:44272.service - OpenSSH per-connection server daemon (147.75.109.163:44272). Aug 13 01:05:03.060415 sshd[5572]: Accepted publickey for core from 147.75.109.163 port 44272 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:05:03.061720 sshd-session[5572]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:05:03.067138 systemd-logind[1550]: New session 13 of user core. Aug 13 01:05:03.071306 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 01:05:03.367700 sshd[5576]: Connection closed by 147.75.109.163 port 44272 Aug 13 01:05:03.368390 sshd-session[5572]: pam_unix(sshd:session): session closed for user core Aug 13 01:05:03.371554 systemd[1]: sshd@12-172.233.209.21:22-147.75.109.163:44272.service: Deactivated successfully. Aug 13 01:05:03.373708 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 01:05:03.375064 systemd-logind[1550]: Session 13 logged out. Waiting for processes to exit. Aug 13 01:05:03.377555 systemd-logind[1550]: Removed session 13. 
Aug 13 01:05:04.769984 kubelet[2717]: E0813 01:05:04.769921 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\"\"" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" podUID="a7e8405c-2c82-420c-bac7-a7277571f968" Aug 13 01:05:06.764219 kubelet[2717]: E0813 01:05:06.763808 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:05:06.765672 kubelet[2717]: E0813 01:05:06.765307 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/coredns/coredns:v1.11.3\\\"\"" pod="kube-system/coredns-7c65d6cfc9-tp469" podUID="c98cf28c-3b77-4d2a-9f0b-e0f918c9a0af" Aug 13 01:05:07.763803 kubelet[2717]: E0813 01:05:07.763772 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:05:08.432435 systemd[1]: Started sshd@13-172.233.209.21:22-147.75.109.163:33642.service - OpenSSH per-connection server daemon (147.75.109.163:33642). Aug 13 01:05:08.781237 sshd[5589]: Accepted publickey for core from 147.75.109.163 port 33642 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:05:08.782801 sshd-session[5589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:05:08.789240 systemd-logind[1550]: New session 14 of user core. Aug 13 01:05:08.794320 systemd[1]: Started session-14.scope - Session 14 of User core. 
Aug 13 01:05:09.107184 sshd[5591]: Connection closed by 147.75.109.163 port 33642 Aug 13 01:05:09.107824 sshd-session[5589]: pam_unix(sshd:session): session closed for user core Aug 13 01:05:09.111926 systemd[1]: sshd@13-172.233.209.21:22-147.75.109.163:33642.service: Deactivated successfully. Aug 13 01:05:09.114941 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 01:05:09.118506 systemd-logind[1550]: Session 14 logged out. Waiting for processes to exit. Aug 13 01:05:09.120112 systemd-logind[1550]: Removed session 14. Aug 13 01:05:10.666140 kubelet[2717]: I0813 01:05:10.666094 2717 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:05:10.666140 kubelet[2717]: I0813 01:05:10.666156 2717 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:05:10.669176 kubelet[2717]: I0813 01:05:10.669158 2717 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:05:10.689721 kubelet[2717]: I0813 01:05:10.689498 2717 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:05:10.689721 kubelet[2717]: I0813 01:05:10.689591 2717 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7c65d6cfc9-tp469","calico-system/calico-kube-controllers-564d8b8748-ps97n","kube-system/coredns-7c65d6cfc9-dk5p7","calico-system/calico-typha-5fdd567c68-zgxjx","calico-system/calico-node-7pdcs","kube-system/kube-controller-manager-172-233-209-21","kube-system/kube-proxy-ff6qp","calico-system/csi-node-driver-84hvc","kube-system/kube-apiserver-172-233-209-21","kube-system/kube-scheduler-172-233-209-21"] Aug 13 01:05:10.689721 kubelet[2717]: E0813 01:05:10.689619 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-tp469" Aug 13 01:05:10.689721 kubelet[2717]: E0813 01:05:10.689630 2717 eviction_manager.go:598] "Eviction manager: cannot evict a 
critical pod" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" Aug 13 01:05:10.689721 kubelet[2717]: E0813 01:05:10.689637 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dk5p7" Aug 13 01:05:10.689721 kubelet[2717]: E0813 01:05:10.689647 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-5fdd567c68-zgxjx" Aug 13 01:05:10.689721 kubelet[2717]: E0813 01:05:10.689656 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-7pdcs" Aug 13 01:05:10.689721 kubelet[2717]: E0813 01:05:10.689663 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-209-21" Aug 13 01:05:10.689721 kubelet[2717]: E0813 01:05:10.689672 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-ff6qp" Aug 13 01:05:10.689721 kubelet[2717]: E0813 01:05:10.689683 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-84hvc" Aug 13 01:05:10.689721 kubelet[2717]: E0813 01:05:10.689691 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-209-21" Aug 13 01:05:10.689721 kubelet[2717]: E0813 01:05:10.689698 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-209-21" Aug 13 01:05:10.689721 kubelet[2717]: I0813 01:05:10.689706 2717 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:05:14.165493 systemd[1]: Started sshd@14-172.233.209.21:22-147.75.109.163:33652.service - OpenSSH per-connection server daemon (147.75.109.163:33652). 
Aug 13 01:05:14.499119 sshd[5609]: Accepted publickey for core from 147.75.109.163 port 33652 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:05:14.502293 sshd-session[5609]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:05:14.509421 systemd-logind[1550]: New session 15 of user core. Aug 13 01:05:14.515312 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 13 01:05:14.773291 kubelet[2717]: E0813 01:05:14.772644 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:05:14.774331 kubelet[2717]: E0813 01:05:14.773959 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"coredns\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.k8s.io/coredns/coredns:v1.11.3\\\"\"" pod="kube-system/coredns-7c65d6cfc9-dk5p7" podUID="6787cc4f-56e4-4094-978c-958d0d7a35ba" Aug 13 01:05:14.816847 sshd[5611]: Connection closed by 147.75.109.163 port 33652 Aug 13 01:05:14.817307 sshd-session[5609]: pam_unix(sshd:session): session closed for user core Aug 13 01:05:14.823110 systemd[1]: sshd@14-172.233.209.21:22-147.75.109.163:33652.service: Deactivated successfully. Aug 13 01:05:14.825854 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 01:05:14.827166 systemd-logind[1550]: Session 15 logged out. Waiting for processes to exit. Aug 13 01:05:14.829712 systemd-logind[1550]: Removed session 15. 
Aug 13 01:05:15.765076 containerd[1575]: time="2025-08-13T01:05:15.764863227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 01:05:16.258155 containerd[1575]: time="2025-08-13T01:05:16.258047641Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6353c1e40a0c53412135ac4fbfd177615839c7b54973b0c0cad5c4d48fa83c96\" id:\"69842e3f8327199153777e61d5d880956ad644c25481c105c177b455036b1914\" pid:5636 exited_at:{seconds:1755047116 nanos:257677225}" Aug 13 01:05:17.448967 containerd[1575]: time="2025-08-13T01:05:17.448910736Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/105/fs/usr/bin/kube-controllers: no space left on device" Aug 13 01:05:17.449827 containerd[1575]: time="2025-08-13T01:05:17.449005265Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Aug 13 01:05:17.449864 kubelet[2717]: E0813 01:05:17.449303 2717 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/105/fs/usr/bin/kube-controllers: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 01:05:17.449864 kubelet[2717]: E0813 01:05:17.449349 2717 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to extract layer 
sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/105/fs/usr/bin/kube-controllers: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 01:05:17.449864 kubelet[2717]: E0813 01:05:17.449454 2717 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vqfjp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-564d8b8748-ps97n_calico-system(a7e8405c-2c82-420c-bac7-a7277571f968): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/105/fs/usr/bin/kube-controllers: no space left on device" logger="UnhandledError" Aug 13 01:05:17.450570 kubelet[2717]: E0813 01:05:17.450533 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write 
/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/105/fs/usr/bin/kube-controllers: no space left on device\"" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" podUID="a7e8405c-2c82-420c-bac7-a7277571f968" Aug 13 01:05:18.766218 kubelet[2717]: E0813 01:05:18.765351 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:05:18.774209 containerd[1575]: time="2025-08-13T01:05:18.772229916Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 01:05:19.490128 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3370405999.mount: Deactivated successfully. Aug 13 01:05:19.884364 systemd[1]: Started sshd@15-172.233.209.21:22-147.75.109.163:42298.service - OpenSSH per-connection server daemon (147.75.109.163:42298). Aug 13 01:05:20.238800 sshd[5714]: Accepted publickey for core from 147.75.109.163 port 42298 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:05:20.243048 sshd-session[5714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:05:20.252842 systemd-logind[1550]: New session 16 of user core. Aug 13 01:05:20.261435 systemd[1]: Started session-16.scope - Session 16 of User core. 
Aug 13 01:05:20.578242 containerd[1575]: time="2025-08-13T01:05:20.577640535Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:05:20.583427 containerd[1575]: time="2025-08-13T01:05:20.583402535Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Aug 13 01:05:20.586370 containerd[1575]: time="2025-08-13T01:05:20.584342077Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:05:20.590361 containerd[1575]: time="2025-08-13T01:05:20.590317545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:05:20.595570 containerd[1575]: time="2025-08-13T01:05:20.595541039Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.823179984s" Aug 13 01:05:20.599021 containerd[1575]: time="2025-08-13T01:05:20.599003129Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 01:05:20.605681 containerd[1575]: time="2025-08-13T01:05:20.605661951Z" level=info msg="CreateContainer within sandbox \"0e3b0f12834db56e2801274ed047feeaa7c6cdec282b440690421382e33aa6b2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 01:05:20.623943 containerd[1575]: time="2025-08-13T01:05:20.623344776Z" 
level=info msg="Container 17c1614ba67e53cb83d04f732485770c3b4ad477d72c0ea27b004fe1e9f21b37: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:05:20.628775 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3043616514.mount: Deactivated successfully. Aug 13 01:05:20.636519 containerd[1575]: time="2025-08-13T01:05:20.636496971Z" level=info msg="CreateContainer within sandbox \"0e3b0f12834db56e2801274ed047feeaa7c6cdec282b440690421382e33aa6b2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"17c1614ba67e53cb83d04f732485770c3b4ad477d72c0ea27b004fe1e9f21b37\"" Aug 13 01:05:20.637176 containerd[1575]: time="2025-08-13T01:05:20.637158636Z" level=info msg="StartContainer for \"17c1614ba67e53cb83d04f732485770c3b4ad477d72c0ea27b004fe1e9f21b37\"" Aug 13 01:05:20.638675 containerd[1575]: time="2025-08-13T01:05:20.638625363Z" level=info msg="connecting to shim 17c1614ba67e53cb83d04f732485770c3b4ad477d72c0ea27b004fe1e9f21b37" address="unix:///run/containerd/s/9fb9b048aae8053f046aa6dc8253b8816ea1e9ddf618b49a59bf71799f11fb76" protocol=ttrpc version=3 Aug 13 01:05:20.644477 sshd[5718]: Connection closed by 147.75.109.163 port 42298 Aug 13 01:05:20.647499 sshd-session[5714]: pam_unix(sshd:session): session closed for user core Aug 13 01:05:20.657175 systemd[1]: sshd@15-172.233.209.21:22-147.75.109.163:42298.service: Deactivated successfully. Aug 13 01:05:20.664486 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 01:05:20.667175 systemd-logind[1550]: Session 16 logged out. Waiting for processes to exit. Aug 13 01:05:20.670584 systemd-logind[1550]: Removed session 16. Aug 13 01:05:20.680445 systemd[1]: Started cri-containerd-17c1614ba67e53cb83d04f732485770c3b4ad477d72c0ea27b004fe1e9f21b37.scope - libcontainer container 17c1614ba67e53cb83d04f732485770c3b4ad477d72c0ea27b004fe1e9f21b37. 
Aug 13 01:05:20.751975 containerd[1575]: time="2025-08-13T01:05:20.751931523Z" level=info msg="StartContainer for \"17c1614ba67e53cb83d04f732485770c3b4ad477d72c0ea27b004fe1e9f21b37\" returns successfully" Aug 13 01:05:20.783811 kubelet[2717]: I0813 01:05:20.783777 2717 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:05:20.783811 kubelet[2717]: I0813 01:05:20.783810 2717 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:05:20.787837 kubelet[2717]: I0813 01:05:20.787461 2717 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:05:20.821119 kubelet[2717]: I0813 01:05:20.821090 2717 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:05:20.821251 kubelet[2717]: I0813 01:05:20.821183 2717 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7c65d6cfc9-tp469","calico-system/calico-kube-controllers-564d8b8748-ps97n","kube-system/coredns-7c65d6cfc9-dk5p7","calico-system/calico-typha-5fdd567c68-zgxjx","calico-system/calico-node-7pdcs","kube-system/kube-controller-manager-172-233-209-21","kube-system/kube-proxy-ff6qp","calico-system/csi-node-driver-84hvc","kube-system/kube-apiserver-172-233-209-21","kube-system/kube-scheduler-172-233-209-21"] Aug 13 01:05:20.821343 kubelet[2717]: E0813 01:05:20.821254 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-tp469" Aug 13 01:05:20.821343 kubelet[2717]: E0813 01:05:20.821265 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" Aug 13 01:05:20.821343 kubelet[2717]: E0813 01:05:20.821272 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dk5p7" Aug 13 01:05:20.821343 kubelet[2717]: E0813 01:05:20.821282 2717 
eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-5fdd567c68-zgxjx" Aug 13 01:05:20.821343 kubelet[2717]: E0813 01:05:20.821291 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-7pdcs" Aug 13 01:05:20.821343 kubelet[2717]: E0813 01:05:20.821299 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-209-21" Aug 13 01:05:20.821343 kubelet[2717]: E0813 01:05:20.821307 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-ff6qp" Aug 13 01:05:20.821343 kubelet[2717]: E0813 01:05:20.821317 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-84hvc" Aug 13 01:05:20.821343 kubelet[2717]: E0813 01:05:20.821325 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-209-21" Aug 13 01:05:20.821343 kubelet[2717]: E0813 01:05:20.821334 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-209-21" Aug 13 01:05:20.821343 kubelet[2717]: I0813 01:05:20.821343 2717 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:05:21.398342 kubelet[2717]: E0813 01:05:21.397762 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:05:21.420455 kubelet[2717]: I0813 01:05:21.420410 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-tp469" podStartSLOduration=127.056173058 podStartE2EDuration="2m50.420394711s" podCreationTimestamp="2025-08-13 01:02:31 +0000 UTC" firstStartedPulling="2025-08-13 01:04:37.238621042 +0000 UTC m=+132.563439406" 
lastFinishedPulling="2025-08-13 01:05:20.602842695 +0000 UTC m=+175.927661059" observedRunningTime="2025-08-13 01:05:21.407663631 +0000 UTC m=+176.732481995" watchObservedRunningTime="2025-08-13 01:05:21.420394711 +0000 UTC m=+176.745213075" Aug 13 01:05:22.401459 kubelet[2717]: E0813 01:05:22.401215 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:05:23.402801 kubelet[2717]: E0813 01:05:23.402757 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:05:25.703893 systemd[1]: Started sshd@16-172.233.209.21:22-147.75.109.163:42310.service - OpenSSH per-connection server daemon (147.75.109.163:42310). Aug 13 01:05:26.053643 sshd[5776]: Accepted publickey for core from 147.75.109.163 port 42310 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:05:26.055727 sshd-session[5776]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:05:26.061989 systemd-logind[1550]: New session 17 of user core. Aug 13 01:05:26.071389 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 13 01:05:26.365339 sshd[5778]: Connection closed by 147.75.109.163 port 42310 Aug 13 01:05:26.365920 sshd-session[5776]: pam_unix(sshd:session): session closed for user core Aug 13 01:05:26.370733 systemd-logind[1550]: Session 17 logged out. Waiting for processes to exit. Aug 13 01:05:26.371499 systemd[1]: sshd@16-172.233.209.21:22-147.75.109.163:42310.service: Deactivated successfully. Aug 13 01:05:26.375679 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 01:05:26.379647 systemd-logind[1550]: Removed session 17. 
Aug 13 01:05:26.765612 kubelet[2717]: E0813 01:05:26.764217 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:05:26.769998 containerd[1575]: time="2025-08-13T01:05:26.768483334Z" level=info msg="CreateContainer within sandbox \"c5060359f7a17ff5ea4654bf82868d418a492074e5300fc6c9800aef477572ac\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 01:05:26.778334 containerd[1575]: time="2025-08-13T01:05:26.778308442Z" level=info msg="Container 57987a9e13356a6287ee67f936f045576ef61d05f176c2ccb515a0e117f5c042: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:05:26.790372 containerd[1575]: time="2025-08-13T01:05:26.790332141Z" level=info msg="CreateContainer within sandbox \"c5060359f7a17ff5ea4654bf82868d418a492074e5300fc6c9800aef477572ac\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"57987a9e13356a6287ee67f936f045576ef61d05f176c2ccb515a0e117f5c042\"" Aug 13 01:05:26.791214 containerd[1575]: time="2025-08-13T01:05:26.791116694Z" level=info msg="StartContainer for \"57987a9e13356a6287ee67f936f045576ef61d05f176c2ccb515a0e117f5c042\"" Aug 13 01:05:26.794254 containerd[1575]: time="2025-08-13T01:05:26.794181608Z" level=info msg="connecting to shim 57987a9e13356a6287ee67f936f045576ef61d05f176c2ccb515a0e117f5c042" address="unix:///run/containerd/s/f7b6b12b169f69ce51c068c40eb1f8ba95bf3e7b0b356d36755d506778db01d9" protocol=ttrpc version=3 Aug 13 01:05:26.829842 systemd[1]: Started cri-containerd-57987a9e13356a6287ee67f936f045576ef61d05f176c2ccb515a0e117f5c042.scope - libcontainer container 57987a9e13356a6287ee67f936f045576ef61d05f176c2ccb515a0e117f5c042. 
Aug 13 01:05:26.867790 containerd[1575]: time="2025-08-13T01:05:26.867752589Z" level=info msg="StartContainer for \"57987a9e13356a6287ee67f936f045576ef61d05f176c2ccb515a0e117f5c042\" returns successfully"
Aug 13 01:05:27.410652 kubelet[2717]: E0813 01:05:27.410481 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Aug 13 01:05:27.432009 kubelet[2717]: I0813 01:05:27.431950 2717 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-dk5p7" podStartSLOduration=-9223371860.42284 podStartE2EDuration="2m56.431934894s" podCreationTimestamp="2025-08-13 01:02:31 +0000 UTC" firstStartedPulling="2025-08-13 01:04:41.095092865 +0000 UTC m=+136.419911229" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:05:27.418847103 +0000 UTC m=+182.743665467" watchObservedRunningTime="2025-08-13 01:05:27.431934894 +0000 UTC m=+182.756753258"
Aug 13 01:05:28.412374 kubelet[2717]: E0813 01:05:28.412336 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Aug 13 01:05:29.414207 kubelet[2717]: E0813 01:05:29.414161 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Aug 13 01:05:30.860209 kubelet[2717]: I0813 01:05:30.857902 2717 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:05:30.860209 kubelet[2717]: I0813 01:05:30.857938 2717 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:05:30.863418 kubelet[2717]: I0813 01:05:30.863406 2717 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:05:30.885110 kubelet[2717]: I0813 01:05:30.885095 2717 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:05:30.885395 kubelet[2717]: I0813 01:05:30.885346 2717 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-564d8b8748-ps97n","calico-system/calico-typha-5fdd567c68-zgxjx","kube-system/coredns-7c65d6cfc9-tp469","kube-system/coredns-7c65d6cfc9-dk5p7","calico-system/calico-node-7pdcs","kube-system/kube-controller-manager-172-233-209-21","kube-system/kube-proxy-ff6qp","calico-system/csi-node-driver-84hvc","kube-system/kube-apiserver-172-233-209-21","kube-system/kube-scheduler-172-233-209-21"]
Aug 13 01:05:30.885552 kubelet[2717]: E0813 01:05:30.885518 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n"
Aug 13 01:05:30.885636 kubelet[2717]: E0813 01:05:30.885605 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-5fdd567c68-zgxjx"
Aug 13 01:05:30.885723 kubelet[2717]: E0813 01:05:30.885694 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-tp469"
Aug 13 01:05:30.885774 kubelet[2717]: E0813 01:05:30.885765 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dk5p7"
Aug 13 01:05:30.885853 kubelet[2717]: E0813 01:05:30.885843 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-7pdcs"
Aug 13 01:05:30.885921 kubelet[2717]: E0813 01:05:30.885913 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-209-21"
Aug 13 01:05:30.885993 kubelet[2717]: E0813 01:05:30.885984 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-ff6qp"
Aug 13 01:05:30.886065 kubelet[2717]: E0813 01:05:30.886056 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-84hvc"
Aug 13 01:05:30.886130 kubelet[2717]: E0813 01:05:30.886122 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-209-21"
Aug 13 01:05:30.886180 kubelet[2717]: E0813 01:05:30.886171 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-209-21"
Aug 13 01:05:30.886390 kubelet[2717]: I0813 01:05:30.886347 2717 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:05:31.425431 systemd[1]: Started sshd@17-172.233.209.21:22-147.75.109.163:43570.service - OpenSSH per-connection server daemon (147.75.109.163:43570).
Aug 13 01:05:31.759109 sshd[5827]: Accepted publickey for core from 147.75.109.163 port 43570 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU
Aug 13 01:05:31.760431 sshd-session[5827]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:05:31.767400 kubelet[2717]: E0813 01:05:31.765806 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\"\"" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" podUID="a7e8405c-2c82-420c-bac7-a7277571f968"
Aug 13 01:05:31.766994 systemd-logind[1550]: New session 18 of user core.
Aug 13 01:05:31.771126 systemd[1]: Started session-18.scope - Session 18 of User core.
Aug 13 01:05:32.053670 sshd[5829]: Connection closed by 147.75.109.163 port 43570
Aug 13 01:05:32.054453 sshd-session[5827]: pam_unix(sshd:session): session closed for user core
Aug 13 01:05:32.058151 systemd-logind[1550]: Session 18 logged out. Waiting for processes to exit.
Aug 13 01:05:32.058880 systemd[1]: sshd@17-172.233.209.21:22-147.75.109.163:43570.service: Deactivated successfully.
Aug 13 01:05:32.061716 systemd[1]: session-18.scope: Deactivated successfully.
Aug 13 01:05:32.063755 systemd-logind[1550]: Removed session 18.
Aug 13 01:05:37.120298 systemd[1]: Started sshd@18-172.233.209.21:22-147.75.109.163:43580.service - OpenSSH per-connection server daemon (147.75.109.163:43580).
Aug 13 01:05:37.462594 sshd[5843]: Accepted publickey for core from 147.75.109.163 port 43580 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU
Aug 13 01:05:37.464399 sshd-session[5843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:05:37.469886 systemd-logind[1550]: New session 19 of user core.
Aug 13 01:05:37.472328 systemd[1]: Started session-19.scope - Session 19 of User core.
Aug 13 01:05:37.785112 sshd[5845]: Connection closed by 147.75.109.163 port 43580
Aug 13 01:05:37.785872 sshd-session[5843]: pam_unix(sshd:session): session closed for user core
Aug 13 01:05:37.789576 systemd[1]: sshd@18-172.233.209.21:22-147.75.109.163:43580.service: Deactivated successfully.
Aug 13 01:05:37.791555 systemd[1]: session-19.scope: Deactivated successfully.
Aug 13 01:05:37.793056 systemd-logind[1550]: Session 19 logged out. Waiting for processes to exit.
Aug 13 01:05:37.794649 systemd-logind[1550]: Removed session 19.
Aug 13 01:05:40.906434 kubelet[2717]: I0813 01:05:40.906377 2717 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:05:40.906434 kubelet[2717]: I0813 01:05:40.906417 2717 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:05:40.908127 kubelet[2717]: I0813 01:05:40.908099 2717 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:05:40.922872 kubelet[2717]: I0813 01:05:40.922841 2717 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:05:40.922980 kubelet[2717]: I0813 01:05:40.922951 2717 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-564d8b8748-ps97n","calico-system/calico-typha-5fdd567c68-zgxjx","kube-system/coredns-7c65d6cfc9-tp469","kube-system/coredns-7c65d6cfc9-dk5p7","calico-system/calico-node-7pdcs","kube-system/kube-controller-manager-172-233-209-21","kube-system/kube-proxy-ff6qp","calico-system/csi-node-driver-84hvc","kube-system/kube-apiserver-172-233-209-21","kube-system/kube-scheduler-172-233-209-21"]
Aug 13 01:05:40.923145 kubelet[2717]: E0813 01:05:40.922982 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n"
Aug 13 01:05:40.923145 kubelet[2717]: E0813 01:05:40.922995 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-5fdd567c68-zgxjx"
Aug 13 01:05:40.923145 kubelet[2717]: E0813 01:05:40.923005 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-tp469"
Aug 13 01:05:40.923145 kubelet[2717]: E0813 01:05:40.923013 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dk5p7"
Aug 13 01:05:40.923145 kubelet[2717]: E0813 01:05:40.923020 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-7pdcs"
Aug 13 01:05:40.923145 kubelet[2717]: E0813 01:05:40.923027 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-209-21"
Aug 13 01:05:40.923145 kubelet[2717]: E0813 01:05:40.923038 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-ff6qp"
Aug 13 01:05:40.923145 kubelet[2717]: E0813 01:05:40.923050 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-84hvc"
Aug 13 01:05:40.923145 kubelet[2717]: E0813 01:05:40.923058 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-209-21"
Aug 13 01:05:40.923145 kubelet[2717]: E0813 01:05:40.923067 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-209-21"
Aug 13 01:05:40.923145 kubelet[2717]: I0813 01:05:40.923075 2717 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:05:42.766601 kubelet[2717]: E0813 01:05:42.766315 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\"\"" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" podUID="a7e8405c-2c82-420c-bac7-a7277571f968"
Aug 13 01:05:42.844037 systemd[1]: Started sshd@19-172.233.209.21:22-147.75.109.163:45116.service - OpenSSH per-connection server daemon (147.75.109.163:45116).
Aug 13 01:05:43.170101 sshd[5859]: Accepted publickey for core from 147.75.109.163 port 45116 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU
Aug 13 01:05:43.171643 sshd-session[5859]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:05:43.178052 systemd-logind[1550]: New session 20 of user core.
Aug 13 01:05:43.183468 systemd[1]: Started session-20.scope - Session 20 of User core.
Aug 13 01:05:43.478423 sshd[5861]: Connection closed by 147.75.109.163 port 45116
Aug 13 01:05:43.479027 sshd-session[5859]: pam_unix(sshd:session): session closed for user core
Aug 13 01:05:43.484290 systemd[1]: sshd@19-172.233.209.21:22-147.75.109.163:45116.service: Deactivated successfully.
Aug 13 01:05:43.486469 systemd[1]: session-20.scope: Deactivated successfully.
Aug 13 01:05:43.487999 systemd-logind[1550]: Session 20 logged out. Waiting for processes to exit.
Aug 13 01:05:43.489125 systemd-logind[1550]: Removed session 20.
Aug 13 01:05:46.256829 containerd[1575]: time="2025-08-13T01:05:46.256790678Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6353c1e40a0c53412135ac4fbfd177615839c7b54973b0c0cad5c4d48fa83c96\" id:\"181c96bf23389cea72b1a21614cb2f74281418a87e2967222646046ea5e06f0f\" pid:5884 exited_at:{seconds:1755047146 nanos:256273172}"
Aug 13 01:05:48.542673 systemd[1]: Started sshd@20-172.233.209.21:22-147.75.109.163:45582.service - OpenSSH per-connection server daemon (147.75.109.163:45582).
Aug 13 01:05:48.894396 sshd[5896]: Accepted publickey for core from 147.75.109.163 port 45582 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU
Aug 13 01:05:48.895752 sshd-session[5896]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:05:48.901490 systemd-logind[1550]: New session 21 of user core.
Aug 13 01:05:48.905367 systemd[1]: Started session-21.scope - Session 21 of User core.
Aug 13 01:05:49.203693 sshd[5898]: Connection closed by 147.75.109.163 port 45582
Aug 13 01:05:49.205404 sshd-session[5896]: pam_unix(sshd:session): session closed for user core
Aug 13 01:05:49.209918 systemd[1]: sshd@20-172.233.209.21:22-147.75.109.163:45582.service: Deactivated successfully.
Aug 13 01:05:49.212049 systemd[1]: session-21.scope: Deactivated successfully.
Aug 13 01:05:49.214093 systemd-logind[1550]: Session 21 logged out. Waiting for processes to exit.
Aug 13 01:05:49.215857 systemd-logind[1550]: Removed session 21.
Aug 13 01:05:49.764751 kubelet[2717]: E0813 01:05:49.764713 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Aug 13 01:05:50.950984 kubelet[2717]: I0813 01:05:50.950952 2717 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:05:50.950984 kubelet[2717]: I0813 01:05:50.950987 2717 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:05:50.952842 kubelet[2717]: I0813 01:05:50.952795 2717 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:05:50.967738 kubelet[2717]: I0813 01:05:50.967717 2717 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:05:50.967842 kubelet[2717]: I0813 01:05:50.967823 2717 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-564d8b8748-ps97n","calico-system/calico-typha-5fdd567c68-zgxjx","kube-system/coredns-7c65d6cfc9-dk5p7","kube-system/coredns-7c65d6cfc9-tp469","calico-system/calico-node-7pdcs","kube-system/kube-controller-manager-172-233-209-21","kube-system/kube-proxy-ff6qp","calico-system/csi-node-driver-84hvc","kube-system/kube-apiserver-172-233-209-21","kube-system/kube-scheduler-172-233-209-21"]
Aug 13 01:05:50.967944 kubelet[2717]: E0813 01:05:50.967852 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n"
Aug 13 01:05:50.967944 kubelet[2717]: E0813 01:05:50.967865 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-5fdd567c68-zgxjx"
Aug 13 01:05:50.967944 kubelet[2717]: E0813 01:05:50.967875 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dk5p7"
Aug 13 01:05:50.967944 kubelet[2717]: E0813 01:05:50.967882 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-tp469"
Aug 13 01:05:50.967944 kubelet[2717]: E0813 01:05:50.967890 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-7pdcs"
Aug 13 01:05:50.967944 kubelet[2717]: E0813 01:05:50.967899 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-209-21"
Aug 13 01:05:50.967944 kubelet[2717]: E0813 01:05:50.967906 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-ff6qp"
Aug 13 01:05:50.967944 kubelet[2717]: E0813 01:05:50.967918 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-84hvc"
Aug 13 01:05:50.967944 kubelet[2717]: E0813 01:05:50.967926 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-209-21"
Aug 13 01:05:50.967944 kubelet[2717]: E0813 01:05:50.967934 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-209-21"
Aug 13 01:05:50.967944 kubelet[2717]: I0813 01:05:50.967943 2717 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:05:53.765359 kubelet[2717]: E0813 01:05:53.765310 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\"\"" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" podUID="a7e8405c-2c82-420c-bac7-a7277571f968"
Aug 13 01:05:54.263590 systemd[1]: Started sshd@21-172.233.209.21:22-147.75.109.163:45594.service - OpenSSH per-connection server daemon (147.75.109.163:45594).
Aug 13 01:05:54.603154 sshd[5910]: Accepted publickey for core from 147.75.109.163 port 45594 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU
Aug 13 01:05:54.604352 sshd-session[5910]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:05:54.608669 systemd-logind[1550]: New session 22 of user core.
Aug 13 01:05:54.616307 systemd[1]: Started session-22.scope - Session 22 of User core.
Aug 13 01:05:54.904047 sshd[5912]: Connection closed by 147.75.109.163 port 45594
Aug 13 01:05:54.904867 sshd-session[5910]: pam_unix(sshd:session): session closed for user core
Aug 13 01:05:54.909177 systemd[1]: sshd@21-172.233.209.21:22-147.75.109.163:45594.service: Deactivated successfully.
Aug 13 01:05:54.911081 systemd[1]: session-22.scope: Deactivated successfully.
Aug 13 01:05:54.912046 systemd-logind[1550]: Session 22 logged out. Waiting for processes to exit.
Aug 13 01:05:54.913411 systemd-logind[1550]: Removed session 22.
Aug 13 01:05:59.968651 systemd[1]: Started sshd@22-172.233.209.21:22-147.75.109.163:41994.service - OpenSSH per-connection server daemon (147.75.109.163:41994).
Aug 13 01:06:00.313791 sshd[5930]: Accepted publickey for core from 147.75.109.163 port 41994 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU
Aug 13 01:06:00.315542 sshd-session[5930]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:06:00.321421 systemd-logind[1550]: New session 23 of user core.
Aug 13 01:06:00.328718 systemd[1]: Started session-23.scope - Session 23 of User core.
Aug 13 01:06:00.634077 sshd[5932]: Connection closed by 147.75.109.163 port 41994
Aug 13 01:06:00.635840 sshd-session[5930]: pam_unix(sshd:session): session closed for user core
Aug 13 01:06:00.640252 systemd-logind[1550]: Session 23 logged out. Waiting for processes to exit.
Aug 13 01:06:00.641098 systemd[1]: sshd@22-172.233.209.21:22-147.75.109.163:41994.service: Deactivated successfully.
Aug 13 01:06:00.643035 systemd[1]: session-23.scope: Deactivated successfully.
Aug 13 01:06:00.647622 systemd-logind[1550]: Removed session 23.
Aug 13 01:06:00.996946 kubelet[2717]: I0813 01:06:00.996915 2717 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:06:00.996946 kubelet[2717]: I0813 01:06:00.996952 2717 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:06:00.998538 kubelet[2717]: I0813 01:06:00.998517 2717 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:06:01.012300 kubelet[2717]: I0813 01:06:01.012283 2717 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:06:01.012509 kubelet[2717]: I0813 01:06:01.012487 2717 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-564d8b8748-ps97n","calico-system/calico-typha-5fdd567c68-zgxjx","kube-system/coredns-7c65d6cfc9-dk5p7","kube-system/coredns-7c65d6cfc9-tp469","calico-system/calico-node-7pdcs","kube-system/kube-controller-manager-172-233-209-21","kube-system/kube-proxy-ff6qp","calico-system/csi-node-driver-84hvc","kube-system/kube-apiserver-172-233-209-21","kube-system/kube-scheduler-172-233-209-21"]
Aug 13 01:06:01.012603 kubelet[2717]: E0813 01:06:01.012521 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n"
Aug 13 01:06:01.012603 kubelet[2717]: E0813 01:06:01.012534 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-5fdd567c68-zgxjx"
Aug 13 01:06:01.012603 kubelet[2717]: E0813 01:06:01.012543 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dk5p7"
Aug 13 01:06:01.012603 kubelet[2717]: E0813 01:06:01.012553 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-tp469"
Aug 13 01:06:01.012603 kubelet[2717]: E0813 01:06:01.012561 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-7pdcs"
Aug 13 01:06:01.012603 kubelet[2717]: E0813 01:06:01.012569 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-209-21"
Aug 13 01:06:01.012603 kubelet[2717]: E0813 01:06:01.012576 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-ff6qp"
Aug 13 01:06:01.012603 kubelet[2717]: E0813 01:06:01.012586 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-84hvc"
Aug 13 01:06:01.012603 kubelet[2717]: E0813 01:06:01.012594 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-209-21"
Aug 13 01:06:01.012603 kubelet[2717]: E0813 01:06:01.012602 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-209-21"
Aug 13 01:06:01.012811 kubelet[2717]: I0813 01:06:01.012610 2717 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:06:05.696499 systemd[1]: Started sshd@23-172.233.209.21:22-147.75.109.163:42000.service - OpenSSH per-connection server daemon (147.75.109.163:42000).
Aug 13 01:06:05.764620 containerd[1575]: time="2025-08-13T01:06:05.764584076Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\""
Aug 13 01:06:06.039258 sshd[5948]: Accepted publickey for core from 147.75.109.163 port 42000 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU
Aug 13 01:06:06.040982 sshd-session[5948]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:06:06.051873 systemd-logind[1550]: New session 24 of user core.
Aug 13 01:06:06.056311 systemd[1]: Started session-24.scope - Session 24 of User core.
Aug 13 01:06:06.355101 sshd[5950]: Connection closed by 147.75.109.163 port 42000
Aug 13 01:06:06.355874 sshd-session[5948]: pam_unix(sshd:session): session closed for user core
Aug 13 01:06:06.360148 systemd[1]: sshd@23-172.233.209.21:22-147.75.109.163:42000.service: Deactivated successfully.
Aug 13 01:06:06.362820 systemd[1]: session-24.scope: Deactivated successfully.
Aug 13 01:06:06.363644 systemd-logind[1550]: Session 24 logged out. Waiting for processes to exit.
Aug 13 01:06:06.365062 systemd-logind[1550]: Removed session 24.
Aug 13 01:06:07.562886 containerd[1575]: time="2025-08-13T01:06:07.562834555Z" level=error msg="failed to cleanup \"extract-339944724-7EBc sha256:8c887db5e1c1509bbe47d7287572f140b60a8c0adc0202d6183f3ae0c5f0b532\"" error="write /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db: no space left on device"
Aug 13 01:06:07.563382 containerd[1575]: time="2025-08-13T01:06:07.563344271Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device"
Aug 13 01:06:07.563451 containerd[1575]: time="2025-08-13T01:06:07.563414780Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=42995946"
Aug 13 01:06:07.563658 kubelet[2717]: E0813 01:06:07.563616 2717 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2"
Aug 13 01:06:07.563926 kubelet[2717]: E0813 01:06:07.563669 2717 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2"
Aug 13 01:06:07.563926 kubelet[2717]: E0813 01:06:07.563779 2717 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vqfjp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-564d8b8748-ps97n_calico-system(a7e8405c-2c82-420c-bac7-a7277571f968): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" logger="UnhandledError"
Aug 13 01:06:07.565159 kubelet[2717]: E0813 01:06:07.565121 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device\"" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" podUID="a7e8405c-2c82-420c-bac7-a7277571f968"
Aug 13 01:06:09.763737 kubelet[2717]: E0813 01:06:09.763708 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Aug 13 01:06:10.764483 kubelet[2717]: E0813 01:06:10.764359 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Aug 13 01:06:11.035266 kubelet[2717]: I0813 01:06:11.034904 2717 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:06:11.035266 kubelet[2717]: I0813 01:06:11.035131 2717 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:06:11.038062 kubelet[2717]: I0813 01:06:11.038033 2717 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:06:11.056739 kubelet[2717]: I0813 01:06:11.056721 2717 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:06:11.056900 kubelet[2717]: I0813 01:06:11.056868 2717 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-564d8b8748-ps97n","calico-system/calico-typha-5fdd567c68-zgxjx","kube-system/coredns-7c65d6cfc9-dk5p7","kube-system/coredns-7c65d6cfc9-tp469","calico-system/calico-node-7pdcs","kube-system/kube-controller-manager-172-233-209-21","kube-system/kube-proxy-ff6qp","calico-system/csi-node-driver-84hvc","kube-system/kube-apiserver-172-233-209-21","kube-system/kube-scheduler-172-233-209-21"]
Aug 13 01:06:11.056900 kubelet[2717]: E0813 01:06:11.056896 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n"
Aug 13 01:06:11.057249 kubelet[2717]: E0813 01:06:11.056909 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-5fdd567c68-zgxjx"
Aug 13 01:06:11.057249 kubelet[2717]: E0813 01:06:11.056918 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dk5p7"
Aug 13 01:06:11.057249 kubelet[2717]: E0813 01:06:11.056926 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-tp469"
Aug 13 01:06:11.057249 kubelet[2717]: E0813 01:06:11.056934 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-7pdcs"
Aug 13 01:06:11.057249 kubelet[2717]: E0813 01:06:11.056942 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-209-21"
Aug 13 01:06:11.057249 kubelet[2717]: E0813 01:06:11.056950 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-ff6qp"
Aug 13 01:06:11.057249 kubelet[2717]: E0813 01:06:11.056960 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-84hvc"
Aug 13 01:06:11.057249 kubelet[2717]: E0813 01:06:11.056968 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-209-21"
Aug 13 01:06:11.057249 kubelet[2717]: E0813 01:06:11.056975 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-209-21"
Aug 13 01:06:11.057249 kubelet[2717]: I0813 01:06:11.056984 2717 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:06:11.418991 systemd[1]: Started sshd@24-172.233.209.21:22-147.75.109.163:55344.service - OpenSSH per-connection server daemon (147.75.109.163:55344).
Aug 13 01:06:11.781887 sshd[5967]: Accepted publickey for core from 147.75.109.163 port 55344 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU
Aug 13 01:06:11.784953 sshd-session[5967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:06:11.792390 systemd-logind[1550]: New session 25 of user core.
Aug 13 01:06:11.796411 systemd[1]: Started session-25.scope - Session 25 of User core.
Aug 13 01:06:12.092780 sshd[5969]: Connection closed by 147.75.109.163 port 55344
Aug 13 01:06:12.093768 sshd-session[5967]: pam_unix(sshd:session): session closed for user core
Aug 13 01:06:12.097398 systemd-logind[1550]: Session 25 logged out. Waiting for processes to exit.
Aug 13 01:06:12.097975 systemd[1]: sshd@24-172.233.209.21:22-147.75.109.163:55344.service: Deactivated successfully.
Aug 13 01:06:12.099906 systemd[1]: session-25.scope: Deactivated successfully.
Aug 13 01:06:12.101940 systemd-logind[1550]: Removed session 25.
Aug 13 01:06:16.260947 containerd[1575]: time="2025-08-13T01:06:16.260826212Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6353c1e40a0c53412135ac4fbfd177615839c7b54973b0c0cad5c4d48fa83c96\" id:\"4df2ee419e1bfbc2b2119390bcaeb7617fff5787661db8bb097d4bb9335e0182\" pid:6001 exited_at:{seconds:1755047176 nanos:260228866}"
Aug 13 01:06:17.154534 systemd[1]: Started sshd@25-172.233.209.21:22-147.75.109.163:55350.service - OpenSSH per-connection server daemon (147.75.109.163:55350).
Aug 13 01:06:17.485581 sshd[6012]: Accepted publickey for core from 147.75.109.163 port 55350 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU
Aug 13 01:06:17.488345 sshd-session[6012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:06:17.494696 systemd-logind[1550]: New session 26 of user core.
Aug 13 01:06:17.500414 systemd[1]: Started session-26.scope - Session 26 of User core.
Aug 13 01:06:17.815501 sshd[6014]: Connection closed by 147.75.109.163 port 55350
Aug 13 01:06:17.816526 sshd-session[6012]: pam_unix(sshd:session): session closed for user core
Aug 13 01:06:17.821318 systemd[1]: sshd@25-172.233.209.21:22-147.75.109.163:55350.service: Deactivated successfully.
Aug 13 01:06:17.825636 systemd[1]: session-26.scope: Deactivated successfully.
Aug 13 01:06:17.832277 systemd-logind[1550]: Session 26 logged out. Waiting for processes to exit.
Aug 13 01:06:17.835350 systemd-logind[1550]: Removed session 26.
Aug 13 01:06:20.765223 kubelet[2717]: E0813 01:06:20.764453 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Aug 13 01:06:21.079946 kubelet[2717]: I0813 01:06:21.079748 2717 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:06:21.080170 kubelet[2717]: I0813 01:06:21.080158 2717 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:06:21.082234 kubelet[2717]: I0813 01:06:21.082223 2717 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:06:21.094719 kubelet[2717]: I0813 01:06:21.094692 2717 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:06:21.094867 kubelet[2717]: I0813 01:06:21.094846 2717 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-564d8b8748-ps97n","calico-system/calico-typha-5fdd567c68-zgxjx","kube-system/coredns-7c65d6cfc9-tp469","kube-system/coredns-7c65d6cfc9-dk5p7","calico-system/calico-node-7pdcs","kube-system/kube-controller-manager-172-233-209-21","kube-system/kube-proxy-ff6qp","calico-system/csi-node-driver-84hvc","kube-system/kube-apiserver-172-233-209-21","kube-system/kube-scheduler-172-233-209-21"]
Aug 13 01:06:21.094931 kubelet[2717]: E0813 01:06:21.094876 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n"
Aug 13 01:06:21.094931 kubelet[2717]: E0813 01:06:21.094888 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-5fdd567c68-zgxjx"
Aug 13 01:06:21.094931 kubelet[2717]: E0813 01:06:21.094896 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-tp469"
Aug 13 01:06:21.094931 kubelet[2717]: E0813 01:06:21.094903 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dk5p7"
Aug 13 01:06:21.094931 kubelet[2717]: E0813 01:06:21.094910 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-7pdcs"
Aug 13 01:06:21.094931 kubelet[2717]: E0813 01:06:21.094918 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-209-21"
Aug 13 01:06:21.094931 kubelet[2717]: E0813 01:06:21.094925 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-ff6qp"
Aug 13 01:06:21.094931 kubelet[2717]: E0813 01:06:21.094934 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-84hvc"
Aug 13 01:06:21.095158 kubelet[2717]: E0813 01:06:21.094944 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-209-21"
Aug 13 01:06:21.095158 kubelet[2717]: E0813 01:06:21.094951 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-209-21"
Aug 13 01:06:21.095158 kubelet[2717]: I0813 01:06:21.094959 2717 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:06:21.765621 kubelet[2717]: E0813 01:06:21.765275 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\"\"" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" podUID="a7e8405c-2c82-420c-bac7-a7277571f968"
Aug 13 01:06:22.877430 systemd[1]: Started sshd@26-172.233.209.21:22-147.75.109.163:42004.service - OpenSSH per-connection server daemon (147.75.109.163:42004).
Aug 13 01:06:23.207589 sshd[6026]: Accepted publickey for core from 147.75.109.163 port 42004 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU
Aug 13 01:06:23.209031 sshd-session[6026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:06:23.213658 systemd-logind[1550]: New session 27 of user core.
Aug 13 01:06:23.219311 systemd[1]: Started session-27.scope - Session 27 of User core.
Aug 13 01:06:23.517753 sshd[6028]: Connection closed by 147.75.109.163 port 42004
Aug 13 01:06:23.518315 sshd-session[6026]: pam_unix(sshd:session): session closed for user core
Aug 13 01:06:23.522950 systemd[1]: sshd@26-172.233.209.21:22-147.75.109.163:42004.service: Deactivated successfully.
Aug 13 01:06:23.525918 systemd[1]: session-27.scope: Deactivated successfully.
Aug 13 01:06:23.529584 systemd-logind[1550]: Session 27 logged out. Waiting for processes to exit.
Aug 13 01:06:23.531898 systemd-logind[1550]: Removed session 27.
Aug 13 01:06:28.581347 systemd[1]: Started sshd@27-172.233.209.21:22-147.75.109.163:36350.service - OpenSSH per-connection server daemon (147.75.109.163:36350).
Aug 13 01:06:28.918182 sshd[6041]: Accepted publickey for core from 147.75.109.163 port 36350 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU
Aug 13 01:06:28.919542 sshd-session[6041]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:06:28.924741 systemd-logind[1550]: New session 28 of user core.
Aug 13 01:06:28.930516 systemd[1]: Started session-28.scope - Session 28 of User core.
Aug 13 01:06:29.226672 sshd[6043]: Connection closed by 147.75.109.163 port 36350
Aug 13 01:06:29.228205 sshd-session[6041]: pam_unix(sshd:session): session closed for user core
Aug 13 01:06:29.231049 systemd[1]: sshd@27-172.233.209.21:22-147.75.109.163:36350.service: Deactivated successfully.
Aug 13 01:06:29.233183 systemd[1]: session-28.scope: Deactivated successfully.
Aug 13 01:06:29.234901 systemd-logind[1550]: Session 28 logged out. Waiting for processes to exit.
Aug 13 01:06:29.236538 systemd-logind[1550]: Removed session 28.
Aug 13 01:06:29.764525 kubelet[2717]: E0813 01:06:29.764489 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Aug 13 01:06:31.115397 kubelet[2717]: I0813 01:06:31.115353 2717 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:06:31.115397 kubelet[2717]: I0813 01:06:31.115388 2717 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:06:31.116990 kubelet[2717]: I0813 01:06:31.116968 2717 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:06:31.136749 kubelet[2717]: I0813 01:06:31.136719 2717 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:06:31.136942 kubelet[2717]: I0813 01:06:31.136867 2717 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-564d8b8748-ps97n","calico-system/calico-typha-5fdd567c68-zgxjx","kube-system/coredns-7c65d6cfc9-tp469","kube-system/coredns-7c65d6cfc9-dk5p7","calico-system/calico-node-7pdcs","kube-system/kube-controller-manager-172-233-209-21","kube-system/kube-proxy-ff6qp","calico-system/csi-node-driver-84hvc","kube-system/kube-apiserver-172-233-209-21","kube-system/kube-scheduler-172-233-209-21"]
Aug 13 01:06:31.136942 kubelet[2717]: E0813 01:06:31.136894 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n"
Aug 13 01:06:31.136942 kubelet[2717]: E0813 01:06:31.136906 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-5fdd567c68-zgxjx"
Aug 13 01:06:31.136942 kubelet[2717]: E0813 01:06:31.136915 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-tp469"
Aug 13 01:06:31.136942 kubelet[2717]: E0813 01:06:31.136922 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dk5p7"
Aug 13 01:06:31.136942 kubelet[2717]: E0813 01:06:31.136929 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-7pdcs"
Aug 13 01:06:31.136942 kubelet[2717]: E0813 01:06:31.136937 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-209-21"
Aug 13 01:06:31.136942 kubelet[2717]: E0813 01:06:31.136945 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-ff6qp"
Aug 13 01:06:31.136942 kubelet[2717]: E0813 01:06:31.136954 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-84hvc"
Aug 13 01:06:31.136942 kubelet[2717]: E0813 01:06:31.136962 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-209-21"
Aug 13 01:06:31.136942 kubelet[2717]: E0813 01:06:31.136969 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-209-21"
Aug 13 01:06:31.136942 kubelet[2717]: I0813 01:06:31.136977 2717 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:06:33.764180 kubelet[2717]: E0813 01:06:33.764147 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Aug 13 01:06:34.287030 systemd[1]: Started sshd@28-172.233.209.21:22-147.75.109.163:36358.service - OpenSSH per-connection server daemon (147.75.109.163:36358).
Aug 13 01:06:34.628803 sshd[6057]: Accepted publickey for core from 147.75.109.163 port 36358 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU
Aug 13 01:06:34.630879 sshd-session[6057]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:06:34.641084 systemd-logind[1550]: New session 29 of user core.
Aug 13 01:06:34.645391 systemd[1]: Started session-29.scope - Session 29 of User core.
Aug 13 01:06:34.770055 kubelet[2717]: E0813 01:06:34.770015 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\"\"" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" podUID="a7e8405c-2c82-420c-bac7-a7277571f968"
Aug 13 01:06:34.951185 sshd[6060]: Connection closed by 147.75.109.163 port 36358
Aug 13 01:06:34.952702 sshd-session[6057]: pam_unix(sshd:session): session closed for user core
Aug 13 01:06:34.957035 systemd[1]: sshd@28-172.233.209.21:22-147.75.109.163:36358.service: Deactivated successfully.
Aug 13 01:06:34.959621 systemd[1]: session-29.scope: Deactivated successfully.
Aug 13 01:06:34.960683 systemd-logind[1550]: Session 29 logged out. Waiting for processes to exit.
Aug 13 01:06:34.962651 systemd-logind[1550]: Removed session 29.
Aug 13 01:06:40.012564 systemd[1]: Started sshd@29-172.233.209.21:22-147.75.109.163:37598.service - OpenSSH per-connection server daemon (147.75.109.163:37598).
Aug 13 01:06:40.353403 sshd[6072]: Accepted publickey for core from 147.75.109.163 port 37598 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU
Aug 13 01:06:40.354872 sshd-session[6072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:06:40.359518 systemd-logind[1550]: New session 30 of user core.
Aug 13 01:06:40.364316 systemd[1]: Started session-30.scope - Session 30 of User core.
Aug 13 01:06:40.662266 sshd[6075]: Connection closed by 147.75.109.163 port 37598
Aug 13 01:06:40.663855 sshd-session[6072]: pam_unix(sshd:session): session closed for user core
Aug 13 01:06:40.667808 systemd-logind[1550]: Session 30 logged out. Waiting for processes to exit.
Aug 13 01:06:40.668587 systemd[1]: sshd@29-172.233.209.21:22-147.75.109.163:37598.service: Deactivated successfully.
Aug 13 01:06:40.671278 systemd[1]: session-30.scope: Deactivated successfully.
Aug 13 01:06:40.673481 systemd-logind[1550]: Removed session 30.
Aug 13 01:06:41.157112 kubelet[2717]: I0813 01:06:41.157075 2717 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:06:41.157112 kubelet[2717]: I0813 01:06:41.157116 2717 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:06:41.159547 kubelet[2717]: I0813 01:06:41.159267 2717 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:06:41.176333 kubelet[2717]: I0813 01:06:41.176295 2717 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:06:41.176429 kubelet[2717]: I0813 01:06:41.176400 2717 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-564d8b8748-ps97n","calico-system/calico-typha-5fdd567c68-zgxjx","kube-system/coredns-7c65d6cfc9-dk5p7","kube-system/coredns-7c65d6cfc9-tp469","calico-system/calico-node-7pdcs","kube-system/kube-controller-manager-172-233-209-21","kube-system/kube-proxy-ff6qp","calico-system/csi-node-driver-84hvc","kube-system/kube-apiserver-172-233-209-21","kube-system/kube-scheduler-172-233-209-21"]
Aug 13 01:06:41.176429 kubelet[2717]: E0813 01:06:41.176428 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n"
Aug 13 01:06:41.176513 kubelet[2717]: E0813 01:06:41.176440 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-5fdd567c68-zgxjx"
Aug 13 01:06:41.176513 kubelet[2717]: E0813 01:06:41.176449 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dk5p7"
Aug 13 01:06:41.176513 kubelet[2717]: E0813 01:06:41.176458 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-tp469"
Aug 13 01:06:41.176513 kubelet[2717]: E0813 01:06:41.176466 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-7pdcs"
Aug 13 01:06:41.176513 kubelet[2717]: E0813 01:06:41.176473 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-209-21"
Aug 13 01:06:41.176513 kubelet[2717]: E0813 01:06:41.176481 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-ff6qp"
Aug 13 01:06:41.176513 kubelet[2717]: E0813 01:06:41.176490 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-84hvc"
Aug 13 01:06:41.176513 kubelet[2717]: E0813 01:06:41.176498 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-209-21"
Aug 13 01:06:41.176513 kubelet[2717]: E0813 01:06:41.176505 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-209-21"
Aug 13 01:06:41.176513 kubelet[2717]: I0813 01:06:41.176513 2717 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:06:45.726348 systemd[1]: Started sshd@30-172.233.209.21:22-147.75.109.163:37614.service - OpenSSH per-connection server daemon (147.75.109.163:37614).
Aug 13 01:06:46.065184 sshd[6087]: Accepted publickey for core from 147.75.109.163 port 37614 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU
Aug 13 01:06:46.067538 sshd-session[6087]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:06:46.074031 systemd-logind[1550]: New session 31 of user core.
Aug 13 01:06:46.079302 systemd[1]: Started session-31.scope - Session 31 of User core.
Aug 13 01:06:46.236908 containerd[1575]: time="2025-08-13T01:06:46.236862963Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6353c1e40a0c53412135ac4fbfd177615839c7b54973b0c0cad5c4d48fa83c96\" id:\"dbff6d41dca3e1d2bb5d0b53318a55f6f32f810b207e3e0c5de924b66626bd7c\" pid:6104 exited_at:{seconds:1755047206 nanos:236605795}"
Aug 13 01:06:46.369377 sshd[6090]: Connection closed by 147.75.109.163 port 37614
Aug 13 01:06:46.371002 sshd-session[6087]: pam_unix(sshd:session): session closed for user core
Aug 13 01:06:46.377125 systemd[1]: sshd@30-172.233.209.21:22-147.75.109.163:37614.service: Deactivated successfully.
Aug 13 01:06:46.380246 systemd[1]: session-31.scope: Deactivated successfully.
Aug 13 01:06:46.381185 systemd-logind[1550]: Session 31 logged out. Waiting for processes to exit.
Aug 13 01:06:46.383688 systemd-logind[1550]: Removed session 31.
Aug 13 01:06:48.764918 kubelet[2717]: E0813 01:06:48.764747 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Aug 13 01:06:49.764840 kubelet[2717]: E0813 01:06:49.764801 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\"\"" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" podUID="a7e8405c-2c82-420c-bac7-a7277571f968"
Aug 13 01:06:51.201614 kubelet[2717]: I0813 01:06:51.201572 2717 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:06:51.201614 kubelet[2717]: I0813 01:06:51.201609 2717 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:06:51.203802 kubelet[2717]: I0813 01:06:51.203786 2717 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:06:51.217660 kubelet[2717]: I0813 01:06:51.217640 2717 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:06:51.217790 kubelet[2717]: I0813 01:06:51.217770 2717 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-564d8b8748-ps97n","calico-system/calico-typha-5fdd567c68-zgxjx","kube-system/coredns-7c65d6cfc9-tp469","kube-system/coredns-7c65d6cfc9-dk5p7","calico-system/calico-node-7pdcs","kube-system/kube-controller-manager-172-233-209-21","kube-system/kube-proxy-ff6qp","calico-system/csi-node-driver-84hvc","kube-system/kube-apiserver-172-233-209-21","kube-system/kube-scheduler-172-233-209-21"]
Aug 13 01:06:51.217871 kubelet[2717]: E0813 01:06:51.217799 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n"
Aug 13 01:06:51.217871 kubelet[2717]: E0813 01:06:51.217813 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-5fdd567c68-zgxjx"
Aug 13 01:06:51.217871 kubelet[2717]: E0813 01:06:51.217822 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-tp469"
Aug 13 01:06:51.217871 kubelet[2717]: E0813 01:06:51.217830 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dk5p7"
Aug 13 01:06:51.217871 kubelet[2717]: E0813 01:06:51.217839 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-7pdcs"
Aug 13 01:06:51.217871 kubelet[2717]: E0813 01:06:51.217848 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-209-21"
Aug 13 01:06:51.217871 kubelet[2717]: E0813 01:06:51.217855 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-ff6qp"
Aug 13 01:06:51.217871 kubelet[2717]: E0813 01:06:51.217865 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-84hvc"
Aug 13 01:06:51.217871 kubelet[2717]: E0813 01:06:51.217873 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-209-21"
Aug 13 01:06:51.217871 kubelet[2717]: E0813 01:06:51.217880 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-209-21"
Aug 13 01:06:51.217871 kubelet[2717]: I0813 01:06:51.217889 2717 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:06:51.434465 systemd[1]: Started sshd@31-172.233.209.21:22-147.75.109.163:56184.service - OpenSSH per-connection server daemon (147.75.109.163:56184).
Aug 13 01:06:51.775369 sshd[6127]: Accepted publickey for core from 147.75.109.163 port 56184 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU
Aug 13 01:06:51.777707 sshd-session[6127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:06:51.784233 systemd-logind[1550]: New session 32 of user core.
Aug 13 01:06:51.793557 systemd[1]: Started session-32.scope - Session 32 of User core.
Aug 13 01:06:52.100983 sshd[6129]: Connection closed by 147.75.109.163 port 56184
Aug 13 01:06:52.101851 sshd-session[6127]: pam_unix(sshd:session): session closed for user core
Aug 13 01:06:52.106695 systemd-logind[1550]: Session 32 logged out. Waiting for processes to exit.
Aug 13 01:06:52.106839 systemd[1]: sshd@31-172.233.209.21:22-147.75.109.163:56184.service: Deactivated successfully.
Aug 13 01:06:52.109110 systemd[1]: session-32.scope: Deactivated successfully.
Aug 13 01:06:52.112696 systemd-logind[1550]: Removed session 32.
Aug 13 01:06:57.167037 systemd[1]: Started sshd@32-172.233.209.21:22-147.75.109.163:56188.service - OpenSSH per-connection server daemon (147.75.109.163:56188).
Aug 13 01:06:57.515495 sshd[6142]: Accepted publickey for core from 147.75.109.163 port 56188 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU
Aug 13 01:06:57.517201 sshd-session[6142]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:06:57.522344 systemd-logind[1550]: New session 33 of user core.
Aug 13 01:06:57.529344 systemd[1]: Started session-33.scope - Session 33 of User core.
Aug 13 01:06:57.765821 kubelet[2717]: E0813 01:06:57.764509 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Aug 13 01:06:57.844406 sshd[6144]: Connection closed by 147.75.109.163 port 56188
Aug 13 01:06:57.845396 sshd-session[6142]: pam_unix(sshd:session): session closed for user core
Aug 13 01:06:57.850810 systemd[1]: sshd@32-172.233.209.21:22-147.75.109.163:56188.service: Deactivated successfully.
Aug 13 01:06:57.853542 systemd[1]: session-33.scope: Deactivated successfully.
Aug 13 01:06:57.855286 systemd-logind[1550]: Session 33 logged out. Waiting for processes to exit.
Aug 13 01:06:57.857024 systemd-logind[1550]: Removed session 33.
Aug 13 01:07:01.242839 kubelet[2717]: I0813 01:07:01.242807 2717 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:07:01.242839 kubelet[2717]: I0813 01:07:01.242849 2717 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:07:01.244759 kubelet[2717]: I0813 01:07:01.244729 2717 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:07:01.260758 kubelet[2717]: I0813 01:07:01.260731 2717 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:07:01.260884 kubelet[2717]: I0813 01:07:01.260855 2717 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-564d8b8748-ps97n","calico-system/calico-typha-5fdd567c68-zgxjx","kube-system/coredns-7c65d6cfc9-tp469","kube-system/coredns-7c65d6cfc9-dk5p7","calico-system/calico-node-7pdcs","kube-system/kube-controller-manager-172-233-209-21","kube-system/kube-proxy-ff6qp","calico-system/csi-node-driver-84hvc","kube-system/kube-apiserver-172-233-209-21","kube-system/kube-scheduler-172-233-209-21"]
Aug 13 01:07:01.260945 kubelet[2717]: E0813 01:07:01.260884 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n"
Aug 13 01:07:01.260945 kubelet[2717]: E0813 01:07:01.260897 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-5fdd567c68-zgxjx"
Aug 13 01:07:01.260945 kubelet[2717]: E0813 01:07:01.260905 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-tp469"
Aug 13 01:07:01.260945 kubelet[2717]: E0813 01:07:01.260913 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dk5p7"
Aug 13 01:07:01.260945 kubelet[2717]: E0813 01:07:01.260920 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-7pdcs"
Aug 13 01:07:01.260945 kubelet[2717]: E0813 01:07:01.260929 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-209-21"
Aug 13 01:07:01.260945 kubelet[2717]: E0813 01:07:01.260937 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-ff6qp"
Aug 13 01:07:01.260945 kubelet[2717]: E0813 01:07:01.260947 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-84hvc"
Aug 13 01:07:01.261139 kubelet[2717]: E0813 01:07:01.260954 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-209-21"
Aug 13 01:07:01.261139 kubelet[2717]: E0813 01:07:01.260962 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-209-21"
Aug 13 01:07:01.261139 kubelet[2717]: I0813 01:07:01.260970 2717 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:07:02.905331 systemd[1]: Started sshd@33-172.233.209.21:22-147.75.109.163:41138.service - OpenSSH per-connection server daemon (147.75.109.163:41138).
Aug 13 01:07:03.244909 sshd[6156]: Accepted publickey for core from 147.75.109.163 port 41138 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU
Aug 13 01:07:03.246441 sshd-session[6156]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:07:03.251867 systemd-logind[1550]: New session 34 of user core.
Aug 13 01:07:03.256337 systemd[1]: Started session-34.scope - Session 34 of User core.
Aug 13 01:07:03.555526 sshd[6160]: Connection closed by 147.75.109.163 port 41138
Aug 13 01:07:03.556337 sshd-session[6156]: pam_unix(sshd:session): session closed for user core
Aug 13 01:07:03.560495 systemd-logind[1550]: Session 34 logged out. Waiting for processes to exit.
Aug 13 01:07:03.561118 systemd[1]: sshd@33-172.233.209.21:22-147.75.109.163:41138.service: Deactivated successfully.
Aug 13 01:07:03.563756 systemd[1]: session-34.scope: Deactivated successfully.
Aug 13 01:07:03.565179 systemd-logind[1550]: Removed session 34.
Aug 13 01:07:04.770958 kubelet[2717]: E0813 01:07:04.770875 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\"\"" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" podUID="a7e8405c-2c82-420c-bac7-a7277571f968"
Aug 13 01:07:08.618636 systemd[1]: Started sshd@34-172.233.209.21:22-147.75.109.163:40052.service - OpenSSH per-connection server daemon (147.75.109.163:40052).
Aug 13 01:07:08.951932 sshd[6173]: Accepted publickey for core from 147.75.109.163 port 40052 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU
Aug 13 01:07:08.953297 sshd-session[6173]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:07:08.957432 systemd-logind[1550]: New session 35 of user core.
Aug 13 01:07:08.962337 systemd[1]: Started session-35.scope - Session 35 of User core.
Aug 13 01:07:09.288754 sshd[6175]: Connection closed by 147.75.109.163 port 40052
Aug 13 01:07:09.289662 sshd-session[6173]: pam_unix(sshd:session): session closed for user core
Aug 13 01:07:09.294041 systemd[1]: sshd@34-172.233.209.21:22-147.75.109.163:40052.service: Deactivated successfully.
Aug 13 01:07:09.296642 systemd[1]: session-35.scope: Deactivated successfully.
Aug 13 01:07:09.298289 systemd-logind[1550]: Session 35 logged out. Waiting for processes to exit.
Aug 13 01:07:09.300056 systemd-logind[1550]: Removed session 35.
Aug 13 01:07:11.285374 kubelet[2717]: I0813 01:07:11.285327 2717 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:07:11.285374 kubelet[2717]: I0813 01:07:11.285368 2717 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:07:11.287701 kubelet[2717]: I0813 01:07:11.287659 2717 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:07:11.302451 kubelet[2717]: I0813 01:07:11.302429 2717 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:07:11.302614 kubelet[2717]: I0813 01:07:11.302577 2717 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-564d8b8748-ps97n","calico-system/calico-typha-5fdd567c68-zgxjx","kube-system/coredns-7c65d6cfc9-dk5p7","kube-system/coredns-7c65d6cfc9-tp469","calico-system/calico-node-7pdcs","kube-system/kube-controller-manager-172-233-209-21","kube-system/kube-proxy-ff6qp","calico-system/csi-node-driver-84hvc","kube-system/kube-apiserver-172-233-209-21","kube-system/kube-scheduler-172-233-209-21"]
Aug 13 01:07:11.302614 kubelet[2717]: E0813 01:07:11.302608 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n"
Aug 13 01:07:11.302723 kubelet[2717]: E0813 01:07:11.302622 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-5fdd567c68-zgxjx"
Aug 13 01:07:11.302723 kubelet[2717]: E0813 01:07:11.302631 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dk5p7"
Aug 13 01:07:11.302723 kubelet[2717]: E0813 01:07:11.302639 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-tp469"
Aug 13 01:07:11.302723 kubelet[2717]: E0813 01:07:11.302647 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-7pdcs"
Aug 13 01:07:11.302723 kubelet[2717]: E0813 01:07:11.302654 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-209-21"
Aug 13 01:07:11.302723 kubelet[2717]: E0813 01:07:11.302661 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-ff6qp"
Aug 13 01:07:11.302723 kubelet[2717]: E0813 01:07:11.302672 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-84hvc"
Aug 13 01:07:11.302723 kubelet[2717]: E0813 01:07:11.302680 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-209-21"
Aug 13 01:07:11.302723 kubelet[2717]: E0813 01:07:11.302687 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-209-21"
Aug 13 01:07:11.302723 kubelet[2717]: I0813 01:07:11.302695 2717 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:07:14.354393 systemd[1]: Started sshd@35-172.233.209.21:22-147.75.109.163:40056.service - OpenSSH per-connection server daemon (147.75.109.163:40056).
Aug 13 01:07:14.688549 sshd[6187]: Accepted publickey for core from 147.75.109.163 port 40056 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU
Aug 13 01:07:14.690504 sshd-session[6187]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:07:14.696540 systemd-logind[1550]: New session 36 of user core.
Aug 13 01:07:14.707324 systemd[1]: Started session-36.scope - Session 36 of User core.
Aug 13 01:07:15.006382 sshd[6189]: Connection closed by 147.75.109.163 port 40056
Aug 13 01:07:15.006940 sshd-session[6187]: pam_unix(sshd:session): session closed for user core
Aug 13 01:07:15.011584 systemd-logind[1550]: Session 36 logged out. Waiting for processes to exit.
Aug 13 01:07:15.012456 systemd[1]: sshd@35-172.233.209.21:22-147.75.109.163:40056.service: Deactivated successfully.
Aug 13 01:07:15.014875 systemd[1]: session-36.scope: Deactivated successfully.
Aug 13 01:07:15.016824 systemd-logind[1550]: Removed session 36.
Aug 13 01:07:16.241518 containerd[1575]: time="2025-08-13T01:07:16.241463590Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6353c1e40a0c53412135ac4fbfd177615839c7b54973b0c0cad5c4d48fa83c96\" id:\"564dee260ae7fcbb056e2f525cd6fe289acab819e757aeb78d6a97b8906870ca\" pid:6213 exited_at:{seconds:1755047236 nanos:240974914}"
Aug 13 01:07:16.765411 kubelet[2717]: E0813 01:07:16.765050 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\"\"" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" podUID="a7e8405c-2c82-420c-bac7-a7277571f968"
Aug 13 01:07:20.072395 systemd[1]: Started sshd@36-172.233.209.21:22-147.75.109.163:51902.service - OpenSSH per-connection server daemon (147.75.109.163:51902).
Aug 13 01:07:20.404229 sshd[6231]: Accepted publickey for core from 147.75.109.163 port 51902 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU
Aug 13 01:07:20.405826 sshd-session[6231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:07:20.410248 systemd-logind[1550]: New session 37 of user core.
Aug 13 01:07:20.416522 systemd[1]: Started session-37.scope - Session 37 of User core.
Aug 13 01:07:20.570263 containerd[1575]: time="2025-08-13T01:07:20.570147409Z" level=warning msg="container event discarded" container=bd651942f9d3447e48f150306a91a5368a971d0f2692282ecc148fdc85f369be type=CONTAINER_CREATED_EVENT
Aug 13 01:07:20.580472 containerd[1575]: time="2025-08-13T01:07:20.580430118Z" level=warning msg="container event discarded" container=bd651942f9d3447e48f150306a91a5368a971d0f2692282ecc148fdc85f369be type=CONTAINER_STARTED_EVENT
Aug 13 01:07:20.603121 containerd[1575]: time="2025-08-13T01:07:20.603060611Z" level=warning msg="container event discarded" container=af7599726c4a7bb0f84095840add605519925661b1fd0d7c44e6e916a2db03a0 type=CONTAINER_CREATED_EVENT
Aug 13 01:07:20.603338 containerd[1575]: time="2025-08-13T01:07:20.603094131Z" level=warning msg="container event discarded" container=b7d5a29b541638c5db5273cbe54b0c1bb924d30867f082edced29f91f09be526 type=CONTAINER_CREATED_EVENT
Aug 13 01:07:20.603338 containerd[1575]: time="2025-08-13T01:07:20.603258400Z" level=warning msg="container event discarded" container=b7d5a29b541638c5db5273cbe54b0c1bb924d30867f082edced29f91f09be526 type=CONTAINER_STARTED_EVENT
Aug 13 01:07:20.648477 containerd[1575]: time="2025-08-13T01:07:20.648424467Z" level=warning msg="container event discarded" container=0b6e87b87d88e3bd18447e040239f34829b57f7871e191e0c1fc3e8ad09456eb type=CONTAINER_CREATED_EVENT
Aug 13 01:07:20.660915 containerd[1575]: time="2025-08-13T01:07:20.660734282Z" level=warning msg="container event discarded" container=75e99c3b3cc7006464adc35ae58981cc654fb986a652f2e7217296735a7b8e4e type=CONTAINER_CREATED_EVENT
Aug 13 01:07:20.660915 containerd[1575]: time="2025-08-13T01:07:20.660790351Z" level=warning msg="container event discarded" container=75e99c3b3cc7006464adc35ae58981cc654fb986a652f2e7217296735a7b8e4e type=CONTAINER_STARTED_EVENT
Aug 13 01:07:20.698976 containerd[1575]: time="2025-08-13T01:07:20.698946807Z" level=warning msg="container event discarded" container=3073eeb3c8dd69d4913099e5d19baef1ad1f1065889b95e1161dfed0f2b7f499 type=CONTAINER_CREATED_EVENT
Aug 13 01:07:20.709447 sshd[6233]: Connection closed by 147.75.109.163 port 51902
Aug 13 01:07:20.710137 sshd-session[6231]: pam_unix(sshd:session): session closed for user core
Aug 13 01:07:20.714592 systemd-logind[1550]: Session 37 logged out. Waiting for processes to exit.
Aug 13 01:07:20.714958 systemd[1]: sshd@36-172.233.209.21:22-147.75.109.163:51902.service: Deactivated successfully.
Aug 13 01:07:20.717125 systemd[1]: session-37.scope: Deactivated successfully.
Aug 13 01:07:20.721756 systemd-logind[1550]: Removed session 37.
Aug 13 01:07:20.722323 containerd[1575]: time="2025-08-13T01:07:20.721901068Z" level=warning msg="container event discarded" container=af7599726c4a7bb0f84095840add605519925661b1fd0d7c44e6e916a2db03a0 type=CONTAINER_STARTED_EVENT
Aug 13 01:07:20.780608 containerd[1575]: time="2025-08-13T01:07:20.780565521Z" level=warning msg="container event discarded" container=0b6e87b87d88e3bd18447e040239f34829b57f7871e191e0c1fc3e8ad09456eb type=CONTAINER_STARTED_EVENT
Aug 13 01:07:20.847877 containerd[1575]: time="2025-08-13T01:07:20.847819575Z" level=warning msg="container event discarded" container=3073eeb3c8dd69d4913099e5d19baef1ad1f1065889b95e1161dfed0f2b7f499 type=CONTAINER_STARTED_EVENT
Aug 13 01:07:21.333353 kubelet[2717]: I0813 01:07:21.332767 2717 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:07:21.333353 kubelet[2717]: I0813 01:07:21.332815 2717 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:07:21.335265 kubelet[2717]: I0813 01:07:21.334995 2717 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:07:21.349422 kubelet[2717]: I0813 01:07:21.349148 2717 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:07:21.349422 kubelet[2717]: I0813 01:07:21.349291 2717 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-564d8b8748-ps97n","calico-system/calico-typha-5fdd567c68-zgxjx","kube-system/coredns-7c65d6cfc9-tp469","kube-system/coredns-7c65d6cfc9-dk5p7","calico-system/calico-node-7pdcs","kube-system/kube-controller-manager-172-233-209-21","kube-system/kube-proxy-ff6qp","calico-system/csi-node-driver-84hvc","kube-system/kube-apiserver-172-233-209-21","kube-system/kube-scheduler-172-233-209-21"]
Aug 13 01:07:21.349422 kubelet[2717]: E0813 01:07:21.349318 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n"
Aug 13 01:07:21.349422 kubelet[2717]: E0813 01:07:21.349330 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-5fdd567c68-zgxjx"
Aug 13 01:07:21.349422 kubelet[2717]: E0813 01:07:21.349340 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-tp469"
Aug 13 01:07:21.349422 kubelet[2717]: E0813 01:07:21.349348 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dk5p7"
Aug 13 01:07:21.349422 kubelet[2717]: E0813 01:07:21.349357 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-7pdcs"
Aug 13 01:07:21.349422 kubelet[2717]: E0813 01:07:21.349366 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-209-21"
Aug 13 01:07:21.349422 kubelet[2717]: E0813 01:07:21.349374 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-ff6qp"
Aug 13 01:07:21.349422 kubelet[2717]: E0813 01:07:21.349385 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-84hvc"
Aug 13 01:07:21.349422 kubelet[2717]: E0813 01:07:21.349392 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-209-21"
Aug 13 01:07:21.349422 kubelet[2717]: E0813 01:07:21.349399 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-209-21"
Aug 13 01:07:21.349422 kubelet[2717]: I0813 01:07:21.349408 2717 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:07:23.765615 kubelet[2717]: E0813 01:07:23.765049 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Aug 13 01:07:24.771426 kubelet[2717]: I0813 01:07:24.771013 2717 image_gc_manager.go:383] "Disk usage on image filesystem is over the high threshold, trying to free bytes down to the low threshold" usage=100 highThreshold=85 amountToFree=411531673 lowThreshold=80
Aug 13 01:07:24.771426 kubelet[2717]: E0813 01:07:24.771044 2717 kubelet.go:1474] "Image garbage collection failed multiple times in a row" err="Failed to garbage collect required amount of images. Attempted to free 411531673 bytes, but only found 0 bytes eligible to free."
Aug 13 01:07:25.770643 systemd[1]: Started sshd@37-172.233.209.21:22-147.75.109.163:51912.service - OpenSSH per-connection server daemon (147.75.109.163:51912).
Aug 13 01:07:26.101746 sshd[6249]: Accepted publickey for core from 147.75.109.163 port 51912 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU
Aug 13 01:07:26.103495 sshd-session[6249]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:07:26.108441 systemd-logind[1550]: New session 38 of user core.
Aug 13 01:07:26.115308 systemd[1]: Started session-38.scope - Session 38 of User core.
Aug 13 01:07:26.408637 sshd[6251]: Connection closed by 147.75.109.163 port 51912
Aug 13 01:07:26.409620 sshd-session[6249]: pam_unix(sshd:session): session closed for user core
Aug 13 01:07:26.413681 systemd-logind[1550]: Session 38 logged out. Waiting for processes to exit.
Aug 13 01:07:26.414406 systemd[1]: sshd@37-172.233.209.21:22-147.75.109.163:51912.service: Deactivated successfully.
Aug 13 01:07:26.416587 systemd[1]: session-38.scope: Deactivated successfully.
Aug 13 01:07:26.418521 systemd-logind[1550]: Removed session 38.
Aug 13 01:07:30.768582 containerd[1575]: time="2025-08-13T01:07:30.768142420Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\""
Aug 13 01:07:31.375601 kubelet[2717]: I0813 01:07:31.375548 2717 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:07:31.375601 kubelet[2717]: I0813 01:07:31.375604 2717 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:07:31.378764 kubelet[2717]: I0813 01:07:31.378693 2717 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:07:31.412969 kubelet[2717]: I0813 01:07:31.412949 2717 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:07:31.413437 kubelet[2717]: I0813 01:07:31.413224 2717 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-564d8b8748-ps97n","calico-system/calico-typha-5fdd567c68-zgxjx","kube-system/coredns-7c65d6cfc9-dk5p7","kube-system/coredns-7c65d6cfc9-tp469","calico-system/calico-node-7pdcs","kube-system/kube-controller-manager-172-233-209-21","kube-system/kube-proxy-ff6qp","calico-system/csi-node-driver-84hvc","kube-system/kube-apiserver-172-233-209-21","kube-system/kube-scheduler-172-233-209-21"]
Aug 13 01:07:31.413572 kubelet[2717]: E0813 01:07:31.413560 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n"
Aug 13 01:07:31.413654 kubelet[2717]: E0813 01:07:31.413644 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-5fdd567c68-zgxjx"
Aug 13 01:07:31.413744 kubelet[2717]: E0813 01:07:31.413725 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dk5p7"
Aug 13 01:07:31.413811 kubelet[2717]: E0813 01:07:31.413802 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-tp469"
Aug 13 01:07:31.413875 kubelet[2717]: E0813 01:07:31.413849 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-7pdcs"
Aug 13 01:07:31.413949 kubelet[2717]: E0813 01:07:31.413914 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-209-21"
Aug 13 01:07:31.413949 kubelet[2717]: E0813 01:07:31.413926 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-ff6qp"
Aug 13 01:07:31.414245 kubelet[2717]: E0813 01:07:31.414013 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-84hvc"
Aug 13 01:07:31.414245 kubelet[2717]: E0813 01:07:31.414028 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-209-21"
Aug 13 01:07:31.414245 kubelet[2717]: E0813 01:07:31.414036 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-209-21"
Aug 13 01:07:31.414245 kubelet[2717]: I0813 01:07:31.414044 2717 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:07:31.470229 systemd[1]: Started sshd@38-172.233.209.21:22-147.75.109.163:60538.service - OpenSSH per-connection server daemon (147.75.109.163:60538).
Aug 13 01:07:31.549803 containerd[1575]: time="2025-08-13T01:07:31.549759664Z" level=error msg="failed to cleanup \"extract-349979276-zDGJ sha256:8c887db5e1c1509bbe47d7287572f140b60a8c0adc0202d6183f3ae0c5f0b532\"" error="write /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db: no space left on device"
Aug 13 01:07:31.550360 containerd[1575]: time="2025-08-13T01:07:31.550322240Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device"
Aug 13 01:07:31.550464 containerd[1575]: time="2025-08-13T01:07:31.550393190Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=42995946"
Aug 13 01:07:31.550600 kubelet[2717]: E0813 01:07:31.550557 2717 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2"
Aug 13 01:07:31.550689 kubelet[2717]: E0813 01:07:31.550605 2717 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2"
Aug 13 01:07:31.550755 kubelet[2717]: E0813 01:07:31.550710 2717 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-vqfjp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-564d8b8748-ps97n_calico-system(a7e8405c-2c82-420c-bac7-a7277571f968): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" logger="UnhandledError"
Aug 13 01:07:31.553038 kubelet[2717]: E0813 01:07:31.551926 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device\"" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" podUID="a7e8405c-2c82-420c-bac7-a7277571f968"
Aug 13 01:07:31.729860 containerd[1575]: time="2025-08-13T01:07:31.729802979Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/kube-system_kube-controller-manager-172-233-209-21_64d5421c8c4ac817f60c78f054d6ea67/kube-controller-manager/0.log\"" error="write /var/log/pods/kube-system_kube-controller-manager-172-233-209-21_64d5421c8c4ac817f60c78f054d6ea67/kube-controller-manager/0.log: no space left on device"
Aug 13 01:07:31.732651 containerd[1575]: time="2025-08-13T01:07:31.732608239Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/kube-system_kube-controller-manager-172-233-209-21_64d5421c8c4ac817f60c78f054d6ea67/kube-controller-manager/0.log\"" error="write /var/log/pods/kube-system_kube-controller-manager-172-233-209-21_64d5421c8c4ac817f60c78f054d6ea67/kube-controller-manager/0.log: no space left on device"
Aug 13 01:07:31.800461 sshd[6264]: Accepted publickey for core from 147.75.109.163 port 60538 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU
Aug 13 01:07:31.802136 sshd-session[6264]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:07:31.806813 systemd-logind[1550]: New session 39 of user core.
Aug 13 01:07:31.812332 systemd[1]: Started session-39.scope - Session 39 of User core.
Aug 13 01:07:32.097117 sshd[6266]: Connection closed by 147.75.109.163 port 60538
Aug 13 01:07:32.096922 sshd-session[6264]: pam_unix(sshd:session): session closed for user core
Aug 13 01:07:32.101744 systemd-logind[1550]: Session 39 logged out. Waiting for processes to exit.
Aug 13 01:07:32.102109 systemd[1]: sshd@38-172.233.209.21:22-147.75.109.163:60538.service: Deactivated successfully.
Aug 13 01:07:32.104766 systemd[1]: session-39.scope: Deactivated successfully.
Aug 13 01:07:32.106324 systemd-logind[1550]: Removed session 39.
Aug 13 01:07:32.274005 containerd[1575]: time="2025-08-13T01:07:32.273888516Z" level=warning msg="container event discarded" container=dcd59683c6f8a450aedf245d711a951256bd00e0c8b8d7ea49ba965b5a9d7edd type=CONTAINER_CREATED_EVENT
Aug 13 01:07:32.274005 containerd[1575]: time="2025-08-13T01:07:32.273990445Z" level=warning msg="container event discarded" container=dcd59683c6f8a450aedf245d711a951256bd00e0c8b8d7ea49ba965b5a9d7edd type=CONTAINER_STARTED_EVENT
Aug 13 01:07:32.668952 containerd[1575]: time="2025-08-13T01:07:32.668880165Z" level=warning msg="container event discarded" container=6e4659e6647912f96c24c4c199010edceb24af93264ea85d14a1c2f555935745 type=CONTAINER_CREATED_EVENT
Aug 13 01:07:32.668952 containerd[1575]: time="2025-08-13T01:07:32.668918794Z" level=warning msg="container event discarded" container=6e4659e6647912f96c24c4c199010edceb24af93264ea85d14a1c2f555935745 type=CONTAINER_STARTED_EVENT
Aug 13 01:07:32.686805 containerd[1575]: time="2025-08-13T01:07:32.686754151Z" level=warning msg="container event discarded" container=13874db445858d5039e0ae6525e003a6bee52c860065ff3b8da35836ae38375c type=CONTAINER_CREATED_EVENT
Aug 13 01:07:32.760702 containerd[1575]: time="2025-08-13T01:07:32.760632990Z" level=warning msg="container event discarded" container=13874db445858d5039e0ae6525e003a6bee52c860065ff3b8da35836ae38375c type=CONTAINER_STARTED_EVENT
Aug 13 01:07:33.626734 containerd[1575]: time="2025-08-13T01:07:33.626531193Z" level=warning msg="container event discarded" container=60e2160d8b51ff7f0502d3d80c0d6bca69166c1d36c9e23988db5102a7056733 type=CONTAINER_CREATED_EVENT
Aug 13 01:07:33.690538 containerd[1575]: time="2025-08-13T01:07:33.690432961Z" level=warning msg="container event discarded" container=60e2160d8b51ff7f0502d3d80c0d6bca69166c1d36c9e23988db5102a7056733 type=CONTAINER_STARTED_EVENT
Aug 13 01:07:36.764217 kubelet[2717]: E0813 01:07:36.764027 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Aug 13 01:07:37.167650 systemd[1]: Started sshd@39-172.233.209.21:22-147.75.109.163:60548.service - OpenSSH per-connection server daemon (147.75.109.163:60548).
Aug 13 01:07:37.518832 sshd[6281]: Accepted publickey for core from 147.75.109.163 port 60548 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU
Aug 13 01:07:37.520595 sshd-session[6281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:07:37.531370 systemd-logind[1550]: New session 40 of user core.
Aug 13 01:07:37.535317 systemd[1]: Started session-40.scope - Session 40 of User core.
Aug 13 01:07:37.844654 sshd[6283]: Connection closed by 147.75.109.163 port 60548
Aug 13 01:07:37.845496 sshd-session[6281]: pam_unix(sshd:session): session closed for user core
Aug 13 01:07:37.850638 systemd[1]: sshd@39-172.233.209.21:22-147.75.109.163:60548.service: Deactivated successfully.
Aug 13 01:07:37.853563 systemd[1]: session-40.scope: Deactivated successfully.
Aug 13 01:07:37.855023 systemd-logind[1550]: Session 40 logged out. Waiting for processes to exit.
Aug 13 01:07:37.859007 systemd-logind[1550]: Removed session 40.
Aug 13 01:07:39.763799 kubelet[2717]: E0813 01:07:39.763765 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13"
Aug 13 01:07:41.451109 kubelet[2717]: I0813 01:07:41.451066 2717 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage"
Aug 13 01:07:41.451109 kubelet[2717]: I0813 01:07:41.451104 2717 container_gc.go:88] "Attempting to delete unused containers"
Aug 13 01:07:41.452715 kubelet[2717]: I0813 01:07:41.452676 2717 image_gc_manager.go:431] "Attempting to delete unused images"
Aug 13 01:07:41.470617 kubelet[2717]: I0813 01:07:41.470584 2717 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage"
Aug 13 01:07:41.470903 kubelet[2717]: I0813 01:07:41.470877 2717 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-564d8b8748-ps97n","calico-system/calico-typha-5fdd567c68-zgxjx","kube-system/coredns-7c65d6cfc9-tp469","kube-system/coredns-7c65d6cfc9-dk5p7","calico-system/calico-node-7pdcs","kube-system/kube-controller-manager-172-233-209-21","kube-system/kube-proxy-ff6qp","calico-system/csi-node-driver-84hvc","kube-system/kube-apiserver-172-233-209-21","kube-system/kube-scheduler-172-233-209-21"]
Aug 13 01:07:41.471055 kubelet[2717]: E0813 01:07:41.470921 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n"
Aug 13 01:07:41.471055 kubelet[2717]: E0813 01:07:41.470942 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-5fdd567c68-zgxjx"
Aug 13 01:07:41.471055 kubelet[2717]: E0813 01:07:41.470958 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-tp469"
Aug 13 01:07:41.471055 kubelet[2717]: E0813 01:07:41.470971 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dk5p7"
Aug 13 01:07:41.471055 kubelet[2717]: E0813 01:07:41.470983 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-7pdcs"
Aug 13 01:07:41.471055 kubelet[2717]: E0813 01:07:41.470995 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-209-21"
Aug 13 01:07:41.471055 kubelet[2717]: E0813 01:07:41.471008 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-ff6qp"
Aug 13 01:07:41.471055 kubelet[2717]: E0813 01:07:41.471025 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-84hvc"
Aug 13 01:07:41.471055 kubelet[2717]: E0813 01:07:41.471039 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-209-21"
Aug 13 01:07:41.471055 kubelet[2717]: E0813 01:07:41.471050 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-209-21"
Aug 13 01:07:41.471410 kubelet[2717]: I0813 01:07:41.471066 2717 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node"
Aug 13 01:07:42.905439 systemd[1]: Started sshd@40-172.233.209.21:22-147.75.109.163:38402.service - OpenSSH per-connection server daemon (147.75.109.163:38402).
Aug 13 01:07:43.159687 containerd[1575]: time="2025-08-13T01:07:43.159496756Z" level=warning msg="container event discarded" container=b135e3a363965c51514b25fe9381f404baf7f08abdd110257a0a2b464913aa29 type=CONTAINER_CREATED_EVENT
Aug 13 01:07:43.159687 containerd[1575]: time="2025-08-13T01:07:43.159550565Z" level=warning msg="container event discarded" container=b135e3a363965c51514b25fe9381f404baf7f08abdd110257a0a2b464913aa29 type=CONTAINER_STARTED_EVENT
Aug 13 01:07:43.243020 sshd[6300]: Accepted publickey for core from 147.75.109.163 port 38402 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU
Aug 13 01:07:43.244586 sshd-session[6300]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:07:43.250365 systemd-logind[1550]: New session 41 of user core.
Aug 13 01:07:43.259337 systemd[1]: Started session-41.scope - Session 41 of User core.
Aug 13 01:07:43.452414 containerd[1575]: time="2025-08-13T01:07:43.452349103Z" level=warning msg="container event discarded" container=d12cf5aacc597b7c6167326efb88e368e58730ffb459492ad3a89d6754a56a6d type=CONTAINER_CREATED_EVENT
Aug 13 01:07:43.452414 containerd[1575]: time="2025-08-13T01:07:43.452393373Z" level=warning msg="container event discarded" container=d12cf5aacc597b7c6167326efb88e368e58730ffb459492ad3a89d6754a56a6d type=CONTAINER_STARTED_EVENT
Aug 13 01:07:43.555218 sshd[6302]: Connection closed by 147.75.109.163 port 38402
Aug 13 01:07:43.556615 sshd-session[6300]: pam_unix(sshd:session): session closed for user core
Aug 13 01:07:43.562461 systemd[1]: sshd@40-172.233.209.21:22-147.75.109.163:38402.service: Deactivated successfully.
Aug 13 01:07:43.566575 systemd[1]: session-41.scope: Deactivated successfully.
Aug 13 01:07:43.568488 systemd-logind[1550]: Session 41 logged out. Waiting for processes to exit.
Aug 13 01:07:43.570835 systemd-logind[1550]: Removed session 41.
Aug 13 01:07:44.475085 containerd[1575]: time="2025-08-13T01:07:44.475019820Z" level=warning msg="container event discarded" container=6867f4b46ce5c2c14a591a7f6045f73bfd4793092b7a915b4c0c284cbf28d414 type=CONTAINER_CREATED_EVENT
Aug 13 01:07:44.569055 containerd[1575]: time="2025-08-13T01:07:44.568981791Z" level=warning msg="container event discarded" container=6867f4b46ce5c2c14a591a7f6045f73bfd4793092b7a915b4c0c284cbf28d414 type=CONTAINER_STARTED_EVENT
Aug 13 01:07:45.204576 containerd[1575]: time="2025-08-13T01:07:45.204513103Z" level=warning msg="container event discarded" container=b70e5d90e0f0869799ff12c4a3266cb1daf342aba56b6b79724849f65ffb2782 type=CONTAINER_CREATED_EVENT
Aug 13 01:07:45.277111 containerd[1575]: time="2025-08-13T01:07:45.277057492Z" level=warning msg="container event discarded" container=b70e5d90e0f0869799ff12c4a3266cb1daf342aba56b6b79724849f65ffb2782 type=CONTAINER_STARTED_EVENT
Aug 13 01:07:45.381116 containerd[1575]: time="2025-08-13T01:07:45.381059124Z" level=warning msg="container event discarded" container=b70e5d90e0f0869799ff12c4a3266cb1daf342aba56b6b79724849f65ffb2782 type=CONTAINER_STOPPED_EVENT
Aug 13 01:07:45.765244 kubelet[2717]: E0813 01:07:45.765079 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\"\"" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" podUID="a7e8405c-2c82-420c-bac7-a7277571f968"
Aug 13 01:07:46.255161 containerd[1575]: time="2025-08-13T01:07:46.255094438Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6353c1e40a0c53412135ac4fbfd177615839c7b54973b0c0cad5c4d48fa83c96\" id:\"c3fc55a8bddad294abf0e99ceb03cbb3273ce5e69368e5a362d3ccd1b7b882e1\" pid:6325 exited_at:{seconds:1755047266 nanos:254570751}"
Aug 13 01:07:47.877698 containerd[1575]: time="2025-08-13T01:07:47.877647074Z" level=warning msg="container event discarded" container=e45e328e632598e8efb022a9d3ea880ef0aa58b0833c2795785ba9a8063fea15 type=CONTAINER_CREATED_EVENT
Aug 13 01:07:47.955525 containerd[1575]: time="2025-08-13T01:07:47.955403998Z" level=warning msg="container event discarded" container=e45e328e632598e8efb022a9d3ea880ef0aa58b0833c2795785ba9a8063fea15 type=CONTAINER_STARTED_EVENT
Aug 13 01:07:48.506818 containerd[1575]: time="2025-08-13T01:07:48.506745891Z" level=warning msg="container event discarded" container=e45e328e632598e8efb022a9d3ea880ef0aa58b0833c2795785ba9a8063fea15 type=CONTAINER_STOPPED_EVENT
Aug 13 01:07:48.624760 systemd[1]: Started sshd@41-172.233.209.21:22-147.75.109.163:59292.service - OpenSSH per-connection server daemon (147.75.109.163:59292).
Aug 13 01:07:48.967408 sshd[6347]: Accepted publickey for core from 147.75.109.163 port 59292 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU
Aug 13 01:07:48.969177 sshd-session[6347]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Aug 13 01:07:48.974808 systemd-logind[1550]: New session 42 of user core.
Aug 13 01:07:48.981584 systemd[1]: Started session-42.scope - Session 42 of User core.
Aug 13 01:07:49.283271 sshd[6349]: Connection closed by 147.75.109.163 port 59292
Aug 13 01:07:49.284143 sshd-session[6347]: pam_unix(sshd:session): session closed for user core
Aug 13 01:07:49.288886 systemd[1]: sshd@41-172.233.209.21:22-147.75.109.163:59292.service: Deactivated successfully.
Aug 13 01:07:49.292052 systemd[1]: session-42.scope: Deactivated successfully.
Aug 13 01:07:49.293099 systemd-logind[1550]: Session 42 logged out. Waiting for processes to exit.
Aug 13 01:07:49.295655 systemd-logind[1550]: Removed session 42.
Aug 13 01:07:51.503314 kubelet[2717]: I0813 01:07:51.503273 2717 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:07:51.503314 kubelet[2717]: I0813 01:07:51.503316 2717 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:07:51.505381 kubelet[2717]: I0813 01:07:51.505324 2717 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:07:51.519883 kubelet[2717]: I0813 01:07:51.519851 2717 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:07:51.520008 kubelet[2717]: I0813 01:07:51.519984 2717 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-564d8b8748-ps97n","calico-system/calico-typha-5fdd567c68-zgxjx","kube-system/coredns-7c65d6cfc9-tp469","kube-system/coredns-7c65d6cfc9-dk5p7","calico-system/calico-node-7pdcs","kube-system/kube-controller-manager-172-233-209-21","kube-system/kube-proxy-ff6qp","calico-system/csi-node-driver-84hvc","kube-system/kube-apiserver-172-233-209-21","kube-system/kube-scheduler-172-233-209-21"] Aug 13 01:07:51.520101 kubelet[2717]: E0813 01:07:51.520020 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" Aug 13 01:07:51.520101 kubelet[2717]: E0813 01:07:51.520034 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-5fdd567c68-zgxjx" Aug 13 01:07:51.520101 kubelet[2717]: E0813 01:07:51.520042 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-tp469" Aug 13 01:07:51.520101 kubelet[2717]: E0813 01:07:51.520051 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dk5p7" Aug 13 01:07:51.520101 kubelet[2717]: E0813 01:07:51.520059 2717 eviction_manager.go:598] 
"Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-7pdcs" Aug 13 01:07:51.520101 kubelet[2717]: E0813 01:07:51.520067 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-209-21" Aug 13 01:07:51.520101 kubelet[2717]: E0813 01:07:51.520075 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-ff6qp" Aug 13 01:07:51.520101 kubelet[2717]: E0813 01:07:51.520087 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-84hvc" Aug 13 01:07:51.520101 kubelet[2717]: E0813 01:07:51.520094 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-209-21" Aug 13 01:07:51.520101 kubelet[2717]: E0813 01:07:51.520103 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-209-21" Aug 13 01:07:51.520360 kubelet[2717]: I0813 01:07:51.520112 2717 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:07:52.765690 kubelet[2717]: E0813 01:07:52.765649 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:07:54.346773 systemd[1]: Started sshd@42-172.233.209.21:22-147.75.109.163:59294.service - OpenSSH per-connection server daemon (147.75.109.163:59294). Aug 13 01:07:54.699041 sshd[6361]: Accepted publickey for core from 147.75.109.163 port 59294 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:07:54.700587 sshd-session[6361]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:07:54.706272 systemd-logind[1550]: New session 43 of user core. 
Aug 13 01:07:54.709604 systemd[1]: Started session-43.scope - Session 43 of User core. Aug 13 01:07:55.017901 sshd[6363]: Connection closed by 147.75.109.163 port 59294 Aug 13 01:07:55.018759 sshd-session[6361]: pam_unix(sshd:session): session closed for user core Aug 13 01:07:55.025167 systemd-logind[1550]: Session 43 logged out. Waiting for processes to exit. Aug 13 01:07:55.026818 systemd[1]: sshd@42-172.233.209.21:22-147.75.109.163:59294.service: Deactivated successfully. Aug 13 01:07:55.030935 systemd[1]: session-43.scope: Deactivated successfully. Aug 13 01:07:55.033964 systemd-logind[1550]: Removed session 43. Aug 13 01:07:57.764179 kubelet[2717]: E0813 01:07:57.764099 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:07:59.764555 kubelet[2717]: E0813 01:07:59.764519 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:08:00.083015 systemd[1]: Started sshd@43-172.233.209.21:22-147.75.109.163:59862.service - OpenSSH per-connection server daemon (147.75.109.163:59862). Aug 13 01:08:00.414958 sshd[6375]: Accepted publickey for core from 147.75.109.163 port 59862 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:08:00.415396 sshd-session[6375]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:08:00.422164 systemd-logind[1550]: New session 44 of user core. Aug 13 01:08:00.427465 systemd[1]: Started session-44.scope - Session 44 of User core. Aug 13 01:08:00.723924 sshd[6377]: Connection closed by 147.75.109.163 port 59862 Aug 13 01:08:00.725753 sshd-session[6375]: pam_unix(sshd:session): session closed for user core Aug 13 01:08:00.730854 systemd-logind[1550]: Session 44 logged out. 
Waiting for processes to exit. Aug 13 01:08:00.731122 systemd[1]: sshd@43-172.233.209.21:22-147.75.109.163:59862.service: Deactivated successfully. Aug 13 01:08:00.734071 systemd[1]: session-44.scope: Deactivated successfully. Aug 13 01:08:00.738898 systemd-logind[1550]: Removed session 44. Aug 13 01:08:00.766832 kubelet[2717]: E0813 01:08:00.766745 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\"\"" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" podUID="a7e8405c-2c82-420c-bac7-a7277571f968" Aug 13 01:08:01.553934 kubelet[2717]: I0813 01:08:01.553894 2717 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:08:01.554105 kubelet[2717]: I0813 01:08:01.553960 2717 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:08:01.555579 kubelet[2717]: I0813 01:08:01.555553 2717 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:08:01.571552 kubelet[2717]: I0813 01:08:01.571526 2717 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:08:01.571748 kubelet[2717]: I0813 01:08:01.571730 2717 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-564d8b8748-ps97n","calico-system/calico-typha-5fdd567c68-zgxjx","kube-system/coredns-7c65d6cfc9-dk5p7","kube-system/coredns-7c65d6cfc9-tp469","calico-system/calico-node-7pdcs","kube-system/kube-controller-manager-172-233-209-21","kube-system/kube-proxy-ff6qp","calico-system/csi-node-driver-84hvc","kube-system/kube-apiserver-172-233-209-21","kube-system/kube-scheduler-172-233-209-21"] Aug 13 01:08:01.571850 kubelet[2717]: E0813 01:08:01.571759 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" 
pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" Aug 13 01:08:01.571850 kubelet[2717]: E0813 01:08:01.571772 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-5fdd567c68-zgxjx" Aug 13 01:08:01.571850 kubelet[2717]: E0813 01:08:01.571780 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dk5p7" Aug 13 01:08:01.571850 kubelet[2717]: E0813 01:08:01.571788 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-tp469" Aug 13 01:08:01.571850 kubelet[2717]: E0813 01:08:01.571795 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-7pdcs" Aug 13 01:08:01.571850 kubelet[2717]: E0813 01:08:01.571802 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-209-21" Aug 13 01:08:01.571850 kubelet[2717]: E0813 01:08:01.571811 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-ff6qp" Aug 13 01:08:01.571850 kubelet[2717]: E0813 01:08:01.571820 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-84hvc" Aug 13 01:08:01.571850 kubelet[2717]: E0813 01:08:01.571828 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-209-21" Aug 13 01:08:01.571850 kubelet[2717]: E0813 01:08:01.571835 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-209-21" Aug 13 01:08:01.571850 kubelet[2717]: I0813 01:08:01.571843 2717 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:08:05.789467 systemd[1]: Started sshd@44-172.233.209.21:22-147.75.109.163:59864.service - OpenSSH per-connection server daemon 
(147.75.109.163:59864). Aug 13 01:08:06.128405 sshd[6391]: Accepted publickey for core from 147.75.109.163 port 59864 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:08:06.129351 sshd-session[6391]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:08:06.135094 systemd-logind[1550]: New session 45 of user core. Aug 13 01:08:06.139307 systemd[1]: Started session-45.scope - Session 45 of User core. Aug 13 01:08:06.435771 sshd[6393]: Connection closed by 147.75.109.163 port 59864 Aug 13 01:08:06.436366 sshd-session[6391]: pam_unix(sshd:session): session closed for user core Aug 13 01:08:06.439882 systemd-logind[1550]: Session 45 logged out. Waiting for processes to exit. Aug 13 01:08:06.440936 systemd[1]: sshd@44-172.233.209.21:22-147.75.109.163:59864.service: Deactivated successfully. Aug 13 01:08:06.443068 systemd[1]: session-45.scope: Deactivated successfully. Aug 13 01:08:06.443959 systemd-logind[1550]: Removed session 45. Aug 13 01:08:11.507662 systemd[1]: Started sshd@45-172.233.209.21:22-147.75.109.163:33332.service - OpenSSH per-connection server daemon (147.75.109.163:33332). 
Aug 13 01:08:11.599447 kubelet[2717]: I0813 01:08:11.599321 2717 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:08:11.600173 kubelet[2717]: I0813 01:08:11.600150 2717 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:08:11.602746 kubelet[2717]: I0813 01:08:11.602732 2717 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:08:11.636258 kubelet[2717]: I0813 01:08:11.636222 2717 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:08:11.636469 kubelet[2717]: I0813 01:08:11.636443 2717 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-564d8b8748-ps97n","calico-system/calico-typha-5fdd567c68-zgxjx","kube-system/coredns-7c65d6cfc9-tp469","kube-system/coredns-7c65d6cfc9-dk5p7","calico-system/calico-node-7pdcs","kube-system/kube-controller-manager-172-233-209-21","kube-system/kube-proxy-ff6qp","calico-system/csi-node-driver-84hvc","kube-system/kube-apiserver-172-233-209-21","kube-system/kube-scheduler-172-233-209-21"] Aug 13 01:08:11.636558 kubelet[2717]: E0813 01:08:11.636500 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" Aug 13 01:08:11.636558 kubelet[2717]: E0813 01:08:11.636516 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-5fdd567c68-zgxjx" Aug 13 01:08:11.636558 kubelet[2717]: E0813 01:08:11.636525 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-tp469" Aug 13 01:08:11.636558 kubelet[2717]: E0813 01:08:11.636534 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dk5p7" Aug 13 01:08:11.636558 kubelet[2717]: E0813 01:08:11.636541 2717 eviction_manager.go:598] 
"Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-7pdcs" Aug 13 01:08:11.636558 kubelet[2717]: E0813 01:08:11.636549 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-209-21" Aug 13 01:08:11.636722 kubelet[2717]: E0813 01:08:11.636578 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-ff6qp" Aug 13 01:08:11.636722 kubelet[2717]: E0813 01:08:11.636590 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-84hvc" Aug 13 01:08:11.636722 kubelet[2717]: E0813 01:08:11.636601 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-209-21" Aug 13 01:08:11.636722 kubelet[2717]: E0813 01:08:11.636608 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-209-21" Aug 13 01:08:11.636722 kubelet[2717]: I0813 01:08:11.636618 2717 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:08:11.858995 sshd[6405]: Accepted publickey for core from 147.75.109.163 port 33332 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:08:11.861250 sshd-session[6405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:08:11.867115 systemd-logind[1550]: New session 46 of user core. Aug 13 01:08:11.872089 systemd[1]: Started session-46.scope - Session 46 of User core. Aug 13 01:08:12.183359 sshd[6407]: Connection closed by 147.75.109.163 port 33332 Aug 13 01:08:12.183854 sshd-session[6405]: pam_unix(sshd:session): session closed for user core Aug 13 01:08:12.188035 systemd[1]: sshd@45-172.233.209.21:22-147.75.109.163:33332.service: Deactivated successfully. Aug 13 01:08:12.191490 systemd[1]: session-46.scope: Deactivated successfully. 
Aug 13 01:08:12.192970 systemd-logind[1550]: Session 46 logged out. Waiting for processes to exit. Aug 13 01:08:12.194415 systemd-logind[1550]: Removed session 46. Aug 13 01:08:15.764477 kubelet[2717]: E0813 01:08:15.764438 2717 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\"\"" pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" podUID="a7e8405c-2c82-420c-bac7-a7277571f968" Aug 13 01:08:16.261394 containerd[1575]: time="2025-08-13T01:08:16.261296264Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6353c1e40a0c53412135ac4fbfd177615839c7b54973b0c0cad5c4d48fa83c96\" id:\"8ad8340debfa30ecc48389d44fecd13c81ef14c282379ccc7222345ddd9dbf24\" pid:6430 exited_at:{seconds:1755047296 nanos:260221051}" Aug 13 01:08:17.246619 systemd[1]: Started sshd@46-172.233.209.21:22-147.75.109.163:33334.service - OpenSSH per-connection server daemon (147.75.109.163:33334). Aug 13 01:08:17.600482 sshd[6442]: Accepted publickey for core from 147.75.109.163 port 33334 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:08:17.602162 sshd-session[6442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:08:17.607975 systemd-logind[1550]: New session 47 of user core. Aug 13 01:08:17.613334 systemd[1]: Started session-47.scope - Session 47 of User core. Aug 13 01:08:17.927067 sshd[6444]: Connection closed by 147.75.109.163 port 33334 Aug 13 01:08:17.928413 sshd-session[6442]: pam_unix(sshd:session): session closed for user core Aug 13 01:08:17.934946 systemd[1]: sshd@46-172.233.209.21:22-147.75.109.163:33334.service: Deactivated successfully. Aug 13 01:08:17.936603 systemd-logind[1550]: Session 47 logged out. Waiting for processes to exit. Aug 13 01:08:17.943000 systemd[1]: session-47.scope: Deactivated successfully. 
Aug 13 01:08:17.946066 systemd-logind[1550]: Removed session 47. Aug 13 01:08:17.992454 systemd[1]: Started sshd@47-172.233.209.21:22-147.75.109.163:33336.service - OpenSSH per-connection server daemon (147.75.109.163:33336). Aug 13 01:08:18.335052 sshd[6456]: Accepted publickey for core from 147.75.109.163 port 33336 ssh2: RSA SHA256:ID0gih5M8ShJwEB/aVku53fCQJj3YzoD+r3eZLPdBeU Aug 13 01:08:18.337043 sshd-session[6456]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:08:18.343354 systemd-logind[1550]: New session 48 of user core. Aug 13 01:08:18.347357 systemd[1]: Started session-48.scope - Session 48 of User core. Aug 13 01:08:18.673645 sshd[6458]: Connection closed by 147.75.109.163 port 33336 Aug 13 01:08:18.676132 sshd-session[6456]: pam_unix(sshd:session): session closed for user core Aug 13 01:08:18.680125 systemd-logind[1550]: Session 48 logged out. Waiting for processes to exit. Aug 13 01:08:18.681321 systemd[1]: sshd@47-172.233.209.21:22-147.75.109.163:33336.service: Deactivated successfully. Aug 13 01:08:18.685481 systemd[1]: session-48.scope: Deactivated successfully. Aug 13 01:08:18.688984 systemd-logind[1550]: Removed session 48. 
Aug 13 01:08:19.208725 containerd[1575]: time="2025-08-13T01:08:19.208627416Z" level=warning msg="container event discarded" container=60e2160d8b51ff7f0502d3d80c0d6bca69166c1d36c9e23988db5102a7056733 type=CONTAINER_STOPPED_EVENT Aug 13 01:08:19.249015 containerd[1575]: time="2025-08-13T01:08:19.248942738Z" level=warning msg="container event discarded" container=dcd59683c6f8a450aedf245d711a951256bd00e0c8b8d7ea49ba965b5a9d7edd type=CONTAINER_STOPPED_EVENT Aug 13 01:08:19.974186 containerd[1575]: time="2025-08-13T01:08:19.974010190Z" level=warning msg="container event discarded" container=60e2160d8b51ff7f0502d3d80c0d6bca69166c1d36c9e23988db5102a7056733 type=CONTAINER_DELETED_EVENT Aug 13 01:08:20.764858 kubelet[2717]: E0813 01:08:20.764554 2717 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.16 172.232.0.21 172.232.0.13" Aug 13 01:08:21.670214 kubelet[2717]: I0813 01:08:21.670075 2717 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:08:21.670214 kubelet[2717]: I0813 01:08:21.670104 2717 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:08:21.671117 kubelet[2717]: I0813 01:08:21.670982 2717 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-564d8b8748-ps97n","calico-system/calico-typha-5fdd567c68-zgxjx","kube-system/coredns-7c65d6cfc9-dk5p7","kube-system/coredns-7c65d6cfc9-tp469","calico-system/calico-node-7pdcs","kube-system/kube-controller-manager-172-233-209-21","kube-system/kube-proxy-ff6qp","calico-system/csi-node-driver-84hvc","kube-system/kube-apiserver-172-233-209-21","kube-system/kube-scheduler-172-233-209-21"] Aug 13 01:08:21.671117 kubelet[2717]: E0813 01:08:21.671014 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" 
pod="calico-system/calico-kube-controllers-564d8b8748-ps97n" Aug 13 01:08:21.671117 kubelet[2717]: E0813 01:08:21.671028 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-5fdd567c68-zgxjx" Aug 13 01:08:21.671117 kubelet[2717]: E0813 01:08:21.671037 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-dk5p7" Aug 13 01:08:21.671117 kubelet[2717]: E0813 01:08:21.671045 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-tp469" Aug 13 01:08:21.671117 kubelet[2717]: E0813 01:08:21.671055 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-7pdcs" Aug 13 01:08:21.671117 kubelet[2717]: E0813 01:08:21.671062 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-233-209-21" Aug 13 01:08:21.671117 kubelet[2717]: E0813 01:08:21.671070 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-ff6qp" Aug 13 01:08:21.671117 kubelet[2717]: E0813 01:08:21.671080 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-84hvc" Aug 13 01:08:21.671117 kubelet[2717]: E0813 01:08:21.671088 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-233-209-21" Aug 13 01:08:21.671117 kubelet[2717]: E0813 01:08:21.671095 2717 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-233-209-21" Aug 13 01:08:21.671117 kubelet[2717]: I0813 01:08:21.671106 2717 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:08:24.773902 containerd[1575]: time="2025-08-13T01:08:24.773821607Z" level=warning msg="container event discarded" 
container=dcd59683c6f8a450aedf245d711a951256bd00e0c8b8d7ea49ba965b5a9d7edd type=CONTAINER_DELETED_EVENT