Aug 13 01:44:23.938156 kernel: Linux version 6.12.40-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 21:42:48 -00 2025 Aug 13 01:44:23.938178 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21 Aug 13 01:44:23.938188 kernel: BIOS-provided physical RAM map: Aug 13 01:44:23.938196 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable Aug 13 01:44:23.938202 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved Aug 13 01:44:23.938208 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Aug 13 01:44:23.938215 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable Aug 13 01:44:23.938221 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved Aug 13 01:44:23.938228 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Aug 13 01:44:23.938234 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Aug 13 01:44:23.938240 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Aug 13 01:44:23.938246 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Aug 13 01:44:23.938254 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable Aug 13 01:44:23.938260 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Aug 13 01:44:23.938267 kernel: NX (Execute Disable) protection: active Aug 13 01:44:23.938274 kernel: APIC: Static calls initialized Aug 13 01:44:23.938280 kernel: SMBIOS 2.8 present. 
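The BIOS-e820 entries above are the firmware's map of which physical address ranges are usable RAM and which are reserved. As a reference point only, here is a minimal Python sketch (the regex and the abridged sample lines are illustrative assumptions, not part of the log) that parses lines of this shape and totals the usable ranges:

    import re

    # Matches dmesg-style e820 lines such as:
    #   BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable
    E820_RE = re.compile(
        r"BIOS-e820: \[mem 0x(?P<start>[0-9a-f]+)-0x(?P<end>[0-9a-f]+)\] (?P<kind>\w+)"
    )

    def usable_bytes(lines):
        """Sum the sizes of all ranges the firmware marked 'usable'."""
        total = 0
        for line in lines:
            m = E820_RE.search(line)
            if m and m.group("kind") == "usable":
                # e820 end addresses are inclusive, hence the +1.
                total += int(m.group("end"), 16) - int(m.group("start"), 16) + 1
        return total

    sample = [
        "BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable",
        "BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable",
        "BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable",
    ]
    print(usable_bytes(sample) // 2**20, "MiB usable")

Applied to the full map above, the usable total works out to roughly 4 GiB, in line with the "Memory: 3961808K/4193772K available" line reported later in the log.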
Aug 13 01:44:23.938289 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified Aug 13 01:44:23.938296 kernel: DMI: Memory slots populated: 1/1 Aug 13 01:44:23.938305 kernel: Hypervisor detected: KVM Aug 13 01:44:23.938316 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Aug 13 01:44:23.938326 kernel: kvm-clock: using sched offset of 7535406500 cycles Aug 13 01:44:23.938336 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Aug 13 01:44:23.938346 kernel: tsc: Detected 2000.000 MHz processor Aug 13 01:44:23.938353 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 13 01:44:23.938361 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 13 01:44:23.938368 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000 Aug 13 01:44:23.938377 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Aug 13 01:44:23.938384 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 13 01:44:23.938391 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Aug 13 01:44:23.938398 kernel: Using GB pages for direct mapping Aug 13 01:44:23.938404 kernel: ACPI: Early table checksum verification disabled Aug 13 01:44:23.938411 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS ) Aug 13 01:44:23.938418 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 01:44:23.938424 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 01:44:23.938431 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 01:44:23.938440 kernel: ACPI: FACS 0x000000007FFE0000 000040 Aug 13 01:44:23.938447 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 01:44:23.938453 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 01:44:23.938460 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 01:44:23.938470 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 01:44:23.938477 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea] Aug 13 01:44:23.938486 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6] Aug 13 01:44:23.938494 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Aug 13 01:44:23.938501 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a] Aug 13 01:44:23.938508 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2] Aug 13 01:44:23.938515 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de] Aug 13 01:44:23.938525 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306] Aug 13 01:44:23.938537 kernel: No NUMA configuration found Aug 13 01:44:23.938548 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff] Aug 13 01:44:23.938559 kernel: NODE_DATA(0) allocated [mem 0x17fff8dc0-0x17fffffff] Aug 13 01:44:23.938566 kernel: Zone ranges: Aug 13 01:44:23.938573 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 13 01:44:23.938580 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Aug 13 01:44:23.938587 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff] Aug 13 01:44:23.938594 kernel: Device empty Aug 13 01:44:23.938601 kernel: Movable zone start for each node Aug 13 01:44:23.938608 kernel: Early memory node ranges Aug 13 01:44:23.938615 kernel: 
node 0: [mem 0x0000000000001000-0x000000000009efff] Aug 13 01:44:23.938624 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] Aug 13 01:44:23.938631 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff] Aug 13 01:44:23.938670 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff] Aug 13 01:44:23.938678 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 13 01:44:23.938685 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Aug 13 01:44:23.938692 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Aug 13 01:44:23.938699 kernel: ACPI: PM-Timer IO Port: 0x608 Aug 13 01:44:23.938706 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Aug 13 01:44:23.938713 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Aug 13 01:44:23.938723 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Aug 13 01:44:23.938731 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Aug 13 01:44:23.938738 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 13 01:44:23.938745 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Aug 13 01:44:23.938752 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Aug 13 01:44:23.939255 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 13 01:44:23.939271 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Aug 13 01:44:23.939279 kernel: TSC deadline timer available Aug 13 01:44:23.939286 kernel: CPU topo: Max. logical packages: 1 Aug 13 01:44:23.939296 kernel: CPU topo: Max. logical dies: 1 Aug 13 01:44:23.939303 kernel: CPU topo: Max. dies per package: 1 Aug 13 01:44:23.939310 kernel: CPU topo: Max. threads per core: 1 Aug 13 01:44:23.939317 kernel: CPU topo: Num. cores per package: 2 Aug 13 01:44:23.939324 kernel: CPU topo: Num. threads per package: 2 Aug 13 01:44:23.939331 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Aug 13 01:44:23.939337 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Aug 13 01:44:23.939344 kernel: kvm-guest: KVM setup pv remote TLB flush Aug 13 01:44:23.939351 kernel: kvm-guest: setup PV sched yield Aug 13 01:44:23.939358 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Aug 13 01:44:23.939367 kernel: Booting paravirtualized kernel on KVM Aug 13 01:44:23.939374 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 13 01:44:23.939381 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Aug 13 01:44:23.939388 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Aug 13 01:44:23.939395 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Aug 13 01:44:23.939402 kernel: pcpu-alloc: [0] 0 1 Aug 13 01:44:23.939409 kernel: kvm-guest: PV spinlocks enabled Aug 13 01:44:23.939416 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Aug 13 01:44:23.939424 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21 Aug 13 01:44:23.939433 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
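The "Kernel command line:" entry above is re-parsed by dracut and systemd inside the initrd (the dracut-cmdline line later in the log repeats it). As an illustration only, a small Python sketch of splitting such a string into parameters; the abridged cmdline below is copied from the log, and shlex-style splitting is an assumption that ignores the kernel's own quoting corner cases:

    import shlex

    def parse_cmdline(cmdline: str):
        """Split a kernel command line into (key, value) pairs.
        Repeated keys (e.g. console=) are kept in order."""
        params = []
        for token in shlex.split(cmdline):
            key, sep, value = token.partition("=")
            params.append((key, value if sep else None))
        return params

    cmdline = (
        "BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr "
        "root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 "
        "flatcar.first_boot=detected flatcar.oem.id=akamai"
    )
    for key, value in parse_cmdline(cmdline):
        print(key, "=", value)

This is also why the 'Unknown kernel command line parameters "BOOT_IMAGE=..." will be passed to user space' message is harmless: parameters the kernel itself does not recognise are simply left for init and other userspace tools to interpret.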
Aug 13 01:44:23.939440 kernel: random: crng init done Aug 13 01:44:23.939447 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Aug 13 01:44:23.939454 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 13 01:44:23.939461 kernel: Fallback order for Node 0: 0 Aug 13 01:44:23.939468 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443 Aug 13 01:44:23.939475 kernel: Policy zone: Normal Aug 13 01:44:23.939481 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 13 01:44:23.939490 kernel: software IO TLB: area num 2. Aug 13 01:44:23.939497 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 13 01:44:23.939504 kernel: ftrace: allocating 40098 entries in 157 pages Aug 13 01:44:23.939511 kernel: ftrace: allocated 157 pages with 5 groups Aug 13 01:44:23.939518 kernel: Dynamic Preempt: voluntary Aug 13 01:44:23.939525 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 13 01:44:23.939532 kernel: rcu: RCU event tracing is enabled. Aug 13 01:44:23.939540 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Aug 13 01:44:23.939547 kernel: Trampoline variant of Tasks RCU enabled. Aug 13 01:44:23.939556 kernel: Rude variant of Tasks RCU enabled. Aug 13 01:44:23.939563 kernel: Tracing variant of Tasks RCU enabled. Aug 13 01:44:23.939569 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Aug 13 01:44:23.939576 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 13 01:44:23.939583 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 01:44:23.939597 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 01:44:23.939606 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 01:44:23.939614 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Aug 13 01:44:23.939621 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Aug 13 01:44:23.939628 kernel: Console: colour VGA+ 80x25 Aug 13 01:44:23.939635 kernel: printk: legacy console [tty0] enabled Aug 13 01:44:23.939660 kernel: printk: legacy console [ttyS0] enabled Aug 13 01:44:23.939670 kernel: ACPI: Core revision 20240827 Aug 13 01:44:23.939678 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Aug 13 01:44:23.939685 kernel: APIC: Switch to symmetric I/O mode setup Aug 13 01:44:23.939692 kernel: x2apic enabled Aug 13 01:44:23.939699 kernel: APIC: Switched APIC routing to: physical x2apic Aug 13 01:44:23.939709 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Aug 13 01:44:23.939716 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Aug 13 01:44:23.939723 kernel: kvm-guest: setup PV IPIs Aug 13 01:44:23.939730 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Aug 13 01:44:23.939738 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns Aug 13 01:44:23.939745 kernel: Calibrating delay loop (skipped) preset value.. 
4000.00 BogoMIPS (lpj=2000000) Aug 13 01:44:23.939752 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Aug 13 01:44:23.939759 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Aug 13 01:44:23.939769 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Aug 13 01:44:23.939776 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 13 01:44:23.939783 kernel: Spectre V2 : Mitigation: Retpolines Aug 13 01:44:23.939791 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Aug 13 01:44:23.939798 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Aug 13 01:44:23.939805 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Aug 13 01:44:23.939812 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Aug 13 01:44:23.939819 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Aug 13 01:44:23.939827 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Aug 13 01:44:23.939836 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Aug 13 01:44:23.939844 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Aug 13 01:44:23.939851 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Aug 13 01:44:23.939858 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 13 01:44:23.939875 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 13 01:44:23.939883 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 13 01:44:23.939890 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Aug 13 01:44:23.939897 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 13 01:44:23.939906 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8 Aug 13 01:44:23.939914 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format. Aug 13 01:44:23.939921 kernel: Freeing SMP alternatives memory: 32K Aug 13 01:44:23.939928 kernel: pid_max: default: 32768 minimum: 301 Aug 13 01:44:23.939935 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Aug 13 01:44:23.939942 kernel: landlock: Up and running. Aug 13 01:44:23.939950 kernel: SELinux: Initializing. Aug 13 01:44:23.939957 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 13 01:44:23.939964 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 13 01:44:23.939973 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Aug 13 01:44:23.939981 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Aug 13 01:44:23.939988 kernel: ... version: 0 Aug 13 01:44:23.939995 kernel: ... bit width: 48 Aug 13 01:44:23.940002 kernel: ... generic registers: 6 Aug 13 01:44:23.940009 kernel: ... value mask: 0000ffffffffffff Aug 13 01:44:23.940016 kernel: ... max period: 00007fffffffffff Aug 13 01:44:23.940023 kernel: ... fixed-purpose events: 0 Aug 13 01:44:23.940031 kernel: ... event mask: 000000000000003f Aug 13 01:44:23.940040 kernel: signal: max sigframe size: 3376 Aug 13 01:44:23.940047 kernel: rcu: Hierarchical SRCU implementation. Aug 13 01:44:23.940064 kernel: rcu: Max phase no-delay instances is 400. 
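The Spectre/SSB/SRSO/TSA lines above have runtime counterparts under /sys/devices/system/cpu/vulnerabilities/, one file per issue. A minimal sketch for checking them after boot (assumes a Linux host; the exact set of files varies by kernel version):

    from pathlib import Path

    VULN_DIR = Path("/sys/devices/system/cpu/vulnerabilities")

    def mitigation_report():
        """Return {vulnerability: status} as reported by the running kernel."""
        report = {}
        if VULN_DIR.is_dir():
            for entry in sorted(VULN_DIR.iterdir()):
                # Each file holds one line, e.g. "Mitigation: Retpolines".
                report[entry.name] = entry.read_text().strip()
        return report

    if __name__ == "__main__":
        for name, status in mitigation_report().items():
            print(f"{name}: {status}")

On this guest the corresponding entries would be expected to match the boot messages above (for example the Safe RET status reported for Speculative Return Stack Overflow).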
Aug 13 01:44:23.940084 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Aug 13 01:44:23.940098 kernel: smp: Bringing up secondary CPUs ... Aug 13 01:44:23.940105 kernel: smpboot: x86: Booting SMP configuration: Aug 13 01:44:23.940112 kernel: .... node #0, CPUs: #1 Aug 13 01:44:23.940120 kernel: smp: Brought up 1 node, 2 CPUs Aug 13 01:44:23.940127 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS) Aug 13 01:44:23.940134 kernel: Memory: 3961808K/4193772K available (14336K kernel code, 2430K rwdata, 9960K rodata, 54444K init, 2524K bss, 227288K reserved, 0K cma-reserved) Aug 13 01:44:23.940144 kernel: devtmpfs: initialized Aug 13 01:44:23.940151 kernel: x86/mm: Memory block size: 128MB Aug 13 01:44:23.940158 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 13 01:44:23.940165 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 13 01:44:23.940172 kernel: pinctrl core: initialized pinctrl subsystem Aug 13 01:44:23.940180 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 13 01:44:23.940187 kernel: audit: initializing netlink subsys (disabled) Aug 13 01:44:23.940194 kernel: audit: type=2000 audit(1755049459.602:1): state=initialized audit_enabled=0 res=1 Aug 13 01:44:23.940204 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 13 01:44:23.940211 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 13 01:44:23.940218 kernel: cpuidle: using governor menu Aug 13 01:44:23.940225 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 13 01:44:23.940232 kernel: dca service started, version 1.12.1 Aug 13 01:44:23.940239 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Aug 13 01:44:23.940247 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Aug 13 01:44:23.940254 kernel: PCI: Using configuration type 1 for base access Aug 13 01:44:23.940262 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Aug 13 01:44:23.940271 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 13 01:44:23.940279 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Aug 13 01:44:23.940286 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 01:44:23.940293 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Aug 13 01:44:23.940300 kernel: ACPI: Added _OSI(Module Device) Aug 13 01:44:23.940307 kernel: ACPI: Added _OSI(Processor Device) Aug 13 01:44:23.940324 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 01:44:23.940340 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 13 01:44:23.940347 kernel: ACPI: Interpreter enabled Aug 13 01:44:23.940356 kernel: ACPI: PM: (supports S0 S3 S5) Aug 13 01:44:23.940363 kernel: ACPI: Using IOAPIC for interrupt routing Aug 13 01:44:23.940370 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 13 01:44:23.940377 kernel: PCI: Using E820 reservations for host bridge windows Aug 13 01:44:23.940385 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Aug 13 01:44:23.940392 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Aug 13 01:44:23.940578 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Aug 13 01:44:23.942279 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Aug 13 01:44:23.942407 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Aug 13 01:44:23.942419 kernel: PCI host bridge to bus 0000:00 Aug 13 01:44:23.942540 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Aug 13 01:44:23.942658 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Aug 13 01:44:23.943813 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Aug 13 01:44:23.943918 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Aug 13 01:44:23.944030 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Aug 13 01:44:23.944147 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window] Aug 13 01:44:23.944253 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Aug 13 01:44:23.944396 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Aug 13 01:44:23.944549 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Aug 13 01:44:23.945558 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Aug 13 01:44:23.945773 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Aug 13 01:44:23.945917 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Aug 13 01:44:23.946062 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Aug 13 01:44:23.946207 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint Aug 13 01:44:23.946334 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f] Aug 13 01:44:23.946460 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Aug 13 01:44:23.946600 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Aug 13 01:44:23.947769 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Aug 13 01:44:23.947894 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f] Aug 13 01:44:23.948003 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff] Aug 13 01:44:23.948110 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit 
pref] Aug 13 01:44:23.948232 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref] Aug 13 01:44:23.948355 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Aug 13 01:44:23.948464 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Aug 13 01:44:23.973821 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Aug 13 01:44:23.973997 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df] Aug 13 01:44:23.974114 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff] Aug 13 01:44:23.974389 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Aug 13 01:44:23.974507 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Aug 13 01:44:23.974518 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Aug 13 01:44:23.974526 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Aug 13 01:44:23.974534 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Aug 13 01:44:23.974547 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Aug 13 01:44:23.974554 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Aug 13 01:44:23.974562 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Aug 13 01:44:23.974569 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Aug 13 01:44:23.974576 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Aug 13 01:44:23.974584 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Aug 13 01:44:23.974591 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Aug 13 01:44:23.974598 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Aug 13 01:44:23.974606 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Aug 13 01:44:23.974615 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Aug 13 01:44:23.974623 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Aug 13 01:44:23.974630 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Aug 13 01:44:23.974637 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Aug 13 01:44:23.974665 kernel: iommu: Default domain type: Translated Aug 13 01:44:23.974673 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 13 01:44:23.974681 kernel: PCI: Using ACPI for IRQ routing Aug 13 01:44:23.974688 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 13 01:44:23.974696 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff] Aug 13 01:44:23.974707 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Aug 13 01:44:23.974824 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Aug 13 01:44:23.974932 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Aug 13 01:44:23.975037 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Aug 13 01:44:23.975048 kernel: vgaarb: loaded Aug 13 01:44:23.975056 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Aug 13 01:44:23.975063 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Aug 13 01:44:23.975071 kernel: clocksource: Switched to clocksource kvm-clock Aug 13 01:44:23.975081 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 01:44:23.975089 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 01:44:23.975097 kernel: pnp: PnP ACPI init Aug 13 01:44:23.975258 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Aug 13 01:44:23.975272 kernel: pnp: PnP ACPI: found 5 devices Aug 13 01:44:23.975281 kernel: clocksource: acpi_pm: mask: 0xffffff 
max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 13 01:44:23.975288 kernel: NET: Registered PF_INET protocol family Aug 13 01:44:23.975296 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 13 01:44:23.975303 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Aug 13 01:44:23.975314 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 01:44:23.975322 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 13 01:44:23.975350 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Aug 13 01:44:23.975358 kernel: TCP: Hash tables configured (established 32768 bind 32768) Aug 13 01:44:23.975366 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 13 01:44:23.975373 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 13 01:44:23.975380 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 01:44:23.975388 kernel: NET: Registered PF_XDP protocol family Aug 13 01:44:23.975505 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Aug 13 01:44:23.975606 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Aug 13 01:44:23.975732 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Aug 13 01:44:23.975833 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Aug 13 01:44:23.975931 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Aug 13 01:44:23.976115 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window] Aug 13 01:44:23.976127 kernel: PCI: CLS 0 bytes, default 64 Aug 13 01:44:23.976134 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Aug 13 01:44:23.976142 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB) Aug 13 01:44:23.976154 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns Aug 13 01:44:23.976162 kernel: Initialise system trusted keyrings Aug 13 01:44:23.976170 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Aug 13 01:44:23.976177 kernel: Key type asymmetric registered Aug 13 01:44:23.976186 kernel: Asymmetric key parser 'x509' registered Aug 13 01:44:23.976193 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Aug 13 01:44:23.976201 kernel: io scheduler mq-deadline registered Aug 13 01:44:23.976209 kernel: io scheduler kyber registered Aug 13 01:44:23.976216 kernel: io scheduler bfq registered Aug 13 01:44:23.976226 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 13 01:44:23.976234 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Aug 13 01:44:23.976241 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Aug 13 01:44:23.976295 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 01:44:23.976303 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 13 01:44:23.976311 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Aug 13 01:44:23.976318 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Aug 13 01:44:23.976326 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Aug 13 01:44:23.976448 kernel: rtc_cmos 00:03: RTC can wake from S4 Aug 13 01:44:23.976464 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Aug 13 01:44:23.976565 kernel: rtc_cmos 00:03: registered as rtc0 Aug 13 01:44:23.976686 kernel: rtc_cmos 00:03: setting system clock to 
2025-08-13T01:44:23 UTC (1755049463) Aug 13 01:44:23.976799 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Aug 13 01:44:23.976810 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Aug 13 01:44:23.976818 kernel: NET: Registered PF_INET6 protocol family Aug 13 01:44:23.976825 kernel: Segment Routing with IPv6 Aug 13 01:44:23.976835 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 01:44:23.976843 kernel: NET: Registered PF_PACKET protocol family Aug 13 01:44:23.976851 kernel: Key type dns_resolver registered Aug 13 01:44:23.976858 kernel: IPI shorthand broadcast: enabled Aug 13 01:44:23.976866 kernel: sched_clock: Marking stable (4589031690, 222776723)->(4878141610, -66333197) Aug 13 01:44:23.976873 kernel: registered taskstats version 1 Aug 13 01:44:23.976881 kernel: Loading compiled-in X.509 certificates Aug 13 01:44:23.976888 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.40-flatcar: dee0b464d3f7f8d09744a2392f69dde258bc95c0' Aug 13 01:44:23.976895 kernel: Demotion targets for Node 0: null Aug 13 01:44:23.976905 kernel: Key type .fscrypt registered Aug 13 01:44:23.976912 kernel: Key type fscrypt-provisioning registered Aug 13 01:44:23.976920 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 13 01:44:23.976927 kernel: ima: Allocated hash algorithm: sha1 Aug 13 01:44:23.976934 kernel: ima: No architecture policies found Aug 13 01:44:23.976942 kernel: clk: Disabling unused clocks Aug 13 01:44:23.976950 kernel: Warning: unable to open an initial console. Aug 13 01:44:23.976958 kernel: Freeing unused kernel image (initmem) memory: 54444K Aug 13 01:44:23.976965 kernel: Write protecting the kernel read-only data: 24576k Aug 13 01:44:23.976974 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K Aug 13 01:44:23.976982 kernel: Run /init as init process Aug 13 01:44:23.976989 kernel: with arguments: Aug 13 01:44:23.976997 kernel: /init Aug 13 01:44:23.977004 kernel: with environment: Aug 13 01:44:23.977012 kernel: HOME=/ Aug 13 01:44:23.977035 kernel: TERM=linux Aug 13 01:44:23.977045 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 01:44:23.977054 systemd[1]: Successfully made /usr/ read-only. Aug 13 01:44:23.977068 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 13 01:44:23.977077 systemd[1]: Detected virtualization kvm. Aug 13 01:44:23.977085 systemd[1]: Detected architecture x86-64. Aug 13 01:44:23.977094 systemd[1]: Running in initrd. Aug 13 01:44:23.977101 systemd[1]: No hostname configured, using default hostname. Aug 13 01:44:23.977110 systemd[1]: Hostname set to . Aug 13 01:44:23.977118 systemd[1]: Initializing machine ID from random generator. Aug 13 01:44:23.977128 systemd[1]: Queued start job for default target initrd.target. Aug 13 01:44:23.977137 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 01:44:23.977145 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 01:44:23.978080 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Aug 13 01:44:23.978091 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 01:44:23.978100 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 13 01:44:23.978109 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 13 01:44:23.978124 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 13 01:44:23.978132 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 13 01:44:23.978141 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 01:44:23.978149 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 01:44:23.978157 systemd[1]: Reached target paths.target - Path Units. Aug 13 01:44:23.978165 systemd[1]: Reached target slices.target - Slice Units. Aug 13 01:44:23.978174 systemd[1]: Reached target swap.target - Swaps. Aug 13 01:44:23.978182 systemd[1]: Reached target timers.target - Timer Units. Aug 13 01:44:23.978192 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 01:44:23.978202 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 01:44:23.978211 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 13 01:44:23.978219 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Aug 13 01:44:23.978228 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 01:44:23.978236 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 01:44:23.978245 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 01:44:23.978254 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 01:44:23.978264 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 13 01:44:23.978273 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 01:44:23.978282 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 13 01:44:23.978291 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Aug 13 01:44:23.978299 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 01:44:23.978308 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 01:44:23.978319 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 01:44:23.978327 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 01:44:23.978335 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 13 01:44:23.978344 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 01:44:23.978355 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 01:44:23.978394 systemd-journald[205]: Collecting audit messages is disabled. Aug 13 01:44:23.978417 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 01:44:23.978427 systemd-journald[205]: Journal started Aug 13 01:44:23.978449 systemd-journald[205]: Runtime Journal (/run/log/journal/a37333d773f34cddac0718f901593861) is 8M, max 78.5M, 70.5M free. 
Aug 13 01:44:23.940359 systemd-modules-load[207]: Inserted module 'overlay' Aug 13 01:44:24.122179 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 01:44:24.122211 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 01:44:24.122226 kernel: Bridge firewalling registered Aug 13 01:44:24.028699 systemd-modules-load[207]: Inserted module 'br_netfilter' Aug 13 01:44:24.031374 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 01:44:24.122958 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 01:44:24.131821 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 01:44:24.133478 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 01:44:24.140765 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 01:44:24.141185 systemd-tmpfiles[218]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Aug 13 01:44:24.144751 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 01:44:24.147068 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 01:44:24.153840 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 01:44:24.177858 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 01:44:24.180350 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 01:44:24.184801 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 01:44:24.186209 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 01:44:24.188790 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 13 01:44:24.212057 dracut-cmdline[245]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21 Aug 13 01:44:24.232760 systemd-resolved[244]: Positive Trust Anchors: Aug 13 01:44:24.233528 systemd-resolved[244]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 01:44:24.233580 systemd-resolved[244]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 01:44:24.239049 systemd-resolved[244]: Defaulting to hostname 'linux'. Aug 13 01:44:24.240220 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Aug 13 01:44:24.241021 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 01:44:24.316719 kernel: SCSI subsystem initialized Aug 13 01:44:24.325741 kernel: Loading iSCSI transport class v2.0-870. Aug 13 01:44:24.336700 kernel: iscsi: registered transport (tcp) Aug 13 01:44:24.376711 kernel: iscsi: registered transport (qla4xxx) Aug 13 01:44:24.376832 kernel: QLogic iSCSI HBA Driver Aug 13 01:44:24.399354 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 01:44:24.413712 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 01:44:24.416755 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 01:44:24.485274 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 13 01:44:24.487549 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 13 01:44:24.545696 kernel: raid6: avx2x4 gen() 24206 MB/s Aug 13 01:44:24.563685 kernel: raid6: avx2x2 gen() 22535 MB/s Aug 13 01:44:24.582072 kernel: raid6: avx2x1 gen() 13986 MB/s Aug 13 01:44:24.582161 kernel: raid6: using algorithm avx2x4 gen() 24206 MB/s Aug 13 01:44:24.601032 kernel: raid6: .... xor() 3024 MB/s, rmw enabled Aug 13 01:44:24.601143 kernel: raid6: using avx2x2 recovery algorithm Aug 13 01:44:24.642709 kernel: xor: automatically using best checksumming function avx Aug 13 01:44:24.819709 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 13 01:44:24.829070 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 13 01:44:24.831555 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 01:44:24.855154 systemd-udevd[454]: Using default interface naming scheme 'v255'. Aug 13 01:44:24.860836 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 01:44:24.864937 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 13 01:44:24.892500 dracut-pre-trigger[468]: rd.md=0: removing MD RAID activation Aug 13 01:44:24.926367 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 01:44:24.928507 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 01:44:25.001324 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 01:44:25.005157 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 13 01:44:25.109730 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues Aug 13 01:44:25.256671 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 01:44:25.256753 kernel: scsi host0: Virtio SCSI HBA Aug 13 01:44:25.267692 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Aug 13 01:44:25.290676 kernel: libata version 3.00 loaded. Aug 13 01:44:25.297704 kernel: ahci 0000:00:1f.2: version 3.0 Aug 13 01:44:25.300140 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 01:44:25.300279 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Aug 13 01:44:25.314817 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Aug 13 01:44:25.314852 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Aug 13 01:44:25.315045 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Aug 13 01:44:25.315177 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Aug 13 01:44:25.302742 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 01:44:25.319727 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 01:44:25.333102 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 13 01:44:25.340789 kernel: AES CTR mode by8 optimization enabled Aug 13 01:44:25.340833 kernel: scsi host1: ahci Aug 13 01:44:25.343516 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Aug 13 01:44:25.370672 kernel: scsi host2: ahci Aug 13 01:44:25.375717 kernel: scsi host3: ahci Aug 13 01:44:25.381683 kernel: scsi host4: ahci Aug 13 01:44:25.386674 kernel: scsi host5: ahci Aug 13 01:44:25.390672 kernel: scsi host6: ahci Aug 13 01:44:25.402068 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 lpm-pol 0 Aug 13 01:44:25.402096 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 lpm-pol 0 Aug 13 01:44:25.402108 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 lpm-pol 0 Aug 13 01:44:25.410811 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 lpm-pol 0 Aug 13 01:44:25.410840 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 lpm-pol 0 Aug 13 01:44:25.410852 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 lpm-pol 0 Aug 13 01:44:25.433143 kernel: sd 0:0:0:0: Power-on or device reset occurred Aug 13 01:44:25.436857 kernel: sd 0:0:0:0: [sda] 9297920 512-byte logical blocks: (4.76 GB/4.43 GiB) Aug 13 01:44:25.437035 kernel: sd 0:0:0:0: [sda] Write Protect is off Aug 13 01:44:25.437174 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Aug 13 01:44:25.440676 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Aug 13 01:44:25.448700 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 13 01:44:25.448724 kernel: GPT:9289727 != 9297919 Aug 13 01:44:25.448736 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 13 01:44:25.448747 kernel: GPT:9289727 != 9297919 Aug 13 01:44:25.448758 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 13 01:44:25.448768 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 01:44:25.448786 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Aug 13 01:44:25.534012 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 01:44:25.712683 kernel: ata1: SATA link down (SStatus 0 SControl 300) Aug 13 01:44:25.721664 kernel: ata6: SATA link down (SStatus 0 SControl 300) Aug 13 01:44:25.721701 kernel: ata2: SATA link down (SStatus 0 SControl 300) Aug 13 01:44:25.724800 kernel: ata4: SATA link down (SStatus 0 SControl 300) Aug 13 01:44:25.724822 kernel: ata3: SATA link down (SStatus 0 SControl 300) Aug 13 01:44:25.725667 kernel: ata5: SATA link down (SStatus 0 SControl 300) Aug 13 01:44:25.776791 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Aug 13 01:44:25.785776 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. 
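The GPT warnings above ("Primary header thinks Alt. header is not at the end of the disk", 9289727 != 9297919) mean the backup GPT header is not at the device's last sector, which typically happens when an image is written to a disk larger than the one the partition table was created for. A minimal sketch of the same check, reading the primary header at LBA 1 (assumes 512-byte sectors, matching the "512-byte logical blocks" line above, and read access to the block device):

    import os
    import struct

    SECTOR = 512  # matches the "512-byte logical blocks" reported above

    def gpt_alternate_lba(device: str):
        """Compare the alternate-header LBA recorded in the primary GPT
        header (LBA 1, field at offset 32) with the device's actual last LBA."""
        with open(device, "rb") as disk:
            disk.seek(1 * SECTOR)
            header = disk.read(92)
            if header[0:8] != b"EFI PART":
                raise ValueError("no GPT signature at LBA 1")
            alternate_lba = struct.unpack_from("<Q", header, 32)[0]
            disk.seek(0, os.SEEK_END)
            last_lba = disk.tell() // SECTOR - 1
        return alternate_lba, last_lba

    # e.g. gpt_alternate_lba("/dev/sda") -> (9289727, 9297919) on this guest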
Aug 13 01:44:25.806800 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Aug 13 01:44:25.807630 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 13 01:44:25.816098 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Aug 13 01:44:25.816708 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Aug 13 01:44:25.819044 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 01:44:25.819633 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 01:44:25.821002 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 01:44:25.823760 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 13 01:44:25.825250 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 13 01:44:25.835786 disk-uuid[634]: Primary Header is updated. Aug 13 01:44:25.835786 disk-uuid[634]: Secondary Entries is updated. Aug 13 01:44:25.835786 disk-uuid[634]: Secondary Header is updated. Aug 13 01:44:25.843445 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 13 01:44:25.847772 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 01:44:26.863685 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 01:44:26.864319 disk-uuid[636]: The operation has completed successfully. Aug 13 01:44:26.915843 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 01:44:26.915970 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 13 01:44:26.946562 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 13 01:44:26.959993 sh[656]: Success Aug 13 01:44:26.977986 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 01:44:26.978044 kernel: device-mapper: uevent: version 1.0.3 Aug 13 01:44:26.981082 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Aug 13 01:44:26.990683 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Aug 13 01:44:27.058609 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 13 01:44:27.061077 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 13 01:44:27.081449 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 13 01:44:27.094481 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Aug 13 01:44:27.094535 kernel: BTRFS: device fsid 0c0338fb-9434-41c1-99a2-737cbe2351c4 devid 1 transid 44 /dev/mapper/usr (254:0) scanned by mount (668) Aug 13 01:44:27.097994 kernel: BTRFS info (device dm-0): first mount of filesystem 0c0338fb-9434-41c1-99a2-737cbe2351c4 Aug 13 01:44:27.098020 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 13 01:44:27.099769 kernel: BTRFS info (device dm-0): using free-space-tree Aug 13 01:44:27.111033 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 13 01:44:27.112153 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Aug 13 01:44:27.112894 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 13 01:44:27.113730 systemd[1]: Starting ignition-setup.service - Ignition (setup)... 
Aug 13 01:44:27.116767 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 13 01:44:27.152154 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (703) Aug 13 01:44:27.152224 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 01:44:27.155633 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 01:44:27.155672 kernel: BTRFS info (device sda6): using free-space-tree Aug 13 01:44:27.167722 kernel: BTRFS info (device sda6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 01:44:27.169628 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 13 01:44:27.171334 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 13 01:44:27.303846 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 01:44:27.333836 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 01:44:27.424639 systemd-networkd[837]: lo: Link UP Aug 13 01:44:27.424680 systemd-networkd[837]: lo: Gained carrier Aug 13 01:44:27.442694 systemd-networkd[837]: Enumeration completed Aug 13 01:44:27.442865 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 01:44:27.443886 systemd-networkd[837]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 01:44:27.443890 systemd-networkd[837]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 01:44:27.451489 systemd[1]: Reached target network.target - Network. Aug 13 01:44:27.451988 systemd-networkd[837]: eth0: Link UP Aug 13 01:44:27.452757 systemd-networkd[837]: eth0: Gained carrier Aug 13 01:44:27.452779 systemd-networkd[837]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 01:44:27.478605 ignition[770]: Ignition 2.21.0 Aug 13 01:44:27.478626 ignition[770]: Stage: fetch-offline Aug 13 01:44:27.478747 ignition[770]: no configs at "/usr/lib/ignition/base.d" Aug 13 01:44:27.478760 ignition[770]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:44:27.478867 ignition[770]: parsed url from cmdline: "" Aug 13 01:44:27.481210 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 01:44:27.478874 ignition[770]: no config URL provided Aug 13 01:44:27.478882 ignition[770]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 01:44:27.478898 ignition[770]: no config at "/usr/lib/ignition/user.ign" Aug 13 01:44:27.478904 ignition[770]: failed to fetch config: resource requires networking Aug 13 01:44:27.479175 ignition[770]: Ignition finished successfully Aug 13 01:44:27.484839 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Aug 13 01:44:27.586583 ignition[845]: Ignition 2.21.0 Aug 13 01:44:27.586600 ignition[845]: Stage: fetch Aug 13 01:44:27.586778 ignition[845]: no configs at "/usr/lib/ignition/base.d" Aug 13 01:44:27.586789 ignition[845]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:44:27.586886 ignition[845]: parsed url from cmdline: "" Aug 13 01:44:27.586891 ignition[845]: no config URL provided Aug 13 01:44:27.586896 ignition[845]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 01:44:27.586904 ignition[845]: no config at "/usr/lib/ignition/user.ign" Aug 13 01:44:27.586943 ignition[845]: PUT http://169.254.169.254/v1/token: attempt #1 Aug 13 01:44:27.600621 ignition[845]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Aug 13 01:44:27.801162 ignition[845]: PUT http://169.254.169.254/v1/token: attempt #2 Aug 13 01:44:27.801436 ignition[845]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Aug 13 01:44:28.066762 systemd-networkd[837]: eth0: DHCPv4 address 172.232.7.32/24, gateway 172.232.7.1 acquired from 23.40.196.251 Aug 13 01:44:28.201603 ignition[845]: PUT http://169.254.169.254/v1/token: attempt #3 Aug 13 01:44:28.311868 ignition[845]: PUT result: OK Aug 13 01:44:28.311965 ignition[845]: GET http://169.254.169.254/v1/user-data: attempt #1 Aug 13 01:44:28.443057 ignition[845]: GET result: OK Aug 13 01:44:28.443858 ignition[845]: parsing config with SHA512: 921dafca17d81d6d995f0a4a87c310cfe23801bfd5e8248592c5831f5b558ab7763fd2570340836c9c1067d03b8d7a1206c4d03a7501ed04d9ac33ad0bf547f7 Aug 13 01:44:28.452166 unknown[845]: fetched base config from "system" Aug 13 01:44:28.452182 unknown[845]: fetched base config from "system" Aug 13 01:44:28.452622 ignition[845]: fetch: fetch complete Aug 13 01:44:28.452192 unknown[845]: fetched user config from "akamai" Aug 13 01:44:28.452632 ignition[845]: fetch: fetch passed Aug 13 01:44:28.452721 ignition[845]: Ignition finished successfully Aug 13 01:44:28.457300 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 13 01:44:28.480949 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 13 01:44:28.537743 ignition[853]: Ignition 2.21.0 Aug 13 01:44:28.537760 ignition[853]: Stage: kargs Aug 13 01:44:28.537913 ignition[853]: no configs at "/usr/lib/ignition/base.d" Aug 13 01:44:28.540540 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 13 01:44:28.537924 ignition[853]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:44:28.538519 ignition[853]: kargs: kargs passed Aug 13 01:44:28.538567 ignition[853]: Ignition finished successfully Aug 13 01:44:28.544793 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 13 01:44:28.568911 ignition[859]: Ignition 2.21.0 Aug 13 01:44:28.568928 ignition[859]: Stage: disks Aug 13 01:44:28.569069 ignition[859]: no configs at "/usr/lib/ignition/base.d" Aug 13 01:44:28.569080 ignition[859]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:44:28.572301 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 13 01:44:28.570482 ignition[859]: disks: disks passed Aug 13 01:44:28.573927 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 13 01:44:28.570540 ignition[859]: Ignition finished successfully Aug 13 01:44:28.574861 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. 
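The fetch stage above shows Ignition's two-step exchange with the Akamai/Linode metadata service: a PUT to http://169.254.169.254/v1/token (which fails with "network is unreachable" until the DHCPv4 lease on eth0 arrives), then a GET of /v1/user-data using that token. A minimal sketch of the same flow; the two URLs are taken from the log, while the header names follow Linode's documented metadata API and are assumptions rather than something the log itself shows:

    import urllib.request

    METADATA = "http://169.254.169.254"

    def fetch_user_data() -> bytes:
        """PUT /v1/token, then GET /v1/user-data with the returned token.
        Header names are assumed from Linode's metadata docs, not the log."""
        token_req = urllib.request.Request(
            f"{METADATA}/v1/token",
            method="PUT",
            headers={"Metadata-Token-Expiry-Seconds": "3600"},
        )
        with urllib.request.urlopen(token_req, timeout=10) as resp:
            token = resp.read().decode()

        data_req = urllib.request.Request(
            f"{METADATA}/v1/user-data",
            headers={"Metadata-Token": token},
        )
        with urllib.request.urlopen(data_req, timeout=10) as resp:
            return resp.read()

Only after eth0 obtains 172.232.7.32/24 does attempt #3 return "PUT result: OK", which is why Ignition retries the token request instead of failing outright.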
Aug 13 01:44:28.576006 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 01:44:28.577455 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 01:44:28.578614 systemd[1]: Reached target basic.target - Basic System. Aug 13 01:44:28.581077 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 13 01:44:28.613706 systemd-fsck[867]: ROOT: clean, 15/553520 files, 52789/553472 blocks Aug 13 01:44:28.617884 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 13 01:44:28.619756 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 13 01:44:28.774685 kernel: EXT4-fs (sda9): mounted filesystem 069caac6-7833-4acd-8940-01a7ff7d1281 r/w with ordered data mode. Quota mode: none. Aug 13 01:44:28.776206 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 13 01:44:28.777505 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 13 01:44:28.780712 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 01:44:28.783726 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 13 01:44:28.785062 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Aug 13 01:44:28.785113 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 01:44:28.785142 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 01:44:28.804050 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 13 01:44:28.806796 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 13 01:44:28.813681 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (875) Aug 13 01:44:28.817989 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 01:44:28.818043 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 01:44:28.818058 kernel: BTRFS info (device sda6): using free-space-tree Aug 13 01:44:28.825220 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 01:44:28.873434 initrd-setup-root[899]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 01:44:28.880465 initrd-setup-root[906]: cut: /sysroot/etc/group: No such file or directory Aug 13 01:44:28.887930 initrd-setup-root[913]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 01:44:28.892415 initrd-setup-root[920]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 01:44:28.990345 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 13 01:44:28.992424 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 13 01:44:28.994281 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 13 01:44:29.007428 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 13 01:44:29.020678 kernel: BTRFS info (device sda6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 01:44:29.047388 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
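The fsck summary above ("ROOT: clean, 15/553520 files, 52789/553472 blocks") and the EXT4 mount of sda9 on /sysroot can be cross-checked on a running system with statvfs, which reports roughly the same block and inode accounting. A small sketch (the mount point argument is whatever the filesystem is mounted on; /sysroot only exists inside the initrd):

    import os

    def fs_usage(mountpoint: str = "/"):
        """Report block and inode usage for a mounted filesystem."""
        st = os.statvfs(mountpoint)
        return {
            "block_size": st.f_frsize,
            "blocks_used": st.f_blocks - st.f_bfree,
            "blocks_total": st.f_blocks,
            "inodes_used": st.f_files - st.f_ffree,
            "inodes_total": st.f_files,
        }

    # print(fs_usage())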
Aug 13 01:44:29.053052 ignition[987]: INFO : Ignition 2.21.0 Aug 13 01:44:29.055673 ignition[987]: INFO : Stage: mount Aug 13 01:44:29.055673 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 01:44:29.055673 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:44:29.057777 ignition[987]: INFO : mount: mount passed Aug 13 01:44:29.057777 ignition[987]: INFO : Ignition finished successfully Aug 13 01:44:29.059665 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 13 01:44:29.062321 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 13 01:44:29.136904 systemd-networkd[837]: eth0: Gained IPv6LL Aug 13 01:44:29.777631 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 01:44:29.803707 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (999) Aug 13 01:44:29.808833 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 01:44:29.808880 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 01:44:29.808897 kernel: BTRFS info (device sda6): using free-space-tree Aug 13 01:44:29.816628 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 01:44:29.868537 ignition[1015]: INFO : Ignition 2.21.0 Aug 13 01:44:29.868537 ignition[1015]: INFO : Stage: files Aug 13 01:44:29.870242 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 01:44:29.870242 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:44:29.889371 ignition[1015]: DEBUG : files: compiled without relabeling support, skipping Aug 13 01:44:29.891778 ignition[1015]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 01:44:29.891778 ignition[1015]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 01:44:29.894424 ignition[1015]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 01:44:29.895481 ignition[1015]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 01:44:29.896723 ignition[1015]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 01:44:29.896262 unknown[1015]: wrote ssh authorized keys file for user: core Aug 13 01:44:29.898602 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 13 01:44:29.899640 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz: attempt #1 Aug 13 01:44:30.255275 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 13 01:44:31.499326 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.13.2-linux-amd64.tar.gz" Aug 13 01:44:31.499326 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Aug 13 01:44:31.502242 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 01:44:31.502242 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 01:44:31.502242 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file 
"/sysroot/home/core/nginx.yaml" Aug 13 01:44:31.502242 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 01:44:31.502242 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 01:44:31.502242 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 01:44:31.502242 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 01:44:31.502242 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 01:44:31.502242 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 01:44:31.502242 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 01:44:31.511411 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 01:44:31.511411 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 01:44:31.511411 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.31.8-x86-64.raw: attempt #1 Aug 13 01:44:31.994576 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Aug 13 01:44:32.877243 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw" Aug 13 01:44:32.877243 ignition[1015]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Aug 13 01:44:32.880193 ignition[1015]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 01:44:32.880193 ignition[1015]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 01:44:32.880193 ignition[1015]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Aug 13 01:44:32.880193 ignition[1015]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Aug 13 01:44:32.880193 ignition[1015]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Aug 13 01:44:32.885897 ignition[1015]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Aug 13 01:44:32.885897 ignition[1015]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Aug 13 01:44:32.885897 ignition[1015]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Aug 13 01:44:32.885897 ignition[1015]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 01:44:32.885897 
ignition[1015]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 01:44:32.885897 ignition[1015]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 01:44:32.885897 ignition[1015]: INFO : files: files passed Aug 13 01:44:32.885897 ignition[1015]: INFO : Ignition finished successfully Aug 13 01:44:32.887502 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 13 01:44:32.891814 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 13 01:44:32.897805 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 13 01:44:32.932694 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 01:44:32.933594 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 13 01:44:32.941017 initrd-setup-root-after-ignition[1046]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 01:44:32.942199 initrd-setup-root-after-ignition[1046]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 13 01:44:32.943607 initrd-setup-root-after-ignition[1050]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 01:44:32.946075 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 01:44:32.947478 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 13 01:44:32.949162 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 13 01:44:33.002581 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 01:44:33.002775 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 13 01:44:33.004169 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 13 01:44:33.005079 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 13 01:44:33.006402 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 13 01:44:33.007305 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 13 01:44:33.047059 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 01:44:33.049471 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 13 01:44:33.071785 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 13 01:44:33.072700 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 01:44:33.074205 systemd[1]: Stopped target timers.target - Timer Units. Aug 13 01:44:33.075570 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 01:44:33.075847 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 01:44:33.077221 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 13 01:44:33.078083 systemd[1]: Stopped target basic.target - Basic System. Aug 13 01:44:33.079469 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 13 01:44:33.080756 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 01:44:33.081973 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 13 01:44:33.083365 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. 
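The files stage that finishes above wrote the core user's SSH keys, several files under /sysroot (install.sh, nginx.yaml, nfs-pod.yaml, nfs-pvc.yaml, /etc/flatcar/update.conf, the helm tarball, the kubernetes sysext image), a symlink under /etc/extensions, the prepare-helm.service unit plus a coreos-metadata drop-in, and an enablement preset. As a rough illustration, the Python snippet below builds the kind of Ignition config (spec v3.x JSON) that requests such operations; the spec version, file contents, URLs, and key string are placeholders and assumptions, since the instance's real user-data is not reproduced in the log.

```python
# Sketch of an Ignition-style config (spec v3.x JSON) requesting the kinds
# of operations logged above: an SSH key for "core", a downloaded file, a
# sysext symlink, and an enabled unit. All contents below are placeholders.
import json

config = {
    "ignition": {"version": "3.4.0"},
    "passwd": {
        "users": [{"name": "core", "sshAuthorizedKeys": ["ssh-ed25519 AAAA... placeholder"]}]
    },
    "storage": {
        "files": [
            {
                "path": "/opt/helm-v3.13.2-linux-amd64.tar.gz",
                "contents": {"source": "https://get.helm.sh/helm-v3.13.2-linux-amd64.tar.gz"},
            }
        ],
        "links": [
            {
                "path": "/etc/extensions/kubernetes.raw",
                "target": "/opt/extensions/kubernetes/kubernetes-v1.31.8-x86-64.raw",
            }
        ],
    },
    "systemd": {
        "units": [
            {"name": "prepare-helm.service", "enabled": True, "contents": "[Unit]\n..."}
        ]
    },
}

print(json.dumps(config, indent=2))
```

Each op(N) entry in the log corresponds to one such declarative item being applied against /sysroot before the pivot to the real root.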
Aug 13 01:44:33.084735 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 13 01:44:33.086050 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 01:44:33.087387 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 13 01:44:33.088924 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 13 01:44:33.090106 systemd[1]: Stopped target swap.target - Swaps. Aug 13 01:44:33.091466 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 01:44:33.091673 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 13 01:44:33.092924 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 13 01:44:33.093716 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 01:44:33.094766 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 13 01:44:33.095401 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 01:44:33.096826 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 01:44:33.096976 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 13 01:44:33.098422 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 01:44:33.098598 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 01:44:33.100113 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 01:44:33.100252 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 13 01:44:33.103732 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 13 01:44:33.104856 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 01:44:33.104989 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 01:44:33.108759 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 13 01:44:33.109772 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 01:44:33.109932 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 01:44:33.111234 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 01:44:33.111333 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 01:44:33.119984 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 01:44:33.120124 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 13 01:44:33.134094 ignition[1070]: INFO : Ignition 2.21.0 Aug 13 01:44:33.134094 ignition[1070]: INFO : Stage: umount Aug 13 01:44:33.136810 ignition[1070]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 01:44:33.136810 ignition[1070]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:44:33.136810 ignition[1070]: INFO : umount: umount passed Aug 13 01:44:33.136810 ignition[1070]: INFO : Ignition finished successfully Aug 13 01:44:33.141921 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 01:44:33.151216 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 01:44:33.151355 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 13 01:44:33.161841 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 01:44:33.161975 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 13 01:44:33.163930 systemd[1]: ignition-disks.service: Deactivated successfully. 
Aug 13 01:44:33.164005 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 13 01:44:33.164891 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 01:44:33.164944 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 13 01:44:33.165934 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 13 01:44:33.165980 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 13 01:44:33.167003 systemd[1]: Stopped target network.target - Network. Aug 13 01:44:33.167983 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 01:44:33.168036 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 01:44:33.169133 systemd[1]: Stopped target paths.target - Path Units. Aug 13 01:44:33.170200 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 01:44:33.173685 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 01:44:33.174323 systemd[1]: Stopped target slices.target - Slice Units. Aug 13 01:44:33.175532 systemd[1]: Stopped target sockets.target - Socket Units. Aug 13 01:44:33.176679 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 01:44:33.176736 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 01:44:33.178004 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 01:44:33.178046 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 01:44:33.179396 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 01:44:33.179486 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 13 01:44:33.180525 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 13 01:44:33.180583 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 13 01:44:33.181676 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 01:44:33.181760 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 01:44:33.182946 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 13 01:44:33.184180 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 13 01:44:33.187776 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 01:44:33.187983 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 13 01:44:33.194005 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Aug 13 01:44:33.194289 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 01:44:33.194433 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 13 01:44:33.196972 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Aug 13 01:44:33.198107 systemd[1]: Stopped target network-pre.target - Preparation for Network. Aug 13 01:44:33.199431 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 01:44:33.199474 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 13 01:44:33.201433 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 13 01:44:33.203004 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 01:44:33.203058 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 01:44:33.205016 systemd[1]: systemd-sysctl.service: Deactivated successfully. 
Aug 13 01:44:33.205064 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 01:44:33.207754 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 01:44:33.207802 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 13 01:44:33.209146 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 13 01:44:33.209198 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 01:44:33.210840 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 01:44:33.215455 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 13 01:44:33.215522 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Aug 13 01:44:33.230642 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 01:44:33.231797 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 01:44:33.234053 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 01:44:33.234244 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 01:44:33.236451 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 01:44:33.236581 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 13 01:44:33.237428 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 01:44:33.237478 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 01:44:33.238798 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 01:44:33.238864 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 13 01:44:33.240560 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 01:44:33.240609 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 13 01:44:33.242023 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 01:44:33.242104 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 01:44:33.245763 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 13 01:44:33.246989 systemd[1]: systemd-network-generator.service: Deactivated successfully. Aug 13 01:44:33.247090 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 01:44:33.249849 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 01:44:33.249938 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 01:44:33.253091 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 01:44:33.253172 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 01:44:33.256103 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Aug 13 01:44:33.256168 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Aug 13 01:44:33.256218 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 13 01:44:33.265330 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 01:44:33.266440 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 13 01:44:33.268791 systemd[1]: Reached target initrd-switch-root.target - Switch Root. 
Aug 13 01:44:33.270546 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 01:44:33.284494 systemd[1]: Switching root. Aug 13 01:44:33.322749 systemd-journald[205]: Journal stopped Aug 13 01:44:34.673001 systemd-journald[205]: Received SIGTERM from PID 1 (systemd). Aug 13 01:44:34.673030 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 01:44:34.673042 kernel: SELinux: policy capability open_perms=1 Aug 13 01:44:34.673054 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 01:44:34.673063 kernel: SELinux: policy capability always_check_network=0 Aug 13 01:44:34.673072 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 01:44:34.673081 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 01:44:34.673091 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 01:44:34.673099 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 01:44:34.673108 kernel: SELinux: policy capability userspace_initial_context=0 Aug 13 01:44:34.673121 kernel: audit: type=1403 audit(1755049473.485:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 01:44:34.673131 systemd[1]: Successfully loaded SELinux policy in 80.902ms. Aug 13 01:44:34.673142 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 12.329ms. Aug 13 01:44:34.673153 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 13 01:44:34.673163 systemd[1]: Detected virtualization kvm. Aug 13 01:44:34.673177 systemd[1]: Detected architecture x86-64. Aug 13 01:44:34.673186 systemd[1]: Detected first boot. Aug 13 01:44:34.673197 systemd[1]: Initializing machine ID from random generator. Aug 13 01:44:34.673207 zram_generator::config[1115]: No configuration found. Aug 13 01:44:34.673218 kernel: Guest personality initialized and is inactive Aug 13 01:44:34.673227 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Aug 13 01:44:34.673237 kernel: Initialized host personality Aug 13 01:44:34.673249 kernel: NET: Registered PF_VSOCK protocol family Aug 13 01:44:34.673259 systemd[1]: Populated /etc with preset unit settings. Aug 13 01:44:34.673270 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Aug 13 01:44:34.673280 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 13 01:44:34.673290 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 13 01:44:34.673300 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 13 01:44:34.673310 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 13 01:44:34.673324 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 13 01:44:34.673335 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 13 01:44:34.673345 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 13 01:44:34.673355 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 13 01:44:34.673365 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 13 01:44:34.673375 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. 
Aug 13 01:44:34.673393 systemd[1]: Created slice user.slice - User and Session Slice. Aug 13 01:44:34.673406 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 01:44:34.673423 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 01:44:34.673434 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 13 01:44:34.673444 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 13 01:44:34.673457 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 13 01:44:34.673468 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 01:44:34.673479 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 13 01:44:34.673489 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 01:44:34.673501 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 01:44:34.673512 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 13 01:44:34.673522 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 13 01:44:34.673532 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 13 01:44:34.673542 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 13 01:44:34.673553 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 01:44:34.673562 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 01:44:34.673573 systemd[1]: Reached target slices.target - Slice Units. Aug 13 01:44:34.673585 systemd[1]: Reached target swap.target - Swaps. Aug 13 01:44:34.673597 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 13 01:44:34.673607 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 13 01:44:34.673617 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Aug 13 01:44:34.673628 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 01:44:34.673640 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 01:44:34.673665 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 01:44:34.673676 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 13 01:44:34.673686 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 13 01:44:34.673696 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 13 01:44:34.673707 systemd[1]: Mounting media.mount - External Media Directory... Aug 13 01:44:34.673717 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:44:34.673727 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 13 01:44:34.673740 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 13 01:44:34.673750 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 13 01:44:34.673761 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 01:44:34.673771 systemd[1]: Reached target machines.target - Containers. 
Aug 13 01:44:34.673782 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 13 01:44:34.673792 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 01:44:34.673803 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 01:44:34.673813 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 13 01:44:34.673825 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 01:44:34.673837 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 01:44:34.673847 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 01:44:34.673857 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 13 01:44:34.673868 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 01:44:34.673878 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 01:44:34.673889 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 13 01:44:34.673899 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 13 01:44:34.673909 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 13 01:44:34.673921 systemd[1]: Stopped systemd-fsck-usr.service. Aug 13 01:44:34.673931 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 01:44:34.673942 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 01:44:34.673952 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 01:44:34.673962 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 01:44:34.673972 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 13 01:44:34.673983 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Aug 13 01:44:34.673993 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 01:44:34.674005 systemd[1]: verity-setup.service: Deactivated successfully. Aug 13 01:44:34.674015 systemd[1]: Stopped verity-setup.service. Aug 13 01:44:34.674026 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:44:34.674036 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 13 01:44:34.674046 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 13 01:44:34.674057 systemd[1]: Mounted media.mount - External Media Directory. Aug 13 01:44:34.674067 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 13 01:44:34.674077 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 13 01:44:34.674090 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 13 01:44:34.674100 kernel: loop: module loaded Aug 13 01:44:34.674109 kernel: fuse: init (API version 7.41) Aug 13 01:44:34.674119 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. 
Aug 13 01:44:34.674129 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 01:44:34.674139 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 01:44:34.674149 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 13 01:44:34.674159 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 01:44:34.674169 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 01:44:34.674181 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 01:44:34.674192 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 01:44:34.674202 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 01:44:34.674212 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 13 01:44:34.674222 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 01:44:34.674232 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 01:44:34.674242 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 01:44:34.674252 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Aug 13 01:44:34.674262 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 13 01:44:34.674275 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 13 01:44:34.674286 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 01:44:34.674301 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 01:44:34.674313 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 13 01:44:34.674324 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 01:44:34.674334 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 13 01:44:34.674347 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 13 01:44:34.674357 kernel: ACPI: bus type drm_connector registered Aug 13 01:44:34.674367 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 01:44:34.674378 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 01:44:34.674388 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Aug 13 01:44:34.674420 systemd-journald[1199]: Collecting audit messages is disabled. Aug 13 01:44:34.674446 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 13 01:44:34.674457 systemd-journald[1199]: Journal started Aug 13 01:44:34.674477 systemd-journald[1199]: Runtime Journal (/run/log/journal/720f9e8877c8404f9db210ebd39d5cd1) is 8M, max 78.5M, 70.5M free. Aug 13 01:44:34.151370 systemd[1]: Queued start job for default target multi-user.target. Aug 13 01:44:34.176107 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Aug 13 01:44:34.176961 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 13 01:44:34.678723 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 01:44:34.684037 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... 
Aug 13 01:44:34.689671 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 01:44:34.693929 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 13 01:44:34.703750 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 13 01:44:34.735999 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 13 01:44:34.736097 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 01:44:34.738232 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 01:44:34.738523 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 01:44:34.739525 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 01:44:34.741102 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 01:44:34.751419 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 13 01:44:34.754467 kernel: loop0: detected capacity change from 0 to 146240 Aug 13 01:44:34.768962 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 13 01:44:34.769746 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 01:44:34.773835 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 13 01:44:34.805014 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Aug 13 01:44:34.838973 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 01:44:34.907366 systemd-journald[1199]: Time spent on flushing to /var/log/journal/720f9e8877c8404f9db210ebd39d5cd1 is 22.839ms for 1004 entries. Aug 13 01:44:34.907366 systemd-journald[1199]: System Journal (/var/log/journal/720f9e8877c8404f9db210ebd39d5cd1) is 8M, max 195.6M, 187.6M free. Aug 13 01:44:34.963296 systemd-journald[1199]: Received client request to flush runtime journal. Aug 13 01:44:34.964464 kernel: loop1: detected capacity change from 0 to 113872 Aug 13 01:44:34.915107 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Aug 13 01:44:34.975055 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 13 01:44:34.985878 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 13 01:44:34.989892 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 01:44:35.026262 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. Aug 13 01:44:35.026737 systemd-tmpfiles[1258]: ACLs are not supported, ignoring. Aug 13 01:44:35.034514 kernel: loop2: detected capacity change from 0 to 8 Aug 13 01:44:35.033317 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 01:44:35.054699 kernel: loop3: detected capacity change from 0 to 221472 Aug 13 01:44:35.134703 kernel: loop4: detected capacity change from 0 to 146240 Aug 13 01:44:35.179087 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. 
Aug 13 01:44:35.202702 kernel: loop5: detected capacity change from 0 to 113872 Aug 13 01:44:35.287678 kernel: loop6: detected capacity change from 0 to 8 Aug 13 01:44:35.291680 kernel: loop7: detected capacity change from 0 to 221472 Aug 13 01:44:35.374002 (sd-merge)[1264]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. Aug 13 01:44:35.375204 (sd-merge)[1264]: Merged extensions into '/usr'. Aug 13 01:44:35.404446 systemd[1]: Reload requested from client PID 1224 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 01:44:35.404467 systemd[1]: Reloading... Aug 13 01:44:35.710682 zram_generator::config[1289]: No configuration found. Aug 13 01:44:35.916282 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:44:36.027755 systemd[1]: Reloading finished in 622 ms. Aug 13 01:44:36.050666 ldconfig[1219]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 01:44:36.077569 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 01:44:36.078978 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 01:44:36.091127 systemd[1]: Starting ensure-sysext.service... Aug 13 01:44:36.098919 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 01:44:36.113966 systemd[1]: Reload requested from client PID 1333 ('systemctl') (unit ensure-sysext.service)... Aug 13 01:44:36.114066 systemd[1]: Reloading... Aug 13 01:44:36.203972 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Aug 13 01:44:36.204737 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Aug 13 01:44:36.206148 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 01:44:36.207029 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 01:44:36.210767 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 01:44:36.211136 systemd-tmpfiles[1334]: ACLs are not supported, ignoring. Aug 13 01:44:36.211815 systemd-tmpfiles[1334]: ACLs are not supported, ignoring. Aug 13 01:44:36.235125 systemd-tmpfiles[1334]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 01:44:36.235304 systemd-tmpfiles[1334]: Skipping /boot Aug 13 01:44:36.264353 systemd-tmpfiles[1334]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 01:44:36.265834 systemd-tmpfiles[1334]: Skipping /boot Aug 13 01:44:36.328615 zram_generator::config[1361]: No configuration found. Aug 13 01:44:36.460533 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:44:36.535490 systemd[1]: Reloading finished in 421 ms. Aug 13 01:44:36.558043 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 13 01:44:36.570068 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. 
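Within the chunk above, systemd-sysext merges the 'containerd-flatcar', 'docker-flatcar', 'kubernetes', and 'oem-akamai' extension images into /usr, after which systemd reloads (twice, the second triggered by ensure-sysext) to pick up the merged unit files. Conceptually the merge is an overlay of each extension's /usr tree on top of the base /usr; the sketch below expresses only that idea with a read-only overlayfs mount. The staging directories are invented for illustration, and systemd-sysext's real implementation (image verification, hierarchy checks, unmerge handling) differs in detail.

```python
# Conceptual sketch of what "Merged extensions into '/usr'" amounts to:
# stacking each extension's /usr tree over the base /usr with a read-only
# overlayfs mount. Paths below are hypothetical staging locations.
import subprocess

def merge_usr(extension_dirs: list[str]) -> None:
    # overlayfs resolves lowerdir entries left-to-right (first match wins),
    # so the extension trees are listed before the base /usr.
    lower = ":".join(extension_dirs + ["/usr"])
    subprocess.run(
        ["mount", "-t", "overlay", "overlay", "-o", f"ro,lowerdir={lower}", "/usr"],
        check=True,
    )

merge_usr([
    "/run/extensions/kubernetes/usr",       # hypothetical staging paths
    "/run/extensions/docker-flatcar/usr",
    "/run/extensions/oem-akamai/usr",
])
```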
Aug 13 01:44:36.596234 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 01:44:36.600878 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 01:44:36.604837 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 01:44:36.608391 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 01:44:36.612720 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 01:44:36.616110 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 01:44:36.621085 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:44:36.621248 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 01:44:36.622380 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 01:44:36.627761 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 01:44:36.632134 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 01:44:36.633804 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 01:44:36.633917 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 01:44:36.634002 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:44:36.654750 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 01:44:36.667924 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 01:44:36.679163 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 01:44:36.683473 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 01:44:36.686027 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 01:44:36.707777 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 01:44:36.708109 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 01:44:36.713199 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:44:36.713575 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 01:44:36.719809 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 01:44:36.722776 systemd-udevd[1410]: Using default interface naming scheme 'v255'. Aug 13 01:44:36.725191 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 01:44:36.725902 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 01:44:36.726022 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Aug 13 01:44:36.726120 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:44:36.729153 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 01:44:36.757602 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 01:44:36.801685 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 01:44:36.807409 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 01:44:36.808156 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 01:44:36.810515 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 01:44:36.811538 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 01:44:36.812443 augenrules[1441]: No rules Aug 13 01:44:36.815143 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 01:44:36.817806 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 01:44:36.819585 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 01:44:36.835900 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 01:44:36.841322 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 01:44:36.843809 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 01:44:36.852235 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:44:36.855811 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 01:44:36.856524 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 01:44:36.857802 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 01:44:36.860926 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 01:44:36.862562 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 01:44:36.874018 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 01:44:36.874871 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 01:44:36.874982 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 01:44:36.879895 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 01:44:36.880471 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 01:44:36.880568 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:44:36.884740 systemd[1]: Finished ensure-sysext.service. Aug 13 01:44:36.893792 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 13 01:44:36.950231 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 01:44:36.950539 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. 
Aug 13 01:44:36.951554 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 01:44:36.951847 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 01:44:36.953757 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 01:44:36.954000 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 01:44:36.957065 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 01:44:36.957200 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 01:44:36.960940 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 01:44:36.961187 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 01:44:36.969197 augenrules[1470]: /sbin/augenrules: No change Aug 13 01:44:36.988496 systemd-resolved[1409]: Positive Trust Anchors: Aug 13 01:44:36.988514 systemd-resolved[1409]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 01:44:36.988546 systemd-resolved[1409]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 01:44:36.995572 systemd-resolved[1409]: Defaulting to hostname 'linux'. Aug 13 01:44:37.004822 augenrules[1511]: No rules Aug 13 01:44:37.007059 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 01:44:37.008405 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 01:44:37.008779 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 01:44:37.010921 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 01:44:37.172897 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 13 01:44:37.173747 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 01:44:37.174418 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 13 01:44:37.175089 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 01:44:37.175686 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Aug 13 01:44:37.176271 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 01:44:37.177013 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 01:44:37.177053 systemd[1]: Reached target paths.target - Path Units. Aug 13 01:44:37.177611 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 01:44:37.179371 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 13 01:44:37.180094 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 01:44:37.180762 systemd[1]: Reached target timers.target - Timer Units. 
Aug 13 01:44:37.183293 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 13 01:44:37.187094 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 01:44:37.189232 systemd-networkd[1477]: lo: Link UP Aug 13 01:44:37.189540 systemd-networkd[1477]: lo: Gained carrier Aug 13 01:44:37.190600 systemd-networkd[1477]: Enumeration completed Aug 13 01:44:37.191757 systemd-timesyncd[1481]: No network connectivity, watching for changes. Aug 13 01:44:37.193401 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Aug 13 01:44:37.194857 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Aug 13 01:44:37.195903 systemd[1]: Reached target ssh-access.target - SSH Access Available. Aug 13 01:44:37.205402 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 13 01:44:37.208377 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Aug 13 01:44:37.210420 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 01:44:37.212733 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 01:44:37.214595 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 13 01:44:37.214684 systemd[1]: Reached target network.target - Network. Aug 13 01:44:37.216253 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 01:44:37.216821 systemd[1]: Reached target basic.target - Basic System. Aug 13 01:44:37.217766 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 13 01:44:37.217814 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 13 01:44:37.220204 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 01:44:37.224190 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 13 01:44:37.245344 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 13 01:44:37.251065 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 13 01:44:37.263336 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 01:44:37.286552 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 01:44:37.287290 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 01:44:37.289196 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Aug 13 01:44:37.296679 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 01:44:37.302839 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 13 01:44:37.308252 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 13 01:44:37.384945 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Refreshing passwd entry cache Aug 13 01:44:37.385246 oslogin_cache_refresh[1535]: Refreshing passwd entry cache Aug 13 01:44:37.387505 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Failure getting users, quitting Aug 13 01:44:37.387676 oslogin_cache_refresh[1535]: Failure getting users, quitting Aug 13 01:44:37.388747 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Aug 13 01:44:37.388747 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Refreshing group entry cache Aug 13 01:44:37.388747 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Failure getting groups, quitting Aug 13 01:44:37.388747 google_oslogin_nss_cache[1535]: oslogin_cache_refresh[1535]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 13 01:44:37.388831 jq[1532]: false Aug 13 01:44:37.387700 oslogin_cache_refresh[1535]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Aug 13 01:44:37.387743 oslogin_cache_refresh[1535]: Refreshing group entry cache Aug 13 01:44:37.388231 oslogin_cache_refresh[1535]: Failure getting groups, quitting Aug 13 01:44:37.388240 oslogin_cache_refresh[1535]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 13 01:44:37.389261 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 01:44:37.399836 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 13 01:44:37.403831 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Aug 13 01:44:37.406964 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 01:44:37.410629 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 01:44:37.411240 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 01:44:37.416513 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 01:44:37.419803 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 01:44:37.424320 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 01:44:37.426918 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 01:44:37.427217 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 01:44:37.427548 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Aug 13 01:44:37.427862 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Aug 13 01:44:37.432731 extend-filesystems[1533]: Found /dev/sda6 Aug 13 01:44:37.472825 extend-filesystems[1533]: Found /dev/sda9 Aug 13 01:44:37.472825 extend-filesystems[1533]: Checking size of /dev/sda9 Aug 13 01:44:37.493138 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 01:44:37.494717 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 13 01:44:37.495681 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 01:44:37.496716 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 13 01:44:37.533370 update_engine[1548]: I20250813 01:44:37.533277 1548 main.cc:92] Flatcar Update Engine starting Aug 13 01:44:37.542265 tar[1563]: linux-amd64/helm Aug 13 01:44:37.547178 extend-filesystems[1533]: Resized partition /dev/sda9 Aug 13 01:44:37.614362 coreos-metadata[1529]: Aug 13 01:44:37.614 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Aug 13 01:44:37.634684 jq[1556]: true Aug 13 01:44:37.646230 dbus-daemon[1530]: [system] SELinux support is enabled Aug 13 01:44:37.646439 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Aug 13 01:44:37.650041 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 01:44:37.650078 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 01:44:37.651795 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 01:44:37.651817 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 01:44:37.655998 systemd-logind[1544]: New seat seat0. Aug 13 01:44:37.656877 systemd[1]: Started systemd-logind.service - User Login Management. Aug 13 01:44:37.658696 extend-filesystems[1577]: resize2fs 1.47.2 (1-Jan-2025) Aug 13 01:44:37.695526 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 555003 blocks Aug 13 01:44:37.684037 systemd-networkd[1477]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 01:44:37.684054 systemd-networkd[1477]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 01:44:37.689702 systemd-networkd[1477]: eth0: Link UP Aug 13 01:44:37.689972 systemd-networkd[1477]: eth0: Gained carrier Aug 13 01:44:37.689995 systemd-networkd[1477]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 01:44:37.699479 (ntainerd)[1581]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 01:44:37.727422 kernel: EXT4-fs (sda9): resized filesystem to 555003 Aug 13 01:44:37.727581 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Aug 13 01:44:37.752970 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Aug 13 01:44:37.753356 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Aug 13 01:44:38.224830 kernel: ACPI: button: Power Button [PWRF] Aug 13 01:44:37.734859 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Aug 13 01:44:38.225288 update_engine[1548]: I20250813 01:44:37.745178 1548 update_check_scheduler.cc:74] Next update check in 11m26s Aug 13 01:44:38.225362 extend-filesystems[1577]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Aug 13 01:44:38.225362 extend-filesystems[1577]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 13 01:44:38.225362 extend-filesystems[1577]: The filesystem on /dev/sda9 is now 555003 (4k) blocks long. Aug 13 01:44:38.233529 sshd_keygen[1579]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 01:44:37.739782 systemd[1]: Started update-engine.service - Update Engine. Aug 13 01:44:38.233735 jq[1580]: true Aug 13 01:44:38.233921 extend-filesystems[1533]: Resized filesystem in /dev/sda9 Aug 13 01:44:37.754080 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 13 01:44:37.763791 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 01:44:37.764066 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 01:44:38.238201 bash[1604]: Updated "/home/core/.ssh/authorized_keys" Aug 13 01:44:38.126350 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. 
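The extend-filesystems unit grows the root filesystem online: resize2fs reports /dev/sda9 going from 553472 to 555003 blocks of 4 KiB. A small sketch of that arithmetic, using only the numbers from the messages above:

    package main

    import "fmt"

    func main() {
        // Block counts and the 4 KiB block size are the values reported by
        // EXT4-fs and resize2fs in the log above.
        const blockSize = 4096
        const oldBlocks, newBlocks = 553472, 555003

        added := (newBlocks - oldBlocks) * blockSize
        fmt.Printf("grew by %d blocks = %d bytes (about %.1f MiB)\n",
            newBlocks-oldBlocks, added, float64(added)/(1<<20))
        fmt.Printf("new size is about %.2f GiB\n", float64(newBlocks)*blockSize/(1<<30))
    }

That is an increase of 1531 blocks, roughly 6 MiB, bringing the root filesystem to about 2.1 GiB.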
Aug 13 01:44:38.145636 systemd[1]: Starting sshkeys.service... Aug 13 01:44:38.258489 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 01:44:38.283102 locksmithd[1587]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 01:44:38.287835 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 01:44:38.289834 systemd-networkd[1477]: eth0: DHCPv4 address 172.232.7.32/24, gateway 172.232.7.1 acquired from 23.40.196.251 Aug 13 01:44:38.289937 dbus-daemon[1530]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.2' (uid=244 pid=1477 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Aug 13 01:44:38.292537 systemd-timesyncd[1481]: Network configuration changed, trying to establish connection. Aug 13 01:44:38.297752 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 13 01:44:38.327452 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Aug 13 01:44:38.352696 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 01:44:38.353902 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Aug 13 01:44:38.449495 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 01:44:38.459404 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 01:44:38.469079 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 01:44:38.538955 systemd-timesyncd[1481]: Contacted time server 162.159.200.1:123 (2.flatcar.pool.ntp.org). Aug 13 01:44:38.539025 systemd-timesyncd[1481]: Initial clock synchronization to Wed 2025-08-13 01:44:38.764973 UTC. Aug 13 01:44:38.562817 coreos-metadata[1632]: Aug 13 01:44:38.562 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Aug 13 01:44:38.643676 coreos-metadata[1529]: Aug 13 01:44:38.631 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Aug 13 01:44:38.650449 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 01:44:38.653970 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 01:44:38.658170 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 13 01:44:38.659000 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 01:44:38.715563 coreos-metadata[1632]: Aug 13 01:44:38.715 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Aug 13 01:44:38.774671 coreos-metadata[1529]: Aug 13 01:44:38.771 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Aug 13 01:44:38.896926 coreos-metadata[1632]: Aug 13 01:44:38.896 INFO Fetch successful Aug 13 01:44:38.909129 containerd[1581]: time="2025-08-13T01:44:38Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Aug 13 01:44:38.944467 containerd[1581]: time="2025-08-13T01:44:38.944396796Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Aug 13 01:44:38.953977 update-ssh-keys[1653]: Updated "/home/core/.ssh/authorized_keys" Aug 13 01:44:38.956185 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 13 01:44:38.962526 systemd[1]: Finished sshkeys.service. 
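The Flatcar metadata agents above talk to the link-local metadata service: first a PUT to http://169.254.169.254/v1/token, then GETs of /v1/instance and /v1/ssh-keys, which is where the authorized_keys update for the core user comes from. A rough Go sketch of that two-step pattern; only the URLs come from the log, and the header names used to request and present the token are assumptions for illustration:

    package main

    import (
        "context"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    const metadataBase = "http://169.254.169.254"

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
        defer cancel()

        // Step 1: request a token, mirroring "Putting http://169.254.169.254/v1/token".
        req, err := http.NewRequestWithContext(ctx, http.MethodPut, metadataBase+"/v1/token", nil)
        if err != nil {
            panic(err)
        }
        // Assumed header name; the log only shows the URL, not the request headers.
        req.Header.Set("Metadata-Token-Expiry-Seconds", "3600")
        resp, err := http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        token, _ := io.ReadAll(resp.Body)
        resp.Body.Close()

        // Step 2: fetch the SSH keys, mirroring "Fetching http://169.254.169.254/v1/ssh-keys".
        req, err = http.NewRequestWithContext(ctx, http.MethodGet, metadataBase+"/v1/ssh-keys", nil)
        if err != nil {
            panic(err)
        }
        req.Header.Set("Metadata-Token", string(token)) // assumed header name
        resp, err = http.DefaultClient.Do(req)
        if err != nil {
            panic(err)
        }
        defer resp.Body.Close()
        keys, _ := io.ReadAll(resp.Body)
        fmt.Println(string(keys))
    }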
Aug 13 01:44:38.987688 coreos-metadata[1529]: Aug 13 01:44:38.987 INFO Fetch successful Aug 13 01:44:38.987688 coreos-metadata[1529]: Aug 13 01:44:38.987 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Aug 13 01:44:38.988553 containerd[1581]: time="2025-08-13T01:44:38.988513364Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="12.73µs" Aug 13 01:44:38.988615 containerd[1581]: time="2025-08-13T01:44:38.988600583Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Aug 13 01:44:38.988811 containerd[1581]: time="2025-08-13T01:44:38.988794163Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Aug 13 01:44:38.989123 containerd[1581]: time="2025-08-13T01:44:38.989106913Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Aug 13 01:44:38.990172 containerd[1581]: time="2025-08-13T01:44:38.990153253Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Aug 13 01:44:38.990261 containerd[1581]: time="2025-08-13T01:44:38.990245023Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 13 01:44:38.990392 containerd[1581]: time="2025-08-13T01:44:38.990372963Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 13 01:44:38.991203 containerd[1581]: time="2025-08-13T01:44:38.991181052Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 13 01:44:38.991559 containerd[1581]: time="2025-08-13T01:44:38.991535182Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 13 01:44:38.991614 containerd[1581]: time="2025-08-13T01:44:38.991601982Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 13 01:44:38.991721 containerd[1581]: time="2025-08-13T01:44:38.991702532Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 13 01:44:38.992267 containerd[1581]: time="2025-08-13T01:44:38.992254872Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Aug 13 01:44:38.992420 containerd[1581]: time="2025-08-13T01:44:38.992403022Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Aug 13 01:44:38.993097 containerd[1581]: time="2025-08-13T01:44:38.993078971Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 13 01:44:38.993793 containerd[1581]: time="2025-08-13T01:44:38.993772961Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 13 01:44:38.996201 containerd[1581]: time="2025-08-13T01:44:38.996138070Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Aug 13 
01:44:38.996316 containerd[1581]: time="2025-08-13T01:44:38.996300330Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Aug 13 01:44:38.996671 kernel: EDAC MC: Ver: 3.0.0 Aug 13 01:44:38.997285 containerd[1581]: time="2025-08-13T01:44:38.997263429Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Aug 13 01:44:38.998143 containerd[1581]: time="2025-08-13T01:44:38.998125899Z" level=info msg="metadata content store policy set" policy=shared Aug 13 01:44:39.002070 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 01:44:39.006552 containerd[1581]: time="2025-08-13T01:44:39.005879692Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Aug 13 01:44:39.006552 containerd[1581]: time="2025-08-13T01:44:39.005940957Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Aug 13 01:44:39.006552 containerd[1581]: time="2025-08-13T01:44:39.006003619Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Aug 13 01:44:39.006552 containerd[1581]: time="2025-08-13T01:44:39.006017758Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Aug 13 01:44:39.006552 containerd[1581]: time="2025-08-13T01:44:39.006030806Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Aug 13 01:44:39.006552 containerd[1581]: time="2025-08-13T01:44:39.006041953Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Aug 13 01:44:39.006552 containerd[1581]: time="2025-08-13T01:44:39.006074539Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Aug 13 01:44:39.006552 containerd[1581]: time="2025-08-13T01:44:39.006089284Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Aug 13 01:44:39.006552 containerd[1581]: time="2025-08-13T01:44:39.006101366Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Aug 13 01:44:39.006552 containerd[1581]: time="2025-08-13T01:44:39.006111320Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Aug 13 01:44:39.006552 containerd[1581]: time="2025-08-13T01:44:39.006121386Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Aug 13 01:44:39.006552 containerd[1581]: time="2025-08-13T01:44:39.006153530Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Aug 13 01:44:39.006552 containerd[1581]: time="2025-08-13T01:44:39.006327348Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Aug 13 01:44:39.006552 containerd[1581]: time="2025-08-13T01:44:39.006349929Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Aug 13 01:44:39.006949 containerd[1581]: time="2025-08-13T01:44:39.006364612Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Aug 13 01:44:39.006949 containerd[1581]: time="2025-08-13T01:44:39.006395902Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Aug 13 01:44:39.006949 
containerd[1581]: time="2025-08-13T01:44:39.006408848Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Aug 13 01:44:39.006949 containerd[1581]: time="2025-08-13T01:44:39.006419851Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Aug 13 01:44:39.006949 containerd[1581]: time="2025-08-13T01:44:39.006431398Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Aug 13 01:44:39.006949 containerd[1581]: time="2025-08-13T01:44:39.006442236Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Aug 13 01:44:39.006949 containerd[1581]: time="2025-08-13T01:44:39.006454452Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Aug 13 01:44:39.006949 containerd[1581]: time="2025-08-13T01:44:39.006483737Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Aug 13 01:44:39.006949 containerd[1581]: time="2025-08-13T01:44:39.006493588Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Aug 13 01:44:39.007460 containerd[1581]: time="2025-08-13T01:44:39.007163121Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Aug 13 01:44:39.007460 containerd[1581]: time="2025-08-13T01:44:39.007205219Z" level=info msg="Start snapshots syncer" Aug 13 01:44:39.007460 containerd[1581]: time="2025-08-13T01:44:39.007239213Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Aug 13 01:44:39.058577 containerd[1581]: time="2025-08-13T01:44:39.032788110Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Aug 13 01:44:39.058577 containerd[1581]: 
time="2025-08-13T01:44:39.032919574Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Aug 13 01:44:39.058893 containerd[1581]: time="2025-08-13T01:44:39.033134790Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Aug 13 01:44:39.058893 containerd[1581]: time="2025-08-13T01:44:39.033371558Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Aug 13 01:44:39.058893 containerd[1581]: time="2025-08-13T01:44:39.033416699Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Aug 13 01:44:39.058893 containerd[1581]: time="2025-08-13T01:44:39.033438725Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Aug 13 01:44:39.058893 containerd[1581]: time="2025-08-13T01:44:39.033455095Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Aug 13 01:44:39.058893 containerd[1581]: time="2025-08-13T01:44:39.033477192Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Aug 13 01:44:39.058893 containerd[1581]: time="2025-08-13T01:44:39.033492163Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Aug 13 01:44:39.058893 containerd[1581]: time="2025-08-13T01:44:39.033510795Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Aug 13 01:44:39.058893 containerd[1581]: time="2025-08-13T01:44:39.033557571Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Aug 13 01:44:39.058893 containerd[1581]: time="2025-08-13T01:44:39.033573108Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Aug 13 01:44:39.058893 containerd[1581]: time="2025-08-13T01:44:39.033588110Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Aug 13 01:44:39.058893 containerd[1581]: time="2025-08-13T01:44:39.033659492Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 13 01:44:39.058893 containerd[1581]: time="2025-08-13T01:44:39.055760348Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 13 01:44:39.058893 containerd[1581]: time="2025-08-13T01:44:39.055869303Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 13 01:44:39.059170 containerd[1581]: time="2025-08-13T01:44:39.055890537Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 13 01:44:39.059170 containerd[1581]: time="2025-08-13T01:44:39.055900716Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Aug 13 01:44:39.059170 containerd[1581]: time="2025-08-13T01:44:39.055993579Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Aug 13 01:44:39.059170 containerd[1581]: time="2025-08-13T01:44:39.056013569Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Aug 13 01:44:39.059170 containerd[1581]: 
time="2025-08-13T01:44:39.056035985Z" level=info msg="runtime interface created" Aug 13 01:44:39.059170 containerd[1581]: time="2025-08-13T01:44:39.056042021Z" level=info msg="created NRI interface" Aug 13 01:44:39.059170 containerd[1581]: time="2025-08-13T01:44:39.056069105Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Aug 13 01:44:39.059170 containerd[1581]: time="2025-08-13T01:44:39.056091676Z" level=info msg="Connect containerd service" Aug 13 01:44:39.059170 containerd[1581]: time="2025-08-13T01:44:39.056211303Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 01:44:39.059170 containerd[1581]: time="2025-08-13T01:44:39.058052979Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 01:44:39.167906 systemd-logind[1544]: Watching system buttons on /dev/input/event2 (Power Button) Aug 13 01:44:39.168021 systemd-logind[1544]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 01:44:39.186309 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Aug 13 01:44:39.215212 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 13 01:44:39.275227 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 01:44:39.282455 systemd[1]: Started sshd@0-172.232.7.32:22-147.75.109.163:48202.service - OpenSSH per-connection server daemon (147.75.109.163:48202). Aug 13 01:44:39.303874 coreos-metadata[1529]: Aug 13 01:44:39.291 INFO Fetch successful Aug 13 01:44:39.304493 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 01:44:39.386746 systemd-networkd[1477]: eth0: Gained IPv6LL Aug 13 01:44:39.391582 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 01:44:39.393019 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 01:44:39.403278 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:44:39.427092 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 01:44:39.707944 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 01:44:39.723477 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Aug 13 01:44:39.728491 dbus-daemon[1530]: [system] Successfully activated service 'org.freedesktop.hostname1' Aug 13 01:44:39.732903 dbus-daemon[1530]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1633 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Aug 13 01:44:39.792380 systemd[1]: Starting polkit.service - Authorization Manager... 
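The containerd error about /etc/cni/net.d logged above is expected at this point: the CRI plugin looks for a CNI network config in that directory, and on a Kubernetes node it is normally populated later by whichever CNI add-on the cluster deploys. Purely as an illustration of the kind of file it is waiting for (the file name, network name, bridge and subnet below are invented, not taken from this host), a minimal Go sketch that writes one conflist:

    package main

    import (
        "log"
        "os"
    )

    // Illustrative only: a minimal CNI conflist of the shape containerd's CRI plugin
    // expects under /etc/cni/net.d. A real cluster normally gets this file from its
    // CNI add-on, not from a hand-written program.
    const conflist = `{
      "cniVersion": "0.4.0",
      "name": "example-net",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "cni0",
          "isGateway": true,
          "ipMasq": true,
          "ipam": {
            "type": "host-local",
            "subnet": "10.88.0.0/16"
          }
        }
      ]
    }`

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/10-example.conflist", []byte(conflist), 0o644); err != nil {
            log.Fatal(err)
        }
    }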
Aug 13 01:44:39.923513 containerd[1581]: time="2025-08-13T01:44:39.910542692Z" level=info msg="Start subscribing containerd event" Aug 13 01:44:39.923513 containerd[1581]: time="2025-08-13T01:44:39.910755286Z" level=info msg="Start recovering state" Aug 13 01:44:39.923513 containerd[1581]: time="2025-08-13T01:44:39.910879244Z" level=info msg="Start event monitor" Aug 13 01:44:39.923513 containerd[1581]: time="2025-08-13T01:44:39.910894729Z" level=info msg="Start cni network conf syncer for default" Aug 13 01:44:39.923513 containerd[1581]: time="2025-08-13T01:44:39.910904817Z" level=info msg="Start streaming server" Aug 13 01:44:39.923513 containerd[1581]: time="2025-08-13T01:44:39.910916076Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Aug 13 01:44:39.923513 containerd[1581]: time="2025-08-13T01:44:39.910924857Z" level=info msg="runtime interface starting up..." Aug 13 01:44:39.923513 containerd[1581]: time="2025-08-13T01:44:39.910930564Z" level=info msg="starting plugins..." Aug 13 01:44:39.923513 containerd[1581]: time="2025-08-13T01:44:39.910944045Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Aug 13 01:44:39.923513 containerd[1581]: time="2025-08-13T01:44:39.915323242Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 01:44:39.923513 containerd[1581]: time="2025-08-13T01:44:39.915409596Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 01:44:39.916754 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 01:44:39.930966 containerd[1581]: time="2025-08-13T01:44:39.930910550Z" level=info msg="containerd successfully booted in 1.022351s" Aug 13 01:44:40.025730 sshd[1668]: Accepted publickey for core from 147.75.109.163 port 48202 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:44:40.027768 sshd-session[1668]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:44:40.038256 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 01:44:40.109645 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 01:44:40.146238 systemd-logind[1544]: New session 1 of user core. Aug 13 01:44:40.178488 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 01:44:40.236560 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 13 01:44:40.256646 polkitd[1702]: Started polkitd version 126 Aug 13 01:44:40.266763 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 01:44:40.267987 tar[1563]: linux-amd64/LICENSE Aug 13 01:44:40.268831 tar[1563]: linux-amd64/README.md Aug 13 01:44:40.272011 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 01:44:40.308823 (systemd)[1721]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:44:40.327182 polkitd[1702]: Loading rules from directory /etc/polkit-1/rules.d Aug 13 01:44:40.312471 systemd-logind[1544]: New session c1 of user core. 
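containerd is now serving its API on /run/containerd/containerd.sock (plus the companion ttrpc socket), and the CRI plugin has registered the k8s.io namespace with NRI. A minimal sketch of talking to that socket with the containerd Go client, assuming the github.com/containerd/containerd module is available; it asks the daemon for its version and lists whatever images exist in the k8s.io namespace:

    package main

    import (
        "context"
        "fmt"
        "log"

        containerd "github.com/containerd/containerd"
        "github.com/containerd/containerd/namespaces"
    )

    func main() {
        // Socket path taken from the "serving..." log lines above.
        client, err := containerd.New("/run/containerd/containerd.sock")
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        // The CRI plugin uses the "k8s.io" namespace, so pulled images live there.
        ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

        ver, err := client.Version(ctx)
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("containerd version:", ver.Version)

        imgs, err := client.ListImages(ctx)
        if err != nil {
            log.Fatal(err)
        }
        for _, img := range imgs {
            fmt.Println(img.Name())
        }
    }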
Aug 13 01:44:40.327703 polkitd[1702]: Loading rules from directory /run/polkit-1/rules.d Aug 13 01:44:40.327770 polkitd[1702]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Aug 13 01:44:40.328143 polkitd[1702]: Loading rules from directory /usr/local/share/polkit-1/rules.d Aug 13 01:44:40.328188 polkitd[1702]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Aug 13 01:44:40.328298 polkitd[1702]: Loading rules from directory /usr/share/polkit-1/rules.d Aug 13 01:44:40.330005 polkitd[1702]: Finished loading, compiling and executing 2 rules Aug 13 01:44:40.336651 systemd[1]: Started polkit.service - Authorization Manager. Aug 13 01:44:40.358055 dbus-daemon[1530]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Aug 13 01:44:40.358387 polkitd[1702]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Aug 13 01:44:40.425025 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 01:44:40.433253 systemd-hostnamed[1633]: Hostname set to <172-232-7-32> (transient) Aug 13 01:44:40.433375 systemd-resolved[1409]: System hostname changed to '172-232-7-32'. Aug 13 01:44:40.437482 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 01:44:40.567472 systemd[1721]: Queued start job for default target default.target. Aug 13 01:44:40.596114 systemd[1721]: Created slice app.slice - User Application Slice. Aug 13 01:44:40.596256 systemd[1721]: Reached target paths.target - Paths. Aug 13 01:44:40.596328 systemd[1721]: Reached target timers.target - Timers. Aug 13 01:44:40.598929 systemd[1721]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 01:44:40.616090 systemd[1721]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 01:44:40.616282 systemd[1721]: Reached target sockets.target - Sockets. Aug 13 01:44:40.616486 systemd[1721]: Reached target basic.target - Basic System. Aug 13 01:44:40.616588 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 01:44:40.617244 systemd[1721]: Reached target default.target - Main User Target. Aug 13 01:44:40.617369 systemd[1721]: Startup finished in 228ms. Aug 13 01:44:40.637984 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 01:44:40.926839 systemd[1]: Started sshd@1-172.232.7.32:22-147.75.109.163:48218.service - OpenSSH per-connection server daemon (147.75.109.163:48218). Aug 13 01:44:41.337769 sshd[1741]: Accepted publickey for core from 147.75.109.163 port 48218 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:44:41.339069 sshd-session[1741]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:44:41.346757 systemd-logind[1544]: New session 2 of user core. Aug 13 01:44:41.349955 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 01:44:41.628116 sshd[1743]: Connection closed by 147.75.109.163 port 48218 Aug 13 01:44:41.650599 sshd-session[1741]: pam_unix(sshd:session): session closed for user core Aug 13 01:44:41.656331 systemd-logind[1544]: Session 2 logged out. Waiting for processes to exit. Aug 13 01:44:41.656558 systemd[1]: sshd@1-172.232.7.32:22-147.75.109.163:48218.service: Deactivated successfully. Aug 13 01:44:41.660538 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 01:44:41.664499 systemd-logind[1544]: Removed session 2. 
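With no static hostname configured, systemd-hostnamed settles on a transient one, and the name it reports, 172-232-7-32, is simply the DHCP address 172.232.7.32 with the dots replaced by dashes (most likely supplied by the DHCP server rather than computed locally). A trivial illustration of that mapping:

    package main

    import (
        "fmt"
        "strings"
    )

    func main() {
        // The DHCPv4 lease reported earlier in the log; the transient hostname
        // matches this address with dots swapped for dashes.
        addr := "172.232.7.32"
        fmt.Println(strings.ReplaceAll(addr, ".", "-")) // 172-232-7-32
    }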
Aug 13 01:44:41.696292 systemd[1]: Started sshd@2-172.232.7.32:22-147.75.109.163:48232.service - OpenSSH per-connection server daemon (147.75.109.163:48232). Aug 13 01:44:42.063089 sshd[1749]: Accepted publickey for core from 147.75.109.163 port 48232 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:44:42.080998 sshd-session[1749]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:44:42.089788 systemd-logind[1544]: New session 3 of user core. Aug 13 01:44:42.109981 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 01:44:42.374600 sshd[1751]: Connection closed by 147.75.109.163 port 48232 Aug 13 01:44:42.375369 sshd-session[1749]: pam_unix(sshd:session): session closed for user core Aug 13 01:44:42.393706 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:44:42.396509 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 01:44:42.397948 systemd[1]: Startup finished in 4.682s (kernel) + 9.815s (initrd) + 8.969s (userspace) = 23.468s. Aug 13 01:44:42.404330 systemd[1]: sshd@2-172.232.7.32:22-147.75.109.163:48232.service: Deactivated successfully. Aug 13 01:44:42.432100 (kubelet)[1758]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 01:44:42.443841 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 01:44:42.448852 systemd-logind[1544]: Session 3 logged out. Waiting for processes to exit. Aug 13 01:44:42.451569 systemd-logind[1544]: Removed session 3. Aug 13 01:44:43.502791 kubelet[1758]: E0813 01:44:43.502641 1758 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:44:43.507425 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:44:43.507738 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 01:44:43.508241 systemd[1]: kubelet.service: Consumed 2.660s CPU time, 264.8M memory peak. Aug 13 01:44:52.546098 systemd[1]: Started sshd@3-172.232.7.32:22-147.75.109.163:42876.service - OpenSSH per-connection server daemon (147.75.109.163:42876). Aug 13 01:44:52.882466 sshd[1772]: Accepted publickey for core from 147.75.109.163 port 42876 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:44:52.884236 sshd-session[1772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:44:52.890645 systemd-logind[1544]: New session 4 of user core. Aug 13 01:44:52.895800 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 01:44:53.130270 sshd[1774]: Connection closed by 147.75.109.163 port 42876 Aug 13 01:44:53.131140 sshd-session[1772]: pam_unix(sshd:session): session closed for user core Aug 13 01:44:53.136066 systemd[1]: sshd@3-172.232.7.32:22-147.75.109.163:42876.service: Deactivated successfully. Aug 13 01:44:53.138923 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 01:44:53.139888 systemd-logind[1544]: Session 4 logged out. Waiting for processes to exit. Aug 13 01:44:53.141197 systemd-logind[1544]: Removed session 4. Aug 13 01:44:53.198723 systemd[1]: Started sshd@4-172.232.7.32:22-147.75.109.163:42882.service - OpenSSH per-connection server daemon (147.75.109.163:42882). 
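The kubelet failure above is a missing-prerequisite loop rather than a crash: the kubelet is pointed at /var/lib/kubelet/config.yaml, a file that a kubeadm-style init or join only writes later, so each start attempt exits with status 1 and systemd keeps rescheduling the service until the file exists. Purely as an illustration of the file the error refers to (the field values are placeholders, not this node's settings; cgroupDriver: systemd matches what the kubelet reports once it does start near the end of this log):

    package main

    import (
        "log"
        "os"
    )

    // Illustrative KubeletConfiguration; kubeadm normally generates this file,
    // and the concrete values here are placeholders, not this node's settings.
    const kubeletConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    cgroupDriver: systemd
    staticPodPath: /etc/kubernetes/manifests
    `

    func main() {
        if err := os.MkdirAll("/var/lib/kubelet", 0o755); err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile("/var/lib/kubelet/config.yaml", []byte(kubeletConfig), 0o644); err != nil {
            log.Fatal(err)
        }
    }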
Aug 13 01:44:53.554385 sshd[1780]: Accepted publickey for core from 147.75.109.163 port 42882 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:44:53.556260 sshd-session[1780]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:44:53.557465 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 01:44:53.559794 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:44:53.564394 systemd-logind[1544]: New session 5 of user core. Aug 13 01:44:53.578200 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 01:44:53.766835 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:44:53.778980 (kubelet)[1792]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 01:44:53.802682 sshd[1785]: Connection closed by 147.75.109.163 port 42882 Aug 13 01:44:53.803317 sshd-session[1780]: pam_unix(sshd:session): session closed for user core Aug 13 01:44:53.808425 systemd[1]: sshd@4-172.232.7.32:22-147.75.109.163:42882.service: Deactivated successfully. Aug 13 01:44:53.809620 systemd-logind[1544]: Session 5 logged out. Waiting for processes to exit. Aug 13 01:44:53.812261 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 01:44:53.815607 systemd-logind[1544]: Removed session 5. Aug 13 01:44:53.861433 kubelet[1792]: E0813 01:44:53.861272 1792 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:44:53.867001 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:44:53.867239 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 01:44:53.867718 systemd[1]: kubelet.service: Consumed 255ms CPU time, 110.3M memory peak. Aug 13 01:44:53.870999 systemd[1]: Started sshd@5-172.232.7.32:22-147.75.109.163:42892.service - OpenSSH per-connection server daemon (147.75.109.163:42892). Aug 13 01:44:54.222208 sshd[1803]: Accepted publickey for core from 147.75.109.163 port 42892 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:44:54.224081 sshd-session[1803]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:44:54.230501 systemd-logind[1544]: New session 6 of user core. Aug 13 01:44:54.236839 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 13 01:44:54.476729 sshd[1805]: Connection closed by 147.75.109.163 port 42892 Aug 13 01:44:54.477821 sshd-session[1803]: pam_unix(sshd:session): session closed for user core Aug 13 01:44:54.483312 systemd-logind[1544]: Session 6 logged out. Waiting for processes to exit. Aug 13 01:44:54.483870 systemd[1]: sshd@5-172.232.7.32:22-147.75.109.163:42892.service: Deactivated successfully. Aug 13 01:44:54.486947 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 01:44:54.489740 systemd-logind[1544]: Removed session 6. Aug 13 01:44:54.545608 systemd[1]: Started sshd@6-172.232.7.32:22-147.75.109.163:42908.service - OpenSSH per-connection server daemon (147.75.109.163:42908). 
Aug 13 01:44:54.890943 sshd[1811]: Accepted publickey for core from 147.75.109.163 port 42908 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:44:54.892736 sshd-session[1811]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:44:54.899260 systemd-logind[1544]: New session 7 of user core. Aug 13 01:44:54.905885 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 01:44:55.094610 sudo[1814]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 01:44:55.094958 sudo[1814]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 01:44:55.111972 sudo[1814]: pam_unix(sudo:session): session closed for user root Aug 13 01:44:55.162762 sshd[1813]: Connection closed by 147.75.109.163 port 42908 Aug 13 01:44:55.163887 sshd-session[1811]: pam_unix(sshd:session): session closed for user core Aug 13 01:44:55.168869 systemd[1]: sshd@6-172.232.7.32:22-147.75.109.163:42908.service: Deactivated successfully. Aug 13 01:44:55.171171 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 01:44:55.173552 systemd-logind[1544]: Session 7 logged out. Waiting for processes to exit. Aug 13 01:44:55.175190 systemd-logind[1544]: Removed session 7. Aug 13 01:44:55.227208 systemd[1]: Started sshd@7-172.232.7.32:22-147.75.109.163:42916.service - OpenSSH per-connection server daemon (147.75.109.163:42916). Aug 13 01:44:55.582389 sshd[1820]: Accepted publickey for core from 147.75.109.163 port 42916 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:44:55.584367 sshd-session[1820]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:44:55.590405 systemd-logind[1544]: New session 8 of user core. Aug 13 01:44:55.601802 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 13 01:44:55.785161 sudo[1824]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 01:44:55.785544 sudo[1824]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 01:44:55.792184 sudo[1824]: pam_unix(sudo:session): session closed for user root Aug 13 01:44:55.799366 sudo[1823]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Aug 13 01:44:55.799728 sudo[1823]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 01:44:55.811577 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 01:44:55.851922 augenrules[1846]: No rules Aug 13 01:44:55.853632 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 01:44:55.853982 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 01:44:55.855164 sudo[1823]: pam_unix(sudo:session): session closed for user root Aug 13 01:44:55.907231 sshd[1822]: Connection closed by 147.75.109.163 port 42916 Aug 13 01:44:55.907902 sshd-session[1820]: pam_unix(sshd:session): session closed for user core Aug 13 01:44:55.913321 systemd[1]: sshd@7-172.232.7.32:22-147.75.109.163:42916.service: Deactivated successfully. Aug 13 01:44:55.915529 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 01:44:55.916442 systemd-logind[1544]: Session 8 logged out. Waiting for processes to exit. Aug 13 01:44:55.918345 systemd-logind[1544]: Removed session 8. Aug 13 01:44:55.966428 systemd[1]: Started sshd@8-172.232.7.32:22-147.75.109.163:42924.service - OpenSSH per-connection server daemon (147.75.109.163:42924). 
Aug 13 01:44:56.308728 sshd[1855]: Accepted publickey for core from 147.75.109.163 port 42924 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:44:56.310619 sshd-session[1855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:44:56.316724 systemd-logind[1544]: New session 9 of user core. Aug 13 01:44:56.321823 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 01:44:56.507396 sudo[1858]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 01:44:56.507814 sudo[1858]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 01:44:57.710981 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 13 01:44:57.722073 (dockerd)[1875]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 01:44:58.619180 dockerd[1875]: time="2025-08-13T01:44:58.618861843Z" level=info msg="Starting up" Aug 13 01:44:58.623824 dockerd[1875]: time="2025-08-13T01:44:58.623634808Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Aug 13 01:44:58.690196 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1965624534-merged.mount: Deactivated successfully. Aug 13 01:44:58.714038 dockerd[1875]: time="2025-08-13T01:44:58.713972070Z" level=info msg="Loading containers: start." Aug 13 01:44:58.741380 kernel: Initializing XFRM netlink socket Aug 13 01:44:59.031284 systemd-networkd[1477]: docker0: Link UP Aug 13 01:44:59.035163 dockerd[1875]: time="2025-08-13T01:44:59.035102594Z" level=info msg="Loading containers: done." Aug 13 01:44:59.070788 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1099256467-merged.mount: Deactivated successfully. Aug 13 01:44:59.072122 dockerd[1875]: time="2025-08-13T01:44:59.072045743Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 01:44:59.072222 dockerd[1875]: time="2025-08-13T01:44:59.072186448Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Aug 13 01:44:59.072427 dockerd[1875]: time="2025-08-13T01:44:59.072324628Z" level=info msg="Initializing buildkit" Aug 13 01:44:59.095899 dockerd[1875]: time="2025-08-13T01:44:59.095820109Z" level=info msg="Completed buildkit initialization" Aug 13 01:44:59.105927 dockerd[1875]: time="2025-08-13T01:44:59.105788517Z" level=info msg="Daemon has completed initialization" Aug 13 01:44:59.105927 dockerd[1875]: time="2025-08-13T01:44:59.105849365Z" level=info msg="API listen on /run/docker.sock" Aug 13 01:44:59.106353 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 01:45:00.102528 containerd[1581]: time="2025-08-13T01:45:00.102456584Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\"" Aug 13 01:45:00.990952 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1649193040.mount: Deactivated successfully. 
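docker.socket has been listening since early in boot, but dockerd itself only starts here, right after the install script runs; the likely trigger is simply the first client connecting to /run/docker.sock, which socket-activates docker.service (an inference from the ordering, not something the log states). A minimal Go sketch of such a client, assuming the github.com/docker/docker module; a ping over the unix socket is enough to start the daemon:

    package main

    import (
        "context"
        "fmt"
        "log"

        "github.com/docker/docker/client"
    )

    func main() {
        // Connects to the default unix socket (or DOCKER_HOST if set); on a
        // socket-activated setup this connection alone starts docker.service.
        cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
        if err != nil {
            log.Fatal(err)
        }
        defer cli.Close()

        ping, err := cli.Ping(context.Background())
        if err != nil {
            log.Fatal(err)
        }
        fmt.Println("docker API version:", ping.APIVersion)
    }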
Aug 13 01:45:03.617539 containerd[1581]: time="2025-08-13T01:45:03.616391465Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:03.617539 containerd[1581]: time="2025-08-13T01:45:03.617390409Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.31.11: active requests=0, bytes read=28077759" Aug 13 01:45:03.617539 containerd[1581]: time="2025-08-13T01:45:03.617450407Z" level=info msg="ImageCreate event name:\"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:03.620394 containerd[1581]: time="2025-08-13T01:45:03.620354132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:03.621253 containerd[1581]: time="2025-08-13T01:45:03.621206408Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.31.11\" with image id \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\", repo tag \"registry.k8s.io/kube-apiserver:v1.31.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:a3d1c4440817725a1b503a7ccce94f3dce2b208ebf257b405dc2d97817df3dde\", size \"28074559\" in 3.518685569s" Aug 13 01:45:03.621721 containerd[1581]: time="2025-08-13T01:45:03.621488711Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.31.11\" returns image reference \"sha256:ea7fa3cfabed1b85e7de8e0a02356b6dcb7708442d6e4600d68abaebe1e9b1fc\"" Aug 13 01:45:03.622609 containerd[1581]: time="2025-08-13T01:45:03.622542137Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\"" Aug 13 01:45:03.971518 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 13 01:45:03.973801 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:45:04.185481 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:45:04.193068 (kubelet)[2140]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 01:45:04.342842 kubelet[2140]: E0813 01:45:04.342612 2140 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:45:04.348013 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:45:04.348285 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 01:45:04.349110 systemd[1]: kubelet.service: Consumed 349ms CPU time, 109.2M memory peak. 
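The kube-apiserver pull recorded above gives a rough sense of registry throughput: 28,077,759 bytes read in 3.518685569 s works out to close to 8.0 MB/s (about 7.6 MiB/s), and the same arithmetic on the later pulls in this log lands between roughly 8.6 and 18.5 MB/s. Treat these as loose lower bounds on network speed, since the reported durations also include unpacking the layers.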
Aug 13 01:45:06.101355 containerd[1581]: time="2025-08-13T01:45:06.101294122Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:06.102569 containerd[1581]: time="2025-08-13T01:45:06.102294280Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.31.11: active requests=0, bytes read=24713245" Aug 13 01:45:06.103086 containerd[1581]: time="2025-08-13T01:45:06.103059286Z" level=info msg="ImageCreate event name:\"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:06.105010 containerd[1581]: time="2025-08-13T01:45:06.104986534Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:06.105914 containerd[1581]: time="2025-08-13T01:45:06.105890698Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.31.11\" with image id \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\", repo tag \"registry.k8s.io/kube-controller-manager:v1.31.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:0f19de157f3d251f5ddeb6e9d026895bc55cb02592874b326fa345c57e5e2848\", size \"26315079\" in 2.483317749s" Aug 13 01:45:06.105994 containerd[1581]: time="2025-08-13T01:45:06.105979166Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.31.11\" returns image reference \"sha256:c057eceea4b436b01f9ce394734cfb06f13b2a3688c3983270e99743370b6051\"" Aug 13 01:45:06.106883 containerd[1581]: time="2025-08-13T01:45:06.106841488Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\"" Aug 13 01:45:08.182899 containerd[1581]: time="2025-08-13T01:45:08.182807107Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:08.183980 containerd[1581]: time="2025-08-13T01:45:08.183945475Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.31.11: active requests=0, bytes read=18783700" Aug 13 01:45:08.184698 containerd[1581]: time="2025-08-13T01:45:08.184579048Z" level=info msg="ImageCreate event name:\"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:08.187810 containerd[1581]: time="2025-08-13T01:45:08.187593259Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:08.188394 containerd[1581]: time="2025-08-13T01:45:08.188356106Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.31.11\" with image id \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\", repo tag \"registry.k8s.io/kube-scheduler:v1.31.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1a9b59b3bfa6c1f1911f6f865a795620c461d079e413061bb71981cadd67f39d\", size \"20385552\" in 2.081347593s" Aug 13 01:45:08.188446 containerd[1581]: time="2025-08-13T01:45:08.188397200Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.31.11\" returns image reference \"sha256:64e6a0b453108c87da0bb61473b35fd54078119a09edc56a4c8cb31602437c58\"" Aug 13 01:45:08.189258 
containerd[1581]: time="2025-08-13T01:45:08.189218083Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\"" Aug 13 01:45:09.893325 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1453648748.mount: Deactivated successfully. Aug 13 01:45:10.447494 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Aug 13 01:45:10.808900 containerd[1581]: time="2025-08-13T01:45:10.808795529Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.31.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:10.809928 containerd[1581]: time="2025-08-13T01:45:10.809849166Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.31.11: active requests=0, bytes read=30383612" Aug 13 01:45:10.810537 containerd[1581]: time="2025-08-13T01:45:10.810497680Z" level=info msg="ImageCreate event name:\"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:10.813666 containerd[1581]: time="2025-08-13T01:45:10.812775366Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:10.815814 containerd[1581]: time="2025-08-13T01:45:10.815775334Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.31.11\" with image id \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\", repo tag \"registry.k8s.io/kube-proxy:v1.31.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:a31da847792c5e7e92e91b78da1ad21d693e4b2b48d0e9f4610c8764dc2a5d79\", size \"30382631\" in 2.626515909s" Aug 13 01:45:10.815861 containerd[1581]: time="2025-08-13T01:45:10.815831502Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.31.11\" returns image reference \"sha256:0cec28fd5c3c446ec52e2886ddea38bf7f7e17755aa5d0095d50d3df5914a8fd\"" Aug 13 01:45:10.816459 containerd[1581]: time="2025-08-13T01:45:10.816439156Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 01:45:11.573851 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2039851072.mount: Deactivated successfully. 
Aug 13 01:45:12.961273 containerd[1581]: time="2025-08-13T01:45:12.961215269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:12.962258 containerd[1581]: time="2025-08-13T01:45:12.962226085Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Aug 13 01:45:12.963742 containerd[1581]: time="2025-08-13T01:45:12.963697825Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:12.965959 containerd[1581]: time="2025-08-13T01:45:12.965920432Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:12.966970 containerd[1581]: time="2025-08-13T01:45:12.966791480Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.150174786s" Aug 13 01:45:12.966970 containerd[1581]: time="2025-08-13T01:45:12.966823603Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 01:45:12.967608 containerd[1581]: time="2025-08-13T01:45:12.967569327Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 01:45:13.666127 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3945290162.mount: Deactivated successfully. 
Aug 13 01:45:13.672538 containerd[1581]: time="2025-08-13T01:45:13.672451092Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 01:45:13.673568 containerd[1581]: time="2025-08-13T01:45:13.673525679Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Aug 13 01:45:13.673850 containerd[1581]: time="2025-08-13T01:45:13.673810323Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 01:45:13.675758 containerd[1581]: time="2025-08-13T01:45:13.675716182Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 01:45:13.676425 containerd[1581]: time="2025-08-13T01:45:13.676389920Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 708.78844ms" Aug 13 01:45:13.676494 containerd[1581]: time="2025-08-13T01:45:13.676480576Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 01:45:13.677278 containerd[1581]: time="2025-08-13T01:45:13.677242169Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\"" Aug 13 01:45:14.430191 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Aug 13 01:45:14.433792 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:45:14.491455 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2901208045.mount: Deactivated successfully. Aug 13 01:45:14.707791 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:45:14.732341 (kubelet)[2243]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 01:45:14.882938 kubelet[2243]: E0813 01:45:14.882876 2243 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:45:14.887319 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:45:14.887521 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 01:45:14.887959 systemd[1]: kubelet.service: Consumed 351ms CPU time, 108.1M memory peak. 
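This is the third failed kubelet start, and the cadence is worth noting: the three "Scheduled restart job" messages land at 01:44:53.557465, 01:45:03.971518 and 01:45:14.430191, about 10.4 s apart, consistent with a restart delay of roughly 10 s on kubelet.service plus a little scheduling and startup latency. A small sketch that reproduces that arithmetic from the timestamps above:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        // The three "Scheduled restart job" timestamps for kubelet.service.
        const layout = "15:04:05"
        stamps := []string{"01:44:53.557465", "01:45:03.971518", "01:45:14.430191"}

        var prev time.Time
        for i, s := range stamps {
            t, err := time.Parse(layout, s)
            if err != nil {
                panic(err)
            }
            if i > 0 {
                fmt.Printf("restart %d -> %d: %s apart\n", i, i+1, t.Sub(prev))
            }
            prev = t
        }
    }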
Aug 13 01:45:16.743691 containerd[1581]: time="2025-08-13T01:45:16.743269452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.15-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:16.744376 containerd[1581]: time="2025-08-13T01:45:16.744047083Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.15-0: active requests=0, bytes read=56780013" Aug 13 01:45:16.745114 containerd[1581]: time="2025-08-13T01:45:16.745083743Z" level=info msg="ImageCreate event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:16.747759 containerd[1581]: time="2025-08-13T01:45:16.747718900Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:16.748810 containerd[1581]: time="2025-08-13T01:45:16.748663709Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.15-0\" with image id \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\", repo tag \"registry.k8s.io/etcd:3.5.15-0\", repo digest \"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\", size \"56909194\" in 3.071364648s" Aug 13 01:45:16.748810 containerd[1581]: time="2025-08-13T01:45:16.748695950Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.15-0\" returns image reference \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Aug 13 01:45:19.321430 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:45:19.321698 systemd[1]: kubelet.service: Consumed 351ms CPU time, 108.1M memory peak. Aug 13 01:45:19.324919 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:45:19.360233 systemd[1]: Reload requested from client PID 2321 ('systemctl') (unit session-9.scope)... Aug 13 01:45:19.360399 systemd[1]: Reloading... Aug 13 01:45:19.580674 zram_generator::config[2364]: No configuration found. Aug 13 01:45:19.679053 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:45:19.798723 systemd[1]: Reloading finished in 437 ms. Aug 13 01:45:19.857319 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 01:45:19.857424 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 01:45:19.857767 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:45:19.857814 systemd[1]: kubelet.service: Consumed 219ms CPU time, 98.3M memory peak. Aug 13 01:45:19.859475 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:45:20.118393 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:45:20.125956 (kubelet)[2418]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 01:45:20.203047 kubelet[2418]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:45:20.204669 kubelet[2418]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. 
Image garbage collector will get sandbox image information from CRI. Aug 13 01:45:20.204669 kubelet[2418]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:45:20.204669 kubelet[2418]: I0813 01:45:20.203716 2418 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 01:45:20.546188 kubelet[2418]: I0813 01:45:20.546113 2418 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 01:45:20.546188 kubelet[2418]: I0813 01:45:20.546152 2418 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 01:45:20.546485 kubelet[2418]: I0813 01:45:20.546441 2418 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 01:45:20.587232 kubelet[2418]: E0813 01:45:20.587166 2418 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.232.7.32:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.232.7.32:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:45:20.588349 kubelet[2418]: I0813 01:45:20.588190 2418 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 01:45:20.595865 kubelet[2418]: I0813 01:45:20.595830 2418 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 13 01:45:20.602698 kubelet[2418]: I0813 01:45:20.602356 2418 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 01:45:20.603130 kubelet[2418]: I0813 01:45:20.603087 2418 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 01:45:20.603300 kubelet[2418]: I0813 01:45:20.603258 2418 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 01:45:20.603488 kubelet[2418]: I0813 01:45:20.603291 2418 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-232-7-32","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 01:45:20.603637 kubelet[2418]: I0813 01:45:20.603494 2418 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 01:45:20.603637 kubelet[2418]: I0813 01:45:20.603503 2418 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 01:45:20.603751 kubelet[2418]: I0813 01:45:20.603688 2418 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:45:20.625889 kubelet[2418]: I0813 01:45:20.625816 2418 kubelet.go:408] "Attempting to sync node with API server" Aug 13 01:45:20.625889 kubelet[2418]: I0813 01:45:20.625868 2418 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 01:45:20.626551 kubelet[2418]: I0813 01:45:20.625917 2418 kubelet.go:314] "Adding apiserver pod source" Aug 13 01:45:20.626551 kubelet[2418]: I0813 01:45:20.625946 2418 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 01:45:20.627989 kubelet[2418]: W0813 01:45:20.627891 2418 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.232.7.32:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-232-7-32&limit=500&resourceVersion=0": dial tcp 172.232.7.32:6443: connect: connection refused Aug 13 01:45:20.627989 kubelet[2418]: E0813 01:45:20.627982 2418 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get 
\"https://172.232.7.32:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-232-7-32&limit=500&resourceVersion=0\": dial tcp 172.232.7.32:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:45:20.630518 kubelet[2418]: W0813 01:45:20.630151 2418 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.232.7.32:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.232.7.32:6443: connect: connection refused Aug 13 01:45:20.630711 kubelet[2418]: E0813 01:45:20.630688 2418 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.232.7.32:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.232.7.32:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:45:20.630857 kubelet[2418]: I0813 01:45:20.630843 2418 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Aug 13 01:45:20.631383 kubelet[2418]: I0813 01:45:20.631344 2418 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 01:45:20.631483 kubelet[2418]: W0813 01:45:20.631434 2418 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 01:45:20.635516 kubelet[2418]: I0813 01:45:20.634889 2418 server.go:1274] "Started kubelet" Aug 13 01:45:20.636772 kubelet[2418]: I0813 01:45:20.636755 2418 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 01:45:20.658363 kubelet[2418]: I0813 01:45:20.658264 2418 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 01:45:20.659639 kubelet[2418]: I0813 01:45:20.659537 2418 server.go:449] "Adding debug handlers to kubelet server" Aug 13 01:45:20.662203 kubelet[2418]: E0813 01:45:20.661044 2418 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.232.7.32:6443/api/v1/namespaces/default/events\": dial tcp 172.232.7.32:6443: connect: connection refused" event="&Event{ObjectMeta:{172-232-7-32.185b303d88a8cc56 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-232-7-32,UID:172-232-7-32,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-232-7-32,},FirstTimestamp:2025-08-13 01:45:20.634842198 +0000 UTC m=+0.504388290,LastTimestamp:2025-08-13 01:45:20.634842198 +0000 UTC m=+0.504388290,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-232-7-32,}" Aug 13 01:45:20.663735 kubelet[2418]: I0813 01:45:20.663683 2418 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 01:45:20.664508 kubelet[2418]: I0813 01:45:20.664473 2418 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 01:45:20.665191 kubelet[2418]: I0813 01:45:20.664839 2418 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 01:45:20.665753 kubelet[2418]: I0813 01:45:20.665708 2418 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 01:45:20.666670 
kubelet[2418]: E0813 01:45:20.666601 2418 kubelet_node_status.go:453] "Error getting the current node from lister" err="node \"172-232-7-32\" not found" Aug 13 01:45:20.667716 kubelet[2418]: I0813 01:45:20.667372 2418 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 01:45:20.667716 kubelet[2418]: I0813 01:45:20.667446 2418 reconciler.go:26] "Reconciler: start to sync state" Aug 13 01:45:20.669011 kubelet[2418]: E0813 01:45:20.668928 2418 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.7.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-7-32?timeout=10s\": dial tcp 172.232.7.32:6443: connect: connection refused" interval="200ms" Aug 13 01:45:20.669342 kubelet[2418]: W0813 01:45:20.669285 2418 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.232.7.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.232.7.32:6443: connect: connection refused Aug 13 01:45:20.669455 kubelet[2418]: E0813 01:45:20.669437 2418 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.232.7.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.232.7.32:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:45:20.670619 kubelet[2418]: I0813 01:45:20.670604 2418 factory.go:221] Registration of the systemd container factory successfully Aug 13 01:45:20.670892 kubelet[2418]: I0813 01:45:20.670860 2418 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 01:45:20.672585 kubelet[2418]: E0813 01:45:20.672559 2418 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 01:45:20.672846 kubelet[2418]: I0813 01:45:20.672804 2418 factory.go:221] Registration of the containerd container factory successfully Aug 13 01:45:20.697064 kubelet[2418]: I0813 01:45:20.696966 2418 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 01:45:20.697064 kubelet[2418]: I0813 01:45:20.696987 2418 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 01:45:20.697064 kubelet[2418]: I0813 01:45:20.697015 2418 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:45:20.697927 kubelet[2418]: I0813 01:45:20.697847 2418 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 01:45:20.700066 kubelet[2418]: I0813 01:45:20.699829 2418 policy_none.go:49] "None policy: Start" Aug 13 01:45:20.700066 kubelet[2418]: I0813 01:45:20.699961 2418 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 01:45:20.700066 kubelet[2418]: I0813 01:45:20.699989 2418 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 01:45:20.700066 kubelet[2418]: I0813 01:45:20.700024 2418 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 01:45:20.700228 kubelet[2418]: E0813 01:45:20.700081 2418 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 01:45:20.701295 kubelet[2418]: I0813 01:45:20.701271 2418 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 01:45:20.701587 kubelet[2418]: I0813 01:45:20.701363 2418 state_mem.go:35] "Initializing new in-memory state store" Aug 13 01:45:20.708453 kubelet[2418]: W0813 01:45:20.708279 2418 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.232.7.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.232.7.32:6443: connect: connection refused Aug 13 01:45:20.708700 kubelet[2418]: E0813 01:45:20.708674 2418 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.232.7.32:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.232.7.32:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:45:20.711179 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 01:45:20.744026 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 01:45:20.748970 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 13 01:45:20.760703 kubelet[2418]: I0813 01:45:20.760290 2418 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 01:45:20.760703 kubelet[2418]: I0813 01:45:20.760627 2418 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 01:45:20.761013 kubelet[2418]: I0813 01:45:20.760640 2418 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 01:45:20.761639 kubelet[2418]: I0813 01:45:20.761621 2418 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 01:45:20.764513 kubelet[2418]: E0813 01:45:20.764486 2418 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-232-7-32\" not found" Aug 13 01:45:20.815028 systemd[1]: Created slice kubepods-burstable-pod3eb295726c208b0c6250be9be257c0c4.slice - libcontainer container kubepods-burstable-pod3eb295726c208b0c6250be9be257c0c4.slice. Aug 13 01:45:20.839104 systemd[1]: Created slice kubepods-burstable-pod58f1974cc78effcb44ad126b43b9c686.slice - libcontainer container kubepods-burstable-pod58f1974cc78effcb44ad126b43b9c686.slice. Aug 13 01:45:20.845145 systemd[1]: Created slice kubepods-burstable-pod3559a48a51661341b481ed8b441f4259.slice - libcontainer container kubepods-burstable-pod3559a48a51661341b481ed8b441f4259.slice. 
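The HardEvictionThresholds inside the nodeConfig logged above are the stock kubelet defaults; in a kubelet config file the same values are normally expressed as the compact evictionHard map. A short Python restatement using only the values visible in the log (the map form is an assumption about how an operator would write it, not something shown on this node):

    # Restate the HardEvictionThresholds from the nodeConfig above in compact form.
    thresholds = [
        ("imagefs.inodesFree", None,    0.05),
        ("memory.available",   "100Mi", 0.0),
        ("nodefs.available",   None,    0.10),
        ("nodefs.inodesFree",  None,    0.05),
        ("imagefs.available",  None,    0.15),
    ]
    eviction_hard = {sig: qty if qty else f"{pct:.0%}" for sig, qty, pct in thresholds}
    print(eviction_hard)
    # {'imagefs.inodesFree': '5%', 'memory.available': '100Mi', 'nodefs.available': '10%', ...}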
Aug 13 01:45:20.864215 kubelet[2418]: I0813 01:45:20.864095 2418 kubelet_node_status.go:72] "Attempting to register node" node="172-232-7-32" Aug 13 01:45:20.865176 kubelet[2418]: E0813 01:45:20.864954 2418 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.232.7.32:6443/api/v1/nodes\": dial tcp 172.232.7.32:6443: connect: connection refused" node="172-232-7-32" Aug 13 01:45:20.868674 kubelet[2418]: I0813 01:45:20.868545 2418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3eb295726c208b0c6250be9be257c0c4-ca-certs\") pod \"kube-apiserver-172-232-7-32\" (UID: \"3eb295726c208b0c6250be9be257c0c4\") " pod="kube-system/kube-apiserver-172-232-7-32" Aug 13 01:45:20.868674 kubelet[2418]: I0813 01:45:20.868575 2418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/58f1974cc78effcb44ad126b43b9c686-ca-certs\") pod \"kube-controller-manager-172-232-7-32\" (UID: \"58f1974cc78effcb44ad126b43b9c686\") " pod="kube-system/kube-controller-manager-172-232-7-32" Aug 13 01:45:20.868674 kubelet[2418]: I0813 01:45:20.868598 2418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3559a48a51661341b481ed8b441f4259-kubeconfig\") pod \"kube-scheduler-172-232-7-32\" (UID: \"3559a48a51661341b481ed8b441f4259\") " pod="kube-system/kube-scheduler-172-232-7-32" Aug 13 01:45:20.868674 kubelet[2418]: I0813 01:45:20.868617 2418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/58f1974cc78effcb44ad126b43b9c686-usr-share-ca-certificates\") pod \"kube-controller-manager-172-232-7-32\" (UID: \"58f1974cc78effcb44ad126b43b9c686\") " pod="kube-system/kube-controller-manager-172-232-7-32" Aug 13 01:45:20.868674 kubelet[2418]: I0813 01:45:20.868637 2418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3eb295726c208b0c6250be9be257c0c4-k8s-certs\") pod \"kube-apiserver-172-232-7-32\" (UID: \"3eb295726c208b0c6250be9be257c0c4\") " pod="kube-system/kube-apiserver-172-232-7-32" Aug 13 01:45:20.868985 kubelet[2418]: I0813 01:45:20.868674 2418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3eb295726c208b0c6250be9be257c0c4-usr-share-ca-certificates\") pod \"kube-apiserver-172-232-7-32\" (UID: \"3eb295726c208b0c6250be9be257c0c4\") " pod="kube-system/kube-apiserver-172-232-7-32" Aug 13 01:45:20.869305 kubelet[2418]: I0813 01:45:20.869087 2418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/58f1974cc78effcb44ad126b43b9c686-flexvolume-dir\") pod \"kube-controller-manager-172-232-7-32\" (UID: \"58f1974cc78effcb44ad126b43b9c686\") " pod="kube-system/kube-controller-manager-172-232-7-32" Aug 13 01:45:20.869305 kubelet[2418]: I0813 01:45:20.869148 2418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/58f1974cc78effcb44ad126b43b9c686-k8s-certs\") pod \"kube-controller-manager-172-232-7-32\" (UID: 
\"58f1974cc78effcb44ad126b43b9c686\") " pod="kube-system/kube-controller-manager-172-232-7-32" Aug 13 01:45:20.869305 kubelet[2418]: I0813 01:45:20.869199 2418 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/58f1974cc78effcb44ad126b43b9c686-kubeconfig\") pod \"kube-controller-manager-172-232-7-32\" (UID: \"58f1974cc78effcb44ad126b43b9c686\") " pod="kube-system/kube-controller-manager-172-232-7-32" Aug 13 01:45:20.869547 kubelet[2418]: E0813 01:45:20.869510 2418 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.7.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-7-32?timeout=10s\": dial tcp 172.232.7.32:6443: connect: connection refused" interval="400ms" Aug 13 01:45:21.067347 kubelet[2418]: I0813 01:45:21.067212 2418 kubelet_node_status.go:72] "Attempting to register node" node="172-232-7-32" Aug 13 01:45:21.067938 kubelet[2418]: E0813 01:45:21.067740 2418 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.232.7.32:6443/api/v1/nodes\": dial tcp 172.232.7.32:6443: connect: connection refused" node="172-232-7-32" Aug 13 01:45:21.133958 kubelet[2418]: E0813 01:45:21.133871 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:45:21.134980 containerd[1581]: time="2025-08-13T01:45:21.134940290Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-232-7-32,Uid:3eb295726c208b0c6250be9be257c0c4,Namespace:kube-system,Attempt:0,}" Aug 13 01:45:21.143538 kubelet[2418]: E0813 01:45:21.143458 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:45:21.144351 containerd[1581]: time="2025-08-13T01:45:21.144253948Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-232-7-32,Uid:58f1974cc78effcb44ad126b43b9c686,Namespace:kube-system,Attempt:0,}" Aug 13 01:45:21.150843 kubelet[2418]: E0813 01:45:21.150812 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:45:21.163289 containerd[1581]: time="2025-08-13T01:45:21.162905202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-232-7-32,Uid:3559a48a51661341b481ed8b441f4259,Namespace:kube-system,Attempt:0,}" Aug 13 01:45:21.202833 containerd[1581]: time="2025-08-13T01:45:21.202789062Z" level=info msg="connecting to shim b59d5d78996c9306677afb4f729f228413bb6fb705ac2b0e7119fc3dd10254d0" address="unix:///run/containerd/s/ababff68dfb89ebdde1589aacea8dd8d52cd55fa2c15e93485c47096e62b81bd" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:45:21.275212 kubelet[2418]: E0813 01:45:21.275149 2418 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.7.32:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-7-32?timeout=10s\": dial tcp 172.232.7.32:6443: connect: connection refused" interval="800ms" Aug 13 01:45:21.325200 containerd[1581]: time="2025-08-13T01:45:21.299928716Z" level=info msg="connecting to shim b1591f5c7393aa5b0130a100520cc2d96842dc297d548721a3bf3c91ab16530e" 
address="unix:///run/containerd/s/81bceac6430c72ab13796138a0b0b667165d00dffcc1e3093cce65f092741b2c" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:45:21.325200 containerd[1581]: time="2025-08-13T01:45:21.302771518Z" level=info msg="connecting to shim 2152561cf64a78798635fac9de73b7a364c842e3ccb3c66563a50464cc956eca" address="unix:///run/containerd/s/2530472bcfd9c38ff5dac769c2d26a38a67e8c072e49fa94d8b96421095626ad" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:45:21.354920 systemd[1]: Started cri-containerd-b59d5d78996c9306677afb4f729f228413bb6fb705ac2b0e7119fc3dd10254d0.scope - libcontainer container b59d5d78996c9306677afb4f729f228413bb6fb705ac2b0e7119fc3dd10254d0. Aug 13 01:45:21.413509 kubelet[2418]: E0813 01:45:21.404452 2418 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.232.7.32:6443/api/v1/namespaces/default/events\": dial tcp 172.232.7.32:6443: connect: connection refused" event="&Event{ObjectMeta:{172-232-7-32.185b303d88a8cc56 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-232-7-32,UID:172-232-7-32,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-232-7-32,},FirstTimestamp:2025-08-13 01:45:20.634842198 +0000 UTC m=+0.504388290,LastTimestamp:2025-08-13 01:45:20.634842198 +0000 UTC m=+0.504388290,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-232-7-32,}" Aug 13 01:45:21.435064 kubelet[2418]: W0813 01:45:21.434736 2418 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.232.7.32:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.232.7.32:6443: connect: connection refused Aug 13 01:45:21.435064 kubelet[2418]: E0813 01:45:21.434825 2418 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.232.7.32:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.232.7.32:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:45:21.452848 systemd[1]: Started cri-containerd-b1591f5c7393aa5b0130a100520cc2d96842dc297d548721a3bf3c91ab16530e.scope - libcontainer container b1591f5c7393aa5b0130a100520cc2d96842dc297d548721a3bf3c91ab16530e. Aug 13 01:45:21.497230 kubelet[2418]: I0813 01:45:21.497195 2418 kubelet_node_status.go:72] "Attempting to register node" node="172-232-7-32" Aug 13 01:45:21.498156 kubelet[2418]: E0813 01:45:21.498135 2418 kubelet_node_status.go:95] "Unable to register node with API server" err="Post \"https://172.232.7.32:6443/api/v1/nodes\": dial tcp 172.232.7.32:6443: connect: connection refused" node="172-232-7-32" Aug 13 01:45:21.498975 systemd[1]: Started cri-containerd-2152561cf64a78798635fac9de73b7a364c842e3ccb3c66563a50464cc956eca.scope - libcontainer container 2152561cf64a78798635fac9de73b7a364c842e3ccb3c66563a50464cc956eca. 
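The sandboxes being started above follow the static pod convention: the pod name from the manifest under /etc/kubernetes/manifests gets the node name appended, which is why every control plane pod here ends in -172-232-7-32. A tiny Python illustration (the base names are the usual kubeadm manifests; the suffixing itself is what the log shows):

    # Static pod names observed above: <manifest name>-<node name> (sketch).
    node_name = "172-232-7-32"
    for base in ("kube-apiserver", "kube-controller-manager", "kube-scheduler"):
        print(f"{base}-{node_name}")   # e.g. kube-apiserver-172-232-7-32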
Aug 13 01:45:21.589379 containerd[1581]: time="2025-08-13T01:45:21.589246249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-232-7-32,Uid:3eb295726c208b0c6250be9be257c0c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"b59d5d78996c9306677afb4f729f228413bb6fb705ac2b0e7119fc3dd10254d0\"" Aug 13 01:45:21.591175 kubelet[2418]: E0813 01:45:21.591129 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:45:21.595199 containerd[1581]: time="2025-08-13T01:45:21.595148769Z" level=info msg="CreateContainer within sandbox \"b59d5d78996c9306677afb4f729f228413bb6fb705ac2b0e7119fc3dd10254d0\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 01:45:21.609455 containerd[1581]: time="2025-08-13T01:45:21.609404330Z" level=info msg="Container f93bd396a42fded84c05888550e0d3e56de7ffac199ac73e4cb2bd9480bb5f8f: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:45:21.618577 containerd[1581]: time="2025-08-13T01:45:21.618014556Z" level=info msg="CreateContainer within sandbox \"b59d5d78996c9306677afb4f729f228413bb6fb705ac2b0e7119fc3dd10254d0\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f93bd396a42fded84c05888550e0d3e56de7ffac199ac73e4cb2bd9480bb5f8f\"" Aug 13 01:45:21.618974 containerd[1581]: time="2025-08-13T01:45:21.618843180Z" level=info msg="StartContainer for \"f93bd396a42fded84c05888550e0d3e56de7ffac199ac73e4cb2bd9480bb5f8f\"" Aug 13 01:45:21.625818 containerd[1581]: time="2025-08-13T01:45:21.625775835Z" level=info msg="connecting to shim f93bd396a42fded84c05888550e0d3e56de7ffac199ac73e4cb2bd9480bb5f8f" address="unix:///run/containerd/s/ababff68dfb89ebdde1589aacea8dd8d52cd55fa2c15e93485c47096e62b81bd" protocol=ttrpc version=3 Aug 13 01:45:21.626793 containerd[1581]: time="2025-08-13T01:45:21.626757688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-232-7-32,Uid:58f1974cc78effcb44ad126b43b9c686,Namespace:kube-system,Attempt:0,} returns sandbox id \"b1591f5c7393aa5b0130a100520cc2d96842dc297d548721a3bf3c91ab16530e\"" Aug 13 01:45:21.629368 kubelet[2418]: E0813 01:45:21.629304 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:45:21.635482 containerd[1581]: time="2025-08-13T01:45:21.635443925Z" level=info msg="CreateContainer within sandbox \"b1591f5c7393aa5b0130a100520cc2d96842dc297d548721a3bf3c91ab16530e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 01:45:21.645461 containerd[1581]: time="2025-08-13T01:45:21.645407661Z" level=info msg="Container e7d4871ec4c28acf967e67818836249fe7ec772ffbb662a065732ef5f40e8d61: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:45:21.655593 containerd[1581]: time="2025-08-13T01:45:21.655444085Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-232-7-32,Uid:3559a48a51661341b481ed8b441f4259,Namespace:kube-system,Attempt:0,} returns sandbox id \"2152561cf64a78798635fac9de73b7a364c842e3ccb3c66563a50464cc956eca\"" Aug 13 01:45:21.662305 kubelet[2418]: E0813 01:45:21.661801 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 
01:45:21.667831 systemd[1]: Started cri-containerd-f93bd396a42fded84c05888550e0d3e56de7ffac199ac73e4cb2bd9480bb5f8f.scope - libcontainer container f93bd396a42fded84c05888550e0d3e56de7ffac199ac73e4cb2bd9480bb5f8f. Aug 13 01:45:21.670176 containerd[1581]: time="2025-08-13T01:45:21.670102789Z" level=info msg="CreateContainer within sandbox \"2152561cf64a78798635fac9de73b7a364c842e3ccb3c66563a50464cc956eca\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 01:45:21.671392 containerd[1581]: time="2025-08-13T01:45:21.671349391Z" level=info msg="CreateContainer within sandbox \"b1591f5c7393aa5b0130a100520cc2d96842dc297d548721a3bf3c91ab16530e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"e7d4871ec4c28acf967e67818836249fe7ec772ffbb662a065732ef5f40e8d61\"" Aug 13 01:45:21.672508 containerd[1581]: time="2025-08-13T01:45:21.672451484Z" level=info msg="StartContainer for \"e7d4871ec4c28acf967e67818836249fe7ec772ffbb662a065732ef5f40e8d61\"" Aug 13 01:45:21.675973 containerd[1581]: time="2025-08-13T01:45:21.675930450Z" level=info msg="connecting to shim e7d4871ec4c28acf967e67818836249fe7ec772ffbb662a065732ef5f40e8d61" address="unix:///run/containerd/s/81bceac6430c72ab13796138a0b0b667165d00dffcc1e3093cce65f092741b2c" protocol=ttrpc version=3 Aug 13 01:45:21.683219 containerd[1581]: time="2025-08-13T01:45:21.683184789Z" level=info msg="Container 4ce912b94673bf80163c5ac21119231921f64401baf15cabef75fb9a088b5c57: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:45:21.694380 containerd[1581]: time="2025-08-13T01:45:21.694330398Z" level=info msg="CreateContainer within sandbox \"2152561cf64a78798635fac9de73b7a364c842e3ccb3c66563a50464cc956eca\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"4ce912b94673bf80163c5ac21119231921f64401baf15cabef75fb9a088b5c57\"" Aug 13 01:45:21.700705 containerd[1581]: time="2025-08-13T01:45:21.699240103Z" level=info msg="StartContainer for \"4ce912b94673bf80163c5ac21119231921f64401baf15cabef75fb9a088b5c57\"" Aug 13 01:45:21.700705 containerd[1581]: time="2025-08-13T01:45:21.700421847Z" level=info msg="connecting to shim 4ce912b94673bf80163c5ac21119231921f64401baf15cabef75fb9a088b5c57" address="unix:///run/containerd/s/2530472bcfd9c38ff5dac769c2d26a38a67e8c072e49fa94d8b96421095626ad" protocol=ttrpc version=3 Aug 13 01:45:21.720967 systemd[1]: Started cri-containerd-e7d4871ec4c28acf967e67818836249fe7ec772ffbb662a065732ef5f40e8d61.scope - libcontainer container e7d4871ec4c28acf967e67818836249fe7ec772ffbb662a065732ef5f40e8d61. Aug 13 01:45:21.740190 kubelet[2418]: W0813 01:45:21.740099 2418 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.232.7.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.232.7.32:6443: connect: connection refused Aug 13 01:45:21.740190 kubelet[2418]: E0813 01:45:21.740182 2418 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.232.7.32:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.232.7.32:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:45:21.740962 systemd[1]: Started cri-containerd-4ce912b94673bf80163c5ac21119231921f64401baf15cabef75fb9a088b5c57.scope - libcontainer container 4ce912b94673bf80163c5ac21119231921f64401baf15cabef75fb9a088b5c57. 
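All of the containerd entries above share the same time=... level=... msg=... layout, so levels, ids, and messages can be pulled out mechanically when working through a capture like this. A small Python parsing sketch; the sample line is shortened from the CreateContainer entry above:

    # Extract level and message from a containerd-style log entry (sketch).
    import re

    line = 'time="2025-08-13T01:45:21.694330398Z" level=info msg="CreateContainer within sandbox ..."'
    m = re.search(r'level=(\w+) msg="(.*)"$', line)
    if m:
        level, msg = m.groups()
        print(level, "->", msg)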
Aug 13 01:45:21.863795 containerd[1581]: time="2025-08-13T01:45:21.863574581Z" level=info msg="StartContainer for \"e7d4871ec4c28acf967e67818836249fe7ec772ffbb662a065732ef5f40e8d61\" returns successfully" Aug 13 01:45:21.873355 containerd[1581]: time="2025-08-13T01:45:21.873201570Z" level=info msg="StartContainer for \"f93bd396a42fded84c05888550e0d3e56de7ffac199ac73e4cb2bd9480bb5f8f\" returns successfully" Aug 13 01:45:21.877630 containerd[1581]: time="2025-08-13T01:45:21.877596442Z" level=info msg="StartContainer for \"4ce912b94673bf80163c5ac21119231921f64401baf15cabef75fb9a088b5c57\" returns successfully" Aug 13 01:45:22.302186 kubelet[2418]: I0813 01:45:22.302020 2418 kubelet_node_status.go:72] "Attempting to register node" node="172-232-7-32" Aug 13 01:45:22.744460 kubelet[2418]: E0813 01:45:22.744090 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:45:22.747895 kubelet[2418]: E0813 01:45:22.747780 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:45:22.751250 kubelet[2418]: E0813 01:45:22.751192 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:45:22.930798 update_engine[1548]: I20250813 01:45:22.930706 1548 update_attempter.cc:509] Updating boot flags... Aug 13 01:45:23.753870 kubelet[2418]: E0813 01:45:23.753830 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:45:23.754278 kubelet[2418]: E0813 01:45:23.754137 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:45:23.754550 kubelet[2418]: E0813 01:45:23.754526 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:45:24.185141 kubelet[2418]: E0813 01:45:24.184990 2418 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-232-7-32\" not found" node="172-232-7-32" Aug 13 01:45:24.342382 kubelet[2418]: I0813 01:45:24.341846 2418 kubelet_node_status.go:75] "Successfully registered node" node="172-232-7-32" Aug 13 01:45:24.650364 kubelet[2418]: I0813 01:45:24.650317 2418 apiserver.go:52] "Watching apiserver" Aug 13 01:45:24.667796 kubelet[2418]: I0813 01:45:24.667753 2418 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 01:45:24.758917 kubelet[2418]: E0813 01:45:24.758849 2418 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-172-232-7-32\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-232-7-32" Aug 13 01:45:24.759456 kubelet[2418]: E0813 01:45:24.759035 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 
01:45:24.988794 kubelet[2418]: E0813 01:45:24.988750 2418 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-172-232-7-32\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-232-7-32" Aug 13 01:45:24.988967 kubelet[2418]: E0813 01:45:24.988930 2418 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:45:26.570733 systemd[1]: Reload requested from client PID 2702 ('systemctl') (unit session-9.scope)... Aug 13 01:45:26.570757 systemd[1]: Reloading... Aug 13 01:45:26.717818 zram_generator::config[2748]: No configuration found. Aug 13 01:45:26.828245 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:45:26.961193 systemd[1]: Reloading finished in 390 ms. Aug 13 01:45:27.004758 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:45:27.026293 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 01:45:27.026732 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:45:27.026843 systemd[1]: kubelet.service: Consumed 1.144s CPU time, 131.4M memory peak. Aug 13 01:45:27.029911 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:45:27.226154 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:45:27.237090 (kubelet)[2796]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 01:45:27.320588 kubelet[2796]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:45:27.320588 kubelet[2796]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Aug 13 01:45:27.320588 kubelet[2796]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:45:27.322689 kubelet[2796]: I0813 01:45:27.322323 2796 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 01:45:27.337454 kubelet[2796]: I0813 01:45:27.337404 2796 server.go:491] "Kubelet version" kubeletVersion="v1.31.8" Aug 13 01:45:27.337454 kubelet[2796]: I0813 01:45:27.337439 2796 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 01:45:27.338281 kubelet[2796]: I0813 01:45:27.337972 2796 server.go:934] "Client rotation is on, will bootstrap in background" Aug 13 01:45:27.340005 kubelet[2796]: I0813 01:45:27.339962 2796 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Aug 13 01:45:27.343360 kubelet[2796]: I0813 01:45:27.342944 2796 dynamic_cafile_content.go:160] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 01:45:27.351072 kubelet[2796]: I0813 01:45:27.351016 2796 server.go:1431] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 13 01:45:27.357982 kubelet[2796]: I0813 01:45:27.357953 2796 server.go:749] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 01:45:27.358675 kubelet[2796]: I0813 01:45:27.358113 2796 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Aug 13 01:45:27.358675 kubelet[2796]: I0813 01:45:27.358237 2796 container_manager_linux.go:264] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 01:45:27.358675 kubelet[2796]: I0813 01:45:27.358265 2796 container_manager_linux.go:269] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-232-7-32","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 01:45:27.358675 kubelet[2796]: I0813 01:45:27.358460 2796 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 01:45:27.358885 kubelet[2796]: I0813 01:45:27.358469 2796 container_manager_linux.go:300] "Creating device plugin manager" Aug 13 01:45:27.358885 kubelet[2796]: I0813 01:45:27.358498 2796 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:45:27.358885 kubelet[2796]: I0813 01:45:27.358601 2796 kubelet.go:408] "Attempting to sync node with API server" Aug 13 01:45:27.358885 kubelet[2796]: I0813 01:45:27.358613 2796 kubelet.go:303] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 01:45:27.359980 kubelet[2796]: I0813 01:45:27.359686 2796 kubelet.go:314] "Adding apiserver pod source" Aug 13 01:45:27.359980 kubelet[2796]: I0813 01:45:27.359712 2796 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 01:45:27.362854 kubelet[2796]: I0813 01:45:27.362797 2796 kuberuntime_manager.go:262] "Container runtime initialized" 
containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Aug 13 01:45:27.363393 kubelet[2796]: I0813 01:45:27.363348 2796 kubelet.go:837] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 01:45:27.366249 kubelet[2796]: I0813 01:45:27.365463 2796 server.go:1274] "Started kubelet" Aug 13 01:45:27.374102 kubelet[2796]: I0813 01:45:27.373360 2796 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 01:45:27.379897 kubelet[2796]: I0813 01:45:27.379247 2796 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 01:45:27.380051 kubelet[2796]: I0813 01:45:27.379961 2796 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 01:45:27.381733 kubelet[2796]: I0813 01:45:27.381475 2796 server.go:449] "Adding debug handlers to kubelet server" Aug 13 01:45:27.383739 kubelet[2796]: I0813 01:45:27.383294 2796 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 01:45:27.383739 kubelet[2796]: I0813 01:45:27.383562 2796 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 01:45:27.388930 kubelet[2796]: I0813 01:45:27.388792 2796 volume_manager.go:289] "Starting Kubelet Volume Manager" Aug 13 01:45:27.391826 kubelet[2796]: I0813 01:45:27.391794 2796 desired_state_of_world_populator.go:147] "Desired state populator starts to run" Aug 13 01:45:27.392051 kubelet[2796]: I0813 01:45:27.392040 2796 reconciler.go:26] "Reconciler: start to sync state" Aug 13 01:45:27.393340 kubelet[2796]: I0813 01:45:27.393053 2796 factory.go:221] Registration of the systemd container factory successfully Aug 13 01:45:27.393340 kubelet[2796]: I0813 01:45:27.393153 2796 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 01:45:27.394642 kubelet[2796]: I0813 01:45:27.394614 2796 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 01:45:27.395176 kubelet[2796]: E0813 01:45:27.395125 2796 kubelet.go:1478] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 01:45:27.396501 kubelet[2796]: I0813 01:45:27.396236 2796 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Aug 13 01:45:27.396501 kubelet[2796]: I0813 01:45:27.396257 2796 status_manager.go:217] "Starting to sync pod status with apiserver" Aug 13 01:45:27.396501 kubelet[2796]: I0813 01:45:27.396274 2796 kubelet.go:2321] "Starting kubelet main sync loop" Aug 13 01:45:27.396501 kubelet[2796]: E0813 01:45:27.396316 2796 kubelet.go:2345] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 01:45:27.399461 kubelet[2796]: I0813 01:45:27.399438 2796 factory.go:221] Registration of the containerd container factory successfully Aug 13 01:45:27.470104 kubelet[2796]: I0813 01:45:27.470057 2796 cpu_manager.go:214] "Starting CPU manager" policy="none" Aug 13 01:45:27.470104 kubelet[2796]: I0813 01:45:27.470084 2796 cpu_manager.go:215] "Reconciling" reconcilePeriod="10s" Aug 13 01:45:27.470104 kubelet[2796]: I0813 01:45:27.470105 2796 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:45:27.470344 kubelet[2796]: I0813 01:45:27.470275 2796 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 01:45:27.470344 kubelet[2796]: I0813 01:45:27.470287 2796 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 01:45:27.470344 kubelet[2796]: I0813 01:45:27.470309 2796 policy_none.go:49] "None policy: Start" Aug 13 01:45:27.471132 kubelet[2796]: I0813 01:45:27.471108 2796 memory_manager.go:170] "Starting memorymanager" policy="None" Aug 13 01:45:27.471181 kubelet[2796]: I0813 01:45:27.471133 2796 state_mem.go:35] "Initializing new in-memory state store" Aug 13 01:45:27.471305 kubelet[2796]: I0813 01:45:27.471277 2796 state_mem.go:75] "Updated machine memory state" Aug 13 01:45:27.477698 kubelet[2796]: I0813 01:45:27.477084 2796 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 01:45:27.477698 kubelet[2796]: I0813 01:45:27.477259 2796 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 01:45:27.477698 kubelet[2796]: I0813 01:45:27.477271 2796 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 01:45:27.478363 kubelet[2796]: I0813 01:45:27.478167 2796 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 01:45:27.594673 kubelet[2796]: I0813 01:45:27.592841 2796 kubelet_node_status.go:72] "Attempting to register node" node="172-232-7-32" Aug 13 01:45:27.594916 kubelet[2796]: I0813 01:45:27.594897 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/3eb295726c208b0c6250be9be257c0c4-ca-certs\") pod \"kube-apiserver-172-232-7-32\" (UID: \"3eb295726c208b0c6250be9be257c0c4\") " pod="kube-system/kube-apiserver-172-232-7-32" Aug 13 01:45:27.594999 kubelet[2796]: I0813 01:45:27.594980 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/3eb295726c208b0c6250be9be257c0c4-usr-share-ca-certificates\") pod \"kube-apiserver-172-232-7-32\" (UID: \"3eb295726c208b0c6250be9be257c0c4\") " pod="kube-system/kube-apiserver-172-232-7-32" Aug 13 01:45:27.595069 kubelet[2796]: I0813 01:45:27.595048 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/58f1974cc78effcb44ad126b43b9c686-ca-certs\") pod 
\"kube-controller-manager-172-232-7-32\" (UID: \"58f1974cc78effcb44ad126b43b9c686\") " pod="kube-system/kube-controller-manager-172-232-7-32" Aug 13 01:45:27.595133 kubelet[2796]: I0813 01:45:27.595073 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/58f1974cc78effcb44ad126b43b9c686-k8s-certs\") pod \"kube-controller-manager-172-232-7-32\" (UID: \"58f1974cc78effcb44ad126b43b9c686\") " pod="kube-system/kube-controller-manager-172-232-7-32" Aug 13 01:45:27.595133 kubelet[2796]: I0813 01:45:27.595091 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/58f1974cc78effcb44ad126b43b9c686-kubeconfig\") pod \"kube-controller-manager-172-232-7-32\" (UID: \"58f1974cc78effcb44ad126b43b9c686\") " pod="kube-system/kube-controller-manager-172-232-7-32" Aug 13 01:45:27.595133 kubelet[2796]: I0813 01:45:27.595108 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/58f1974cc78effcb44ad126b43b9c686-usr-share-ca-certificates\") pod \"kube-controller-manager-172-232-7-32\" (UID: \"58f1974cc78effcb44ad126b43b9c686\") " pod="kube-system/kube-controller-manager-172-232-7-32" Aug 13 01:45:27.595133 kubelet[2796]: I0813 01:45:27.595125 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/3559a48a51661341b481ed8b441f4259-kubeconfig\") pod \"kube-scheduler-172-232-7-32\" (UID: \"3559a48a51661341b481ed8b441f4259\") " pod="kube-system/kube-scheduler-172-232-7-32" Aug 13 01:45:27.595304 kubelet[2796]: I0813 01:45:27.595139 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/3eb295726c208b0c6250be9be257c0c4-k8s-certs\") pod \"kube-apiserver-172-232-7-32\" (UID: \"3eb295726c208b0c6250be9be257c0c4\") " pod="kube-system/kube-apiserver-172-232-7-32" Aug 13 01:45:27.595304 kubelet[2796]: I0813 01:45:27.595156 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/58f1974cc78effcb44ad126b43b9c686-flexvolume-dir\") pod \"kube-controller-manager-172-232-7-32\" (UID: \"58f1974cc78effcb44ad126b43b9c686\") " pod="kube-system/kube-controller-manager-172-232-7-32" Aug 13 01:45:27.601189 kubelet[2796]: I0813 01:45:27.601121 2796 kubelet_node_status.go:111] "Node was previously registered" node="172-232-7-32" Aug 13 01:45:27.601189 kubelet[2796]: I0813 01:45:27.601192 2796 kubelet_node_status.go:75] "Successfully registered node" node="172-232-7-32" Aug 13 01:45:27.811512 kubelet[2796]: E0813 01:45:27.808221 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:45:27.811512 kubelet[2796]: E0813 01:45:27.808862 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:45:27.811512 kubelet[2796]: E0813 01:45:27.809012 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, 
the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:45:28.373686 kubelet[2796]: I0813 01:45:28.372748 2796 apiserver.go:52] "Watching apiserver" Aug 13 01:45:28.392240 kubelet[2796]: I0813 01:45:28.392164 2796 desired_state_of_world_populator.go:155] "Finished populating initial desired state of world" Aug 13 01:45:28.438499 kubelet[2796]: E0813 01:45:28.438464 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:45:28.459069 kubelet[2796]: E0813 01:45:28.439437 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:45:28.462823 kubelet[2796]: E0813 01:45:28.462723 2796 kubelet.go:1915] "Failed creating a mirror pod for" err="pods \"kube-apiserver-172-232-7-32\" already exists" pod="kube-system/kube-apiserver-172-232-7-32" Aug 13 01:45:28.465026 kubelet[2796]: E0813 01:45:28.464989 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:45:28.537143 kubelet[2796]: I0813 01:45:28.537022 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-232-7-32" podStartSLOduration=1.536994739 podStartE2EDuration="1.536994739s" podCreationTimestamp="2025-08-13 01:45:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:45:28.502850682 +0000 UTC m=+1.257419609" watchObservedRunningTime="2025-08-13 01:45:28.536994739 +0000 UTC m=+1.291563666" Aug 13 01:45:28.548997 kubelet[2796]: I0813 01:45:28.548778 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-232-7-32" podStartSLOduration=1.5487093280000002 podStartE2EDuration="1.548709328s" podCreationTimestamp="2025-08-13 01:45:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:45:28.538109001 +0000 UTC m=+1.292677948" watchObservedRunningTime="2025-08-13 01:45:28.548709328 +0000 UTC m=+1.303278265" Aug 13 01:45:28.562994 kubelet[2796]: I0813 01:45:28.562911 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-232-7-32" podStartSLOduration=1.562886335 podStartE2EDuration="1.562886335s" podCreationTimestamp="2025-08-13 01:45:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:45:28.549846664 +0000 UTC m=+1.304415591" watchObservedRunningTime="2025-08-13 01:45:28.562886335 +0000 UTC m=+1.317455272" Aug 13 01:45:29.439551 kubelet[2796]: E0813 01:45:29.439495 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:45:29.440190 kubelet[2796]: E0813 01:45:29.439919 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:45:31.012307 kubelet[2796]: 
E0813 01:45:31.012266 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:45:31.444096 kubelet[2796]: E0813 01:45:31.443927 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:45:31.912749 kubelet[2796]: I0813 01:45:31.912631 2796 kuberuntime_manager.go:1635] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 01:45:31.913632 containerd[1581]: time="2025-08-13T01:45:31.913561976Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 01:45:31.914171 kubelet[2796]: I0813 01:45:31.913923 2796 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 01:45:32.978484 systemd[1]: Created slice kubepods-besteffort-pod9cdf404a_9180_4dce_bb4f_bb6e1151a9fe.slice - libcontainer container kubepods-besteffort-pod9cdf404a_9180_4dce_bb4f_bb6e1151a9fe.slice. Aug 13 01:45:33.004545 systemd[1]: Created slice kubepods-besteffort-pod70675166_4c0d_40c2_b604_57f05c828709.slice - libcontainer container kubepods-besteffort-pod70675166_4c0d_40c2_b604_57f05c828709.slice. Aug 13 01:45:33.031376 kubelet[2796]: I0813 01:45:33.031316 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/70675166-4c0d-40c2-b604-57f05c828709-kube-proxy\") pod \"kube-proxy-dmp9l\" (UID: \"70675166-4c0d-40c2-b604-57f05c828709\") " pod="kube-system/kube-proxy-dmp9l" Aug 13 01:45:33.032238 kubelet[2796]: I0813 01:45:33.031610 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/70675166-4c0d-40c2-b604-57f05c828709-lib-modules\") pod \"kube-proxy-dmp9l\" (UID: \"70675166-4c0d-40c2-b604-57f05c828709\") " pod="kube-system/kube-proxy-dmp9l" Aug 13 01:45:33.032238 kubelet[2796]: I0813 01:45:33.031755 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbcsn\" (UniqueName: \"kubernetes.io/projected/70675166-4c0d-40c2-b604-57f05c828709-kube-api-access-jbcsn\") pod \"kube-proxy-dmp9l\" (UID: \"70675166-4c0d-40c2-b604-57f05c828709\") " pod="kube-system/kube-proxy-dmp9l" Aug 13 01:45:33.032238 kubelet[2796]: I0813 01:45:33.031786 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9cdf404a-9180-4dce-bb4f-bb6e1151a9fe-var-lib-calico\") pod \"tigera-operator-5bf8dfcb4-v8hlv\" (UID: \"9cdf404a-9180-4dce-bb4f-bb6e1151a9fe\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-v8hlv" Aug 13 01:45:33.032238 kubelet[2796]: I0813 01:45:33.031807 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/70675166-4c0d-40c2-b604-57f05c828709-xtables-lock\") pod \"kube-proxy-dmp9l\" (UID: \"70675166-4c0d-40c2-b604-57f05c828709\") " pod="kube-system/kube-proxy-dmp9l" Aug 13 01:45:33.032238 kubelet[2796]: I0813 01:45:33.031826 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrczw\" (UniqueName: 
\"kubernetes.io/projected/9cdf404a-9180-4dce-bb4f-bb6e1151a9fe-kube-api-access-zrczw\") pod \"tigera-operator-5bf8dfcb4-v8hlv\" (UID: \"9cdf404a-9180-4dce-bb4f-bb6e1151a9fe\") " pod="tigera-operator/tigera-operator-5bf8dfcb4-v8hlv" Aug 13 01:45:33.288841 containerd[1581]: time="2025-08-13T01:45:33.288693108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-v8hlv,Uid:9cdf404a-9180-4dce-bb4f-bb6e1151a9fe,Namespace:tigera-operator,Attempt:0,}" Aug 13 01:45:33.310371 kubelet[2796]: E0813 01:45:33.309225 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:45:33.311790 containerd[1581]: time="2025-08-13T01:45:33.311364039Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dmp9l,Uid:70675166-4c0d-40c2-b604-57f05c828709,Namespace:kube-system,Attempt:0,}" Aug 13 01:45:33.350049 containerd[1581]: time="2025-08-13T01:45:33.349898655Z" level=info msg="connecting to shim 19143f3abd72677669f85e49a88a014075df99b822463c2bba0c11bc76ceded2" address="unix:///run/containerd/s/5ba87e323feab35e759594bb5f3decfc4ddedb3d8ef2328cbe43b8d93b4bf8a4" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:45:33.356100 containerd[1581]: time="2025-08-13T01:45:33.356033766Z" level=info msg="connecting to shim c1108b0c291143acad9fcb77f0f795ae937a2619843d6b22a0649f46b63f7e41" address="unix:///run/containerd/s/f16b30f8896a950c2c727dafe52598da9e0121b4ed437955d884f652aec00160" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:45:33.461144 systemd[1]: Started cri-containerd-19143f3abd72677669f85e49a88a014075df99b822463c2bba0c11bc76ceded2.scope - libcontainer container 19143f3abd72677669f85e49a88a014075df99b822463c2bba0c11bc76ceded2. Aug 13 01:45:33.484236 systemd[1]: Started cri-containerd-c1108b0c291143acad9fcb77f0f795ae937a2619843d6b22a0649f46b63f7e41.scope - libcontainer container c1108b0c291143acad9fcb77f0f795ae937a2619843d6b22a0649f46b63f7e41. 
Aug 13 01:45:33.626508 containerd[1581]: time="2025-08-13T01:45:33.625749883Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-dmp9l,Uid:70675166-4c0d-40c2-b604-57f05c828709,Namespace:kube-system,Attempt:0,} returns sandbox id \"c1108b0c291143acad9fcb77f0f795ae937a2619843d6b22a0649f46b63f7e41\"" Aug 13 01:45:33.627271 containerd[1581]: time="2025-08-13T01:45:33.627110141Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-5bf8dfcb4-v8hlv,Uid:9cdf404a-9180-4dce-bb4f-bb6e1151a9fe,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"19143f3abd72677669f85e49a88a014075df99b822463c2bba0c11bc76ceded2\"" Aug 13 01:45:33.627424 kubelet[2796]: E0813 01:45:33.627108 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:45:33.629610 containerd[1581]: time="2025-08-13T01:45:33.629593831Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 01:45:33.632868 containerd[1581]: time="2025-08-13T01:45:33.632832672Z" level=info msg="CreateContainer within sandbox \"c1108b0c291143acad9fcb77f0f795ae937a2619843d6b22a0649f46b63f7e41\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 01:45:33.641928 containerd[1581]: time="2025-08-13T01:45:33.641807164Z" level=info msg="Container b1cae60ebb3f0645d1c55aa8f38ad1162e9ad70e089b4cfc78b6491a8037a084: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:45:33.648356 containerd[1581]: time="2025-08-13T01:45:33.648300167Z" level=info msg="CreateContainer within sandbox \"c1108b0c291143acad9fcb77f0f795ae937a2619843d6b22a0649f46b63f7e41\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"b1cae60ebb3f0645d1c55aa8f38ad1162e9ad70e089b4cfc78b6491a8037a084\"" Aug 13 01:45:33.649434 containerd[1581]: time="2025-08-13T01:45:33.649380494Z" level=info msg="StartContainer for \"b1cae60ebb3f0645d1c55aa8f38ad1162e9ad70e089b4cfc78b6491a8037a084\"" Aug 13 01:45:33.650935 containerd[1581]: time="2025-08-13T01:45:33.650858119Z" level=info msg="connecting to shim b1cae60ebb3f0645d1c55aa8f38ad1162e9ad70e089b4cfc78b6491a8037a084" address="unix:///run/containerd/s/f16b30f8896a950c2c727dafe52598da9e0121b4ed437955d884f652aec00160" protocol=ttrpc version=3 Aug 13 01:45:33.678873 systemd[1]: Started cri-containerd-b1cae60ebb3f0645d1c55aa8f38ad1162e9ad70e089b4cfc78b6491a8037a084.scope - libcontainer container b1cae60ebb3f0645d1c55aa8f38ad1162e9ad70e089b4cfc78b6491a8037a084. Aug 13 01:45:33.755378 containerd[1581]: time="2025-08-13T01:45:33.754844319Z" level=info msg="StartContainer for \"b1cae60ebb3f0645d1c55aa8f38ad1162e9ad70e089b4cfc78b6491a8037a084\" returns successfully" Aug 13 01:45:34.459265 kubelet[2796]: E0813 01:45:34.458832 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:45:35.027927 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2793326466.mount: Deactivated successfully. 
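The recurring dns.go:153 "Nameserver limits exceeded" errors are the kubelet reporting that the node's resolv.conf lists more nameservers than it will pass through; only the first three survive, which is why the "applied nameserver line" above always shows exactly 172.232.0.21 172.232.0.13 172.232.0.22. A minimal sketch of that truncation (Python, standard library only; the fourth nameserver in the sample is hypothetical, added only to trigger the same message, and the parser is an illustration rather than kubelet's actual dns.go logic):

    # Hypothetical resolv.conf content; the real file on this node evidently listed
    # more than the three servers that survive in the logged "applied nameserver line".
    RESOLV_CONF = """\
    nameserver 172.232.0.21
    nameserver 172.232.0.13
    nameserver 172.232.0.22
    nameserver 172.232.0.17
    """

    MAX_NAMESERVERS = 3  # the cap the kubelet applies to the host resolv.conf

    def applied_nameservers(text, limit=MAX_NAMESERVERS):
        """Return (kept, omitted) nameservers, mimicking the truncation seen in the log."""
        servers = []
        for line in text.splitlines():
            fields = line.split()
            if len(fields) >= 2 and fields[0] == "nameserver":
                servers.append(fields[1])
        return servers[:limit], servers[limit:]

    kept, omitted = applied_nameservers(RESOLV_CONF)
    if omitted:
        print("Nameserver limits exceeded, the applied nameserver line is:", " ".join(kept))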
Aug 13 01:45:35.959962 kubelet[2796]: E0813 01:45:35.959810 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:45:36.017851 kubelet[2796]: I0813 01:45:36.016714 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-dmp9l" podStartSLOduration=4.01668128 podStartE2EDuration="4.01668128s" podCreationTimestamp="2025-08-13 01:45:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:45:34.482122867 +0000 UTC m=+7.236691804" watchObservedRunningTime="2025-08-13 01:45:36.01668128 +0000 UTC m=+8.771250237" Aug 13 01:45:36.489838 kubelet[2796]: E0813 01:45:36.489351 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:45:37.157294 containerd[1581]: time="2025-08-13T01:45:37.157225856Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:37.170590 containerd[1581]: time="2025-08-13T01:45:37.170477542Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Aug 13 01:45:37.178682 containerd[1581]: time="2025-08-13T01:45:37.176818724Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:37.181405 containerd[1581]: time="2025-08-13T01:45:37.181343611Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:37.182972 containerd[1581]: time="2025-08-13T01:45:37.182924016Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 3.553213279s" Aug 13 01:45:37.182972 containerd[1581]: time="2025-08-13T01:45:37.182973082Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 01:45:37.189843 containerd[1581]: time="2025-08-13T01:45:37.189736067Z" level=info msg="CreateContainer within sandbox \"19143f3abd72677669f85e49a88a014075df99b822463c2bba0c11bc76ceded2\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 13 01:45:37.249361 containerd[1581]: time="2025-08-13T01:45:37.249287473Z" level=info msg="Container 50836e9cf80c4b2606fa26bd609206eec6cc5126076a033560829b9b0b7485ac: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:45:37.255373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3839568058.mount: Deactivated successfully. 
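As a rough scale check, the pull above read 25056543 bytes in the reported 3.553213279 s, i.e. about 7 MB/s. The arithmetic (Python; both numbers are taken directly from the containerd entries above):

    bytes_read   = 25_056_543       # "stop pulling image ...: bytes read=25056543"
    pull_seconds = 3.553_213_279    # "Pulled image ... in 3.553213279s"

    rate = bytes_read / pull_seconds
    print(f"{rate / 1e6:.2f} MB/s ({rate / 2**20:.2f} MiB/s)")   # ~7.05 MB/s, ~6.73 MiB/s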
Aug 13 01:45:37.290877 containerd[1581]: time="2025-08-13T01:45:37.290784313Z" level=info msg="CreateContainer within sandbox \"19143f3abd72677669f85e49a88a014075df99b822463c2bba0c11bc76ceded2\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"50836e9cf80c4b2606fa26bd609206eec6cc5126076a033560829b9b0b7485ac\"" Aug 13 01:45:37.294147 containerd[1581]: time="2025-08-13T01:45:37.294065168Z" level=info msg="StartContainer for \"50836e9cf80c4b2606fa26bd609206eec6cc5126076a033560829b9b0b7485ac\"" Aug 13 01:45:37.296252 containerd[1581]: time="2025-08-13T01:45:37.296204392Z" level=info msg="connecting to shim 50836e9cf80c4b2606fa26bd609206eec6cc5126076a033560829b9b0b7485ac" address="unix:///run/containerd/s/5ba87e323feab35e759594bb5f3decfc4ddedb3d8ef2328cbe43b8d93b4bf8a4" protocol=ttrpc version=3 Aug 13 01:45:37.420168 systemd[1]: Started cri-containerd-50836e9cf80c4b2606fa26bd609206eec6cc5126076a033560829b9b0b7485ac.scope - libcontainer container 50836e9cf80c4b2606fa26bd609206eec6cc5126076a033560829b9b0b7485ac. Aug 13 01:45:37.595573 containerd[1581]: time="2025-08-13T01:45:37.595490642Z" level=info msg="StartContainer for \"50836e9cf80c4b2606fa26bd609206eec6cc5126076a033560829b9b0b7485ac\" returns successfully" Aug 13 01:45:38.859574 kubelet[2796]: E0813 01:45:38.859501 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:45:38.872832 kubelet[2796]: I0813 01:45:38.872353 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-5bf8dfcb4-v8hlv" podStartSLOduration=3.315270593 podStartE2EDuration="6.872330597s" podCreationTimestamp="2025-08-13 01:45:32 +0000 UTC" firstStartedPulling="2025-08-13 01:45:33.629196544 +0000 UTC m=+6.383765471" lastFinishedPulling="2025-08-13 01:45:37.186256538 +0000 UTC m=+9.940825475" observedRunningTime="2025-08-13 01:45:38.512097888 +0000 UTC m=+11.266666815" watchObservedRunningTime="2025-08-13 01:45:38.872330597 +0000 UTC m=+11.626899524" Aug 13 01:45:45.042903 sudo[1858]: pam_unix(sudo:session): session closed for user root Aug 13 01:45:45.096321 sshd[1857]: Connection closed by 147.75.109.163 port 42924 Aug 13 01:45:45.095623 sshd-session[1855]: pam_unix(sshd:session): session closed for user core Aug 13 01:45:45.103921 systemd[1]: sshd@8-172.232.7.32:22-147.75.109.163:42924.service: Deactivated successfully. Aug 13 01:45:45.108360 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 01:45:45.109127 systemd[1]: session-9.scope: Consumed 6.546s CPU time, 223.7M memory peak. Aug 13 01:45:45.113104 systemd-logind[1544]: Session 9 logged out. Waiting for processes to exit. Aug 13 01:45:45.119044 systemd-logind[1544]: Removed session 9. Aug 13 01:45:49.005220 systemd[1]: Created slice kubepods-besteffort-podbf7dc0d3_6ff8_4c0e_929e_6b31c9f35674.slice - libcontainer container kubepods-besteffort-podbf7dc0d3_6ff8_4c0e_929e_6b31c9f35674.slice. 
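The podStartSLOduration values logged for kube-proxy-dmp9l and tigera-operator-5bf8dfcb4-v8hlv can be reproduced from the timestamps in the same entries: the E2E duration is watchObservedRunningTime minus podCreationTimestamp, and the SLO duration additionally subtracts the image-pull window (firstStartedPulling to lastFinishedPulling), which is why kube-proxy, with no pull recorded, shows identical values while tigera-operator does not. A small check for the tigera-operator case (Python, standard library only; the subtraction rule is inferred from these log values and reproduces them to within microsecond rounding):

    from datetime import datetime, timezone

    def ts(s):
        # Timestamps as printed by the kubelet, e.g. "2025-08-13 01:45:38.872330597 +0000 UTC";
        # fractional seconds are truncated to microseconds for datetime.
        date, time, *_ = s.split()
        return datetime.fromisoformat(f"{date} {time[:15]}").replace(tzinfo=timezone.utc)

    creation   = ts("2025-08-13 01:45:32 +0000 UTC")               # podCreationTimestamp
    observed   = ts("2025-08-13 01:45:38.872330597 +0000 UTC")     # watchObservedRunningTime
    pull_start = ts("2025-08-13 01:45:33.629196544 +0000 UTC")     # firstStartedPulling
    pull_end   = ts("2025-08-13 01:45:37.186256538 +0000 UTC")     # lastFinishedPulling

    e2e = (observed - creation).total_seconds()
    slo = e2e - (pull_end - pull_start).total_seconds()
    print(f"tigera-operator: E2E {e2e:.6f}s, SLO {slo:.6f}s")   # logged: 6.872330597s and 3.315270593s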
Aug 13 01:45:49.143401 kubelet[2796]: I0813 01:45:49.143298 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674-tigera-ca-bundle\") pod \"calico-typha-bf6ccb678-cdfdr\" (UID: \"bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674\") " pod="calico-system/calico-typha-bf6ccb678-cdfdr" Aug 13 01:45:49.144301 kubelet[2796]: I0813 01:45:49.143797 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674-typha-certs\") pod \"calico-typha-bf6ccb678-cdfdr\" (UID: \"bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674\") " pod="calico-system/calico-typha-bf6ccb678-cdfdr" Aug 13 01:45:49.144301 kubelet[2796]: I0813 01:45:49.143866 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v9gdw\" (UniqueName: \"kubernetes.io/projected/bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674-kube-api-access-v9gdw\") pod \"calico-typha-bf6ccb678-cdfdr\" (UID: \"bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674\") " pod="calico-system/calico-typha-bf6ccb678-cdfdr" Aug 13 01:45:49.338519 kubelet[2796]: E0813 01:45:49.338066 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:45:49.339879 containerd[1581]: time="2025-08-13T01:45:49.339809590Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bf6ccb678-cdfdr,Uid:bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674,Namespace:calico-system,Attempt:0,}" Aug 13 01:45:49.397502 systemd[1]: Created slice kubepods-besteffort-pod416d9de4_5101_44c9_b974_0fedf790aa67.slice - libcontainer container kubepods-besteffort-pod416d9de4_5101_44c9_b974_0fedf790aa67.slice. 
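Each "Created slice" entry encodes the pod's QoS class and UID: with the systemd cgroup driver, the besteffort pod with UID 416d9de4-5101-44c9-b974-0fedf790aa67 lands in kubepods-besteffort-pod416d9de4_5101_44c9_b974_0fedf790aa67.slice, i.e. the dashes in the UID become underscores inside a kubepods-<qos>-pod<uid>.slice unit name. A small sketch of that mapping (Python; derived from the slice names in this log rather than from the kubelet's own cgroup code):

    def pod_slice_name(pod_uid, qos_class="besteffort"):
        """systemd slice unit for a pod, as seen in the 'Created slice' journal entries above."""
        return f"kubepods-{qos_class}-pod{pod_uid.replace('-', '_')}.slice"

    # UIDs taken from the reconciler/volume entries in this log.
    for uid in ("bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674",    # calico-typha-bf6ccb678-cdfdr
                "416d9de4-5101-44c9-b974-0fedf790aa67"):   # calico-node-8j6cb
        print(pod_slice_name(uid))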
Aug 13 01:45:49.425820 containerd[1581]: time="2025-08-13T01:45:49.424433045Z" level=info msg="connecting to shim 23bcf076e353ce2533a5798733e41d1d0302670aad3fef154009a71648b5ca9b" address="unix:///run/containerd/s/40eff3ebfe9d98a0c314d0989158a75a1a0ce422244153905317dbfd68349063" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:45:49.444950 kubelet[2796]: I0813 01:45:49.444896 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/416d9de4-5101-44c9-b974-0fedf790aa67-var-lib-calico\") pod \"calico-node-8j6cb\" (UID: \"416d9de4-5101-44c9-b974-0fedf790aa67\") " pod="calico-system/calico-node-8j6cb" Aug 13 01:45:49.444950 kubelet[2796]: I0813 01:45:49.444940 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/416d9de4-5101-44c9-b974-0fedf790aa67-cni-bin-dir\") pod \"calico-node-8j6cb\" (UID: \"416d9de4-5101-44c9-b974-0fedf790aa67\") " pod="calico-system/calico-node-8j6cb" Aug 13 01:45:49.444950 kubelet[2796]: I0813 01:45:49.444958 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/416d9de4-5101-44c9-b974-0fedf790aa67-cni-log-dir\") pod \"calico-node-8j6cb\" (UID: \"416d9de4-5101-44c9-b974-0fedf790aa67\") " pod="calico-system/calico-node-8j6cb" Aug 13 01:45:49.445137 kubelet[2796]: I0813 01:45:49.444982 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/416d9de4-5101-44c9-b974-0fedf790aa67-flexvol-driver-host\") pod \"calico-node-8j6cb\" (UID: \"416d9de4-5101-44c9-b974-0fedf790aa67\") " pod="calico-system/calico-node-8j6cb" Aug 13 01:45:49.445137 kubelet[2796]: I0813 01:45:49.445004 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/416d9de4-5101-44c9-b974-0fedf790aa67-node-certs\") pod \"calico-node-8j6cb\" (UID: \"416d9de4-5101-44c9-b974-0fedf790aa67\") " pod="calico-system/calico-node-8j6cb" Aug 13 01:45:49.445137 kubelet[2796]: I0813 01:45:49.445021 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/416d9de4-5101-44c9-b974-0fedf790aa67-xtables-lock\") pod \"calico-node-8j6cb\" (UID: \"416d9de4-5101-44c9-b974-0fedf790aa67\") " pod="calico-system/calico-node-8j6cb" Aug 13 01:45:49.445137 kubelet[2796]: I0813 01:45:49.445041 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/416d9de4-5101-44c9-b974-0fedf790aa67-tigera-ca-bundle\") pod \"calico-node-8j6cb\" (UID: \"416d9de4-5101-44c9-b974-0fedf790aa67\") " pod="calico-system/calico-node-8j6cb" Aug 13 01:45:49.445137 kubelet[2796]: I0813 01:45:49.445058 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p5j4\" (UniqueName: \"kubernetes.io/projected/416d9de4-5101-44c9-b974-0fedf790aa67-kube-api-access-7p5j4\") pod \"calico-node-8j6cb\" (UID: \"416d9de4-5101-44c9-b974-0fedf790aa67\") " pod="calico-system/calico-node-8j6cb" Aug 13 01:45:49.445252 kubelet[2796]: I0813 01:45:49.445072 2796 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/416d9de4-5101-44c9-b974-0fedf790aa67-policysync\") pod \"calico-node-8j6cb\" (UID: \"416d9de4-5101-44c9-b974-0fedf790aa67\") " pod="calico-system/calico-node-8j6cb" Aug 13 01:45:49.445252 kubelet[2796]: I0813 01:45:49.445089 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/416d9de4-5101-44c9-b974-0fedf790aa67-var-run-calico\") pod \"calico-node-8j6cb\" (UID: \"416d9de4-5101-44c9-b974-0fedf790aa67\") " pod="calico-system/calico-node-8j6cb" Aug 13 01:45:49.445252 kubelet[2796]: I0813 01:45:49.445103 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/416d9de4-5101-44c9-b974-0fedf790aa67-cni-net-dir\") pod \"calico-node-8j6cb\" (UID: \"416d9de4-5101-44c9-b974-0fedf790aa67\") " pod="calico-system/calico-node-8j6cb" Aug 13 01:45:49.445252 kubelet[2796]: I0813 01:45:49.445122 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/416d9de4-5101-44c9-b974-0fedf790aa67-lib-modules\") pod \"calico-node-8j6cb\" (UID: \"416d9de4-5101-44c9-b974-0fedf790aa67\") " pod="calico-system/calico-node-8j6cb" Aug 13 01:45:49.467206 systemd[1]: Started cri-containerd-23bcf076e353ce2533a5798733e41d1d0302670aad3fef154009a71648b5ca9b.scope - libcontainer container 23bcf076e353ce2533a5798733e41d1d0302670aad3fef154009a71648b5ca9b. Aug 13 01:45:49.591935 kubelet[2796]: E0813 01:45:49.591437 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.591935 kubelet[2796]: W0813 01:45:49.591490 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.591935 kubelet[2796]: E0813 01:45:49.591531 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.606731 kubelet[2796]: E0813 01:45:49.606680 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.606982 kubelet[2796]: W0813 01:45:49.606937 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.607178 kubelet[2796]: E0813 01:45:49.607128 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:45:49.607431 containerd[1581]: time="2025-08-13T01:45:49.607389398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-bf6ccb678-cdfdr,Uid:bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674,Namespace:calico-system,Attempt:0,} returns sandbox id \"23bcf076e353ce2533a5798733e41d1d0302670aad3fef154009a71648b5ca9b\"" Aug 13 01:45:49.609623 kubelet[2796]: E0813 01:45:49.609575 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:45:49.612157 containerd[1581]: time="2025-08-13T01:45:49.611931647Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 13 01:45:49.649806 kubelet[2796]: E0813 01:45:49.649451 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bk2p6" podUID="0e8898c7-a3f5-4010-bb1f-d756673c29b2" Aug 13 01:45:49.708377 containerd[1581]: time="2025-08-13T01:45:49.708272899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8j6cb,Uid:416d9de4-5101-44c9-b974-0fedf790aa67,Namespace:calico-system,Attempt:0,}" Aug 13 01:45:49.749006 kubelet[2796]: E0813 01:45:49.748874 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.749006 kubelet[2796]: W0813 01:45:49.748905 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.749006 kubelet[2796]: E0813 01:45:49.748939 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.749568 kubelet[2796]: E0813 01:45:49.749510 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.749568 kubelet[2796]: W0813 01:45:49.749521 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.749568 kubelet[2796]: E0813 01:45:49.749531 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.749988 kubelet[2796]: E0813 01:45:49.749909 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.749988 kubelet[2796]: W0813 01:45:49.749925 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.749988 kubelet[2796]: E0813 01:45:49.749938 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:45:49.750316 kubelet[2796]: E0813 01:45:49.750239 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.750316 kubelet[2796]: W0813 01:45:49.750252 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.750316 kubelet[2796]: E0813 01:45:49.750263 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.750621 kubelet[2796]: E0813 01:45:49.750561 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.750621 kubelet[2796]: W0813 01:45:49.750573 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.750621 kubelet[2796]: E0813 01:45:49.750582 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.750966 kubelet[2796]: E0813 01:45:49.750912 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.750966 kubelet[2796]: W0813 01:45:49.750923 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.750966 kubelet[2796]: E0813 01:45:49.750932 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.751282 kubelet[2796]: E0813 01:45:49.751218 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.751282 kubelet[2796]: W0813 01:45:49.751231 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.751282 kubelet[2796]: E0813 01:45:49.751239 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.751571 kubelet[2796]: E0813 01:45:49.751559 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.751663 kubelet[2796]: W0813 01:45:49.751614 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.751663 kubelet[2796]: E0813 01:45:49.751626 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:45:49.753884 kubelet[2796]: E0813 01:45:49.753823 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.753884 kubelet[2796]: W0813 01:45:49.753836 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.753884 kubelet[2796]: E0813 01:45:49.753846 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.754185 kubelet[2796]: E0813 01:45:49.754120 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.754185 kubelet[2796]: W0813 01:45:49.754132 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.754185 kubelet[2796]: E0813 01:45:49.754142 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.754502 kubelet[2796]: E0813 01:45:49.754426 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.754502 kubelet[2796]: W0813 01:45:49.754440 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.754502 kubelet[2796]: E0813 01:45:49.754454 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.754826 kubelet[2796]: E0813 01:45:49.754783 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.754826 kubelet[2796]: W0813 01:45:49.754798 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.754826 kubelet[2796]: E0813 01:45:49.754807 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.755224 kubelet[2796]: E0813 01:45:49.755156 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.755224 kubelet[2796]: W0813 01:45:49.755172 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.755224 kubelet[2796]: E0813 01:45:49.755183 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:45:49.755552 kubelet[2796]: E0813 01:45:49.755495 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.755552 kubelet[2796]: W0813 01:45:49.755507 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.755552 kubelet[2796]: E0813 01:45:49.755516 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.755895 kubelet[2796]: E0813 01:45:49.755813 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.755895 kubelet[2796]: W0813 01:45:49.755825 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.755895 kubelet[2796]: E0813 01:45:49.755833 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.756201 kubelet[2796]: E0813 01:45:49.756092 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.756201 kubelet[2796]: W0813 01:45:49.756105 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.756201 kubelet[2796]: E0813 01:45:49.756114 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.757247 kubelet[2796]: E0813 01:45:49.757132 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.757403 kubelet[2796]: W0813 01:45:49.757361 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.757492 kubelet[2796]: E0813 01:45:49.757415 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.758077 kubelet[2796]: E0813 01:45:49.757993 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.758077 kubelet[2796]: W0813 01:45:49.758067 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.758135 kubelet[2796]: E0813 01:45:49.758081 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:45:49.758858 kubelet[2796]: E0813 01:45:49.758825 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.758858 kubelet[2796]: W0813 01:45:49.758844 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.758858 kubelet[2796]: E0813 01:45:49.758856 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.761069 kubelet[2796]: E0813 01:45:49.760990 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.761112 kubelet[2796]: W0813 01:45:49.761094 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.761612 kubelet[2796]: E0813 01:45:49.761140 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.762760 kubelet[2796]: E0813 01:45:49.762731 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.762760 kubelet[2796]: W0813 01:45:49.762755 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.764746 kubelet[2796]: E0813 01:45:49.762768 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.765333 kubelet[2796]: I0813 01:45:49.764904 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tg4l9\" (UniqueName: \"kubernetes.io/projected/0e8898c7-a3f5-4010-bb1f-d756673c29b2-kube-api-access-tg4l9\") pod \"csi-node-driver-bk2p6\" (UID: \"0e8898c7-a3f5-4010-bb1f-d756673c29b2\") " pod="calico-system/csi-node-driver-bk2p6" Aug 13 01:45:49.765685 kubelet[2796]: E0813 01:45:49.765615 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.766018 kubelet[2796]: W0813 01:45:49.765985 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.766140 kubelet[2796]: E0813 01:45:49.766108 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:45:49.769061 kubelet[2796]: E0813 01:45:49.768608 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.769061 kubelet[2796]: W0813 01:45:49.768745 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.769061 kubelet[2796]: E0813 01:45:49.768836 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.769847 kubelet[2796]: E0813 01:45:49.769555 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.769847 kubelet[2796]: W0813 01:45:49.769569 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.769847 kubelet[2796]: E0813 01:45:49.769585 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.769847 kubelet[2796]: I0813 01:45:49.769633 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/0e8898c7-a3f5-4010-bb1f-d756673c29b2-kubelet-dir\") pod \"csi-node-driver-bk2p6\" (UID: \"0e8898c7-a3f5-4010-bb1f-d756673c29b2\") " pod="calico-system/csi-node-driver-bk2p6" Aug 13 01:45:49.770926 kubelet[2796]: E0813 01:45:49.770815 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.771190 kubelet[2796]: W0813 01:45:49.771117 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.771488 kubelet[2796]: E0813 01:45:49.771392 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.772747 kubelet[2796]: I0813 01:45:49.772690 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/0e8898c7-a3f5-4010-bb1f-d756673c29b2-registration-dir\") pod \"csi-node-driver-bk2p6\" (UID: \"0e8898c7-a3f5-4010-bb1f-d756673c29b2\") " pod="calico-system/csi-node-driver-bk2p6" Aug 13 01:45:49.772910 kubelet[2796]: E0813 01:45:49.772871 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.772910 kubelet[2796]: W0813 01:45:49.772898 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.773785 kubelet[2796]: E0813 01:45:49.773721 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:45:49.774306 kubelet[2796]: E0813 01:45:49.774272 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.774306 kubelet[2796]: W0813 01:45:49.774294 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.774795 kubelet[2796]: E0813 01:45:49.774692 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.774884 kubelet[2796]: E0813 01:45:49.774855 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.774884 kubelet[2796]: W0813 01:45:49.774877 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.775831 kubelet[2796]: E0813 01:45:49.775799 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.775869 kubelet[2796]: I0813 01:45:49.775849 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/0e8898c7-a3f5-4010-bb1f-d756673c29b2-socket-dir\") pod \"csi-node-driver-bk2p6\" (UID: \"0e8898c7-a3f5-4010-bb1f-d756673c29b2\") " pod="calico-system/csi-node-driver-bk2p6" Aug 13 01:45:49.776784 kubelet[2796]: E0813 01:45:49.776752 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.776784 kubelet[2796]: W0813 01:45:49.776775 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.776874 kubelet[2796]: E0813 01:45:49.776792 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.777218 kubelet[2796]: E0813 01:45:49.777185 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.777218 kubelet[2796]: W0813 01:45:49.777206 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.777352 kubelet[2796]: E0813 01:45:49.777300 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:45:49.777571 kubelet[2796]: E0813 01:45:49.777541 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.777571 kubelet[2796]: W0813 01:45:49.777564 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.777839 kubelet[2796]: E0813 01:45:49.777809 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.777881 kubelet[2796]: I0813 01:45:49.777843 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/0e8898c7-a3f5-4010-bb1f-d756673c29b2-varrun\") pod \"csi-node-driver-bk2p6\" (UID: \"0e8898c7-a3f5-4010-bb1f-d756673c29b2\") " pod="calico-system/csi-node-driver-bk2p6" Aug 13 01:45:49.779801 kubelet[2796]: E0813 01:45:49.779768 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.779801 kubelet[2796]: W0813 01:45:49.779793 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.779884 kubelet[2796]: E0813 01:45:49.779814 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.780075 kubelet[2796]: E0813 01:45:49.780046 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.780075 kubelet[2796]: W0813 01:45:49.780066 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.780178 kubelet[2796]: E0813 01:45:49.780150 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.780348 kubelet[2796]: E0813 01:45:49.780322 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.780348 kubelet[2796]: W0813 01:45:49.780341 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.780401 kubelet[2796]: E0813 01:45:49.780355 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:45:49.780560 kubelet[2796]: E0813 01:45:49.780529 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.780560 kubelet[2796]: W0813 01:45:49.780551 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.780560 kubelet[2796]: E0813 01:45:49.780561 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.781334 containerd[1581]: time="2025-08-13T01:45:49.781275365Z" level=info msg="connecting to shim 0f435f25441e44df4f853443af31ae924eb3be47f3acfdb6f44191e270c281aa" address="unix:///run/containerd/s/dc3839d04e2486c075d052b3345cd3c69314ac334e1df61c3c6a200c95ea44c9" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:45:49.865941 systemd[1]: Started cri-containerd-0f435f25441e44df4f853443af31ae924eb3be47f3acfdb6f44191e270c281aa.scope - libcontainer container 0f435f25441e44df4f853443af31ae924eb3be47f3acfdb6f44191e270c281aa. Aug 13 01:45:49.878511 kubelet[2796]: E0813 01:45:49.878457 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.878511 kubelet[2796]: W0813 01:45:49.878490 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.878511 kubelet[2796]: E0813 01:45:49.878514 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.879065 kubelet[2796]: E0813 01:45:49.878813 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.879065 kubelet[2796]: W0813 01:45:49.878821 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.879065 kubelet[2796]: E0813 01:45:49.878853 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.879065 kubelet[2796]: E0813 01:45:49.879076 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.879065 kubelet[2796]: W0813 01:45:49.879085 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.879541 kubelet[2796]: E0813 01:45:49.879109 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:45:49.879541 kubelet[2796]: E0813 01:45:49.879339 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.879541 kubelet[2796]: W0813 01:45:49.879348 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.879541 kubelet[2796]: E0813 01:45:49.879370 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.879863 kubelet[2796]: E0813 01:45:49.879580 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.879863 kubelet[2796]: W0813 01:45:49.879588 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.879863 kubelet[2796]: E0813 01:45:49.879613 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.879863 kubelet[2796]: E0813 01:45:49.879817 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.879863 kubelet[2796]: W0813 01:45:49.879826 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.879863 kubelet[2796]: E0813 01:45:49.879837 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.880889 kubelet[2796]: E0813 01:45:49.880704 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.881026 kubelet[2796]: W0813 01:45:49.880756 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.881676 kubelet[2796]: E0813 01:45:49.881348 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.881676 kubelet[2796]: W0813 01:45:49.881365 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.881676 kubelet[2796]: E0813 01:45:49.880991 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.881676 kubelet[2796]: E0813 01:45:49.881377 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:45:49.881676 kubelet[2796]: E0813 01:45:49.881597 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.881676 kubelet[2796]: W0813 01:45:49.881607 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.881676 kubelet[2796]: E0813 01:45:49.881616 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.881863 kubelet[2796]: E0813 01:45:49.881848 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.881863 kubelet[2796]: W0813 01:45:49.881857 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.881906 kubelet[2796]: E0813 01:45:49.881868 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.882137 kubelet[2796]: E0813 01:45:49.882079 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.882137 kubelet[2796]: W0813 01:45:49.882095 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.882799 kubelet[2796]: E0813 01:45:49.882726 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.882940 kubelet[2796]: E0813 01:45:49.882894 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.883521 kubelet[2796]: W0813 01:45:49.883174 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.884778 kubelet[2796]: E0813 01:45:49.883986 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.884942 kubelet[2796]: E0813 01:45:49.884930 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.885015 kubelet[2796]: W0813 01:45:49.885004 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.885126 kubelet[2796]: E0813 01:45:49.885094 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:45:49.885442 kubelet[2796]: E0813 01:45:49.885408 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.885442 kubelet[2796]: W0813 01:45:49.885421 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.885625 kubelet[2796]: E0813 01:45:49.885571 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.886484 kubelet[2796]: E0813 01:45:49.886459 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.886701 kubelet[2796]: W0813 01:45:49.886627 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.887063 kubelet[2796]: E0813 01:45:49.887035 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.887839 kubelet[2796]: E0813 01:45:49.887784 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.888018 kubelet[2796]: W0813 01:45:49.887930 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.888327 kubelet[2796]: E0813 01:45:49.888300 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.888911 kubelet[2796]: E0813 01:45:49.888879 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.889044 kubelet[2796]: W0813 01:45:49.888977 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.889125 kubelet[2796]: E0813 01:45:49.889107 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.889490 kubelet[2796]: E0813 01:45:49.889479 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.889588 kubelet[2796]: W0813 01:45:49.889547 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.889768 kubelet[2796]: E0813 01:45:49.889727 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:45:49.890465 kubelet[2796]: E0813 01:45:49.890384 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.890465 kubelet[2796]: W0813 01:45:49.890450 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.890789 kubelet[2796]: E0813 01:45:49.890687 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.891244 kubelet[2796]: E0813 01:45:49.891170 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.891589 kubelet[2796]: W0813 01:45:49.891562 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.891764 kubelet[2796]: E0813 01:45:49.891711 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.892753 kubelet[2796]: E0813 01:45:49.892738 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.892838 kubelet[2796]: W0813 01:45:49.892811 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.893042 kubelet[2796]: E0813 01:45:49.893004 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.893205 kubelet[2796]: E0813 01:45:49.893174 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.893205 kubelet[2796]: W0813 01:45:49.893191 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.893369 kubelet[2796]: E0813 01:45:49.893342 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.893604 kubelet[2796]: E0813 01:45:49.893577 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.893604 kubelet[2796]: W0813 01:45:49.893589 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.893903 kubelet[2796]: E0813 01:45:49.893866 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:45:49.894393 kubelet[2796]: E0813 01:45:49.894368 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.894393 kubelet[2796]: W0813 01:45:49.894380 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.894775 kubelet[2796]: E0813 01:45:49.894610 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.895032 kubelet[2796]: E0813 01:45:49.894987 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.895189 kubelet[2796]: W0813 01:45:49.895118 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.895314 kubelet[2796]: E0813 01:45:49.895260 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.908893 kubelet[2796]: E0813 01:45:49.908836 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:49.908893 kubelet[2796]: W0813 01:45:49.908894 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:49.909080 kubelet[2796]: E0813 01:45:49.908924 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:49.949949 containerd[1581]: time="2025-08-13T01:45:49.949879201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-8j6cb,Uid:416d9de4-5101-44c9-b974-0fedf790aa67,Namespace:calico-system,Attempt:0,} returns sandbox id \"0f435f25441e44df4f853443af31ae924eb3be47f3acfdb6f44191e270c281aa\"" Aug 13 01:45:50.628082 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount24849690.mount: Deactivated successfully. 
Aug 13 01:45:51.398357 kubelet[2796]: E0813 01:45:51.398290 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bk2p6" podUID="0e8898c7-a3f5-4010-bb1f-d756673c29b2" Aug 13 01:45:51.886499 containerd[1581]: time="2025-08-13T01:45:51.886018944Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:51.887392 containerd[1581]: time="2025-08-13T01:45:51.887358490Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Aug 13 01:45:51.887869 containerd[1581]: time="2025-08-13T01:45:51.887837028Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:51.890488 containerd[1581]: time="2025-08-13T01:45:51.890419413Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:51.891249 containerd[1581]: time="2025-08-13T01:45:51.891021910Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.278906988s" Aug 13 01:45:51.891249 containerd[1581]: time="2025-08-13T01:45:51.891066433Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Aug 13 01:45:51.893427 containerd[1581]: time="2025-08-13T01:45:51.893341004Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 13 01:45:51.911815 containerd[1581]: time="2025-08-13T01:45:51.911743812Z" level=info msg="CreateContainer within sandbox \"23bcf076e353ce2533a5798733e41d1d0302670aad3fef154009a71648b5ca9b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 13 01:45:51.921682 containerd[1581]: time="2025-08-13T01:45:51.921036068Z" level=info msg="Container aa2fb53120ec119cb35f012392ba89c8d1f174a797ce54886ca5092789c1107f: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:45:51.926870 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount127557409.mount: Deactivated successfully. 
Aug 13 01:45:51.932923 containerd[1581]: time="2025-08-13T01:45:51.932854304Z" level=info msg="CreateContainer within sandbox \"23bcf076e353ce2533a5798733e41d1d0302670aad3fef154009a71648b5ca9b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"aa2fb53120ec119cb35f012392ba89c8d1f174a797ce54886ca5092789c1107f\"" Aug 13 01:45:51.934065 containerd[1581]: time="2025-08-13T01:45:51.933997985Z" level=info msg="StartContainer for \"aa2fb53120ec119cb35f012392ba89c8d1f174a797ce54886ca5092789c1107f\"" Aug 13 01:45:51.935805 containerd[1581]: time="2025-08-13T01:45:51.935765535Z" level=info msg="connecting to shim aa2fb53120ec119cb35f012392ba89c8d1f174a797ce54886ca5092789c1107f" address="unix:///run/containerd/s/40eff3ebfe9d98a0c314d0989158a75a1a0ce422244153905317dbfd68349063" protocol=ttrpc version=3 Aug 13 01:45:52.025071 systemd[1]: Started cri-containerd-aa2fb53120ec119cb35f012392ba89c8d1f174a797ce54886ca5092789c1107f.scope - libcontainer container aa2fb53120ec119cb35f012392ba89c8d1f174a797ce54886ca5092789c1107f. Aug 13 01:45:52.165719 containerd[1581]: time="2025-08-13T01:45:52.165080986Z" level=info msg="StartContainer for \"aa2fb53120ec119cb35f012392ba89c8d1f174a797ce54886ca5092789c1107f\" returns successfully" Aug 13 01:45:52.635429 kubelet[2796]: E0813 01:45:52.635374 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:45:52.652139 kubelet[2796]: I0813 01:45:52.652081 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-bf6ccb678-cdfdr" podStartSLOduration=2.370748882 podStartE2EDuration="4.652062785s" podCreationTimestamp="2025-08-13 01:45:48 +0000 UTC" firstStartedPulling="2025-08-13 01:45:49.611451936 +0000 UTC m=+22.366020863" lastFinishedPulling="2025-08-13 01:45:51.892765839 +0000 UTC m=+24.647334766" observedRunningTime="2025-08-13 01:45:52.648475818 +0000 UTC m=+25.403044745" watchObservedRunningTime="2025-08-13 01:45:52.652062785 +0000 UTC m=+25.406631712" Aug 13 01:45:52.718225 kubelet[2796]: E0813 01:45:52.718171 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:52.718767 kubelet[2796]: W0813 01:45:52.718319 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:52.718924 kubelet[2796]: E0813 01:45:52.718821 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:52.719249 kubelet[2796]: E0813 01:45:52.719133 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:52.719249 kubelet[2796]: W0813 01:45:52.719240 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:52.719410 kubelet[2796]: E0813 01:45:52.719290 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:45:52.719806 kubelet[2796]: E0813 01:45:52.719779 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:52.719806 kubelet[2796]: W0813 01:45:52.719803 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:52.719994 kubelet[2796]: E0813 01:45:52.719834 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:52.720375 kubelet[2796]: E0813 01:45:52.720300 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:52.720375 kubelet[2796]: W0813 01:45:52.720333 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:52.720375 kubelet[2796]: E0813 01:45:52.720345 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:52.720916 kubelet[2796]: E0813 01:45:52.720839 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:52.720916 kubelet[2796]: W0813 01:45:52.720874 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:52.720916 kubelet[2796]: E0813 01:45:52.720884 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:52.721308 kubelet[2796]: E0813 01:45:52.721256 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:52.721436 kubelet[2796]: W0813 01:45:52.721375 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:52.721436 kubelet[2796]: E0813 01:45:52.721392 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:52.721901 kubelet[2796]: E0813 01:45:52.721877 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:52.721901 kubelet[2796]: W0813 01:45:52.721896 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:52.721959 kubelet[2796]: E0813 01:45:52.721905 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:45:52.722406 kubelet[2796]: E0813 01:45:52.722376 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:52.722406 kubelet[2796]: W0813 01:45:52.722392 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:52.722406 kubelet[2796]: E0813 01:45:52.722400 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:52.722953 kubelet[2796]: E0813 01:45:52.722867 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:52.722953 kubelet[2796]: W0813 01:45:52.722882 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:52.722953 kubelet[2796]: E0813 01:45:52.722891 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:52.723296 kubelet[2796]: E0813 01:45:52.723240 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:52.723296 kubelet[2796]: W0813 01:45:52.723254 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:52.723296 kubelet[2796]: E0813 01:45:52.723262 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:52.723499 kubelet[2796]: E0813 01:45:52.723464 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:52.723499 kubelet[2796]: W0813 01:45:52.723477 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:52.723499 kubelet[2796]: E0813 01:45:52.723486 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:52.723700 kubelet[2796]: E0813 01:45:52.723640 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:52.723700 kubelet[2796]: W0813 01:45:52.723692 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:52.723758 kubelet[2796]: E0813 01:45:52.723706 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:45:52.723988 kubelet[2796]: E0813 01:45:52.723904 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:52.723988 kubelet[2796]: W0813 01:45:52.723954 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:52.723988 kubelet[2796]: E0813 01:45:52.723963 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:52.724224 kubelet[2796]: E0813 01:45:52.724169 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:52.724263 kubelet[2796]: W0813 01:45:52.724252 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:52.724291 kubelet[2796]: E0813 01:45:52.724263 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:52.724726 kubelet[2796]: E0813 01:45:52.724682 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:52.724726 kubelet[2796]: W0813 01:45:52.724695 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:52.724726 kubelet[2796]: E0813 01:45:52.724704 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:45:52.759800 containerd[1581]: time="2025-08-13T01:45:52.759725323Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:52.761158 containerd[1581]: time="2025-08-13T01:45:52.760676946Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Aug 13 01:45:52.762406 containerd[1581]: time="2025-08-13T01:45:52.762020581Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:52.771508 containerd[1581]: time="2025-08-13T01:45:52.771358652Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:52.772601 containerd[1581]: time="2025-08-13T01:45:52.772568067Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 879.19519ms" Aug 13 01:45:52.772716 containerd[1581]: time="2025-08-13T01:45:52.772672374Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Aug 13 01:45:52.777898 containerd[1581]: time="2025-08-13T01:45:52.777848805Z" level=info msg="CreateContainer within sandbox \"0f435f25441e44df4f853443af31ae924eb3be47f3acfdb6f44191e270c281aa\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 01:45:52.787816 containerd[1581]: time="2025-08-13T01:45:52.787728959Z" level=info msg="Container 8bc5e030ffef182ba8e3559351f2e5d9a7486f9ffa9aed8a712f9ba1e4d486cc: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:45:52.790694 kubelet[2796]: E0813 01:45:52.790566 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:52.790813 kubelet[2796]: W0813 01:45:52.790696 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:52.790813 kubelet[2796]: E0813 01:45:52.790723 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:45:52.792208 kubelet[2796]: E0813 01:45:52.791714 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:52.792208 kubelet[2796]: W0813 01:45:52.791728 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:52.792315 kubelet[2796]: E0813 01:45:52.792246 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:52.792688 kubelet[2796]: E0813 01:45:52.792432 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:52.792688 kubelet[2796]: W0813 01:45:52.792446 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:52.792688 kubelet[2796]: E0813 01:45:52.792493 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:52.793328 kubelet[2796]: E0813 01:45:52.793279 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:52.793328 kubelet[2796]: W0813 01:45:52.793297 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:52.793435 kubelet[2796]: E0813 01:45:52.793417 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:52.793869 kubelet[2796]: E0813 01:45:52.793689 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:52.793869 kubelet[2796]: W0813 01:45:52.793700 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:52.793869 kubelet[2796]: E0813 01:45:52.793832 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:52.794191 kubelet[2796]: E0813 01:45:52.794167 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:52.794191 kubelet[2796]: W0813 01:45:52.794180 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:52.794376 kubelet[2796]: E0813 01:45:52.794353 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:45:52.794911 kubelet[2796]: E0813 01:45:52.794752 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:52.794911 kubelet[2796]: W0813 01:45:52.794772 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:52.794911 kubelet[2796]: E0813 01:45:52.794841 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:52.795829 kubelet[2796]: E0813 01:45:52.795807 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:52.795829 kubelet[2796]: W0813 01:45:52.795822 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:52.795914 kubelet[2796]: E0813 01:45:52.795835 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:52.796412 kubelet[2796]: E0813 01:45:52.796362 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:52.796412 kubelet[2796]: W0813 01:45:52.796380 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:52.796412 kubelet[2796]: E0813 01:45:52.796389 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:52.798300 kubelet[2796]: E0813 01:45:52.798153 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:52.798300 kubelet[2796]: W0813 01:45:52.798172 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:52.798300 kubelet[2796]: E0813 01:45:52.798198 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:52.798747 kubelet[2796]: E0813 01:45:52.798434 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:52.798747 kubelet[2796]: W0813 01:45:52.798446 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:52.798747 kubelet[2796]: E0813 01:45:52.798483 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:45:52.799108 kubelet[2796]: E0813 01:45:52.798992 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:52.799108 kubelet[2796]: W0813 01:45:52.799008 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:52.799329 kubelet[2796]: E0813 01:45:52.799276 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:52.799592 kubelet[2796]: E0813 01:45:52.799448 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:52.799592 kubelet[2796]: W0813 01:45:52.799464 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:52.799592 kubelet[2796]: E0813 01:45:52.799511 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:52.800019 kubelet[2796]: E0813 01:45:52.799975 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:52.800019 kubelet[2796]: W0813 01:45:52.799995 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:52.800573 kubelet[2796]: E0813 01:45:52.800151 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:52.800919 kubelet[2796]: E0813 01:45:52.800902 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:52.800998 kubelet[2796]: W0813 01:45:52.800982 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:52.801179 kubelet[2796]: E0813 01:45:52.801160 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:52.801469 kubelet[2796]: E0813 01:45:52.801453 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:52.801552 kubelet[2796]: W0813 01:45:52.801536 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:52.801636 kubelet[2796]: E0813 01:45:52.801618 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:45:52.802355 kubelet[2796]: E0813 01:45:52.801988 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:52.802355 kubelet[2796]: W0813 01:45:52.802003 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:52.802355 kubelet[2796]: E0813 01:45:52.802016 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:52.802738 kubelet[2796]: E0813 01:45:52.802720 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:52.802816 kubelet[2796]: W0813 01:45:52.802800 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:52.802908 kubelet[2796]: E0813 01:45:52.802891 2796 plugins.go:691] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:52.805401 containerd[1581]: time="2025-08-13T01:45:52.805347881Z" level=info msg="CreateContainer within sandbox \"0f435f25441e44df4f853443af31ae924eb3be47f3acfdb6f44191e270c281aa\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"8bc5e030ffef182ba8e3559351f2e5d9a7486f9ffa9aed8a712f9ba1e4d486cc\"" Aug 13 01:45:52.807196 containerd[1581]: time="2025-08-13T01:45:52.807171062Z" level=info msg="StartContainer for \"8bc5e030ffef182ba8e3559351f2e5d9a7486f9ffa9aed8a712f9ba1e4d486cc\"" Aug 13 01:45:52.811950 containerd[1581]: time="2025-08-13T01:45:52.811635438Z" level=info msg="connecting to shim 8bc5e030ffef182ba8e3559351f2e5d9a7486f9ffa9aed8a712f9ba1e4d486cc" address="unix:///run/containerd/s/dc3839d04e2486c075d052b3345cd3c69314ac334e1df61c3c6a200c95ea44c9" protocol=ttrpc version=3 Aug 13 01:45:52.880995 systemd[1]: Started cri-containerd-8bc5e030ffef182ba8e3559351f2e5d9a7486f9ffa9aed8a712f9ba1e4d486cc.scope - libcontainer container 8bc5e030ffef182ba8e3559351f2e5d9a7486f9ffa9aed8a712f9ba1e4d486cc. Aug 13 01:45:53.029755 containerd[1581]: time="2025-08-13T01:45:53.029673914Z" level=info msg="StartContainer for \"8bc5e030ffef182ba8e3559351f2e5d9a7486f9ffa9aed8a712f9ba1e4d486cc\" returns successfully" Aug 13 01:45:53.069886 systemd[1]: cri-containerd-8bc5e030ffef182ba8e3559351f2e5d9a7486f9ffa9aed8a712f9ba1e4d486cc.scope: Deactivated successfully. 
Aug 13 01:45:53.081058 containerd[1581]: time="2025-08-13T01:45:53.080906766Z" level=info msg="received exit event container_id:\"8bc5e030ffef182ba8e3559351f2e5d9a7486f9ffa9aed8a712f9ba1e4d486cc\" id:\"8bc5e030ffef182ba8e3559351f2e5d9a7486f9ffa9aed8a712f9ba1e4d486cc\" pid:3468 exited_at:{seconds:1755049553 nanos:79781361}" Aug 13 01:45:53.081436 containerd[1581]: time="2025-08-13T01:45:53.081409224Z" level=info msg="TaskExit event in podsandbox handler container_id:\"8bc5e030ffef182ba8e3559351f2e5d9a7486f9ffa9aed8a712f9ba1e4d486cc\" id:\"8bc5e030ffef182ba8e3559351f2e5d9a7486f9ffa9aed8a712f9ba1e4d486cc\" pid:3468 exited_at:{seconds:1755049553 nanos:79781361}" Aug 13 01:45:53.143632 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8bc5e030ffef182ba8e3559351f2e5d9a7486f9ffa9aed8a712f9ba1e4d486cc-rootfs.mount: Deactivated successfully. Aug 13 01:45:53.397858 kubelet[2796]: E0813 01:45:53.396706 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bk2p6" podUID="0e8898c7-a3f5-4010-bb1f-d756673c29b2" Aug 13 01:45:53.638983 kubelet[2796]: I0813 01:45:53.638943 2796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 01:45:53.640124 kubelet[2796]: E0813 01:45:53.639302 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:45:53.641449 containerd[1581]: time="2025-08-13T01:45:53.641393603Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 01:45:55.397974 kubelet[2796]: E0813 01:45:55.397890 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bk2p6" podUID="0e8898c7-a3f5-4010-bb1f-d756673c29b2" Aug 13 01:45:57.299078 containerd[1581]: time="2025-08-13T01:45:57.299009697Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:57.299910 containerd[1581]: time="2025-08-13T01:45:57.299875827Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Aug 13 01:45:57.300552 containerd[1581]: time="2025-08-13T01:45:57.300478219Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:57.302164 containerd[1581]: time="2025-08-13T01:45:57.302114452Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:57.303168 containerd[1581]: time="2025-08-13T01:45:57.302822932Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 3.661387226s" Aug 13 
01:45:57.303168 containerd[1581]: time="2025-08-13T01:45:57.302858104Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Aug 13 01:45:57.307260 containerd[1581]: time="2025-08-13T01:45:57.307123791Z" level=info msg="CreateContainer within sandbox \"0f435f25441e44df4f853443af31ae924eb3be47f3acfdb6f44191e270c281aa\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 01:45:57.322681 containerd[1581]: time="2025-08-13T01:45:57.318562817Z" level=info msg="Container 635d8665e056f22dd281bd036fbe6221e69ca1cfa3ade1e0a779d9def417f44f: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:45:57.328665 containerd[1581]: time="2025-08-13T01:45:57.328616766Z" level=info msg="CreateContainer within sandbox \"0f435f25441e44df4f853443af31ae924eb3be47f3acfdb6f44191e270c281aa\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"635d8665e056f22dd281bd036fbe6221e69ca1cfa3ade1e0a779d9def417f44f\"" Aug 13 01:45:57.329623 containerd[1581]: time="2025-08-13T01:45:57.329578224Z" level=info msg="StartContainer for \"635d8665e056f22dd281bd036fbe6221e69ca1cfa3ade1e0a779d9def417f44f\"" Aug 13 01:45:57.331565 containerd[1581]: time="2025-08-13T01:45:57.331533170Z" level=info msg="connecting to shim 635d8665e056f22dd281bd036fbe6221e69ca1cfa3ade1e0a779d9def417f44f" address="unix:///run/containerd/s/dc3839d04e2486c075d052b3345cd3c69314ac334e1df61c3c6a200c95ea44c9" protocol=ttrpc version=3 Aug 13 01:45:57.405722 kubelet[2796]: E0813 01:45:57.401020 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bk2p6" podUID="0e8898c7-a3f5-4010-bb1f-d756673c29b2" Aug 13 01:45:57.410942 systemd[1]: Started cri-containerd-635d8665e056f22dd281bd036fbe6221e69ca1cfa3ade1e0a779d9def417f44f.scope - libcontainer container 635d8665e056f22dd281bd036fbe6221e69ca1cfa3ade1e0a779d9def417f44f. Aug 13 01:45:57.542641 containerd[1581]: time="2025-08-13T01:45:57.542574483Z" level=info msg="StartContainer for \"635d8665e056f22dd281bd036fbe6221e69ca1cfa3ade1e0a779d9def417f44f\" returns successfully" Aug 13 01:45:59.396936 kubelet[2796]: E0813 01:45:59.396865 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bk2p6" podUID="0e8898c7-a3f5-4010-bb1f-d756673c29b2" Aug 13 01:45:59.527833 systemd[1]: cri-containerd-635d8665e056f22dd281bd036fbe6221e69ca1cfa3ade1e0a779d9def417f44f.scope: Deactivated successfully. Aug 13 01:45:59.528792 containerd[1581]: time="2025-08-13T01:45:59.528403993Z" level=info msg="received exit event container_id:\"635d8665e056f22dd281bd036fbe6221e69ca1cfa3ade1e0a779d9def417f44f\" id:\"635d8665e056f22dd281bd036fbe6221e69ca1cfa3ade1e0a779d9def417f44f\" pid:3524 exited_at:{seconds:1755049559 nanos:528158607}" Aug 13 01:45:59.529169 systemd[1]: cri-containerd-635d8665e056f22dd281bd036fbe6221e69ca1cfa3ade1e0a779d9def417f44f.scope: Consumed 2.093s CPU time, 192.2M memory peak, 171.2M written to disk. 
Aug 13 01:45:59.530305 containerd[1581]: time="2025-08-13T01:45:59.529397521Z" level=info msg="TaskExit event in podsandbox handler container_id:\"635d8665e056f22dd281bd036fbe6221e69ca1cfa3ade1e0a779d9def417f44f\" id:\"635d8665e056f22dd281bd036fbe6221e69ca1cfa3ade1e0a779d9def417f44f\" pid:3524 exited_at:{seconds:1755049559 nanos:528158607}" Aug 13 01:45:59.557719 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-635d8665e056f22dd281bd036fbe6221e69ca1cfa3ade1e0a779d9def417f44f-rootfs.mount: Deactivated successfully. Aug 13 01:45:59.624696 kubelet[2796]: I0813 01:45:59.624428 2796 kubelet_node_status.go:488] "Fast updating node status as it just became ready" Aug 13 01:45:59.658183 kubelet[2796]: I0813 01:45:59.658012 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cbf6d4b0-f3bc-4a92-9977-6d91de60b65f-config-volume\") pod \"coredns-7c65d6cfc9-6vrr8\" (UID: \"cbf6d4b0-f3bc-4a92-9977-6d91de60b65f\") " pod="kube-system/coredns-7c65d6cfc9-6vrr8" Aug 13 01:45:59.658183 kubelet[2796]: I0813 01:45:59.658064 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-98tpk\" (UniqueName: \"kubernetes.io/projected/981696e3-42b0-4ae8-b44b-fa439a03a402-kube-api-access-98tpk\") pod \"coredns-7c65d6cfc9-djvw6\" (UID: \"981696e3-42b0-4ae8-b44b-fa439a03a402\") " pod="kube-system/coredns-7c65d6cfc9-djvw6" Aug 13 01:45:59.658183 kubelet[2796]: I0813 01:45:59.658094 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/981696e3-42b0-4ae8-b44b-fa439a03a402-config-volume\") pod \"coredns-7c65d6cfc9-djvw6\" (UID: \"981696e3-42b0-4ae8-b44b-fa439a03a402\") " pod="kube-system/coredns-7c65d6cfc9-djvw6" Aug 13 01:45:59.658183 kubelet[2796]: I0813 01:45:59.658118 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gsskp\" (UniqueName: \"kubernetes.io/projected/4720bedc-4719-4a57-b2ff-e5b21f7acb7f-kube-api-access-gsskp\") pod \"calico-kube-controllers-86d5dd9ff6-b6gw7\" (UID: \"4720bedc-4719-4a57-b2ff-e5b21f7acb7f\") " pod="calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7" Aug 13 01:45:59.658183 kubelet[2796]: I0813 01:45:59.658143 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p9jmz\" (UniqueName: \"kubernetes.io/projected/cbf6d4b0-f3bc-4a92-9977-6d91de60b65f-kube-api-access-p9jmz\") pod \"coredns-7c65d6cfc9-6vrr8\" (UID: \"cbf6d4b0-f3bc-4a92-9977-6d91de60b65f\") " pod="kube-system/coredns-7c65d6cfc9-6vrr8" Aug 13 01:45:59.659522 kubelet[2796]: I0813 01:45:59.658173 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4720bedc-4719-4a57-b2ff-e5b21f7acb7f-tigera-ca-bundle\") pod \"calico-kube-controllers-86d5dd9ff6-b6gw7\" (UID: \"4720bedc-4719-4a57-b2ff-e5b21f7acb7f\") " pod="calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7" Aug 13 01:45:59.666891 systemd[1]: Created slice kubepods-burstable-podcbf6d4b0_f3bc_4a92_9977_6d91de60b65f.slice - libcontainer container kubepods-burstable-podcbf6d4b0_f3bc_4a92_9977_6d91de60b65f.slice. 
Aug 13 01:45:59.699755 containerd[1581]: time="2025-08-13T01:45:59.699418950Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 01:45:59.700537 systemd[1]: Created slice kubepods-burstable-pod981696e3_42b0_4ae8_b44b_fa439a03a402.slice - libcontainer container kubepods-burstable-pod981696e3_42b0_4ae8_b44b_fa439a03a402.slice. Aug 13 01:45:59.712619 systemd[1]: Created slice kubepods-besteffort-pod44880a0b_db2b_45f1_8f5f_5e98e509b622.slice - libcontainer container kubepods-besteffort-pod44880a0b_db2b_45f1_8f5f_5e98e509b622.slice. Aug 13 01:45:59.726241 systemd[1]: Created slice kubepods-besteffort-pod4720bedc_4719_4a57_b2ff_e5b21f7acb7f.slice - libcontainer container kubepods-besteffort-pod4720bedc_4719_4a57_b2ff_e5b21f7acb7f.slice. Aug 13 01:45:59.738230 systemd[1]: Created slice kubepods-besteffort-podfed0c3bb_07be_41e1_8995_c70aed034b09.slice - libcontainer container kubepods-besteffort-podfed0c3bb_07be_41e1_8995_c70aed034b09.slice. Aug 13 01:45:59.748109 systemd[1]: Created slice kubepods-besteffort-pod6e63eb69_081c_4c75_b7ea_ada4cdfe7284.slice - libcontainer container kubepods-besteffort-pod6e63eb69_081c_4c75_b7ea_ada4cdfe7284.slice. Aug 13 01:45:59.756194 systemd[1]: Created slice kubepods-besteffort-podc05b1f7d_c0f2_4f36_a6d9_678b3f1bf594.slice - libcontainer container kubepods-besteffort-podc05b1f7d_c0f2_4f36_a6d9_678b3f1bf594.slice. Aug 13 01:45:59.859921 kubelet[2796]: I0813 01:45:59.859817 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fed0c3bb-07be-41e1-8995-c70aed034b09-calico-apiserver-certs\") pod \"calico-apiserver-6c6d74d74b-hjrvm\" (UID: \"fed0c3bb-07be-41e1-8995-c70aed034b09\") " pod="calico-apiserver/calico-apiserver-6c6d74d74b-hjrvm" Aug 13 01:45:59.859921 kubelet[2796]: I0813 01:45:59.859896 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jc9k\" (UniqueName: \"kubernetes.io/projected/c05b1f7d-c0f2-4f36-a6d9-678b3f1bf594-kube-api-access-6jc9k\") pod \"whisker-c5ff669b8-2nc8l\" (UID: \"c05b1f7d-c0f2-4f36-a6d9-678b3f1bf594\") " pod="calico-system/whisker-c5ff669b8-2nc8l" Aug 13 01:45:59.859921 kubelet[2796]: I0813 01:45:59.859925 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44880a0b-db2b-45f1-8f5f-5e98e509b622-goldmane-ca-bundle\") pod \"goldmane-58fd7646b9-jzjd4\" (UID: \"44880a0b-db2b-45f1-8f5f-5e98e509b622\") " pod="calico-system/goldmane-58fd7646b9-jzjd4" Aug 13 01:45:59.860271 kubelet[2796]: I0813 01:45:59.859978 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c05b1f7d-c0f2-4f36-a6d9-678b3f1bf594-whisker-backend-key-pair\") pod \"whisker-c5ff669b8-2nc8l\" (UID: \"c05b1f7d-c0f2-4f36-a6d9-678b3f1bf594\") " pod="calico-system/whisker-c5ff669b8-2nc8l" Aug 13 01:45:59.860271 kubelet[2796]: I0813 01:45:59.860009 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lttnn\" (UniqueName: \"kubernetes.io/projected/44880a0b-db2b-45f1-8f5f-5e98e509b622-kube-api-access-lttnn\") pod \"goldmane-58fd7646b9-jzjd4\" (UID: \"44880a0b-db2b-45f1-8f5f-5e98e509b622\") " pod="calico-system/goldmane-58fd7646b9-jzjd4" Aug 13 01:45:59.860271 kubelet[2796]: I0813 01:45:59.860043 2796 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/44880a0b-db2b-45f1-8f5f-5e98e509b622-goldmane-key-pair\") pod \"goldmane-58fd7646b9-jzjd4\" (UID: \"44880a0b-db2b-45f1-8f5f-5e98e509b622\") " pod="calico-system/goldmane-58fd7646b9-jzjd4" Aug 13 01:45:59.860271 kubelet[2796]: I0813 01:45:59.860072 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c05b1f7d-c0f2-4f36-a6d9-678b3f1bf594-whisker-ca-bundle\") pod \"whisker-c5ff669b8-2nc8l\" (UID: \"c05b1f7d-c0f2-4f36-a6d9-678b3f1bf594\") " pod="calico-system/whisker-c5ff669b8-2nc8l" Aug 13 01:45:59.860271 kubelet[2796]: I0813 01:45:59.860147 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bv6tn\" (UniqueName: \"kubernetes.io/projected/6e63eb69-081c-4c75-b7ea-ada4cdfe7284-kube-api-access-bv6tn\") pod \"calico-apiserver-6c6d74d74b-kvt2f\" (UID: \"6e63eb69-081c-4c75-b7ea-ada4cdfe7284\") " pod="calico-apiserver/calico-apiserver-6c6d74d74b-kvt2f" Aug 13 01:45:59.860493 kubelet[2796]: I0813 01:45:59.860174 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-292hn\" (UniqueName: \"kubernetes.io/projected/fed0c3bb-07be-41e1-8995-c70aed034b09-kube-api-access-292hn\") pod \"calico-apiserver-6c6d74d74b-hjrvm\" (UID: \"fed0c3bb-07be-41e1-8995-c70aed034b09\") " pod="calico-apiserver/calico-apiserver-6c6d74d74b-hjrvm" Aug 13 01:45:59.860493 kubelet[2796]: I0813 01:45:59.860218 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44880a0b-db2b-45f1-8f5f-5e98e509b622-config\") pod \"goldmane-58fd7646b9-jzjd4\" (UID: \"44880a0b-db2b-45f1-8f5f-5e98e509b622\") " pod="calico-system/goldmane-58fd7646b9-jzjd4" Aug 13 01:45:59.860493 kubelet[2796]: I0813 01:45:59.860244 2796 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6e63eb69-081c-4c75-b7ea-ada4cdfe7284-calico-apiserver-certs\") pod \"calico-apiserver-6c6d74d74b-kvt2f\" (UID: \"6e63eb69-081c-4c75-b7ea-ada4cdfe7284\") " pod="calico-apiserver/calico-apiserver-6c6d74d74b-kvt2f" Aug 13 01:45:59.984840 kubelet[2796]: E0813 01:45:59.981989 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:45:59.993406 containerd[1581]: time="2025-08-13T01:45:59.993360535Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6vrr8,Uid:cbf6d4b0-f3bc-4a92-9977-6d91de60b65f,Namespace:kube-system,Attempt:0,}" Aug 13 01:46:00.014034 kubelet[2796]: E0813 01:46:00.013993 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:46:00.016002 containerd[1581]: time="2025-08-13T01:46:00.015961904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-djvw6,Uid:981696e3-42b0-4ae8-b44b-fa439a03a402,Namespace:kube-system,Attempt:0,}" Aug 13 01:46:00.020055 containerd[1581]: time="2025-08-13T01:46:00.020027463Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-58fd7646b9-jzjd4,Uid:44880a0b-db2b-45f1-8f5f-5e98e509b622,Namespace:calico-system,Attempt:0,}" Aug 13 01:46:00.037352 containerd[1581]: time="2025-08-13T01:46:00.037019544Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d5dd9ff6-b6gw7,Uid:4720bedc-4719-4a57-b2ff-e5b21f7acb7f,Namespace:calico-system,Attempt:0,}" Aug 13 01:46:00.044695 containerd[1581]: time="2025-08-13T01:46:00.044478676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6d74d74b-hjrvm,Uid:fed0c3bb-07be-41e1-8995-c70aed034b09,Namespace:calico-apiserver,Attempt:0,}" Aug 13 01:46:00.055958 containerd[1581]: time="2025-08-13T01:46:00.055907100Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6d74d74b-kvt2f,Uid:6e63eb69-081c-4c75-b7ea-ada4cdfe7284,Namespace:calico-apiserver,Attempt:0,}" Aug 13 01:46:00.064954 containerd[1581]: time="2025-08-13T01:46:00.064912015Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c5ff669b8-2nc8l,Uid:c05b1f7d-c0f2-4f36-a6d9-678b3f1bf594,Namespace:calico-system,Attempt:0,}" Aug 13 01:46:00.301312 containerd[1581]: time="2025-08-13T01:46:00.299937442Z" level=error msg="Failed to destroy network for sandbox \"70d3fd29fc81ad0794a8c09fb81d4b71731836f86b29bb32a050680cd6c3e3fa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:00.309058 containerd[1581]: time="2025-08-13T01:46:00.308973809Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d5dd9ff6-b6gw7,Uid:4720bedc-4719-4a57-b2ff-e5b21f7acb7f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"70d3fd29fc81ad0794a8c09fb81d4b71731836f86b29bb32a050680cd6c3e3fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:00.309511 kubelet[2796]: E0813 01:46:00.309420 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70d3fd29fc81ad0794a8c09fb81d4b71731836f86b29bb32a050680cd6c3e3fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:00.309630 kubelet[2796]: E0813 01:46:00.309571 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70d3fd29fc81ad0794a8c09fb81d4b71731836f86b29bb32a050680cd6c3e3fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7" Aug 13 01:46:00.309723 kubelet[2796]: E0813 01:46:00.309632 2796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"70d3fd29fc81ad0794a8c09fb81d4b71731836f86b29bb32a050680cd6c3e3fa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7" Aug 13 01:46:00.310892 kubelet[2796]: E0813 01:46:00.310828 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86d5dd9ff6-b6gw7_calico-system(4720bedc-4719-4a57-b2ff-e5b21f7acb7f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86d5dd9ff6-b6gw7_calico-system(4720bedc-4719-4a57-b2ff-e5b21f7acb7f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"70d3fd29fc81ad0794a8c09fb81d4b71731836f86b29bb32a050680cd6c3e3fa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7" podUID="4720bedc-4719-4a57-b2ff-e5b21f7acb7f" Aug 13 01:46:00.340392 containerd[1581]: time="2025-08-13T01:46:00.340295566Z" level=error msg="Failed to destroy network for sandbox \"cb203a722c95d4112e8b9a3005725ad2df00ebf6e3b34d14383f146c9ebae7dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:00.350726 containerd[1581]: time="2025-08-13T01:46:00.350667670Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-jzjd4,Uid:44880a0b-db2b-45f1-8f5f-5e98e509b622,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb203a722c95d4112e8b9a3005725ad2df00ebf6e3b34d14383f146c9ebae7dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:00.364602 kubelet[2796]: E0813 01:46:00.364506 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb203a722c95d4112e8b9a3005725ad2df00ebf6e3b34d14383f146c9ebae7dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:00.365250 kubelet[2796]: E0813 01:46:00.364642 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb203a722c95d4112e8b9a3005725ad2df00ebf6e3b34d14383f146c9ebae7dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-jzjd4" Aug 13 01:46:00.365250 kubelet[2796]: E0813 01:46:00.364739 2796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb203a722c95d4112e8b9a3005725ad2df00ebf6e3b34d14383f146c9ebae7dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-jzjd4" Aug 13 01:46:00.365250 kubelet[2796]: E0813 01:46:00.364929 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-jzjd4_calico-system(44880a0b-db2b-45f1-8f5f-5e98e509b622)\" 
with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-jzjd4_calico-system(44880a0b-db2b-45f1-8f5f-5e98e509b622)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cb203a722c95d4112e8b9a3005725ad2df00ebf6e3b34d14383f146c9ebae7dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-jzjd4" podUID="44880a0b-db2b-45f1-8f5f-5e98e509b622" Aug 13 01:46:00.382353 containerd[1581]: time="2025-08-13T01:46:00.382189570Z" level=error msg="Failed to destroy network for sandbox \"5a592e63ed47f59414b1910c85dc1e25e7df6e51ac7091744a61e79ca56b385a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:00.400174 containerd[1581]: time="2025-08-13T01:46:00.397694633Z" level=error msg="Failed to destroy network for sandbox \"43f7a3ab119534204a69da6865e645d4250af4b31ecee47a4d48d5cf96c1a412\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:00.401988 containerd[1581]: time="2025-08-13T01:46:00.401928992Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6vrr8,Uid:cbf6d4b0-f3bc-4a92-9977-6d91de60b65f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a592e63ed47f59414b1910c85dc1e25e7df6e51ac7091744a61e79ca56b385a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:00.402771 kubelet[2796]: E0813 01:46:00.402698 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a592e63ed47f59414b1910c85dc1e25e7df6e51ac7091744a61e79ca56b385a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:00.403160 kubelet[2796]: E0813 01:46:00.402831 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a592e63ed47f59414b1910c85dc1e25e7df6e51ac7091744a61e79ca56b385a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-6vrr8" Aug 13 01:46:00.403160 kubelet[2796]: E0813 01:46:00.402892 2796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5a592e63ed47f59414b1910c85dc1e25e7df6e51ac7091744a61e79ca56b385a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-6vrr8" Aug 13 01:46:00.403741 kubelet[2796]: E0813 01:46:00.403001 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"coredns-7c65d6cfc9-6vrr8_kube-system(cbf6d4b0-f3bc-4a92-9977-6d91de60b65f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-6vrr8_kube-system(cbf6d4b0-f3bc-4a92-9977-6d91de60b65f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5a592e63ed47f59414b1910c85dc1e25e7df6e51ac7091744a61e79ca56b385a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-6vrr8" podUID="cbf6d4b0-f3bc-4a92-9977-6d91de60b65f" Aug 13 01:46:00.415297 containerd[1581]: time="2025-08-13T01:46:00.415226660Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-djvw6,Uid:981696e3-42b0-4ae8-b44b-fa439a03a402,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"43f7a3ab119534204a69da6865e645d4250af4b31ecee47a4d48d5cf96c1a412\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:00.416021 kubelet[2796]: E0813 01:46:00.415927 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43f7a3ab119534204a69da6865e645d4250af4b31ecee47a4d48d5cf96c1a412\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:00.416123 kubelet[2796]: E0813 01:46:00.416082 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43f7a3ab119534204a69da6865e645d4250af4b31ecee47a4d48d5cf96c1a412\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-djvw6" Aug 13 01:46:00.416257 kubelet[2796]: E0813 01:46:00.416131 2796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43f7a3ab119534204a69da6865e645d4250af4b31ecee47a4d48d5cf96c1a412\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-djvw6" Aug 13 01:46:00.416396 kubelet[2796]: E0813 01:46:00.416221 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-djvw6_kube-system(981696e3-42b0-4ae8-b44b-fa439a03a402)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-djvw6_kube-system(981696e3-42b0-4ae8-b44b-fa439a03a402)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"43f7a3ab119534204a69da6865e645d4250af4b31ecee47a4d48d5cf96c1a412\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-djvw6" podUID="981696e3-42b0-4ae8-b44b-fa439a03a402" Aug 13 01:46:00.467531 containerd[1581]: time="2025-08-13T01:46:00.467391972Z" level=error msg="Failed to destroy 
network for sandbox \"953df5c3d77096ca2dc224b851c59dbdb59e2f87dcf274c9731504d2d82b8c04\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:00.474339 containerd[1581]: time="2025-08-13T01:46:00.473997248Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c5ff669b8-2nc8l,Uid:c05b1f7d-c0f2-4f36-a6d9-678b3f1bf594,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"953df5c3d77096ca2dc224b851c59dbdb59e2f87dcf274c9731504d2d82b8c04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:00.475870 kubelet[2796]: E0813 01:46:00.475768 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"953df5c3d77096ca2dc224b851c59dbdb59e2f87dcf274c9731504d2d82b8c04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:00.475870 kubelet[2796]: E0813 01:46:00.475853 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"953df5c3d77096ca2dc224b851c59dbdb59e2f87dcf274c9731504d2d82b8c04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-c5ff669b8-2nc8l" Aug 13 01:46:00.476061 kubelet[2796]: E0813 01:46:00.475876 2796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"953df5c3d77096ca2dc224b851c59dbdb59e2f87dcf274c9731504d2d82b8c04\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-c5ff669b8-2nc8l" Aug 13 01:46:00.476061 kubelet[2796]: E0813 01:46:00.475926 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-c5ff669b8-2nc8l_calico-system(c05b1f7d-c0f2-4f36-a6d9-678b3f1bf594)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-c5ff669b8-2nc8l_calico-system(c05b1f7d-c0f2-4f36-a6d9-678b3f1bf594)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"953df5c3d77096ca2dc224b851c59dbdb59e2f87dcf274c9731504d2d82b8c04\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-c5ff669b8-2nc8l" podUID="c05b1f7d-c0f2-4f36-a6d9-678b3f1bf594" Aug 13 01:46:00.484951 containerd[1581]: time="2025-08-13T01:46:00.484850004Z" level=error msg="Failed to destroy network for sandbox \"bf55bb44df8bc9c17bbb31e6c9b8fd742e7f419e6b4264931df32cf5e9be908c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:00.486141 containerd[1581]: time="2025-08-13T01:46:00.486093836Z" 
level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6d74d74b-hjrvm,Uid:fed0c3bb-07be-41e1-8995-c70aed034b09,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf55bb44df8bc9c17bbb31e6c9b8fd742e7f419e6b4264931df32cf5e9be908c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:00.486400 kubelet[2796]: E0813 01:46:00.486347 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf55bb44df8bc9c17bbb31e6c9b8fd742e7f419e6b4264931df32cf5e9be908c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:00.486477 kubelet[2796]: E0813 01:46:00.486425 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf55bb44df8bc9c17bbb31e6c9b8fd742e7f419e6b4264931df32cf5e9be908c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c6d74d74b-hjrvm" Aug 13 01:46:00.486477 kubelet[2796]: E0813 01:46:00.486451 2796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf55bb44df8bc9c17bbb31e6c9b8fd742e7f419e6b4264931df32cf5e9be908c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c6d74d74b-hjrvm" Aug 13 01:46:00.486543 kubelet[2796]: E0813 01:46:00.486504 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c6d74d74b-hjrvm_calico-apiserver(fed0c3bb-07be-41e1-8995-c70aed034b09)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c6d74d74b-hjrvm_calico-apiserver(fed0c3bb-07be-41e1-8995-c70aed034b09)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bf55bb44df8bc9c17bbb31e6c9b8fd742e7f419e6b4264931df32cf5e9be908c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c6d74d74b-hjrvm" podUID="fed0c3bb-07be-41e1-8995-c70aed034b09" Aug 13 01:46:00.509791 containerd[1581]: time="2025-08-13T01:46:00.509720645Z" level=error msg="Failed to destroy network for sandbox \"fd277417dfaf160160147ac33714ca1debeb1918df117c6f2ba9092a15e9eac3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:00.510917 containerd[1581]: time="2025-08-13T01:46:00.510875512Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6d74d74b-kvt2f,Uid:6e63eb69-081c-4c75-b7ea-ada4cdfe7284,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"fd277417dfaf160160147ac33714ca1debeb1918df117c6f2ba9092a15e9eac3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:00.511311 kubelet[2796]: E0813 01:46:00.511223 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd277417dfaf160160147ac33714ca1debeb1918df117c6f2ba9092a15e9eac3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:00.511404 kubelet[2796]: E0813 01:46:00.511360 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd277417dfaf160160147ac33714ca1debeb1918df117c6f2ba9092a15e9eac3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c6d74d74b-kvt2f" Aug 13 01:46:00.511451 kubelet[2796]: E0813 01:46:00.511426 2796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fd277417dfaf160160147ac33714ca1debeb1918df117c6f2ba9092a15e9eac3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c6d74d74b-kvt2f" Aug 13 01:46:00.511543 kubelet[2796]: E0813 01:46:00.511501 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c6d74d74b-kvt2f_calico-apiserver(6e63eb69-081c-4c75-b7ea-ada4cdfe7284)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c6d74d74b-kvt2f_calico-apiserver(6e63eb69-081c-4c75-b7ea-ada4cdfe7284)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fd277417dfaf160160147ac33714ca1debeb1918df117c6f2ba9092a15e9eac3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c6d74d74b-kvt2f" podUID="6e63eb69-081c-4c75-b7ea-ada4cdfe7284" Aug 13 01:46:01.427035 systemd[1]: Created slice kubepods-besteffort-pod0e8898c7_a3f5_4010_bb1f_d756673c29b2.slice - libcontainer container kubepods-besteffort-pod0e8898c7_a3f5_4010_bb1f_d756673c29b2.slice. Aug 13 01:46:01.432382 containerd[1581]: time="2025-08-13T01:46:01.431427991Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bk2p6,Uid:0e8898c7-a3f5-4010-bb1f-d756673c29b2,Namespace:calico-system,Attempt:0,}" Aug 13 01:46:01.632740 containerd[1581]: time="2025-08-13T01:46:01.632639158Z" level=error msg="Failed to destroy network for sandbox \"f672c12058fdf4730f1ce86ca2118a658ef5031e47088f72aae71d731f53084b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:01.640565 systemd[1]: run-netns-cni\x2d46d826f2\x2df59a\x2d336f\x2d1e1c\x2d62b8b32bb10c.mount: Deactivated successfully. 
Aug 13 01:46:01.646219 containerd[1581]: time="2025-08-13T01:46:01.646143454Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bk2p6,Uid:0e8898c7-a3f5-4010-bb1f-d756673c29b2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f672c12058fdf4730f1ce86ca2118a658ef5031e47088f72aae71d731f53084b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:01.648349 kubelet[2796]: E0813 01:46:01.647404 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f672c12058fdf4730f1ce86ca2118a658ef5031e47088f72aae71d731f53084b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:01.648349 kubelet[2796]: E0813 01:46:01.647501 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f672c12058fdf4730f1ce86ca2118a658ef5031e47088f72aae71d731f53084b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bk2p6" Aug 13 01:46:01.648349 kubelet[2796]: E0813 01:46:01.647566 2796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f672c12058fdf4730f1ce86ca2118a658ef5031e47088f72aae71d731f53084b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bk2p6" Aug 13 01:46:01.648887 kubelet[2796]: E0813 01:46:01.647626 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bk2p6_calico-system(0e8898c7-a3f5-4010-bb1f-d756673c29b2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bk2p6_calico-system(0e8898c7-a3f5-4010-bb1f-d756673c29b2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f672c12058fdf4730f1ce86ca2118a658ef5031e47088f72aae71d731f53084b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bk2p6" podUID="0e8898c7-a3f5-4010-bb1f-d756673c29b2" Aug 13 01:46:04.701584 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1676718683.mount: Deactivated successfully. 
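Every RunPodSandbox attempt above fails with the same error: the Calico CNI plugin cannot stat /var/lib/calico/nodename, the file a running calico/node container writes to register the node, so no pod network can be set up until that container comes up. The Go sketch below is illustrative only (not Calico's actual source) and simply reproduces the readiness check behind those messages:

// nodename_check.go - minimal sketch of the check reported in the CNI errors above.
// Assumption: the path is taken from the log messages; this is not Calico code.
package main

import (
	"fmt"
	"os"
)

const nodenameFile = "/var/lib/calico/nodename"

// calicoNodeReady returns the registered node name, or the same kind of error
// the plugin logs while calico/node has not yet written the file.
func calicoNodeReady() (string, error) {
	data, err := os.ReadFile(nodenameFile)
	if os.IsNotExist(err) {
		return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
	}
	if err != nil {
		return "", err
	}
	return string(data), nil
}

func main() {
	name, err := calicoNodeReady()
	if err != nil {
		fmt.Println("CNI not ready:", err)
		return
	}
	fmt.Println("calico/node registered this node as:", name)
}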
Aug 13 01:46:04.705127 containerd[1581]: time="2025-08-13T01:46:04.705035472Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1676718683: write /var/lib/containerd/tmpmounts/containerd-mount1676718683/usr/bin/calico-node: no space left on device" Aug 13 01:46:04.705519 containerd[1581]: time="2025-08-13T01:46:04.705126948Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 01:46:04.705555 kubelet[2796]: E0813 01:46:04.705435 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1676718683: write /var/lib/containerd/tmpmounts/containerd-mount1676718683/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 01:46:04.705555 kubelet[2796]: E0813 01:46:04.705508 2796 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1676718683: write /var/lib/containerd/tmpmounts/containerd-mount1676718683/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 01:46:04.707141 kubelet[2796]: E0813 01:46:04.706984 2796 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSGOLDMANESERVER,Value:goldmane.calico-system.svc:7443,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSFLUSHINTERVAL,Value:15,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:first-found,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.l
ock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7p5j4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-8j6cb_calico-system(416d9de4-5101-44c9-b974-0fedf790aa67): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1676718683: write /var/lib/containerd/tmpmounts/containerd-mount1676718683/usr/bin/calico-node: no space left on device" logger="UnhandledError" Aug 13 01:46:04.725059 kubelet[2796]: E0813 01:46:04.708279 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1676718683: write /var/lib/containerd/tmpmounts/containerd-mount1676718683/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-8j6cb" podUID="416d9de4-5101-44c9-b974-0fedf790aa67" Aug 13 01:46:07.659860 kubelet[2796]: I0813 01:46:07.659744 2796 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:46:07.659860 kubelet[2796]: I0813 01:46:07.659854 2796 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:46:07.662518 kubelet[2796]: I0813 01:46:07.662487 2796 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:46:07.688087 kubelet[2796]: I0813 01:46:07.688048 2796 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:46:07.688279 kubelet[2796]: I0813 01:46:07.688189 2796 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" 
pods=["calico-apiserver/calico-apiserver-6c6d74d74b-kvt2f","calico-system/whisker-c5ff669b8-2nc8l","calico-system/goldmane-58fd7646b9-jzjd4","calico-apiserver/calico-apiserver-6c6d74d74b-hjrvm","calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7","kube-system/coredns-7c65d6cfc9-6vrr8","kube-system/coredns-7c65d6cfc9-djvw6","calico-system/calico-node-8j6cb","calico-system/csi-node-driver-bk2p6","tigera-operator/tigera-operator-5bf8dfcb4-v8hlv","calico-system/calico-typha-bf6ccb678-cdfdr","kube-system/kube-controller-manager-172-232-7-32","kube-system/kube-proxy-dmp9l","kube-system/kube-apiserver-172-232-7-32","kube-system/kube-scheduler-172-232-7-32"] Aug 13 01:46:07.695292 kubelet[2796]: I0813 01:46:07.695173 2796 eviction_manager.go:616] "Eviction manager: pod is evicted successfully" pod="calico-apiserver/calico-apiserver-6c6d74d74b-kvt2f" Aug 13 01:46:07.695292 kubelet[2796]: I0813 01:46:07.695211 2796 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-apiserver/calico-apiserver-6c6d74d74b-kvt2f"] Aug 13 01:46:07.719099 kubelet[2796]: I0813 01:46:07.718219 2796 kubelet.go:2306] "Pod admission denied" podUID="a8d2cc94-3796-4ac6-9d15-571724c086f8" pod="calico-apiserver/calico-apiserver-6c6d74d74b-sht7l" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:07.751308 kubelet[2796]: I0813 01:46:07.751232 2796 kubelet.go:2306] "Pod admission denied" podUID="6153ab3b-49e8-40cc-8b79-08614af966d5" pod="calico-apiserver/calico-apiserver-6c6d74d74b-2l2vw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:07.777001 kubelet[2796]: I0813 01:46:07.776938 2796 kubelet.go:2306] "Pod admission denied" podUID="4386ca9d-42a5-4190-b672-927f4ccd05d1" pod="calico-apiserver/calico-apiserver-6c6d74d74b-8t2h9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:07.810934 kubelet[2796]: I0813 01:46:07.810873 2796 kubelet.go:2306] "Pod admission denied" podUID="d6edeca5-673c-4dc0-aad1-10b3adc37f34" pod="calico-apiserver/calico-apiserver-6c6d74d74b-lk446" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:07.819761 kubelet[2796]: I0813 01:46:07.819478 2796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bv6tn\" (UniqueName: \"kubernetes.io/projected/6e63eb69-081c-4c75-b7ea-ada4cdfe7284-kube-api-access-bv6tn\") pod \"6e63eb69-081c-4c75-b7ea-ada4cdfe7284\" (UID: \"6e63eb69-081c-4c75-b7ea-ada4cdfe7284\") " Aug 13 01:46:07.819761 kubelet[2796]: I0813 01:46:07.819527 2796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6e63eb69-081c-4c75-b7ea-ada4cdfe7284-calico-apiserver-certs\") pod \"6e63eb69-081c-4c75-b7ea-ada4cdfe7284\" (UID: \"6e63eb69-081c-4c75-b7ea-ada4cdfe7284\") " Aug 13 01:46:07.832629 systemd[1]: var-lib-kubelet-pods-6e63eb69\x2d081c\x2d4c75\x2db7ea\x2dada4cdfe7284-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Aug 13 01:46:07.835123 kubelet[2796]: I0813 01:46:07.834947 2796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6e63eb69-081c-4c75-b7ea-ada4cdfe7284-kube-api-access-bv6tn" (OuterVolumeSpecName: "kube-api-access-bv6tn") pod "6e63eb69-081c-4c75-b7ea-ada4cdfe7284" (UID: "6e63eb69-081c-4c75-b7ea-ada4cdfe7284"). InnerVolumeSpecName "kube-api-access-bv6tn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 01:46:07.835123 kubelet[2796]: I0813 01:46:07.835091 2796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6e63eb69-081c-4c75-b7ea-ada4cdfe7284-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "6e63eb69-081c-4c75-b7ea-ada4cdfe7284" (UID: "6e63eb69-081c-4c75-b7ea-ada4cdfe7284"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 01:46:07.840050 systemd[1]: var-lib-kubelet-pods-6e63eb69\x2d081c\x2d4c75\x2db7ea\x2dada4cdfe7284-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dbv6tn.mount: Deactivated successfully. Aug 13 01:46:07.854117 kubelet[2796]: I0813 01:46:07.854041 2796 kubelet.go:2306] "Pod admission denied" podUID="11b257cd-f30c-4aca-9c3d-fc50284e757b" pod="calico-apiserver/calico-apiserver-6c6d74d74b-btjtk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:07.876474 kubelet[2796]: I0813 01:46:07.876416 2796 kubelet.go:2306] "Pod admission denied" podUID="690fcbc4-1ce7-4f23-b4ab-99dac37f5923" pod="calico-apiserver/calico-apiserver-6c6d74d74b-vmqpd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:07.906148 kubelet[2796]: I0813 01:46:07.905417 2796 kubelet.go:2306] "Pod admission denied" podUID="f0b1f26d-3276-4738-ad91-cf6f685def51" pod="calico-apiserver/calico-apiserver-6c6d74d74b-wjzzs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:07.920489 kubelet[2796]: I0813 01:46:07.920357 2796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bv6tn\" (UniqueName: \"kubernetes.io/projected/6e63eb69-081c-4c75-b7ea-ada4cdfe7284-kube-api-access-bv6tn\") on node \"172-232-7-32\" DevicePath \"\"" Aug 13 01:46:07.920735 kubelet[2796]: I0813 01:46:07.920714 2796 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/6e63eb69-081c-4c75-b7ea-ada4cdfe7284-calico-apiserver-certs\") on node \"172-232-7-32\" DevicePath \"\"" Aug 13 01:46:07.930522 kubelet[2796]: I0813 01:46:07.930470 2796 kubelet.go:2306] "Pod admission denied" podUID="2083f1f9-23ed-4fae-8e92-c0405affb573" pod="calico-apiserver/calico-apiserver-6c6d74d74b-mfwbl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:07.957276 kubelet[2796]: I0813 01:46:07.957216 2796 kubelet.go:2306] "Pod admission denied" podUID="9588214e-84be-4aab-a426-8dc3bdcae333" pod="calico-apiserver/calico-apiserver-6c6d74d74b-sbtb6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:07.993212 kubelet[2796]: I0813 01:46:07.993148 2796 kubelet.go:2306] "Pod admission denied" podUID="471afe55-a252-48ea-b750-d08c27c2d147" pod="calico-apiserver/calico-apiserver-6c6d74d74b-6jw8n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:08.041764 systemd[1]: Removed slice kubepods-besteffort-pod6e63eb69_081c_4c75_b7ea_ada4cdfe7284.slice - libcontainer container kubepods-besteffort-pod6e63eb69_081c_4c75_b7ea_ada4cdfe7284.slice. Aug 13 01:46:08.118362 kubelet[2796]: I0813 01:46:08.118290 2796 kubelet.go:2306] "Pod admission denied" podUID="c690e00d-8dc3-4bb4-82f1-9b4fdb8b4d15" pod="calico-apiserver/calico-apiserver-6c6d74d74b-6kdmp" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:08.269208 kubelet[2796]: I0813 01:46:08.269146 2796 kubelet.go:2306] "Pod admission denied" podUID="e5dee1eb-7deb-4822-aafc-55c035e309c7" pod="calico-apiserver/calico-apiserver-6c6d74d74b-gpfrp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:08.419974 kubelet[2796]: I0813 01:46:08.419910 2796 kubelet.go:2306] "Pod admission denied" podUID="a7c12a5f-f093-45d3-aa79-181b6abf29bf" pod="calico-apiserver/calico-apiserver-6c6d74d74b-px4m7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:08.696307 kubelet[2796]: I0813 01:46:08.696125 2796 eviction_manager.go:447] "Eviction manager: pods successfully cleaned up" pods=["calico-apiserver/calico-apiserver-6c6d74d74b-kvt2f"] Aug 13 01:46:09.724814 kubelet[2796]: I0813 01:46:09.724273 2796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 01:46:09.724814 kubelet[2796]: E0813 01:46:09.724800 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:46:09.740067 kubelet[2796]: E0813 01:46:09.740004 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:46:11.398322 containerd[1581]: time="2025-08-13T01:46:11.398248698Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c5ff669b8-2nc8l,Uid:c05b1f7d-c0f2-4f36-a6d9-678b3f1bf594,Namespace:calico-system,Attempt:0,}" Aug 13 01:46:11.453496 containerd[1581]: time="2025-08-13T01:46:11.453424696Z" level=error msg="Failed to destroy network for sandbox \"5d5b393c6bba7b30dfef42d7fa52429b69cb755e67dcee017d6c2ac82d4fd35a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:11.455690 containerd[1581]: time="2025-08-13T01:46:11.455537258Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-c5ff669b8-2nc8l,Uid:c05b1f7d-c0f2-4f36-a6d9-678b3f1bf594,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d5b393c6bba7b30dfef42d7fa52429b69cb755e67dcee017d6c2ac82d4fd35a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:11.456471 kubelet[2796]: E0813 01:46:11.456263 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d5b393c6bba7b30dfef42d7fa52429b69cb755e67dcee017d6c2ac82d4fd35a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:11.457041 kubelet[2796]: E0813 01:46:11.456683 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d5b393c6bba7b30dfef42d7fa52429b69cb755e67dcee017d6c2ac82d4fd35a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-c5ff669b8-2nc8l" Aug 13 
01:46:11.457041 kubelet[2796]: E0813 01:46:11.456775 2796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5d5b393c6bba7b30dfef42d7fa52429b69cb755e67dcee017d6c2ac82d4fd35a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-c5ff669b8-2nc8l" Aug 13 01:46:11.457699 kubelet[2796]: E0813 01:46:11.457147 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-c5ff669b8-2nc8l_calico-system(c05b1f7d-c0f2-4f36-a6d9-678b3f1bf594)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-c5ff669b8-2nc8l_calico-system(c05b1f7d-c0f2-4f36-a6d9-678b3f1bf594)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5d5b393c6bba7b30dfef42d7fa52429b69cb755e67dcee017d6c2ac82d4fd35a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-c5ff669b8-2nc8l" podUID="c05b1f7d-c0f2-4f36-a6d9-678b3f1bf594" Aug 13 01:46:11.459548 systemd[1]: run-netns-cni\x2d11534ba5\x2dad5e\x2db96c\x2dfccb\x2db35f82fd3693.mount: Deactivated successfully. Aug 13 01:46:12.398798 containerd[1581]: time="2025-08-13T01:46:12.398506174Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6d74d74b-hjrvm,Uid:fed0c3bb-07be-41e1-8995-c70aed034b09,Namespace:calico-apiserver,Attempt:0,}" Aug 13 01:46:12.399593 containerd[1581]: time="2025-08-13T01:46:12.398658202Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bk2p6,Uid:0e8898c7-a3f5-4010-bb1f-d756673c29b2,Namespace:calico-system,Attempt:0,}" Aug 13 01:46:12.399593 containerd[1581]: time="2025-08-13T01:46:12.398739597Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d5dd9ff6-b6gw7,Uid:4720bedc-4719-4a57-b2ff-e5b21f7acb7f,Namespace:calico-system,Attempt:0,}" Aug 13 01:46:12.487747 containerd[1581]: time="2025-08-13T01:46:12.487615985Z" level=error msg="Failed to destroy network for sandbox \"ca9800a5a55a3d6fc7b48ba112b1672e68e021bf2591cd9cbbed357d0ad90168\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:12.491711 containerd[1581]: time="2025-08-13T01:46:12.488835745Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6c6d74d74b-hjrvm,Uid:fed0c3bb-07be-41e1-8995-c70aed034b09,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca9800a5a55a3d6fc7b48ba112b1672e68e021bf2591cd9cbbed357d0ad90168\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:12.491582 systemd[1]: run-netns-cni\x2dc9dcef0d\x2d5b8f\x2de848\x2d532d\x2d92dc56834eda.mount: Deactivated successfully. 
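The entries above show why calico/node never starts: pulling ghcr.io/flatcar/calico/node:v3.30.2 fails while extracting a layer with "no space left on device", after which the kubelet eviction manager reports DiskPressure, evicts calico-apiserver-6c6d74d74b-kvt2f, and denies admission to replacement pods, so the sandbox retries that follow keep hitting the same CNI error. A minimal sketch, assuming a Linux host and containerd's conventional /var/lib/containerd root (verify the actual path on the node), of checking the free space that both the image pull and the kubelet eviction thresholds depend on:

// diskfree.go - minimal sketch (Linux-only) of a free-space check on the filesystem
// backing containerd's image store. The path is an assumption; adjust for the node.
package main

import (
	"fmt"
	"syscall"
)

func main() {
	const containerdRoot = "/var/lib/containerd" // assumed default containerd root

	var st syscall.Statfs_t
	if err := syscall.Statfs(containerdRoot, &st); err != nil {
		fmt.Println("statfs failed:", err)
		return
	}

	// Bavail counts blocks available to unprivileged callers; Blocks is the total.
	avail := float64(st.Bavail) * float64(st.Bsize)
	total := float64(st.Blocks) * float64(st.Bsize)
	fmt.Printf("%s: %.1f GiB free of %.1f GiB (%.1f%% free)\n",
		containerdRoot, avail/(1<<30), total/(1<<30), 100*avail/total)
}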
Aug 13 01:46:12.492146 kubelet[2796]: E0813 01:46:12.489136 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca9800a5a55a3d6fc7b48ba112b1672e68e021bf2591cd9cbbed357d0ad90168\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:12.492146 kubelet[2796]: E0813 01:46:12.489229 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca9800a5a55a3d6fc7b48ba112b1672e68e021bf2591cd9cbbed357d0ad90168\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c6d74d74b-hjrvm" Aug 13 01:46:12.492146 kubelet[2796]: E0813 01:46:12.489267 2796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca9800a5a55a3d6fc7b48ba112b1672e68e021bf2591cd9cbbed357d0ad90168\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6c6d74d74b-hjrvm" Aug 13 01:46:12.492146 kubelet[2796]: E0813 01:46:12.489333 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6c6d74d74b-hjrvm_calico-apiserver(fed0c3bb-07be-41e1-8995-c70aed034b09)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6c6d74d74b-hjrvm_calico-apiserver(fed0c3bb-07be-41e1-8995-c70aed034b09)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ca9800a5a55a3d6fc7b48ba112b1672e68e021bf2591cd9cbbed357d0ad90168\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6c6d74d74b-hjrvm" podUID="fed0c3bb-07be-41e1-8995-c70aed034b09" Aug 13 01:46:12.519872 containerd[1581]: time="2025-08-13T01:46:12.519797867Z" level=error msg="Failed to destroy network for sandbox \"f1fb6d4da6903da48a4215de98691cf6018ff4034e6cb4f2438276dc6d7cd8b1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:12.526748 containerd[1581]: time="2025-08-13T01:46:12.521469502Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bk2p6,Uid:0e8898c7-a3f5-4010-bb1f-d756673c29b2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1fb6d4da6903da48a4215de98691cf6018ff4034e6cb4f2438276dc6d7cd8b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:12.526915 kubelet[2796]: E0813 01:46:12.524793 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1fb6d4da6903da48a4215de98691cf6018ff4034e6cb4f2438276dc6d7cd8b1\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:12.526915 kubelet[2796]: E0813 01:46:12.524853 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1fb6d4da6903da48a4215de98691cf6018ff4034e6cb4f2438276dc6d7cd8b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bk2p6" Aug 13 01:46:12.526915 kubelet[2796]: E0813 01:46:12.524876 2796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f1fb6d4da6903da48a4215de98691cf6018ff4034e6cb4f2438276dc6d7cd8b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bk2p6" Aug 13 01:46:12.526915 kubelet[2796]: E0813 01:46:12.524926 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bk2p6_calico-system(0e8898c7-a3f5-4010-bb1f-d756673c29b2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bk2p6_calico-system(0e8898c7-a3f5-4010-bb1f-d756673c29b2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f1fb6d4da6903da48a4215de98691cf6018ff4034e6cb4f2438276dc6d7cd8b1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bk2p6" podUID="0e8898c7-a3f5-4010-bb1f-d756673c29b2" Aug 13 01:46:12.528463 systemd[1]: run-netns-cni\x2d42aaedc6\x2de8bd\x2d5ebe\x2de4ff\x2ddcc3b89d7e88.mount: Deactivated successfully. 
Aug 13 01:46:12.531911 containerd[1581]: time="2025-08-13T01:46:12.531211086Z" level=error msg="Failed to destroy network for sandbox \"2dd8ffad5c5bb87dfe6eb4c4acb982fa52df77cb9de32cf98e609bd8ea413415\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:12.533206 containerd[1581]: time="2025-08-13T01:46:12.533166219Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d5dd9ff6-b6gw7,Uid:4720bedc-4719-4a57-b2ff-e5b21f7acb7f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dd8ffad5c5bb87dfe6eb4c4acb982fa52df77cb9de32cf98e609bd8ea413415\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:12.533578 kubelet[2796]: E0813 01:46:12.533545 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dd8ffad5c5bb87dfe6eb4c4acb982fa52df77cb9de32cf98e609bd8ea413415\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:12.533779 kubelet[2796]: E0813 01:46:12.533722 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dd8ffad5c5bb87dfe6eb4c4acb982fa52df77cb9de32cf98e609bd8ea413415\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7" Aug 13 01:46:12.533865 kubelet[2796]: E0813 01:46:12.533848 2796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2dd8ffad5c5bb87dfe6eb4c4acb982fa52df77cb9de32cf98e609bd8ea413415\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7" Aug 13 01:46:12.534004 kubelet[2796]: E0813 01:46:12.533966 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86d5dd9ff6-b6gw7_calico-system(4720bedc-4719-4a57-b2ff-e5b21f7acb7f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86d5dd9ff6-b6gw7_calico-system(4720bedc-4719-4a57-b2ff-e5b21f7acb7f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2dd8ffad5c5bb87dfe6eb4c4acb982fa52df77cb9de32cf98e609bd8ea413415\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7" podUID="4720bedc-4719-4a57-b2ff-e5b21f7acb7f" Aug 13 01:46:13.398679 kubelet[2796]: E0813 01:46:13.398004 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 
13 01:46:13.399960 containerd[1581]: time="2025-08-13T01:46:13.399321236Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6vrr8,Uid:cbf6d4b0-f3bc-4a92-9977-6d91de60b65f,Namespace:kube-system,Attempt:0,}" Aug 13 01:46:13.402667 containerd[1581]: time="2025-08-13T01:46:13.402541379Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-jzjd4,Uid:44880a0b-db2b-45f1-8f5f-5e98e509b622,Namespace:calico-system,Attempt:0,}" Aug 13 01:46:13.402908 kubelet[2796]: E0813 01:46:13.402757 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:46:13.403516 containerd[1581]: time="2025-08-13T01:46:13.403485661Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-djvw6,Uid:981696e3-42b0-4ae8-b44b-fa439a03a402,Namespace:kube-system,Attempt:0,}" Aug 13 01:46:13.404527 systemd[1]: run-netns-cni\x2d24cf1cd8\x2d3039\x2dcf03\x2d221b\x2d6d3ff583d86b.mount: Deactivated successfully. Aug 13 01:46:13.492682 containerd[1581]: time="2025-08-13T01:46:13.489863146Z" level=error msg="Failed to destroy network for sandbox \"cbefedf0a2693d8c18df135270faee4f3f44df0a27c66205ef6110390403b5f2\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:13.495585 containerd[1581]: time="2025-08-13T01:46:13.495048229Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-jzjd4,Uid:44880a0b-db2b-45f1-8f5f-5e98e509b622,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbefedf0a2693d8c18df135270faee4f3f44df0a27c66205ef6110390403b5f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:13.499429 kubelet[2796]: E0813 01:46:13.496932 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbefedf0a2693d8c18df135270faee4f3f44df0a27c66205ef6110390403b5f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:13.499429 kubelet[2796]: E0813 01:46:13.497044 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbefedf0a2693d8c18df135270faee4f3f44df0a27c66205ef6110390403b5f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-jzjd4" Aug 13 01:46:13.499429 kubelet[2796]: E0813 01:46:13.497070 2796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbefedf0a2693d8c18df135270faee4f3f44df0a27c66205ef6110390403b5f2\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-jzjd4" Aug 13 01:46:13.499429 kubelet[2796]: E0813 
01:46:13.497141 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-jzjd4_calico-system(44880a0b-db2b-45f1-8f5f-5e98e509b622)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-jzjd4_calico-system(44880a0b-db2b-45f1-8f5f-5e98e509b622)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cbefedf0a2693d8c18df135270faee4f3f44df0a27c66205ef6110390403b5f2\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-jzjd4" podUID="44880a0b-db2b-45f1-8f5f-5e98e509b622" Aug 13 01:46:13.497136 systemd[1]: run-netns-cni\x2d8fcc750c\x2df87d\x2dae55\x2dd93a\x2df9e04291bb11.mount: Deactivated successfully. Aug 13 01:46:13.516268 containerd[1581]: time="2025-08-13T01:46:13.516215824Z" level=error msg="Failed to destroy network for sandbox \"1afcb4c4028c0cec14983b38350ecdd04e46a091964325830082827a4871c9e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:13.518668 containerd[1581]: time="2025-08-13T01:46:13.517707498Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6vrr8,Uid:cbf6d4b0-f3bc-4a92-9977-6d91de60b65f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1afcb4c4028c0cec14983b38350ecdd04e46a091964325830082827a4871c9e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:13.518764 kubelet[2796]: E0813 01:46:13.518102 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1afcb4c4028c0cec14983b38350ecdd04e46a091964325830082827a4871c9e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:13.518764 kubelet[2796]: E0813 01:46:13.518175 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1afcb4c4028c0cec14983b38350ecdd04e46a091964325830082827a4871c9e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-6vrr8" Aug 13 01:46:13.518764 kubelet[2796]: E0813 01:46:13.518200 2796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1afcb4c4028c0cec14983b38350ecdd04e46a091964325830082827a4871c9e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-6vrr8" Aug 13 01:46:13.518764 kubelet[2796]: E0813 01:46:13.518256 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-6vrr8_kube-system(cbf6d4b0-f3bc-4a92-9977-6d91de60b65f)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-6vrr8_kube-system(cbf6d4b0-f3bc-4a92-9977-6d91de60b65f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1afcb4c4028c0cec14983b38350ecdd04e46a091964325830082827a4871c9e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-6vrr8" podUID="cbf6d4b0-f3bc-4a92-9977-6d91de60b65f" Aug 13 01:46:13.523670 containerd[1581]: time="2025-08-13T01:46:13.523617501Z" level=error msg="Failed to destroy network for sandbox \"db525ef75f6ba24727b1245c93d9c8e2c076ee122cd1f566df4506d37a29371d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:13.524524 containerd[1581]: time="2025-08-13T01:46:13.524476200Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-djvw6,Uid:981696e3-42b0-4ae8-b44b-fa439a03a402,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"db525ef75f6ba24727b1245c93d9c8e2c076ee122cd1f566df4506d37a29371d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:13.524727 kubelet[2796]: E0813 01:46:13.524699 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db525ef75f6ba24727b1245c93d9c8e2c076ee122cd1f566df4506d37a29371d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:13.526337 kubelet[2796]: E0813 01:46:13.524859 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db525ef75f6ba24727b1245c93d9c8e2c076ee122cd1f566df4506d37a29371d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-djvw6" Aug 13 01:46:13.526337 kubelet[2796]: E0813 01:46:13.524888 2796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"db525ef75f6ba24727b1245c93d9c8e2c076ee122cd1f566df4506d37a29371d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-djvw6" Aug 13 01:46:13.526337 kubelet[2796]: E0813 01:46:13.524929 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-djvw6_kube-system(981696e3-42b0-4ae8-b44b-fa439a03a402)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-djvw6_kube-system(981696e3-42b0-4ae8-b44b-fa439a03a402)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"db525ef75f6ba24727b1245c93d9c8e2c076ee122cd1f566df4506d37a29371d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-djvw6" podUID="981696e3-42b0-4ae8-b44b-fa439a03a402" Aug 13 01:46:14.403755 systemd[1]: run-netns-cni\x2d4fdf4ae7\x2d250f\x2db091\x2d678a\x2df6af8341a4ee.mount: Deactivated successfully. Aug 13 01:46:14.403891 systemd[1]: run-netns-cni\x2d1532eb43\x2d2b92\x2dd4e7\x2d5cf0\x2d9ccdc95f6395.mount: Deactivated successfully. Aug 13 01:46:18.399959 containerd[1581]: time="2025-08-13T01:46:18.399894905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 01:46:18.735704 kubelet[2796]: I0813 01:46:18.735669 2796 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:46:18.737362 kubelet[2796]: I0813 01:46:18.736213 2796 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:46:18.740820 kubelet[2796]: I0813 01:46:18.740808 2796 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:46:18.762003 kubelet[2796]: I0813 01:46:18.761966 2796 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:46:18.762365 kubelet[2796]: I0813 01:46:18.762339 2796 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-apiserver/calico-apiserver-6c6d74d74b-hjrvm","calico-system/goldmane-58fd7646b9-jzjd4","calico-system/whisker-c5ff669b8-2nc8l","kube-system/coredns-7c65d6cfc9-6vrr8","calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7","kube-system/coredns-7c65d6cfc9-djvw6","calico-system/csi-node-driver-bk2p6","calico-system/calico-node-8j6cb","tigera-operator/tigera-operator-5bf8dfcb4-v8hlv","calico-system/calico-typha-bf6ccb678-cdfdr","kube-system/kube-controller-manager-172-232-7-32","kube-system/kube-proxy-dmp9l","kube-system/kube-apiserver-172-232-7-32","kube-system/kube-scheduler-172-232-7-32"] Aug 13 01:46:18.768324 kubelet[2796]: I0813 01:46:18.768266 2796 eviction_manager.go:616] "Eviction manager: pod is evicted successfully" pod="calico-apiserver/calico-apiserver-6c6d74d74b-hjrvm" Aug 13 01:46:18.768324 kubelet[2796]: I0813 01:46:18.768290 2796 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-apiserver/calico-apiserver-6c6d74d74b-hjrvm"] Aug 13 01:46:18.798028 kubelet[2796]: I0813 01:46:18.797946 2796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fed0c3bb-07be-41e1-8995-c70aed034b09-calico-apiserver-certs\") pod \"fed0c3bb-07be-41e1-8995-c70aed034b09\" (UID: \"fed0c3bb-07be-41e1-8995-c70aed034b09\") " Aug 13 01:46:18.798775 kubelet[2796]: I0813 01:46:18.798717 2796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-292hn\" (UniqueName: \"kubernetes.io/projected/fed0c3bb-07be-41e1-8995-c70aed034b09-kube-api-access-292hn\") pod \"fed0c3bb-07be-41e1-8995-c70aed034b09\" (UID: \"fed0c3bb-07be-41e1-8995-c70aed034b09\") " Aug 13 01:46:18.808665 systemd[1]: var-lib-kubelet-pods-fed0c3bb\x2d07be\x2d41e1\x2d8995\x2dc70aed034b09-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. 
Aug 13 01:46:18.812841 kubelet[2796]: I0813 01:46:18.812782 2796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fed0c3bb-07be-41e1-8995-c70aed034b09-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "fed0c3bb-07be-41e1-8995-c70aed034b09" (UID: "fed0c3bb-07be-41e1-8995-c70aed034b09"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 01:46:18.815512 systemd[1]: var-lib-kubelet-pods-fed0c3bb\x2d07be\x2d41e1\x2d8995\x2dc70aed034b09-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d292hn.mount: Deactivated successfully. Aug 13 01:46:18.819834 kubelet[2796]: I0813 01:46:18.819703 2796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fed0c3bb-07be-41e1-8995-c70aed034b09-kube-api-access-292hn" (OuterVolumeSpecName: "kube-api-access-292hn") pod "fed0c3bb-07be-41e1-8995-c70aed034b09" (UID: "fed0c3bb-07be-41e1-8995-c70aed034b09"). InnerVolumeSpecName "kube-api-access-292hn". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 01:46:18.900123 kubelet[2796]: I0813 01:46:18.900054 2796 reconciler_common.go:293] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/fed0c3bb-07be-41e1-8995-c70aed034b09-calico-apiserver-certs\") on node \"172-232-7-32\" DevicePath \"\"" Aug 13 01:46:18.900123 kubelet[2796]: I0813 01:46:18.900104 2796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-292hn\" (UniqueName: \"kubernetes.io/projected/fed0c3bb-07be-41e1-8995-c70aed034b09-kube-api-access-292hn\") on node \"172-232-7-32\" DevicePath \"\"" Aug 13 01:46:19.411703 systemd[1]: Removed slice kubepods-besteffort-podfed0c3bb_07be_41e1_8995_c70aed034b09.slice - libcontainer container kubepods-besteffort-podfed0c3bb_07be_41e1_8995_c70aed034b09.slice. Aug 13 01:46:20.768559 kubelet[2796]: I0813 01:46:20.768394 2796 eviction_manager.go:447] "Eviction manager: pods successfully cleaned up" pods=["calico-apiserver/calico-apiserver-6c6d74d74b-hjrvm"] Aug 13 01:46:20.779491 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1083394911.mount: Deactivated successfully. 
Aug 13 01:46:20.783015 containerd[1581]: time="2025-08-13T01:46:20.782965394Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1083394911: write /var/lib/containerd/tmpmounts/containerd-mount1083394911/usr/bin/calico-node: no space left on device" Aug 13 01:46:20.783698 containerd[1581]: time="2025-08-13T01:46:20.783210710Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 01:46:20.783735 kubelet[2796]: E0813 01:46:20.783432 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1083394911: write /var/lib/containerd/tmpmounts/containerd-mount1083394911/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 01:46:20.783735 kubelet[2796]: E0813 01:46:20.783486 2796 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1083394911: write /var/lib/containerd/tmpmounts/containerd-mount1083394911/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 01:46:20.783874 kubelet[2796]: E0813 01:46:20.783735 2796 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSGOLDMANESERVER,Value:goldmane.calico-system.svc:7443,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSFLUSHINTERVAL,Value:15,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:first-found,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.l
ock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7p5j4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-8j6cb_calico-system(416d9de4-5101-44c9-b974-0fedf790aa67): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1083394911: write /var/lib/containerd/tmpmounts/containerd-mount1083394911/usr/bin/calico-node: no space left on device" logger="UnhandledError" Aug 13 01:46:20.786078 kubelet[2796]: E0813 01:46:20.786018 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1083394911: write /var/lib/containerd/tmpmounts/containerd-mount1083394911/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-8j6cb" podUID="416d9de4-5101-44c9-b974-0fedf790aa67" Aug 13 01:46:20.791464 kubelet[2796]: I0813 01:46:20.791406 2796 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:46:20.791464 kubelet[2796]: I0813 01:46:20.791446 2796 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:46:20.795222 kubelet[2796]: I0813 01:46:20.795197 2796 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:46:20.809768 kubelet[2796]: I0813 01:46:20.809740 2796 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:46:20.810450 kubelet[2796]: I0813 01:46:20.810420 2796 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" 
pods=["calico-system/whisker-c5ff669b8-2nc8l","calico-system/goldmane-58fd7646b9-jzjd4","calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7","kube-system/coredns-7c65d6cfc9-6vrr8","kube-system/coredns-7c65d6cfc9-djvw6","calico-system/csi-node-driver-bk2p6","calico-system/calico-node-8j6cb","tigera-operator/tigera-operator-5bf8dfcb4-v8hlv","calico-system/calico-typha-bf6ccb678-cdfdr","kube-system/kube-controller-manager-172-232-7-32","kube-system/kube-proxy-dmp9l","kube-system/kube-apiserver-172-232-7-32","kube-system/kube-scheduler-172-232-7-32"] Aug 13 01:46:20.815640 kubelet[2796]: I0813 01:46:20.815584 2796 eviction_manager.go:616] "Eviction manager: pod is evicted successfully" pod="calico-system/whisker-c5ff669b8-2nc8l" Aug 13 01:46:20.815640 kubelet[2796]: I0813 01:46:20.815603 2796 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-system/whisker-c5ff669b8-2nc8l"] Aug 13 01:46:20.915928 kubelet[2796]: I0813 01:46:20.914720 2796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c05b1f7d-c0f2-4f36-a6d9-678b3f1bf594-whisker-ca-bundle\") pod \"c05b1f7d-c0f2-4f36-a6d9-678b3f1bf594\" (UID: \"c05b1f7d-c0f2-4f36-a6d9-678b3f1bf594\") " Aug 13 01:46:20.915928 kubelet[2796]: I0813 01:46:20.914790 2796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jc9k\" (UniqueName: \"kubernetes.io/projected/c05b1f7d-c0f2-4f36-a6d9-678b3f1bf594-kube-api-access-6jc9k\") pod \"c05b1f7d-c0f2-4f36-a6d9-678b3f1bf594\" (UID: \"c05b1f7d-c0f2-4f36-a6d9-678b3f1bf594\") " Aug 13 01:46:20.915928 kubelet[2796]: I0813 01:46:20.914820 2796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c05b1f7d-c0f2-4f36-a6d9-678b3f1bf594-whisker-backend-key-pair\") pod \"c05b1f7d-c0f2-4f36-a6d9-678b3f1bf594\" (UID: \"c05b1f7d-c0f2-4f36-a6d9-678b3f1bf594\") " Aug 13 01:46:20.915928 kubelet[2796]: I0813 01:46:20.915786 2796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c05b1f7d-c0f2-4f36-a6d9-678b3f1bf594-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "c05b1f7d-c0f2-4f36-a6d9-678b3f1bf594" (UID: "c05b1f7d-c0f2-4f36-a6d9-678b3f1bf594"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 01:46:20.926661 systemd[1]: var-lib-kubelet-pods-c05b1f7d\x2dc0f2\x2d4f36\x2da6d9\x2d678b3f1bf594-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d6jc9k.mount: Deactivated successfully. Aug 13 01:46:20.930884 kubelet[2796]: I0813 01:46:20.929964 2796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c05b1f7d-c0f2-4f36-a6d9-678b3f1bf594-kube-api-access-6jc9k" (OuterVolumeSpecName: "kube-api-access-6jc9k") pod "c05b1f7d-c0f2-4f36-a6d9-678b3f1bf594" (UID: "c05b1f7d-c0f2-4f36-a6d9-678b3f1bf594"). InnerVolumeSpecName "kube-api-access-6jc9k". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 01:46:20.932077 systemd[1]: var-lib-kubelet-pods-c05b1f7d\x2dc0f2\x2d4f36\x2da6d9\x2d678b3f1bf594-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Aug 13 01:46:20.935916 kubelet[2796]: I0813 01:46:20.935870 2796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c05b1f7d-c0f2-4f36-a6d9-678b3f1bf594-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "c05b1f7d-c0f2-4f36-a6d9-678b3f1bf594" (UID: "c05b1f7d-c0f2-4f36-a6d9-678b3f1bf594"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 01:46:21.016471 kubelet[2796]: I0813 01:46:21.016397 2796 reconciler_common.go:293] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c05b1f7d-c0f2-4f36-a6d9-678b3f1bf594-whisker-ca-bundle\") on node \"172-232-7-32\" DevicePath \"\"" Aug 13 01:46:21.016471 kubelet[2796]: I0813 01:46:21.016452 2796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6jc9k\" (UniqueName: \"kubernetes.io/projected/c05b1f7d-c0f2-4f36-a6d9-678b3f1bf594-kube-api-access-6jc9k\") on node \"172-232-7-32\" DevicePath \"\"" Aug 13 01:46:21.016471 kubelet[2796]: I0813 01:46:21.016469 2796 reconciler_common.go:293] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/c05b1f7d-c0f2-4f36-a6d9-678b3f1bf594-whisker-backend-key-pair\") on node \"172-232-7-32\" DevicePath \"\"" Aug 13 01:46:21.410447 systemd[1]: Removed slice kubepods-besteffort-podc05b1f7d_c0f2_4f36_a6d9_678b3f1bf594.slice - libcontainer container kubepods-besteffort-podc05b1f7d_c0f2_4f36_a6d9_678b3f1bf594.slice. Aug 13 01:46:21.816139 kubelet[2796]: I0813 01:46:21.816058 2796 eviction_manager.go:447] "Eviction manager: pods successfully cleaned up" pods=["calico-system/whisker-c5ff669b8-2nc8l"] Aug 13 01:46:26.398315 containerd[1581]: time="2025-08-13T01:46:26.397952688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-jzjd4,Uid:44880a0b-db2b-45f1-8f5f-5e98e509b622,Namespace:calico-system,Attempt:0,}" Aug 13 01:46:26.398315 containerd[1581]: time="2025-08-13T01:46:26.397959278Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d5dd9ff6-b6gw7,Uid:4720bedc-4719-4a57-b2ff-e5b21f7acb7f,Namespace:calico-system,Attempt:0,}" Aug 13 01:46:26.473186 containerd[1581]: time="2025-08-13T01:46:26.473112544Z" level=error msg="Failed to destroy network for sandbox \"e4feab009d239ea4435465d1c19b772358ba21e8079be4f852c0aeed94fada37\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:26.476476 containerd[1581]: time="2025-08-13T01:46:26.475082685Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d5dd9ff6-b6gw7,Uid:4720bedc-4719-4a57-b2ff-e5b21f7acb7f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4feab009d239ea4435465d1c19b772358ba21e8079be4f852c0aeed94fada37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:26.476593 kubelet[2796]: E0813 01:46:26.475486 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4feab009d239ea4435465d1c19b772358ba21e8079be4f852c0aeed94fada37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:26.476593 kubelet[2796]: E0813 01:46:26.475564 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4feab009d239ea4435465d1c19b772358ba21e8079be4f852c0aeed94fada37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7" Aug 13 01:46:26.476593 kubelet[2796]: E0813 01:46:26.475589 2796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e4feab009d239ea4435465d1c19b772358ba21e8079be4f852c0aeed94fada37\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7" Aug 13 01:46:26.476593 kubelet[2796]: E0813 01:46:26.475639 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86d5dd9ff6-b6gw7_calico-system(4720bedc-4719-4a57-b2ff-e5b21f7acb7f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86d5dd9ff6-b6gw7_calico-system(4720bedc-4719-4a57-b2ff-e5b21f7acb7f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e4feab009d239ea4435465d1c19b772358ba21e8079be4f852c0aeed94fada37\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7" podUID="4720bedc-4719-4a57-b2ff-e5b21f7acb7f" Aug 13 01:46:26.479908 systemd[1]: run-netns-cni\x2d8679faf7\x2d3b49\x2d44fe\x2daab6\x2dd60f87498839.mount: Deactivated successfully. 
Aug 13 01:46:26.482501 containerd[1581]: time="2025-08-13T01:46:26.482426179Z" level=error msg="Failed to destroy network for sandbox \"2a2648ec8cdfee776889e36fe4468a75e0ec4f8c368dfbdd0984475a5a91b26b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:26.485147 containerd[1581]: time="2025-08-13T01:46:26.483484026Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-58fd7646b9-jzjd4,Uid:44880a0b-db2b-45f1-8f5f-5e98e509b622,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a2648ec8cdfee776889e36fe4468a75e0ec4f8c368dfbdd0984475a5a91b26b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:26.485224 kubelet[2796]: E0813 01:46:26.483886 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a2648ec8cdfee776889e36fe4468a75e0ec4f8c368dfbdd0984475a5a91b26b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:26.485224 kubelet[2796]: E0813 01:46:26.483934 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a2648ec8cdfee776889e36fe4468a75e0ec4f8c368dfbdd0984475a5a91b26b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-jzjd4" Aug 13 01:46:26.485224 kubelet[2796]: E0813 01:46:26.483954 2796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2a2648ec8cdfee776889e36fe4468a75e0ec4f8c368dfbdd0984475a5a91b26b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-58fd7646b9-jzjd4" Aug 13 01:46:26.485224 kubelet[2796]: E0813 01:46:26.483997 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-58fd7646b9-jzjd4_calico-system(44880a0b-db2b-45f1-8f5f-5e98e509b622)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-58fd7646b9-jzjd4_calico-system(44880a0b-db2b-45f1-8f5f-5e98e509b622)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2a2648ec8cdfee776889e36fe4468a75e0ec4f8c368dfbdd0984475a5a91b26b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-58fd7646b9-jzjd4" podUID="44880a0b-db2b-45f1-8f5f-5e98e509b622" Aug 13 01:46:26.486926 systemd[1]: run-netns-cni\x2d19c33e07\x2da9a4\x2d7543\x2d1d67\x2d3de3b152bf53.mount: Deactivated successfully. 
Aug 13 01:46:27.399737 kubelet[2796]: E0813 01:46:27.398802 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:46:27.399904 containerd[1581]: time="2025-08-13T01:46:27.399047224Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bk2p6,Uid:0e8898c7-a3f5-4010-bb1f-d756673c29b2,Namespace:calico-system,Attempt:0,}" Aug 13 01:46:27.400968 containerd[1581]: time="2025-08-13T01:46:27.400679822Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6vrr8,Uid:cbf6d4b0-f3bc-4a92-9977-6d91de60b65f,Namespace:kube-system,Attempt:0,}" Aug 13 01:46:27.481209 containerd[1581]: time="2025-08-13T01:46:27.481122155Z" level=error msg="Failed to destroy network for sandbox \"6da044336e6b795adefb281a8bfd4aa5c47db174754ea22875f8fb9f5397c31a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:27.486463 systemd[1]: run-netns-cni\x2d403cf25c\x2d219c\x2da13f\x2dd732\x2d64c7ab0ce3b3.mount: Deactivated successfully. Aug 13 01:46:27.489390 containerd[1581]: time="2025-08-13T01:46:27.489326495Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bk2p6,Uid:0e8898c7-a3f5-4010-bb1f-d756673c29b2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6da044336e6b795adefb281a8bfd4aa5c47db174754ea22875f8fb9f5397c31a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:27.490582 kubelet[2796]: E0813 01:46:27.489591 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6da044336e6b795adefb281a8bfd4aa5c47db174754ea22875f8fb9f5397c31a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:27.490582 kubelet[2796]: E0813 01:46:27.489755 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6da044336e6b795adefb281a8bfd4aa5c47db174754ea22875f8fb9f5397c31a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bk2p6" Aug 13 01:46:27.490582 kubelet[2796]: E0813 01:46:27.489808 2796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6da044336e6b795adefb281a8bfd4aa5c47db174754ea22875f8fb9f5397c31a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bk2p6" Aug 13 01:46:27.490582 kubelet[2796]: E0813 01:46:27.489873 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bk2p6_calico-system(0e8898c7-a3f5-4010-bb1f-d756673c29b2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"csi-node-driver-bk2p6_calico-system(0e8898c7-a3f5-4010-bb1f-d756673c29b2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6da044336e6b795adefb281a8bfd4aa5c47db174754ea22875f8fb9f5397c31a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bk2p6" podUID="0e8898c7-a3f5-4010-bb1f-d756673c29b2" Aug 13 01:46:27.500628 containerd[1581]: time="2025-08-13T01:46:27.500578999Z" level=error msg="Failed to destroy network for sandbox \"502721adcac1f9cf35cdacfe4c80afd857f644ef05ca44d95b3c0ce00e646b19\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:27.502804 systemd[1]: run-netns-cni\x2d16619375\x2d596b\x2d7507\x2d28a4\x2d6e81a2316220.mount: Deactivated successfully. Aug 13 01:46:27.505222 containerd[1581]: time="2025-08-13T01:46:27.504721842Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6vrr8,Uid:cbf6d4b0-f3bc-4a92-9977-6d91de60b65f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"502721adcac1f9cf35cdacfe4c80afd857f644ef05ca44d95b3c0ce00e646b19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:27.505551 kubelet[2796]: E0813 01:46:27.505507 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"502721adcac1f9cf35cdacfe4c80afd857f644ef05ca44d95b3c0ce00e646b19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:27.505632 kubelet[2796]: E0813 01:46:27.505571 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"502721adcac1f9cf35cdacfe4c80afd857f644ef05ca44d95b3c0ce00e646b19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-6vrr8" Aug 13 01:46:27.505632 kubelet[2796]: E0813 01:46:27.505599 2796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"502721adcac1f9cf35cdacfe4c80afd857f644ef05ca44d95b3c0ce00e646b19\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-6vrr8" Aug 13 01:46:27.506754 kubelet[2796]: E0813 01:46:27.505693 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-6vrr8_kube-system(cbf6d4b0-f3bc-4a92-9977-6d91de60b65f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-6vrr8_kube-system(cbf6d4b0-f3bc-4a92-9977-6d91de60b65f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"502721adcac1f9cf35cdacfe4c80afd857f644ef05ca44d95b3c0ce00e646b19\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-6vrr8" podUID="cbf6d4b0-f3bc-4a92-9977-6d91de60b65f" Aug 13 01:46:28.397768 kubelet[2796]: E0813 01:46:28.397506 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:46:28.398623 containerd[1581]: time="2025-08-13T01:46:28.398261928Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-djvw6,Uid:981696e3-42b0-4ae8-b44b-fa439a03a402,Namespace:kube-system,Attempt:0,}" Aug 13 01:46:28.464148 containerd[1581]: time="2025-08-13T01:46:28.464020349Z" level=error msg="Failed to destroy network for sandbox \"3bf783dab922b11713d5c08122f2927636feeb5c7f808386e38d2ffdcb56f82a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:28.467371 containerd[1581]: time="2025-08-13T01:46:28.467322991Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-djvw6,Uid:981696e3-42b0-4ae8-b44b-fa439a03a402,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bf783dab922b11713d5c08122f2927636feeb5c7f808386e38d2ffdcb56f82a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:28.468789 kubelet[2796]: E0813 01:46:28.468710 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bf783dab922b11713d5c08122f2927636feeb5c7f808386e38d2ffdcb56f82a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:28.468789 kubelet[2796]: E0813 01:46:28.468779 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bf783dab922b11713d5c08122f2927636feeb5c7f808386e38d2ffdcb56f82a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-djvw6" Aug 13 01:46:28.469081 kubelet[2796]: E0813 01:46:28.468804 2796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3bf783dab922b11713d5c08122f2927636feeb5c7f808386e38d2ffdcb56f82a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-djvw6" Aug 13 01:46:28.469081 kubelet[2796]: E0813 01:46:28.468855 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-djvw6_kube-system(981696e3-42b0-4ae8-b44b-fa439a03a402)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-djvw6_kube-system(981696e3-42b0-4ae8-b44b-fa439a03a402)\\\": rpc error: code = 
Unknown desc = failed to setup network for sandbox \\\"3bf783dab922b11713d5c08122f2927636feeb5c7f808386e38d2ffdcb56f82a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-djvw6" podUID="981696e3-42b0-4ae8-b44b-fa439a03a402" Aug 13 01:46:28.470626 systemd[1]: run-netns-cni\x2ddef019d3\x2dc64a\x2dbca0\x2d3625\x2d095daed34d52.mount: Deactivated successfully. Aug 13 01:46:31.843173 kubelet[2796]: I0813 01:46:31.843137 2796 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:46:31.843173 kubelet[2796]: I0813 01:46:31.843181 2796 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:46:31.845674 kubelet[2796]: I0813 01:46:31.845627 2796 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:46:31.860228 kubelet[2796]: I0813 01:46:31.860185 2796 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:46:31.860529 kubelet[2796]: I0813 01:46:31.860477 2796 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/goldmane-58fd7646b9-jzjd4","calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7","kube-system/coredns-7c65d6cfc9-6vrr8","kube-system/coredns-7c65d6cfc9-djvw6","calico-system/csi-node-driver-bk2p6","calico-system/calico-node-8j6cb","tigera-operator/tigera-operator-5bf8dfcb4-v8hlv","calico-system/calico-typha-bf6ccb678-cdfdr","kube-system/kube-controller-manager-172-232-7-32","kube-system/kube-proxy-dmp9l","kube-system/kube-apiserver-172-232-7-32","kube-system/kube-scheduler-172-232-7-32"] Aug 13 01:46:31.869452 kubelet[2796]: I0813 01:46:31.869414 2796 eviction_manager.go:616] "Eviction manager: pod is evicted successfully" pod="calico-system/goldmane-58fd7646b9-jzjd4" Aug 13 01:46:31.869452 kubelet[2796]: I0813 01:46:31.869450 2796 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-system/goldmane-58fd7646b9-jzjd4"] Aug 13 01:46:31.988312 kubelet[2796]: I0813 01:46:31.988244 2796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44880a0b-db2b-45f1-8f5f-5e98e509b622-goldmane-ca-bundle\") pod \"44880a0b-db2b-45f1-8f5f-5e98e509b622\" (UID: \"44880a0b-db2b-45f1-8f5f-5e98e509b622\") " Aug 13 01:46:31.988312 kubelet[2796]: I0813 01:46:31.988298 2796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/44880a0b-db2b-45f1-8f5f-5e98e509b622-goldmane-key-pair\") pod \"44880a0b-db2b-45f1-8f5f-5e98e509b622\" (UID: \"44880a0b-db2b-45f1-8f5f-5e98e509b622\") " Aug 13 01:46:31.988312 kubelet[2796]: I0813 01:46:31.988329 2796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lttnn\" (UniqueName: \"kubernetes.io/projected/44880a0b-db2b-45f1-8f5f-5e98e509b622-kube-api-access-lttnn\") pod \"44880a0b-db2b-45f1-8f5f-5e98e509b622\" (UID: \"44880a0b-db2b-45f1-8f5f-5e98e509b622\") " Aug 13 01:46:31.988578 kubelet[2796]: I0813 01:46:31.988350 2796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44880a0b-db2b-45f1-8f5f-5e98e509b622-config\") pod \"44880a0b-db2b-45f1-8f5f-5e98e509b622\" (UID: 
\"44880a0b-db2b-45f1-8f5f-5e98e509b622\") " Aug 13 01:46:31.990035 kubelet[2796]: I0813 01:46:31.989909 2796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44880a0b-db2b-45f1-8f5f-5e98e509b622-config" (OuterVolumeSpecName: "config") pod "44880a0b-db2b-45f1-8f5f-5e98e509b622" (UID: "44880a0b-db2b-45f1-8f5f-5e98e509b622"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 01:46:31.990235 kubelet[2796]: I0813 01:46:31.990216 2796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/44880a0b-db2b-45f1-8f5f-5e98e509b622-goldmane-ca-bundle" (OuterVolumeSpecName: "goldmane-ca-bundle") pod "44880a0b-db2b-45f1-8f5f-5e98e509b622" (UID: "44880a0b-db2b-45f1-8f5f-5e98e509b622"). InnerVolumeSpecName "goldmane-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Aug 13 01:46:31.993850 kubelet[2796]: I0813 01:46:31.993816 2796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/44880a0b-db2b-45f1-8f5f-5e98e509b622-goldmane-key-pair" (OuterVolumeSpecName: "goldmane-key-pair") pod "44880a0b-db2b-45f1-8f5f-5e98e509b622" (UID: "44880a0b-db2b-45f1-8f5f-5e98e509b622"). InnerVolumeSpecName "goldmane-key-pair". PluginName "kubernetes.io/secret", VolumeGidValue "" Aug 13 01:46:31.994197 kubelet[2796]: I0813 01:46:31.994166 2796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44880a0b-db2b-45f1-8f5f-5e98e509b622-kube-api-access-lttnn" (OuterVolumeSpecName: "kube-api-access-lttnn") pod "44880a0b-db2b-45f1-8f5f-5e98e509b622" (UID: "44880a0b-db2b-45f1-8f5f-5e98e509b622"). InnerVolumeSpecName "kube-api-access-lttnn". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 01:46:31.995666 systemd[1]: var-lib-kubelet-pods-44880a0b\x2ddb2b\x2d45f1\x2d8f5f\x2d5e98e509b622-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dlttnn.mount: Deactivated successfully. Aug 13 01:46:31.995788 systemd[1]: var-lib-kubelet-pods-44880a0b\x2ddb2b\x2d45f1\x2d8f5f\x2d5e98e509b622-volumes-kubernetes.io\x7esecret-goldmane\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 01:46:32.088640 kubelet[2796]: I0813 01:46:32.088591 2796 reconciler_common.go:293] "Volume detached for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/44880a0b-db2b-45f1-8f5f-5e98e509b622-goldmane-ca-bundle\") on node \"172-232-7-32\" DevicePath \"\"" Aug 13 01:46:32.088640 kubelet[2796]: I0813 01:46:32.088634 2796 reconciler_common.go:293] "Volume detached for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/44880a0b-db2b-45f1-8f5f-5e98e509b622-goldmane-key-pair\") on node \"172-232-7-32\" DevicePath \"\"" Aug 13 01:46:32.088640 kubelet[2796]: I0813 01:46:32.088667 2796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lttnn\" (UniqueName: \"kubernetes.io/projected/44880a0b-db2b-45f1-8f5f-5e98e509b622-kube-api-access-lttnn\") on node \"172-232-7-32\" DevicePath \"\"" Aug 13 01:46:32.088923 kubelet[2796]: I0813 01:46:32.088681 2796 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/44880a0b-db2b-45f1-8f5f-5e98e509b622-config\") on node \"172-232-7-32\" DevicePath \"\"" Aug 13 01:46:32.799495 systemd[1]: Removed slice kubepods-besteffort-pod44880a0b_db2b_45f1_8f5f_5e98e509b622.slice - libcontainer container kubepods-besteffort-pod44880a0b_db2b_45f1_8f5f_5e98e509b622.slice. 
Aug 13 01:46:32.869982 kubelet[2796]: I0813 01:46:32.869915 2796 eviction_manager.go:447] "Eviction manager: pods successfully cleaned up" pods=["calico-system/goldmane-58fd7646b9-jzjd4"] Aug 13 01:46:36.399987 kubelet[2796]: E0813 01:46:36.399923 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\"\"" pod="calico-system/calico-node-8j6cb" podUID="416d9de4-5101-44c9-b974-0fedf790aa67" Aug 13 01:46:39.398372 kubelet[2796]: E0813 01:46:39.398313 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:46:39.399974 containerd[1581]: time="2025-08-13T01:46:39.399033081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bk2p6,Uid:0e8898c7-a3f5-4010-bb1f-d756673c29b2,Namespace:calico-system,Attempt:0,}" Aug 13 01:46:39.460295 containerd[1581]: time="2025-08-13T01:46:39.460201250Z" level=error msg="Failed to destroy network for sandbox \"720aa79b82def2934e9aecb56b473baf5abd3ffa383255fad6073b004850380e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:39.462874 systemd[1]: run-netns-cni\x2d338335e5\x2d264a\x2d0174\x2d0b36\x2d9033900f3e97.mount: Deactivated successfully. Aug 13 01:46:39.465053 containerd[1581]: time="2025-08-13T01:46:39.465011750Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bk2p6,Uid:0e8898c7-a3f5-4010-bb1f-d756673c29b2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"720aa79b82def2934e9aecb56b473baf5abd3ffa383255fad6073b004850380e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:39.465449 kubelet[2796]: E0813 01:46:39.465296 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"720aa79b82def2934e9aecb56b473baf5abd3ffa383255fad6073b004850380e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:39.465449 kubelet[2796]: E0813 01:46:39.465395 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"720aa79b82def2934e9aecb56b473baf5abd3ffa383255fad6073b004850380e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bk2p6" Aug 13 01:46:39.465449 kubelet[2796]: E0813 01:46:39.465420 2796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"720aa79b82def2934e9aecb56b473baf5abd3ffa383255fad6073b004850380e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bk2p6" Aug 
13 01:46:39.465633 kubelet[2796]: E0813 01:46:39.465496 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bk2p6_calico-system(0e8898c7-a3f5-4010-bb1f-d756673c29b2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bk2p6_calico-system(0e8898c7-a3f5-4010-bb1f-d756673c29b2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"720aa79b82def2934e9aecb56b473baf5abd3ffa383255fad6073b004850380e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bk2p6" podUID="0e8898c7-a3f5-4010-bb1f-d756673c29b2" Aug 13 01:46:40.397997 kubelet[2796]: E0813 01:46:40.397933 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:46:40.399506 containerd[1581]: time="2025-08-13T01:46:40.399173455Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6vrr8,Uid:cbf6d4b0-f3bc-4a92-9977-6d91de60b65f,Namespace:kube-system,Attempt:0,}" Aug 13 01:46:40.399506 containerd[1581]: time="2025-08-13T01:46:40.399274924Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d5dd9ff6-b6gw7,Uid:4720bedc-4719-4a57-b2ff-e5b21f7acb7f,Namespace:calico-system,Attempt:0,}" Aug 13 01:46:40.479744 containerd[1581]: time="2025-08-13T01:46:40.479638819Z" level=error msg="Failed to destroy network for sandbox \"e0b33df430f03f81f11fdcbf76889a4cad0373c8287be5df949f9e8cedf87f62\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:40.483452 containerd[1581]: time="2025-08-13T01:46:40.483413271Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d5dd9ff6-b6gw7,Uid:4720bedc-4719-4a57-b2ff-e5b21f7acb7f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0b33df430f03f81f11fdcbf76889a4cad0373c8287be5df949f9e8cedf87f62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:40.484518 kubelet[2796]: E0813 01:46:40.483950 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0b33df430f03f81f11fdcbf76889a4cad0373c8287be5df949f9e8cedf87f62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:40.484518 kubelet[2796]: E0813 01:46:40.484038 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0b33df430f03f81f11fdcbf76889a4cad0373c8287be5df949f9e8cedf87f62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7" Aug 13 01:46:40.484518 kubelet[2796]: E0813 01:46:40.484070 2796 
kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e0b33df430f03f81f11fdcbf76889a4cad0373c8287be5df949f9e8cedf87f62\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7" Aug 13 01:46:40.484518 kubelet[2796]: E0813 01:46:40.484125 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86d5dd9ff6-b6gw7_calico-system(4720bedc-4719-4a57-b2ff-e5b21f7acb7f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86d5dd9ff6-b6gw7_calico-system(4720bedc-4719-4a57-b2ff-e5b21f7acb7f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e0b33df430f03f81f11fdcbf76889a4cad0373c8287be5df949f9e8cedf87f62\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7" podUID="4720bedc-4719-4a57-b2ff-e5b21f7acb7f" Aug 13 01:46:40.486177 systemd[1]: run-netns-cni\x2d1215fa6c\x2dfab8\x2d3a6f\x2d9530\x2d4e01812c38cf.mount: Deactivated successfully. Aug 13 01:46:40.491484 containerd[1581]: time="2025-08-13T01:46:40.491432287Z" level=error msg="Failed to destroy network for sandbox \"23f6118898836bf7cdecb2cf7f668f15ea7483b6b3ba1f8f3a7068616e1dff61\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:40.494395 systemd[1]: run-netns-cni\x2dceed82ac\x2dfdda\x2d42a4\x2d4a85\x2da4bc9a82a825.mount: Deactivated successfully. 
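Every sandbox failure above bottoms out in the same stat() call: the Calico CNI plugin will not wire up a pod network until /var/lib/calico/nodename exists, and that file is only written once the calico/node container starts, which it cannot do while its image pull is backing off (the ImagePullBackOff for calico-node-8j6cb earlier in the log). A minimal Go sketch of that readiness check, assuming a hypothetical calicoReady helper rather than the plugin's actual code:

package main

import (
	"fmt"
	"os"
)

// nodenameFile is written by the calico/node container at startup; the CNI
// errors in this log come from stat'ing this exact path before every ADD.
const nodenameFile = "/var/lib/calico/nodename"

// calicoReady is a hypothetical helper mirroring the check the error message
// describes: if the file is missing, pod network setup cannot proceed.
func calicoReady() error {
	if _, err := os.Stat(nodenameFile); err != nil {
		return fmt.Errorf("calico/node not ready: %w", err)
	}
	return nil
}

func main() {
	if err := calicoReady(); err != nil {
		// e.g. "calico/node not ready: stat /var/lib/calico/nodename: no such file or directory"
		fmt.Println(err)
		return
	}
	fmt.Println("found", nodenameFile, "- pod networking can be set up")
}

Until that stat succeeds, every RunPodSandbox attempt for csi-node-driver, coredns and calico-kube-controllers keeps failing exactly as above.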
Aug 13 01:46:40.495856 containerd[1581]: time="2025-08-13T01:46:40.495823189Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6vrr8,Uid:cbf6d4b0-f3bc-4a92-9977-6d91de60b65f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"23f6118898836bf7cdecb2cf7f668f15ea7483b6b3ba1f8f3a7068616e1dff61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:40.496080 kubelet[2796]: E0813 01:46:40.496043 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23f6118898836bf7cdecb2cf7f668f15ea7483b6b3ba1f8f3a7068616e1dff61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:40.496167 kubelet[2796]: E0813 01:46:40.496109 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23f6118898836bf7cdecb2cf7f668f15ea7483b6b3ba1f8f3a7068616e1dff61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-6vrr8" Aug 13 01:46:40.496167 kubelet[2796]: E0813 01:46:40.496132 2796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"23f6118898836bf7cdecb2cf7f668f15ea7483b6b3ba1f8f3a7068616e1dff61\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-6vrr8" Aug 13 01:46:40.496335 kubelet[2796]: E0813 01:46:40.496181 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-6vrr8_kube-system(cbf6d4b0-f3bc-4a92-9977-6d91de60b65f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-6vrr8_kube-system(cbf6d4b0-f3bc-4a92-9977-6d91de60b65f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"23f6118898836bf7cdecb2cf7f668f15ea7483b6b3ba1f8f3a7068616e1dff61\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-6vrr8" podUID="cbf6d4b0-f3bc-4a92-9977-6d91de60b65f" Aug 13 01:46:41.397685 kubelet[2796]: E0813 01:46:41.397575 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:46:41.399674 containerd[1581]: time="2025-08-13T01:46:41.398868707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-djvw6,Uid:981696e3-42b0-4ae8-b44b-fa439a03a402,Namespace:kube-system,Attempt:0,}" Aug 13 01:46:41.461864 containerd[1581]: time="2025-08-13T01:46:41.461787365Z" level=error msg="Failed to destroy network for sandbox \"3d6bbdad4a60c13de3785b3c3b2a5e9c8eb3f9976f540117f24a0b54f9e6a3a0\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:41.464281 systemd[1]: run-netns-cni\x2d0dbdb9ba\x2d7b34\x2ddadc\x2d36f7\x2dc1a9e21dc7fe.mount: Deactivated successfully. Aug 13 01:46:41.466487 containerd[1581]: time="2025-08-13T01:46:41.465858758Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-djvw6,Uid:981696e3-42b0-4ae8-b44b-fa439a03a402,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d6bbdad4a60c13de3785b3c3b2a5e9c8eb3f9976f540117f24a0b54f9e6a3a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:41.466822 kubelet[2796]: E0813 01:46:41.466768 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d6bbdad4a60c13de3785b3c3b2a5e9c8eb3f9976f540117f24a0b54f9e6a3a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:41.466912 kubelet[2796]: E0813 01:46:41.466895 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d6bbdad4a60c13de3785b3c3b2a5e9c8eb3f9976f540117f24a0b54f9e6a3a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-djvw6" Aug 13 01:46:41.466952 kubelet[2796]: E0813 01:46:41.466919 2796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3d6bbdad4a60c13de3785b3c3b2a5e9c8eb3f9976f540117f24a0b54f9e6a3a0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-djvw6" Aug 13 01:46:41.467004 kubelet[2796]: E0813 01:46:41.466972 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-djvw6_kube-system(981696e3-42b0-4ae8-b44b-fa439a03a402)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-djvw6_kube-system(981696e3-42b0-4ae8-b44b-fa439a03a402)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3d6bbdad4a60c13de3785b3c3b2a5e9c8eb3f9976f540117f24a0b54f9e6a3a0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-djvw6" podUID="981696e3-42b0-4ae8-b44b-fa439a03a402" Aug 13 01:46:42.901615 kubelet[2796]: I0813 01:46:42.901555 2796 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:46:42.901615 kubelet[2796]: I0813 01:46:42.901628 2796 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:46:42.903624 kubelet[2796]: I0813 01:46:42.903570 2796 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:46:42.923672 kubelet[2796]: I0813 01:46:42.923604 2796 
eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:46:42.923836 kubelet[2796]: I0813 01:46:42.923791 2796 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7c65d6cfc9-djvw6","calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7","kube-system/coredns-7c65d6cfc9-6vrr8","calico-system/calico-node-8j6cb","calico-system/csi-node-driver-bk2p6","tigera-operator/tigera-operator-5bf8dfcb4-v8hlv","calico-system/calico-typha-bf6ccb678-cdfdr","kube-system/kube-controller-manager-172-232-7-32","kube-system/kube-proxy-dmp9l","kube-system/kube-apiserver-172-232-7-32","kube-system/kube-scheduler-172-232-7-32"] Aug 13 01:46:42.923908 kubelet[2796]: E0813 01:46:42.923861 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-djvw6" Aug 13 01:46:42.923908 kubelet[2796]: E0813 01:46:42.923874 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7" Aug 13 01:46:42.923908 kubelet[2796]: E0813 01:46:42.923882 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-6vrr8" Aug 13 01:46:42.923908 kubelet[2796]: E0813 01:46:42.923891 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-8j6cb" Aug 13 01:46:42.924012 kubelet[2796]: E0813 01:46:42.923920 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-bk2p6" Aug 13 01:46:42.925294 containerd[1581]: time="2025-08-13T01:46:42.925250727Z" level=info msg="StopContainer for \"50836e9cf80c4b2606fa26bd609206eec6cc5126076a033560829b9b0b7485ac\" with timeout 2 (s)" Aug 13 01:46:42.926013 containerd[1581]: time="2025-08-13T01:46:42.925993860Z" level=info msg="Stop container \"50836e9cf80c4b2606fa26bd609206eec6cc5126076a033560829b9b0b7485ac\" with signal terminated" Aug 13 01:46:43.005252 systemd[1]: cri-containerd-50836e9cf80c4b2606fa26bd609206eec6cc5126076a033560829b9b0b7485ac.scope: Deactivated successfully. Aug 13 01:46:43.005619 systemd[1]: cri-containerd-50836e9cf80c4b2606fa26bd609206eec6cc5126076a033560829b9b0b7485ac.scope: Consumed 5.478s CPU time, 81.4M memory peak. Aug 13 01:46:43.008423 containerd[1581]: time="2025-08-13T01:46:43.008383068Z" level=info msg="TaskExit event in podsandbox handler container_id:\"50836e9cf80c4b2606fa26bd609206eec6cc5126076a033560829b9b0b7485ac\" id:\"50836e9cf80c4b2606fa26bd609206eec6cc5126076a033560829b9b0b7485ac\" pid:3118 exited_at:{seconds:1755049603 nanos:7901812}" Aug 13 01:46:43.008517 containerd[1581]: time="2025-08-13T01:46:43.008383378Z" level=info msg="received exit event container_id:\"50836e9cf80c4b2606fa26bd609206eec6cc5126076a033560829b9b0b7485ac\" id:\"50836e9cf80c4b2606fa26bd609206eec6cc5126076a033560829b9b0b7485ac\" pid:3118 exited_at:{seconds:1755049603 nanos:7901812}" Aug 13 01:46:43.036302 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-50836e9cf80c4b2606fa26bd609206eec6cc5126076a033560829b9b0b7485ac-rootfs.mount: Deactivated successfully. 
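The eviction sequence above follows a simple shape: rank the candidate pods for the starved resource (ephemeral-storage), skip anything marked critical, and evict the first pod left over, here tigera-operator-5bf8dfcb4-v8hlv, whose container is then stopped with a 2-second grace period (signal "terminated", as logged). A simplified, hypothetical Go model of that selection; pickVictim and the pod struct are illustrative, not kubelet source:

package main

import "fmt"

// pod is a deliberately simplified stand-in for what the eviction manager
// ranks; the names below are the first entries of the ranked list in the log.
type pod struct {
	name     string
	critical bool // critical pods the kubelet refuses to evict
}

// pickVictim walks the ranked list in order and returns the first non-critical
// pod, mirroring the run of "cannot evict a critical pod" lines above.
func pickVictim(ranked []pod) (pod, bool) {
	for _, p := range ranked {
		if p.critical {
			fmt.Println("cannot evict a critical pod:", p.name)
			continue
		}
		return p, true
	}
	return pod{}, false
}

func main() {
	ranked := []pod{
		{"kube-system/coredns-7c65d6cfc9-djvw6", true},
		{"calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7", true},
		{"kube-system/coredns-7c65d6cfc9-6vrr8", true},
		{"calico-system/calico-node-8j6cb", true},
		{"calico-system/csi-node-driver-bk2p6", true},
		{"tigera-operator/tigera-operator-5bf8dfcb4-v8hlv", false},
	}
	if victim, ok := pickVictim(ranked); ok {
		fmt.Println("evicting:", victim.name)
	}
}

Because the operator's ReplicaSet immediately creates replacements, each new tigera-operator pod is then rejected at admission against the node's DiskPressure condition, which is what produces the long run of "Pod admission denied" entries that follows.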
Aug 13 01:46:43.043682 containerd[1581]: time="2025-08-13T01:46:43.043534746Z" level=info msg="StopContainer for \"50836e9cf80c4b2606fa26bd609206eec6cc5126076a033560829b9b0b7485ac\" returns successfully" Aug 13 01:46:43.044553 containerd[1581]: time="2025-08-13T01:46:43.044523138Z" level=info msg="StopPodSandbox for \"19143f3abd72677669f85e49a88a014075df99b822463c2bba0c11bc76ceded2\"" Aug 13 01:46:43.044609 containerd[1581]: time="2025-08-13T01:46:43.044583908Z" level=info msg="Container to stop \"50836e9cf80c4b2606fa26bd609206eec6cc5126076a033560829b9b0b7485ac\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 01:46:43.052475 systemd[1]: cri-containerd-19143f3abd72677669f85e49a88a014075df99b822463c2bba0c11bc76ceded2.scope: Deactivated successfully. Aug 13 01:46:43.054262 containerd[1581]: time="2025-08-13T01:46:43.054169385Z" level=info msg="TaskExit event in podsandbox handler container_id:\"19143f3abd72677669f85e49a88a014075df99b822463c2bba0c11bc76ceded2\" id:\"19143f3abd72677669f85e49a88a014075df99b822463c2bba0c11bc76ceded2\" pid:2906 exit_status:137 exited_at:{seconds:1755049603 nanos:53555150}" Aug 13 01:46:43.086563 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-19143f3abd72677669f85e49a88a014075df99b822463c2bba0c11bc76ceded2-rootfs.mount: Deactivated successfully. Aug 13 01:46:43.090754 containerd[1581]: time="2025-08-13T01:46:43.090713083Z" level=info msg="shim disconnected" id=19143f3abd72677669f85e49a88a014075df99b822463c2bba0c11bc76ceded2 namespace=k8s.io Aug 13 01:46:43.090754 containerd[1581]: time="2025-08-13T01:46:43.090752812Z" level=warning msg="cleaning up after shim disconnected" id=19143f3abd72677669f85e49a88a014075df99b822463c2bba0c11bc76ceded2 namespace=k8s.io Aug 13 01:46:43.090877 containerd[1581]: time="2025-08-13T01:46:43.090762572Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 01:46:43.106665 containerd[1581]: time="2025-08-13T01:46:43.104911171Z" level=info msg="received exit event sandbox_id:\"19143f3abd72677669f85e49a88a014075df99b822463c2bba0c11bc76ceded2\" exit_status:137 exited_at:{seconds:1755049603 nanos:53555150}" Aug 13 01:46:43.107086 containerd[1581]: time="2025-08-13T01:46:43.106924463Z" level=info msg="TearDown network for sandbox \"19143f3abd72677669f85e49a88a014075df99b822463c2bba0c11bc76ceded2\" successfully" Aug 13 01:46:43.107086 containerd[1581]: time="2025-08-13T01:46:43.106946813Z" level=info msg="StopPodSandbox for \"19143f3abd72677669f85e49a88a014075df99b822463c2bba0c11bc76ceded2\" returns successfully" Aug 13 01:46:43.107626 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-19143f3abd72677669f85e49a88a014075df99b822463c2bba0c11bc76ceded2-shm.mount: Deactivated successfully. Aug 13 01:46:43.113676 kubelet[2796]: I0813 01:46:43.113618 2796 eviction_manager.go:616] "Eviction manager: pod is evicted successfully" pod="tigera-operator/tigera-operator-5bf8dfcb4-v8hlv" Aug 13 01:46:43.113811 kubelet[2796]: I0813 01:46:43.113790 2796 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["tigera-operator/tigera-operator-5bf8dfcb4-v8hlv"] Aug 13 01:46:43.142344 kubelet[2796]: I0813 01:46:43.142223 2796 kubelet.go:2306] "Pod admission denied" podUID="6c695840-d54c-47b9-a2e1-c4e17e8032a1" pod="tigera-operator/tigera-operator-5bf8dfcb4-5w86h" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:43.162260 kubelet[2796]: I0813 01:46:43.162066 2796 kubelet.go:2306] "Pod admission denied" podUID="cc4f394d-b347-4ad4-b305-be69b5cb9f5b" pod="tigera-operator/tigera-operator-5bf8dfcb4-np9w4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:43.189261 kubelet[2796]: I0813 01:46:43.188862 2796 kubelet.go:2306] "Pod admission denied" podUID="d03f359d-48b8-4dd0-bf35-2a85bcff1ae2" pod="tigera-operator/tigera-operator-5bf8dfcb4-wpr89" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:43.216692 kubelet[2796]: I0813 01:46:43.215795 2796 kubelet.go:2306] "Pod admission denied" podUID="e1cde7e1-17f5-4940-bdeb-eef4d2b8051d" pod="tigera-operator/tigera-operator-5bf8dfcb4-45br4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:43.243042 kubelet[2796]: I0813 01:46:43.242620 2796 kubelet.go:2306] "Pod admission denied" podUID="bef468ef-a5f7-4797-b392-9f09c185b1fb" pod="tigera-operator/tigera-operator-5bf8dfcb4-tn88f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:43.260980 kubelet[2796]: I0813 01:46:43.260851 2796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9cdf404a-9180-4dce-bb4f-bb6e1151a9fe-var-lib-calico\") pod \"9cdf404a-9180-4dce-bb4f-bb6e1151a9fe\" (UID: \"9cdf404a-9180-4dce-bb4f-bb6e1151a9fe\") " Aug 13 01:46:43.261734 kubelet[2796]: I0813 01:46:43.261431 2796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9cdf404a-9180-4dce-bb4f-bb6e1151a9fe-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "9cdf404a-9180-4dce-bb4f-bb6e1151a9fe" (UID: "9cdf404a-9180-4dce-bb4f-bb6e1151a9fe"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGidValue "" Aug 13 01:46:43.261987 kubelet[2796]: I0813 01:46:43.261329 2796 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrczw\" (UniqueName: \"kubernetes.io/projected/9cdf404a-9180-4dce-bb4f-bb6e1151a9fe-kube-api-access-zrczw\") pod \"9cdf404a-9180-4dce-bb4f-bb6e1151a9fe\" (UID: \"9cdf404a-9180-4dce-bb4f-bb6e1151a9fe\") " Aug 13 01:46:43.261987 kubelet[2796]: I0813 01:46:43.261961 2796 reconciler_common.go:293] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9cdf404a-9180-4dce-bb4f-bb6e1151a9fe-var-lib-calico\") on node \"172-232-7-32\" DevicePath \"\"" Aug 13 01:46:43.269682 kubelet[2796]: I0813 01:46:43.268493 2796 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9cdf404a-9180-4dce-bb4f-bb6e1151a9fe-kube-api-access-zrczw" (OuterVolumeSpecName: "kube-api-access-zrczw") pod "9cdf404a-9180-4dce-bb4f-bb6e1151a9fe" (UID: "9cdf404a-9180-4dce-bb4f-bb6e1151a9fe"). InnerVolumeSpecName "kube-api-access-zrczw". PluginName "kubernetes.io/projected", VolumeGidValue "" Aug 13 01:46:43.271093 systemd[1]: var-lib-kubelet-pods-9cdf404a\x2d9180\x2d4dce\x2dbb4f\x2dbb6e1151a9fe-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzrczw.mount: Deactivated successfully. Aug 13 01:46:43.271221 kubelet[2796]: I0813 01:46:43.271084 2796 kubelet.go:2306] "Pod admission denied" podUID="a1450fbd-6e83-49af-8ccc-7bb2eb51ff0c" pod="tigera-operator/tigera-operator-5bf8dfcb4-rhwb9" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:43.294602 kubelet[2796]: I0813 01:46:43.294534 2796 kubelet.go:2306] "Pod admission denied" podUID="57bdd3de-7646-4d66-9592-6c5e744a135c" pod="tigera-operator/tigera-operator-5bf8dfcb4-22fqt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:43.325429 kubelet[2796]: I0813 01:46:43.325369 2796 kubelet.go:2306] "Pod admission denied" podUID="a142c939-a80d-44e2-b58a-2235e0cda34e" pod="tigera-operator/tigera-operator-5bf8dfcb4-tnjjf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:43.352534 kubelet[2796]: I0813 01:46:43.351464 2796 kubelet.go:2306] "Pod admission denied" podUID="f9ce1da6-8dd1-4d91-9ec7-54dabc833776" pod="tigera-operator/tigera-operator-5bf8dfcb4-5dw49" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:43.362372 kubelet[2796]: I0813 01:46:43.362263 2796 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrczw\" (UniqueName: \"kubernetes.io/projected/9cdf404a-9180-4dce-bb4f-bb6e1151a9fe-kube-api-access-zrczw\") on node \"172-232-7-32\" DevicePath \"\"" Aug 13 01:46:43.406346 systemd[1]: Removed slice kubepods-besteffort-pod9cdf404a_9180_4dce_bb4f_bb6e1151a9fe.slice - libcontainer container kubepods-besteffort-pod9cdf404a_9180_4dce_bb4f_bb6e1151a9fe.slice. Aug 13 01:46:43.406463 systemd[1]: kubepods-besteffort-pod9cdf404a_9180_4dce_bb4f_bb6e1151a9fe.slice: Consumed 5.537s CPU time, 81.6M memory peak. Aug 13 01:46:43.488203 kubelet[2796]: I0813 01:46:43.488116 2796 kubelet.go:2306] "Pod admission denied" podUID="7652f472-386c-45d7-b491-42b2fb8b0e47" pod="tigera-operator/tigera-operator-5bf8dfcb4-c2v66" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:43.636578 kubelet[2796]: I0813 01:46:43.636516 2796 kubelet.go:2306] "Pod admission denied" podUID="8df5cb33-f655-4597-9c80-c5a713e5cf12" pod="tigera-operator/tigera-operator-5bf8dfcb4-cmg6z" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:43.787035 kubelet[2796]: I0813 01:46:43.786851 2796 kubelet.go:2306] "Pod admission denied" podUID="d2681ab6-7255-44a9-88bb-af9a0f6b3cca" pod="tigera-operator/tigera-operator-5bf8dfcb4-f68xc" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:43.819698 kubelet[2796]: I0813 01:46:43.819581 2796 scope.go:117] "RemoveContainer" containerID="50836e9cf80c4b2606fa26bd609206eec6cc5126076a033560829b9b0b7485ac" Aug 13 01:46:43.825078 containerd[1581]: time="2025-08-13T01:46:43.825030703Z" level=info msg="RemoveContainer for \"50836e9cf80c4b2606fa26bd609206eec6cc5126076a033560829b9b0b7485ac\"" Aug 13 01:46:43.832561 containerd[1581]: time="2025-08-13T01:46:43.832496510Z" level=info msg="RemoveContainer for \"50836e9cf80c4b2606fa26bd609206eec6cc5126076a033560829b9b0b7485ac\" returns successfully" Aug 13 01:46:43.833008 kubelet[2796]: I0813 01:46:43.832829 2796 scope.go:117] "RemoveContainer" containerID="50836e9cf80c4b2606fa26bd609206eec6cc5126076a033560829b9b0b7485ac" Aug 13 01:46:43.833362 containerd[1581]: time="2025-08-13T01:46:43.833306382Z" level=error msg="ContainerStatus for \"50836e9cf80c4b2606fa26bd609206eec6cc5126076a033560829b9b0b7485ac\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"50836e9cf80c4b2606fa26bd609206eec6cc5126076a033560829b9b0b7485ac\": not found" Aug 13 01:46:43.833508 kubelet[2796]: E0813 01:46:43.833476 2796 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"50836e9cf80c4b2606fa26bd609206eec6cc5126076a033560829b9b0b7485ac\": not found" containerID="50836e9cf80c4b2606fa26bd609206eec6cc5126076a033560829b9b0b7485ac" Aug 13 01:46:43.833625 kubelet[2796]: I0813 01:46:43.833510 2796 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"50836e9cf80c4b2606fa26bd609206eec6cc5126076a033560829b9b0b7485ac"} err="failed to get container status \"50836e9cf80c4b2606fa26bd609206eec6cc5126076a033560829b9b0b7485ac\": rpc error: code = NotFound desc = an error occurred when try to find container \"50836e9cf80c4b2606fa26bd609206eec6cc5126076a033560829b9b0b7485ac\": not found" Aug 13 01:46:43.937352 kubelet[2796]: I0813 01:46:43.937276 2796 kubelet.go:2306] "Pod admission denied" podUID="3ece7cde-1683-446a-a038-39795256dbfe" pod="tigera-operator/tigera-operator-5bf8dfcb4-tj9b5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:44.088836 kubelet[2796]: I0813 01:46:44.088452 2796 kubelet.go:2306] "Pod admission denied" podUID="a96c02df-542f-4c6e-998e-2d1ea348c2d3" pod="tigera-operator/tigera-operator-5bf8dfcb4-fwtbt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:44.114913 kubelet[2796]: I0813 01:46:44.114803 2796 eviction_manager.go:447] "Eviction manager: pods successfully cleaned up" pods=["tigera-operator/tigera-operator-5bf8dfcb4-v8hlv"] Aug 13 01:46:44.237784 kubelet[2796]: I0813 01:46:44.237591 2796 kubelet.go:2306] "Pod admission denied" podUID="eed84fca-b19c-41e0-b938-b8b436efe010" pod="tigera-operator/tigera-operator-5bf8dfcb4-f7sq7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:44.388768 kubelet[2796]: I0813 01:46:44.388509 2796 kubelet.go:2306] "Pod admission denied" podUID="fae59cfd-fb9f-41c1-ae43-b893d090b201" pod="tigera-operator/tigera-operator-5bf8dfcb4-sgb8s" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:44.537396 kubelet[2796]: I0813 01:46:44.537324 2796 kubelet.go:2306] "Pod admission denied" podUID="685bb1be-905c-4371-bf60-c4bd37187f8a" pod="tigera-operator/tigera-operator-5bf8dfcb4-2qwzf" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:44.688789 kubelet[2796]: I0813 01:46:44.688486 2796 kubelet.go:2306] "Pod admission denied" podUID="dad83cb3-c47d-4649-adb9-0e186c833d82" pod="tigera-operator/tigera-operator-5bf8dfcb4-flzlz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:44.837256 kubelet[2796]: I0813 01:46:44.837140 2796 kubelet.go:2306] "Pod admission denied" podUID="4be2852e-9164-4ec3-9c6a-42b27f2c47c7" pod="tigera-operator/tigera-operator-5bf8dfcb4-hd5zm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:45.034012 containerd[1581]: time="2025-08-13T01:46:45.033957124Z" level=info msg="TaskExit event in podsandbox handler exit_status:137 exited_at:{seconds:1755049603 nanos:53555150}" Aug 13 01:46:45.087090 kubelet[2796]: I0813 01:46:45.086982 2796 kubelet.go:2306] "Pod admission denied" podUID="ca7079d9-1048-4ff7-ba53-9bc19e7d0c29" pod="tigera-operator/tigera-operator-5bf8dfcb4-fsfqt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:45.240417 kubelet[2796]: I0813 01:46:45.240345 2796 kubelet.go:2306] "Pod admission denied" podUID="cbff8bed-b3f4-45a8-863e-00521a0c3173" pod="tigera-operator/tigera-operator-5bf8dfcb4-q4m6v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:45.335877 kubelet[2796]: I0813 01:46:45.335419 2796 kubelet.go:2306] "Pod admission denied" podUID="875b2956-e079-44e8-8446-172d9cbabced" pod="tigera-operator/tigera-operator-5bf8dfcb4-kzsxx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:45.486917 kubelet[2796]: I0813 01:46:45.486854 2796 kubelet.go:2306] "Pod admission denied" podUID="d6bbdb93-3272-4187-8d11-cbd859187eb5" pod="tigera-operator/tigera-operator-5bf8dfcb4-mhdr9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:45.639755 kubelet[2796]: I0813 01:46:45.639540 2796 kubelet.go:2306] "Pod admission denied" podUID="981f866c-5160-4777-ab67-65ea8654236f" pod="tigera-operator/tigera-operator-5bf8dfcb4-smgcj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:45.787866 kubelet[2796]: I0813 01:46:45.787687 2796 kubelet.go:2306] "Pod admission denied" podUID="15700388-cf5b-4ad9-88b1-1e7de30d38ff" pod="tigera-operator/tigera-operator-5bf8dfcb4-sk26s" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:45.887954 kubelet[2796]: I0813 01:46:45.887899 2796 kubelet.go:2306] "Pod admission denied" podUID="4a59b44c-cce5-42f3-82f0-17fad32b2459" pod="tigera-operator/tigera-operator-5bf8dfcb4-dzppl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:45.938051 kubelet[2796]: I0813 01:46:45.937727 2796 kubelet.go:2306] "Pod admission denied" podUID="3110b418-2920-4eb3-997c-b4d133a36d34" pod="tigera-operator/tigera-operator-5bf8dfcb4-8vt7m" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:46.040245 kubelet[2796]: I0813 01:46:46.039750 2796 kubelet.go:2306] "Pod admission denied" podUID="7b59ba08-4871-46a3-b850-9bee6da1fbb7" pod="tigera-operator/tigera-operator-5bf8dfcb4-hgff5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:46.137565 kubelet[2796]: I0813 01:46:46.137466 2796 kubelet.go:2306] "Pod admission denied" podUID="71a689c8-ef43-451c-8393-5d14fa94ac53" pod="tigera-operator/tigera-operator-5bf8dfcb4-wbm5l" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:46.241676 kubelet[2796]: I0813 01:46:46.241476 2796 kubelet.go:2306] "Pod admission denied" podUID="b59db44b-16d4-430c-8ae4-e2ac2aba1663" pod="tigera-operator/tigera-operator-5bf8dfcb4-j2ms6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:46.338179 kubelet[2796]: I0813 01:46:46.338092 2796 kubelet.go:2306] "Pod admission denied" podUID="7c158cda-2797-457d-b4ae-11c80caa92ef" pod="tigera-operator/tigera-operator-5bf8dfcb4-p7vdf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:46.436764 kubelet[2796]: I0813 01:46:46.436691 2796 kubelet.go:2306] "Pod admission denied" podUID="ba7283dc-9f4d-4101-be43-5378a3635eec" pod="tigera-operator/tigera-operator-5bf8dfcb4-pl6rh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:46.640292 kubelet[2796]: I0813 01:46:46.640120 2796 kubelet.go:2306] "Pod admission denied" podUID="4900b6f4-e05c-4b21-90b9-1d3320821412" pod="tigera-operator/tigera-operator-5bf8dfcb4-vwb28" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:46.735974 kubelet[2796]: I0813 01:46:46.735921 2796 kubelet.go:2306] "Pod admission denied" podUID="8f137c2f-daf6-43c5-85be-ae3d62b9f8c7" pod="tigera-operator/tigera-operator-5bf8dfcb4-hnbn8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:46.843762 kubelet[2796]: I0813 01:46:46.842259 2796 kubelet.go:2306] "Pod admission denied" podUID="f4041646-dbdc-4f68-b5ab-40c1370290f3" pod="tigera-operator/tigera-operator-5bf8dfcb4-cjq67" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:46.939157 kubelet[2796]: I0813 01:46:46.938961 2796 kubelet.go:2306] "Pod admission denied" podUID="08dd1d90-0964-463a-b55d-f702463a60f5" pod="tigera-operator/tigera-operator-5bf8dfcb4-69c44" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:47.036365 kubelet[2796]: I0813 01:46:47.036308 2796 kubelet.go:2306] "Pod admission denied" podUID="c36dd251-f896-4db5-a320-2154439f7ff3" pod="tigera-operator/tigera-operator-5bf8dfcb4-bff5d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:47.136518 kubelet[2796]: I0813 01:46:47.136456 2796 kubelet.go:2306] "Pod admission denied" podUID="679d607e-863c-46b9-9587-180d0ac44589" pod="tigera-operator/tigera-operator-5bf8dfcb4-fwzmm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:47.237675 kubelet[2796]: I0813 01:46:47.237600 2796 kubelet.go:2306] "Pod admission denied" podUID="879755c1-a2fb-4a82-8312-5d971f2121a0" pod="tigera-operator/tigera-operator-5bf8dfcb4-srlgb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:47.438589 kubelet[2796]: I0813 01:46:47.438521 2796 kubelet.go:2306] "Pod admission denied" podUID="ce4441bc-9136-476e-a086-0f0daeb7c64a" pod="tigera-operator/tigera-operator-5bf8dfcb4-zkh8p" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:47.537399 kubelet[2796]: I0813 01:46:47.537244 2796 kubelet.go:2306] "Pod admission denied" podUID="ed8a1665-44b8-4db5-a763-a041dcede111" pod="tigera-operator/tigera-operator-5bf8dfcb4-8z4dd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:47.637220 kubelet[2796]: I0813 01:46:47.637165 2796 kubelet.go:2306] "Pod admission denied" podUID="97810354-169e-4177-b2d9-843a90ecaf59" pod="tigera-operator/tigera-operator-5bf8dfcb4-76skr" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:47.738681 kubelet[2796]: I0813 01:46:47.738607 2796 kubelet.go:2306] "Pod admission denied" podUID="aff5b32c-5022-47b8-9198-6b60d88e1ada" pod="tigera-operator/tigera-operator-5bf8dfcb4-m27j7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:47.788251 kubelet[2796]: I0813 01:46:47.787867 2796 kubelet.go:2306] "Pod admission denied" podUID="b4a4f040-e4a3-4285-9d7d-5c606597b9be" pod="tigera-operator/tigera-operator-5bf8dfcb4-pxhh6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:47.887611 kubelet[2796]: I0813 01:46:47.887530 2796 kubelet.go:2306] "Pod admission denied" podUID="1d9433bd-4ee6-4dd2-849c-aa52843c26bf" pod="tigera-operator/tigera-operator-5bf8dfcb4-864fr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:47.987731 kubelet[2796]: I0813 01:46:47.987676 2796 kubelet.go:2306] "Pod admission denied" podUID="48392b2c-0758-42d6-b24d-7e41cfa28cd3" pod="tigera-operator/tigera-operator-5bf8dfcb4-qpfsm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:48.035876 kubelet[2796]: I0813 01:46:48.035797 2796 kubelet.go:2306] "Pod admission denied" podUID="ba586158-1d88-4919-8e66-b8819dc9bd4b" pod="tigera-operator/tigera-operator-5bf8dfcb4-8vrfg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:48.139577 kubelet[2796]: I0813 01:46:48.137905 2796 kubelet.go:2306] "Pod admission denied" podUID="f4155f9d-2a52-4290-897a-92570289ff33" pod="tigera-operator/tigera-operator-5bf8dfcb4-lnm42" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:48.338921 kubelet[2796]: I0813 01:46:48.338858 2796 kubelet.go:2306] "Pod admission denied" podUID="a21841b9-912c-41a8-bb37-9f537a428228" pod="tigera-operator/tigera-operator-5bf8dfcb4-8k85z" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:48.438199 kubelet[2796]: I0813 01:46:48.437389 2796 kubelet.go:2306] "Pod admission denied" podUID="cfa4a65e-b8db-4243-a977-067cf0891b2b" pod="tigera-operator/tigera-operator-5bf8dfcb4-6fmbj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:48.538266 kubelet[2796]: I0813 01:46:48.538213 2796 kubelet.go:2306] "Pod admission denied" podUID="36e5cd04-f638-4b58-b319-277a7b45cad2" pod="tigera-operator/tigera-operator-5bf8dfcb4-gn6zs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:48.737127 kubelet[2796]: I0813 01:46:48.737063 2796 kubelet.go:2306] "Pod admission denied" podUID="24b18256-216b-4106-aeac-cef11a65d5e3" pod="tigera-operator/tigera-operator-5bf8dfcb4-jsm98" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:48.838673 kubelet[2796]: I0813 01:46:48.838500 2796 kubelet.go:2306] "Pod admission denied" podUID="40e9d867-ec2f-448c-991c-ab2de508fdd9" pod="tigera-operator/tigera-operator-5bf8dfcb4-vg8c5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:48.887110 kubelet[2796]: I0813 01:46:48.887031 2796 kubelet.go:2306] "Pod admission denied" podUID="734863a1-c799-4f57-a423-12ea779ffb9f" pod="tigera-operator/tigera-operator-5bf8dfcb4-wjmj6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:48.987299 kubelet[2796]: I0813 01:46:48.987127 2796 kubelet.go:2306] "Pod admission denied" podUID="8c37231d-0199-4691-8a4c-56b735362cc4" pod="tigera-operator/tigera-operator-5bf8dfcb4-vzbt9" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:49.092331 kubelet[2796]: I0813 01:46:49.092201 2796 kubelet.go:2306] "Pod admission denied" podUID="512ac10b-5b93-4ff3-96a1-503976e705c1" pod="tigera-operator/tigera-operator-5bf8dfcb4-2vdm2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:49.140430 kubelet[2796]: I0813 01:46:49.140364 2796 kubelet.go:2306] "Pod admission denied" podUID="e142f818-55f4-44de-a013-ddc5e8604423" pod="tigera-operator/tigera-operator-5bf8dfcb4-lzb4v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:49.238702 kubelet[2796]: I0813 01:46:49.238366 2796 kubelet.go:2306] "Pod admission denied" podUID="9a82ba73-b07f-45d9-b041-25cd4ba48b08" pod="tigera-operator/tigera-operator-5bf8dfcb4-xdxzc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:49.336212 kubelet[2796]: I0813 01:46:49.336149 2796 kubelet.go:2306] "Pod admission denied" podUID="65698418-c9c7-42ec-b468-4836d670156a" pod="tigera-operator/tigera-operator-5bf8dfcb4-zb6n7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:49.392669 kubelet[2796]: I0813 01:46:49.391843 2796 kubelet.go:2306] "Pod admission denied" podUID="69d2f795-f7f4-4283-b81b-ea1094ff6bc0" pod="tigera-operator/tigera-operator-5bf8dfcb4-7sh4m" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:49.403224 containerd[1581]: time="2025-08-13T01:46:49.402871808Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 01:46:49.487437 kubelet[2796]: I0813 01:46:49.487368 2796 kubelet.go:2306] "Pod admission denied" podUID="729cc0e2-2ade-43ae-824c-f5a6b41dc740" pod="tigera-operator/tigera-operator-5bf8dfcb4-8tgpm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:49.587989 kubelet[2796]: I0813 01:46:49.587522 2796 kubelet.go:2306] "Pod admission denied" podUID="7c1de0ee-b3a5-4c8f-b8fc-200da2c8a6eb" pod="tigera-operator/tigera-operator-5bf8dfcb4-6zmb9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:49.690456 kubelet[2796]: I0813 01:46:49.690403 2796 kubelet.go:2306] "Pod admission denied" podUID="0224fe8c-605e-4cfc-9107-faa879c68c28" pod="tigera-operator/tigera-operator-5bf8dfcb4-k2fhg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:49.889294 kubelet[2796]: I0813 01:46:49.888881 2796 kubelet.go:2306] "Pod admission denied" podUID="ee3a1f23-e0d0-43fa-8560-a63b77318ea9" pod="tigera-operator/tigera-operator-5bf8dfcb4-pt7pj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:49.999106 kubelet[2796]: I0813 01:46:49.999029 2796 kubelet.go:2306] "Pod admission denied" podUID="92f91267-5813-4318-896c-98f8e68e2549" pod="tigera-operator/tigera-operator-5bf8dfcb4-fx8tx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:50.097293 kubelet[2796]: I0813 01:46:50.097223 2796 kubelet.go:2306] "Pod admission denied" podUID="5e8ee961-fa9b-4a91-b3ad-102f1a646d9e" pod="tigera-operator/tigera-operator-5bf8dfcb4-jbbhr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:50.201730 kubelet[2796]: I0813 01:46:50.200581 2796 kubelet.go:2306] "Pod admission denied" podUID="eff4f326-38a3-49a2-a359-78efff99c177" pod="tigera-operator/tigera-operator-5bf8dfcb4-lkrbt" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:50.299121 kubelet[2796]: I0813 01:46:50.298751 2796 kubelet.go:2306] "Pod admission denied" podUID="9f155cbc-994e-4b72-af53-2de8b897f786" pod="tigera-operator/tigera-operator-5bf8dfcb4-q8l9h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:50.402942 containerd[1581]: time="2025-08-13T01:46:50.402740503Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bk2p6,Uid:0e8898c7-a3f5-4010-bb1f-d756673c29b2,Namespace:calico-system,Attempt:0,}" Aug 13 01:46:50.406189 kubelet[2796]: I0813 01:46:50.406079 2796 kubelet.go:2306] "Pod admission denied" podUID="e7607571-080f-4724-b3b1-5a263f213326" pod="tigera-operator/tigera-operator-5bf8dfcb4-hhjb7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:50.447678 kubelet[2796]: I0813 01:46:50.447119 2796 kubelet.go:2306] "Pod admission denied" podUID="69cb96cb-f710-4119-bbd0-4795791b9bcc" pod="tigera-operator/tigera-operator-5bf8dfcb4-v4zm8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:50.527689 containerd[1581]: time="2025-08-13T01:46:50.526897545Z" level=error msg="Failed to destroy network for sandbox \"ffaaa74f5ae876c5ad3594e056cf81cbfc1910e64af9c53339af5be6d6cf10d5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:50.531951 systemd[1]: run-netns-cni\x2dfb3ae795\x2d10ed\x2d9b9d\x2db40b\x2d5ffe17f523db.mount: Deactivated successfully. Aug 13 01:46:50.533782 containerd[1581]: time="2025-08-13T01:46:50.532855067Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bk2p6,Uid:0e8898c7-a3f5-4010-bb1f-d756673c29b2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffaaa74f5ae876c5ad3594e056cf81cbfc1910e64af9c53339af5be6d6cf10d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:50.535425 kubelet[2796]: E0813 01:46:50.535335 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffaaa74f5ae876c5ad3594e056cf81cbfc1910e64af9c53339af5be6d6cf10d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:50.535507 kubelet[2796]: E0813 01:46:50.535452 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffaaa74f5ae876c5ad3594e056cf81cbfc1910e64af9c53339af5be6d6cf10d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bk2p6" Aug 13 01:46:50.535541 kubelet[2796]: E0813 01:46:50.535511 2796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffaaa74f5ae876c5ad3594e056cf81cbfc1910e64af9c53339af5be6d6cf10d5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/csi-node-driver-bk2p6" Aug 13 01:46:50.535712 kubelet[2796]: E0813 01:46:50.535608 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bk2p6_calico-system(0e8898c7-a3f5-4010-bb1f-d756673c29b2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bk2p6_calico-system(0e8898c7-a3f5-4010-bb1f-d756673c29b2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ffaaa74f5ae876c5ad3594e056cf81cbfc1910e64af9c53339af5be6d6cf10d5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bk2p6" podUID="0e8898c7-a3f5-4010-bb1f-d756673c29b2" Aug 13 01:46:50.552265 kubelet[2796]: I0813 01:46:50.552208 2796 kubelet.go:2306] "Pod admission denied" podUID="49217a8a-7524-4ad8-9c80-1124513482be" pod="tigera-operator/tigera-operator-5bf8dfcb4-88jqk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:50.647746 kubelet[2796]: I0813 01:46:50.647688 2796 kubelet.go:2306] "Pod admission denied" podUID="280cef38-4175-4cb1-abd5-1c629b75cddf" pod="tigera-operator/tigera-operator-5bf8dfcb4-979n2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:50.746176 kubelet[2796]: I0813 01:46:50.745793 2796 kubelet.go:2306] "Pod admission denied" podUID="a3ee45ca-a823-4db2-b8ab-b579966b63a1" pod="tigera-operator/tigera-operator-5bf8dfcb4-bnfzk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:50.846687 kubelet[2796]: I0813 01:46:50.845518 2796 kubelet.go:2306] "Pod admission denied" podUID="e1edb4b1-a5f3-47a2-9780-d3e224fbde8d" pod="tigera-operator/tigera-operator-5bf8dfcb4-6lrn2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:50.950620 kubelet[2796]: I0813 01:46:50.950486 2796 kubelet.go:2306] "Pod admission denied" podUID="d486da27-67b8-4ad2-b1cd-4004ac43bd20" pod="tigera-operator/tigera-operator-5bf8dfcb4-gs7tr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:51.053098 kubelet[2796]: I0813 01:46:51.052616 2796 kubelet.go:2306] "Pod admission denied" podUID="cdc82eb1-8257-4ea4-9d78-d5201e29dfcd" pod="tigera-operator/tigera-operator-5bf8dfcb4-l4t68" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:51.148747 kubelet[2796]: I0813 01:46:51.148606 2796 kubelet.go:2306] "Pod admission denied" podUID="99aed21a-d0d9-43b3-8f34-d15cf881cab3" pod="tigera-operator/tigera-operator-5bf8dfcb4-hz94q" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:51.347767 kubelet[2796]: I0813 01:46:51.347656 2796 kubelet.go:2306] "Pod admission denied" podUID="02cbdc45-9fa5-44a8-a0f0-841d6fb0c56b" pod="tigera-operator/tigera-operator-5bf8dfcb4-zxzfl" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:51.399542 kubelet[2796]: E0813 01:46:51.398945 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:46:51.399898 kubelet[2796]: E0813 01:46:51.398957 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:46:51.401680 containerd[1581]: time="2025-08-13T01:46:51.401302195Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6vrr8,Uid:cbf6d4b0-f3bc-4a92-9977-6d91de60b65f,Namespace:kube-system,Attempt:0,}" Aug 13 01:46:51.449541 kubelet[2796]: I0813 01:46:51.449488 2796 kubelet.go:2306] "Pod admission denied" podUID="5c62e752-a62e-4e1b-ae80-b679fc9290e0" pod="tigera-operator/tigera-operator-5bf8dfcb4-phf9f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:51.498774 containerd[1581]: time="2025-08-13T01:46:51.498704829Z" level=error msg="Failed to destroy network for sandbox \"130ecf6b5d9d7a43b798ee8d585d77d0cf9a88a76c7ca8cc7af53da2a75bf774\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:51.501835 systemd[1]: run-netns-cni\x2db866b1b9\x2d2369\x2d28d3\x2da07a\x2d74abf1fb007d.mount: Deactivated successfully. Aug 13 01:46:51.502864 containerd[1581]: time="2025-08-13T01:46:51.502385043Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6vrr8,Uid:cbf6d4b0-f3bc-4a92-9977-6d91de60b65f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"130ecf6b5d9d7a43b798ee8d585d77d0cf9a88a76c7ca8cc7af53da2a75bf774\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:51.505372 kubelet[2796]: E0813 01:46:51.503566 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"130ecf6b5d9d7a43b798ee8d585d77d0cf9a88a76c7ca8cc7af53da2a75bf774\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:51.505372 kubelet[2796]: E0813 01:46:51.503667 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"130ecf6b5d9d7a43b798ee8d585d77d0cf9a88a76c7ca8cc7af53da2a75bf774\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-6vrr8" Aug 13 01:46:51.505372 kubelet[2796]: E0813 01:46:51.503691 2796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"130ecf6b5d9d7a43b798ee8d585d77d0cf9a88a76c7ca8cc7af53da2a75bf774\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-6vrr8" Aug 13 
01:46:51.505372 kubelet[2796]: E0813 01:46:51.503743 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-6vrr8_kube-system(cbf6d4b0-f3bc-4a92-9977-6d91de60b65f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-6vrr8_kube-system(cbf6d4b0-f3bc-4a92-9977-6d91de60b65f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"130ecf6b5d9d7a43b798ee8d585d77d0cf9a88a76c7ca8cc7af53da2a75bf774\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-6vrr8" podUID="cbf6d4b0-f3bc-4a92-9977-6d91de60b65f" Aug 13 01:46:51.544814 kubelet[2796]: I0813 01:46:51.544667 2796 kubelet.go:2306] "Pod admission denied" podUID="daf3ac53-565b-4f31-89bf-e7211d01c745" pod="tigera-operator/tigera-operator-5bf8dfcb4-7rlv4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:51.643905 kubelet[2796]: I0813 01:46:51.643796 2796 kubelet.go:2306] "Pod admission denied" podUID="e41a7151-e358-4a41-b568-7e9dbb0e1ed1" pod="tigera-operator/tigera-operator-5bf8dfcb4-f2nl2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:51.704255 kubelet[2796]: I0813 01:46:51.704092 2796 kubelet.go:2306] "Pod admission denied" podUID="97d34230-2d06-4298-aa73-edba3a1f665a" pod="tigera-operator/tigera-operator-5bf8dfcb4-bmkm7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:51.797109 kubelet[2796]: I0813 01:46:51.797032 2796 kubelet.go:2306] "Pod admission denied" podUID="00a211df-8f83-4b3b-91d8-1ef5e4858cca" pod="tigera-operator/tigera-operator-5bf8dfcb4-k89v4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:51.899269 kubelet[2796]: I0813 01:46:51.899109 2796 kubelet.go:2306] "Pod admission denied" podUID="1d58f1a3-f84c-4a0b-a607-b23c55c560cc" pod="tigera-operator/tigera-operator-5bf8dfcb4-pkkcp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:51.998102 kubelet[2796]: I0813 01:46:51.998013 2796 kubelet.go:2306] "Pod admission denied" podUID="7bbb1f78-3e98-4834-808c-6cced33dc4ba" pod="tigera-operator/tigera-operator-5bf8dfcb4-vmxbz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:52.094277 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount178724474.mount: Deactivated successfully. Aug 13 01:46:52.097319 kubelet[2796]: I0813 01:46:52.096776 2796 kubelet.go:2306] "Pod admission denied" podUID="670207f9-e590-46f8-a36e-50eceab1202e" pod="tigera-operator/tigera-operator-5bf8dfcb4-dx4k9" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:52.099548 containerd[1581]: time="2025-08-13T01:46:52.099507455Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 01:46:52.100774 containerd[1581]: time="2025-08-13T01:46:52.099722656Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount178724474: write /var/lib/containerd/tmpmounts/containerd-mount178724474/usr/bin/calico-node: no space left on device" Aug 13 01:46:52.101177 kubelet[2796]: E0813 01:46:52.101138 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount178724474: write /var/lib/containerd/tmpmounts/containerd-mount178724474/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 01:46:52.101254 kubelet[2796]: E0813 01:46:52.101220 2796 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount178724474: write /var/lib/containerd/tmpmounts/containerd-mount178724474/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 01:46:52.102050 kubelet[2796]: E0813 01:46:52.101917 2796 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSGOLDMANESERVER,Value:goldmane.calico-system.svc:7443,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSFLUSHINTERVAL,Value:15,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:first-found,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.l
ock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7p5j4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-8j6cb_calico-system(416d9de4-5101-44c9-b974-0fedf790aa67): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount178724474: write /var/lib/containerd/tmpmounts/containerd-mount178724474/usr/bin/calico-node: no space left on device" logger="UnhandledError" Aug 13 01:46:52.103194 kubelet[2796]: E0813 01:46:52.103104 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount178724474: write /var/lib/containerd/tmpmounts/containerd-mount178724474/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-8j6cb" podUID="416d9de4-5101-44c9-b974-0fedf790aa67" Aug 13 01:46:52.203419 kubelet[2796]: I0813 01:46:52.203246 2796 kubelet.go:2306] "Pod admission denied" podUID="b1b84537-7118-414e-ab32-45f792f9e593" pod="tigera-operator/tigera-operator-5bf8dfcb4-4d4k6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:52.288314 kubelet[2796]: I0813 01:46:52.288156 2796 kubelet.go:2306] "Pod admission denied" podUID="a877e986-1fb5-4513-9542-fc5a47f19f10" pod="tigera-operator/tigera-operator-5bf8dfcb4-p4f28" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:52.389412 kubelet[2796]: I0813 01:46:52.389351 2796 kubelet.go:2306] "Pod admission denied" podUID="13b66145-b0df-4415-9ba7-f78c7fbc30ae" pod="tigera-operator/tigera-operator-5bf8dfcb4-v4sg9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:52.397667 kubelet[2796]: E0813 01:46:52.397412 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:46:52.400413 containerd[1581]: time="2025-08-13T01:46:52.400000538Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-djvw6,Uid:981696e3-42b0-4ae8-b44b-fa439a03a402,Namespace:kube-system,Attempt:0,}" Aug 13 01:46:52.491621 kubelet[2796]: I0813 01:46:52.491560 2796 kubelet.go:2306] "Pod admission denied" podUID="b259c3f1-4875-45d5-a113-a0ed909a5f20" pod="tigera-operator/tigera-operator-5bf8dfcb4-5jl25" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:52.496799 containerd[1581]: time="2025-08-13T01:46:52.496719893Z" level=error msg="Failed to destroy network for sandbox \"0f0dacdb2d44b336d08a665170daff4b32d6c5230a18b34378982fd1c1b90d1c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:52.500489 systemd[1]: run-netns-cni\x2ddf89aea1\x2dab52\x2d8a18\x2db4ca\x2d6998e40962bc.mount: Deactivated successfully. Aug 13 01:46:52.503416 containerd[1581]: time="2025-08-13T01:46:52.501832756Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-djvw6,Uid:981696e3-42b0-4ae8-b44b-fa439a03a402,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f0dacdb2d44b336d08a665170daff4b32d6c5230a18b34378982fd1c1b90d1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:52.504137 kubelet[2796]: E0813 01:46:52.503845 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f0dacdb2d44b336d08a665170daff4b32d6c5230a18b34378982fd1c1b90d1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:52.504137 kubelet[2796]: E0813 01:46:52.503901 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f0dacdb2d44b336d08a665170daff4b32d6c5230a18b34378982fd1c1b90d1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-djvw6" Aug 13 01:46:52.504137 kubelet[2796]: E0813 01:46:52.503923 2796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f0dacdb2d44b336d08a665170daff4b32d6c5230a18b34378982fd1c1b90d1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="kube-system/coredns-7c65d6cfc9-djvw6" Aug 13 01:46:52.504137 kubelet[2796]: E0813 01:46:52.503963 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-djvw6_kube-system(981696e3-42b0-4ae8-b44b-fa439a03a402)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-djvw6_kube-system(981696e3-42b0-4ae8-b44b-fa439a03a402)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0f0dacdb2d44b336d08a665170daff4b32d6c5230a18b34378982fd1c1b90d1c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-djvw6" podUID="981696e3-42b0-4ae8-b44b-fa439a03a402" Aug 13 01:46:52.588846 kubelet[2796]: I0813 01:46:52.588077 2796 kubelet.go:2306] "Pod admission denied" podUID="f1d25733-7109-4caf-b4c4-69122ed673b5" pod="tigera-operator/tigera-operator-5bf8dfcb4-rhjf2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:52.788906 kubelet[2796]: I0813 01:46:52.788850 2796 kubelet.go:2306] "Pod admission denied" podUID="f858b4a3-94f8-4d61-b968-3c937b4aa5bb" pod="tigera-operator/tigera-operator-5bf8dfcb4-j775h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:52.888996 kubelet[2796]: I0813 01:46:52.888832 2796 kubelet.go:2306] "Pod admission denied" podUID="00a656ce-5983-45e7-9424-c6e6e8d19f62" pod="tigera-operator/tigera-operator-5bf8dfcb4-tkvms" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:52.989215 kubelet[2796]: I0813 01:46:52.989162 2796 kubelet.go:2306] "Pod admission denied" podUID="abe91fb2-b238-4e8e-b6b2-16e0e3908f1a" pod="tigera-operator/tigera-operator-5bf8dfcb4-jzwlm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:53.189992 kubelet[2796]: I0813 01:46:53.189588 2796 kubelet.go:2306] "Pod admission denied" podUID="0adb2c97-03e5-4ed1-82aa-988e7ce21e88" pod="tigera-operator/tigera-operator-5bf8dfcb4-z462x" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:53.288819 kubelet[2796]: I0813 01:46:53.288756 2796 kubelet.go:2306] "Pod admission denied" podUID="82dc39cb-1f9f-4ccf-a12f-9342fe3c7831" pod="tigera-operator/tigera-operator-5bf8dfcb4-nctbb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:53.343680 kubelet[2796]: I0813 01:46:53.343577 2796 kubelet.go:2306] "Pod admission denied" podUID="8628eb1a-ece1-4f9b-8909-583375b5cc91" pod="tigera-operator/tigera-operator-5bf8dfcb4-t4558" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:53.442874 kubelet[2796]: I0813 01:46:53.442635 2796 kubelet.go:2306] "Pod admission denied" podUID="293ae36b-2ef7-433e-bf2d-becc5f946f66" pod="tigera-operator/tigera-operator-5bf8dfcb4-rlhsb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:53.540558 kubelet[2796]: I0813 01:46:53.540492 2796 kubelet.go:2306] "Pod admission denied" podUID="5e60745f-b1be-4bf2-a47f-be05bdf47ee7" pod="tigera-operator/tigera-operator-5bf8dfcb4-hxwjh" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:53.588664 kubelet[2796]: I0813 01:46:53.588589 2796 kubelet.go:2306] "Pod admission denied" podUID="0d82552b-4588-4b14-836c-4f8325c5cb28" pod="tigera-operator/tigera-operator-5bf8dfcb4-t95hj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:53.690391 kubelet[2796]: I0813 01:46:53.690329 2796 kubelet.go:2306] "Pod admission denied" podUID="d98e3deb-6e8f-475d-aee0-850b4dc554ab" pod="tigera-operator/tigera-operator-5bf8dfcb4-shzp6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:53.889836 kubelet[2796]: I0813 01:46:53.889770 2796 kubelet.go:2306] "Pod admission denied" podUID="71755310-e765-43d9-b448-1c0d3416513e" pod="tigera-operator/tigera-operator-5bf8dfcb4-4b5r7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:53.988770 kubelet[2796]: I0813 01:46:53.988705 2796 kubelet.go:2306] "Pod admission denied" podUID="e069e32f-e327-4aa6-81df-e5b8a78abb03" pod="tigera-operator/tigera-operator-5bf8dfcb4-h2rc6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:54.090842 kubelet[2796]: I0813 01:46:54.090770 2796 kubelet.go:2306] "Pod admission denied" podUID="528184d2-6d22-4aba-bd82-4fa838e9bc62" pod="tigera-operator/tigera-operator-5bf8dfcb4-blhjc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:54.149466 kubelet[2796]: I0813 01:46:54.149133 2796 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:46:54.149466 kubelet[2796]: I0813 01:46:54.149185 2796 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:46:54.152278 containerd[1581]: time="2025-08-13T01:46:54.152156522Z" level=info msg="StopPodSandbox for \"19143f3abd72677669f85e49a88a014075df99b822463c2bba0c11bc76ceded2\"" Aug 13 01:46:54.152872 containerd[1581]: time="2025-08-13T01:46:54.152470672Z" level=info msg="TearDown network for sandbox \"19143f3abd72677669f85e49a88a014075df99b822463c2bba0c11bc76ceded2\" successfully" Aug 13 01:46:54.152872 containerd[1581]: time="2025-08-13T01:46:54.152486512Z" level=info msg="StopPodSandbox for \"19143f3abd72677669f85e49a88a014075df99b822463c2bba0c11bc76ceded2\" returns successfully" Aug 13 01:46:54.153452 containerd[1581]: time="2025-08-13T01:46:54.153403177Z" level=info msg="RemovePodSandbox for \"19143f3abd72677669f85e49a88a014075df99b822463c2bba0c11bc76ceded2\"" Aug 13 01:46:54.153613 containerd[1581]: time="2025-08-13T01:46:54.153428207Z" level=info msg="Forcibly stopping sandbox \"19143f3abd72677669f85e49a88a014075df99b822463c2bba0c11bc76ceded2\"" Aug 13 01:46:54.153793 containerd[1581]: time="2025-08-13T01:46:54.153777549Z" level=info msg="TearDown network for sandbox \"19143f3abd72677669f85e49a88a014075df99b822463c2bba0c11bc76ceded2\" successfully" Aug 13 01:46:54.155332 containerd[1581]: time="2025-08-13T01:46:54.155311946Z" level=info msg="Ensure that sandbox 19143f3abd72677669f85e49a88a014075df99b822463c2bba0c11bc76ceded2 in task-service has been cleanup successfully" Aug 13 01:46:54.157824 containerd[1581]: time="2025-08-13T01:46:54.157794897Z" level=info msg="RemovePodSandbox \"19143f3abd72677669f85e49a88a014075df99b822463c2bba0c11bc76ceded2\" returns successfully" Aug 13 01:46:54.158484 kubelet[2796]: I0813 01:46:54.158457 2796 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:46:54.170066 kubelet[2796]: I0813 01:46:54.170019 2796 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" 
resourceName="ephemeral-storage" Aug 13 01:46:54.170362 kubelet[2796]: I0813 01:46:54.170327 2796 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7c65d6cfc9-djvw6","calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7","kube-system/coredns-7c65d6cfc9-6vrr8","calico-system/calico-node-8j6cb","calico-system/csi-node-driver-bk2p6","calico-system/calico-typha-bf6ccb678-cdfdr","kube-system/kube-controller-manager-172-232-7-32","kube-system/kube-proxy-dmp9l","kube-system/kube-apiserver-172-232-7-32","kube-system/kube-scheduler-172-232-7-32"] Aug 13 01:46:54.170477 kubelet[2796]: E0813 01:46:54.170369 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-djvw6" Aug 13 01:46:54.170477 kubelet[2796]: E0813 01:46:54.170402 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7" Aug 13 01:46:54.170477 kubelet[2796]: E0813 01:46:54.170411 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-6vrr8" Aug 13 01:46:54.170477 kubelet[2796]: E0813 01:46:54.170420 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-8j6cb" Aug 13 01:46:54.170477 kubelet[2796]: E0813 01:46:54.170428 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-bk2p6" Aug 13 01:46:54.170477 kubelet[2796]: E0813 01:46:54.170443 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-bf6ccb678-cdfdr" Aug 13 01:46:54.170477 kubelet[2796]: E0813 01:46:54.170453 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-32" Aug 13 01:46:54.170632 kubelet[2796]: E0813 01:46:54.170481 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-dmp9l" Aug 13 01:46:54.170632 kubelet[2796]: E0813 01:46:54.170492 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-32" Aug 13 01:46:54.170632 kubelet[2796]: E0813 01:46:54.170503 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-32" Aug 13 01:46:54.170632 kubelet[2796]: I0813 01:46:54.170512 2796 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:46:54.189723 kubelet[2796]: I0813 01:46:54.189664 2796 kubelet.go:2306] "Pod admission denied" podUID="6debb029-01f9-417b-9213-0440534e3a26" pod="tigera-operator/tigera-operator-5bf8dfcb4-9jzvq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:54.290256 kubelet[2796]: I0813 01:46:54.290132 2796 kubelet.go:2306] "Pod admission denied" podUID="ae05d698-37e0-4c80-a3de-6d1c504c192e" pod="tigera-operator/tigera-operator-5bf8dfcb4-s468h" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:54.398168 containerd[1581]: time="2025-08-13T01:46:54.398115305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d5dd9ff6-b6gw7,Uid:4720bedc-4719-4a57-b2ff-e5b21f7acb7f,Namespace:calico-system,Attempt:0,}" Aug 13 01:46:54.457700 containerd[1581]: time="2025-08-13T01:46:54.456010135Z" level=error msg="Failed to destroy network for sandbox \"8bf893ccfdc336980f56109061fc47dc2b4f6e7d1418d02289ac259bf9aedefc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:54.457700 containerd[1581]: time="2025-08-13T01:46:54.457193870Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d5dd9ff6-b6gw7,Uid:4720bedc-4719-4a57-b2ff-e5b21f7acb7f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bf893ccfdc336980f56109061fc47dc2b4f6e7d1418d02289ac259bf9aedefc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:54.457908 kubelet[2796]: E0813 01:46:54.457568 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bf893ccfdc336980f56109061fc47dc2b4f6e7d1418d02289ac259bf9aedefc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:54.457908 kubelet[2796]: E0813 01:46:54.457670 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bf893ccfdc336980f56109061fc47dc2b4f6e7d1418d02289ac259bf9aedefc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7" Aug 13 01:46:54.457908 kubelet[2796]: E0813 01:46:54.457699 2796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8bf893ccfdc336980f56109061fc47dc2b4f6e7d1418d02289ac259bf9aedefc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7" Aug 13 01:46:54.457908 kubelet[2796]: E0813 01:46:54.457748 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86d5dd9ff6-b6gw7_calico-system(4720bedc-4719-4a57-b2ff-e5b21f7acb7f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86d5dd9ff6-b6gw7_calico-system(4720bedc-4719-4a57-b2ff-e5b21f7acb7f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8bf893ccfdc336980f56109061fc47dc2b4f6e7d1418d02289ac259bf9aedefc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7" 
podUID="4720bedc-4719-4a57-b2ff-e5b21f7acb7f" Aug 13 01:46:54.460491 systemd[1]: run-netns-cni\x2dfcb3d456\x2d786c\x2d9704\x2d9e0a\x2deeb73a7b9a38.mount: Deactivated successfully. Aug 13 01:46:54.490843 kubelet[2796]: I0813 01:46:54.490778 2796 kubelet.go:2306] "Pod admission denied" podUID="e52826f1-3db3-41c3-8523-863cfb0ea6c3" pod="tigera-operator/tigera-operator-5bf8dfcb4-2x2sq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:54.588774 kubelet[2796]: I0813 01:46:54.588713 2796 kubelet.go:2306] "Pod admission denied" podUID="59c09c0b-eba0-4735-98d4-5789a103bee6" pod="tigera-operator/tigera-operator-5bf8dfcb4-dtg5h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:54.693330 kubelet[2796]: I0813 01:46:54.693255 2796 kubelet.go:2306] "Pod admission denied" podUID="6c30bb92-12b7-4f98-a63c-b25bf1d92211" pod="tigera-operator/tigera-operator-5bf8dfcb4-xb75h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:54.792009 kubelet[2796]: I0813 01:46:54.791935 2796 kubelet.go:2306] "Pod admission denied" podUID="3618a934-97a0-4877-8fbf-9bebfb405fa2" pod="tigera-operator/tigera-operator-5bf8dfcb4-84klr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:54.893600 kubelet[2796]: I0813 01:46:54.893233 2796 kubelet.go:2306] "Pod admission denied" podUID="50eae883-f44b-446e-8424-4bd2fc4a3dbf" pod="tigera-operator/tigera-operator-5bf8dfcb4-hfqpj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:55.012360 kubelet[2796]: I0813 01:46:55.012276 2796 kubelet.go:2306] "Pod admission denied" podUID="b4e70c51-1ec1-4fd2-9bc1-39db5e48fc32" pod="tigera-operator/tigera-operator-5bf8dfcb4-scjfb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:55.152514 kubelet[2796]: I0813 01:46:55.151495 2796 kubelet.go:2306] "Pod admission denied" podUID="4698a0b7-d8f9-43b9-9e22-bb830b0d4bba" pod="tigera-operator/tigera-operator-5bf8dfcb4-hrzlp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:55.254635 kubelet[2796]: I0813 01:46:55.254546 2796 kubelet.go:2306] "Pod admission denied" podUID="6bcf414c-a31a-431d-9f84-3197bbcec6a5" pod="tigera-operator/tigera-operator-5bf8dfcb4-psc2k" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:55.361476 kubelet[2796]: I0813 01:46:55.361370 2796 kubelet.go:2306] "Pod admission denied" podUID="617f4606-5a66-4669-93f2-62fc88c1ee65" pod="tigera-operator/tigera-operator-5bf8dfcb4-pvcdk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:55.425221 kubelet[2796]: I0813 01:46:55.424866 2796 kubelet.go:2306] "Pod admission denied" podUID="e55dd08c-c7dd-4997-9c9f-ea84cfd71ebb" pod="tigera-operator/tigera-operator-5bf8dfcb4-f2xv8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:55.551084 kubelet[2796]: I0813 01:46:55.550849 2796 kubelet.go:2306] "Pod admission denied" podUID="4e18a05a-8894-42b5-b448-3e0cb99315f0" pod="tigera-operator/tigera-operator-5bf8dfcb4-wtb4h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:55.655110 kubelet[2796]: I0813 01:46:55.655017 2796 kubelet.go:2306] "Pod admission denied" podUID="9f91d603-0a56-48aa-8729-44393af410a8" pod="tigera-operator/tigera-operator-5bf8dfcb4-pc7h7" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:55.759114 kubelet[2796]: I0813 01:46:55.759014 2796 kubelet.go:2306] "Pod admission denied" podUID="54ffe13a-3ee9-495b-89d5-f31c4d5d3c16" pod="tigera-operator/tigera-operator-5bf8dfcb4-2j2m8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:55.871516 kubelet[2796]: I0813 01:46:55.871428 2796 kubelet.go:2306] "Pod admission denied" podUID="71f16f97-b399-46d0-b954-c25723ce08fd" pod="tigera-operator/tigera-operator-5bf8dfcb4-glhss" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:56.107828 kubelet[2796]: I0813 01:46:56.107389 2796 kubelet.go:2306] "Pod admission denied" podUID="d54e51f2-7378-496c-a219-fd080da42b31" pod="tigera-operator/tigera-operator-5bf8dfcb4-jl8f2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:56.205756 kubelet[2796]: I0813 01:46:56.205552 2796 kubelet.go:2306] "Pod admission denied" podUID="9a6b2f3a-7581-4806-945d-f0b6fda0c242" pod="tigera-operator/tigera-operator-5bf8dfcb4-wnpp4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:56.304609 kubelet[2796]: I0813 01:46:56.303692 2796 kubelet.go:2306] "Pod admission denied" podUID="4cda0aae-ec32-44fd-a790-5ab837cb5473" pod="tigera-operator/tigera-operator-5bf8dfcb4-222vh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:56.400357 kubelet[2796]: I0813 01:46:56.399307 2796 kubelet.go:2306] "Pod admission denied" podUID="5d534ba8-419a-4881-9341-3fe249ccddf5" pod="tigera-operator/tigera-operator-5bf8dfcb4-lpfnj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:56.519090 kubelet[2796]: I0813 01:46:56.518982 2796 kubelet.go:2306] "Pod admission denied" podUID="46403e35-a470-483c-8618-fc2df0f81896" pod="tigera-operator/tigera-operator-5bf8dfcb4-wggwb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:56.656594 kubelet[2796]: I0813 01:46:56.656122 2796 kubelet.go:2306] "Pod admission denied" podUID="7ae4ae45-4b8c-4ab3-b4e4-ddd6e80af952" pod="tigera-operator/tigera-operator-5bf8dfcb4-b54k7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:56.760614 kubelet[2796]: I0813 01:46:56.759922 2796 kubelet.go:2306] "Pod admission denied" podUID="97f49e4f-12cc-4714-a367-ef06473e2280" pod="tigera-operator/tigera-operator-5bf8dfcb4-h8vmx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:56.865782 kubelet[2796]: I0813 01:46:56.865643 2796 kubelet.go:2306] "Pod admission denied" podUID="19473d1f-5f49-46f7-8cc8-7deb4b1d490e" pod="tigera-operator/tigera-operator-5bf8dfcb4-7xb98" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:56.970850 kubelet[2796]: I0813 01:46:56.970487 2796 kubelet.go:2306] "Pod admission denied" podUID="53e00ede-98d8-404a-99e0-8649fb47baa8" pod="tigera-operator/tigera-operator-5bf8dfcb4-pwptq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:57.049028 kubelet[2796]: I0813 01:46:57.048923 2796 kubelet.go:2306] "Pod admission denied" podUID="cf95cd9c-f6d9-464d-9394-c4826806e716" pod="tigera-operator/tigera-operator-5bf8dfcb4-4k6cf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:57.156740 kubelet[2796]: I0813 01:46:57.155014 2796 kubelet.go:2306] "Pod admission denied" podUID="2739f3a1-8b9a-4886-ba77-91f97b8be6b3" pod="tigera-operator/tigera-operator-5bf8dfcb4-95rs7" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:57.398465 kubelet[2796]: E0813 01:46:57.398269 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:46:57.411094 kubelet[2796]: I0813 01:46:57.410477 2796 kubelet.go:2306] "Pod admission denied" podUID="392189c0-d9a2-475f-8af8-ff389e75b090" pod="tigera-operator/tigera-operator-5bf8dfcb4-6hsk8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:57.554375 kubelet[2796]: I0813 01:46:57.554279 2796 kubelet.go:2306] "Pod admission denied" podUID="f083258a-1abd-427c-afcf-437cf85a0351" pod="tigera-operator/tigera-operator-5bf8dfcb4-xr9xb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:57.610559 kubelet[2796]: I0813 01:46:57.610436 2796 kubelet.go:2306] "Pod admission denied" podUID="50895ad2-6fa5-4d9f-971f-fe50a059f8a6" pod="tigera-operator/tigera-operator-5bf8dfcb4-qfr5n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:57.703166 kubelet[2796]: I0813 01:46:57.701338 2796 kubelet.go:2306] "Pod admission denied" podUID="ae13f416-9b33-4ffa-b95f-7ca6df7b776f" pod="tigera-operator/tigera-operator-5bf8dfcb4-bxbl8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:57.807566 kubelet[2796]: I0813 01:46:57.805228 2796 kubelet.go:2306] "Pod admission denied" podUID="1424bb09-b45c-4b61-a838-ecb42f5252ce" pod="tigera-operator/tigera-operator-5bf8dfcb4-zbxpn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:57.924721 kubelet[2796]: I0813 01:46:57.922810 2796 kubelet.go:2306] "Pod admission denied" podUID="45e456da-8279-4ba1-9010-a96ff4a6467a" pod="tigera-operator/tigera-operator-5bf8dfcb4-qt9rc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:58.000411 kubelet[2796]: I0813 01:46:58.000294 2796 kubelet.go:2306] "Pod admission denied" podUID="730c7d73-ee8e-4413-97f6-7ac2ba87bb14" pod="tigera-operator/tigera-operator-5bf8dfcb4-ndwqr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:58.103535 kubelet[2796]: I0813 01:46:58.103469 2796 kubelet.go:2306] "Pod admission denied" podUID="537fac2f-ee3d-44bb-beff-7ea3b7701a3e" pod="tigera-operator/tigera-operator-5bf8dfcb4-wj7fq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:58.300410 kubelet[2796]: I0813 01:46:58.300175 2796 kubelet.go:2306] "Pod admission denied" podUID="a952999b-5697-4ad4-b8cc-526020417990" pod="tigera-operator/tigera-operator-5bf8dfcb4-bcpv5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:58.398318 kubelet[2796]: I0813 01:46:58.397831 2796 kubelet.go:2306] "Pod admission denied" podUID="ab45a066-1d28-453f-a42a-0e697daaa302" pod="tigera-operator/tigera-operator-5bf8dfcb4-cwbvp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:58.497071 kubelet[2796]: I0813 01:46:58.496984 2796 kubelet.go:2306] "Pod admission denied" podUID="4bb1f867-22d9-45c7-9024-26595e6a9a34" pod="tigera-operator/tigera-operator-5bf8dfcb4-m9bv6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:58.598767 kubelet[2796]: I0813 01:46:58.598304 2796 kubelet.go:2306] "Pod admission denied" podUID="e77617bf-f407-464b-a851-430950745a37" pod="tigera-operator/tigera-operator-5bf8dfcb4-vdjzb" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:58.695544 kubelet[2796]: I0813 01:46:58.695454 2796 kubelet.go:2306] "Pod admission denied" podUID="7a432a73-bd83-434d-8439-344d6ba479d1" pod="tigera-operator/tigera-operator-5bf8dfcb4-4lqsw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:58.795151 kubelet[2796]: I0813 01:46:58.795065 2796 kubelet.go:2306] "Pod admission denied" podUID="d31979a1-02fb-447f-804b-1e6f37705961" pod="tigera-operator/tigera-operator-5bf8dfcb4-nxxpb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:58.896588 kubelet[2796]: I0813 01:46:58.896381 2796 kubelet.go:2306] "Pod admission denied" podUID="dc3d85d8-1b6c-4910-a860-4baf73bf7935" pod="tigera-operator/tigera-operator-5bf8dfcb4-zc44d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:58.994428 kubelet[2796]: I0813 01:46:58.994344 2796 kubelet.go:2306] "Pod admission denied" podUID="387bfafd-44ff-4a7f-b00c-dd72c584bea1" pod="tigera-operator/tigera-operator-5bf8dfcb4-n8w9v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:59.093478 kubelet[2796]: I0813 01:46:59.093397 2796 kubelet.go:2306] "Pod admission denied" podUID="8e2ae789-d4db-437a-8312-3b3953f566b4" pod="tigera-operator/tigera-operator-5bf8dfcb4-wtmpj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:59.191467 kubelet[2796]: I0813 01:46:59.191288 2796 kubelet.go:2306] "Pod admission denied" podUID="72289567-be71-4b79-abe3-185a920b2c86" pod="tigera-operator/tigera-operator-5bf8dfcb4-gz4pv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:59.292109 kubelet[2796]: I0813 01:46:59.292037 2796 kubelet.go:2306] "Pod admission denied" podUID="4e0f99e3-9c6b-44f5-ba5d-28d6cbf5b5c8" pod="tigera-operator/tigera-operator-5bf8dfcb4-9fx7v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:59.392098 kubelet[2796]: I0813 01:46:59.392033 2796 kubelet.go:2306] "Pod admission denied" podUID="f38a4151-1e63-402a-8499-953998baea88" pod="tigera-operator/tigera-operator-5bf8dfcb4-x8brl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:59.500933 kubelet[2796]: I0813 01:46:59.500841 2796 kubelet.go:2306] "Pod admission denied" podUID="3ff98b0a-74f9-45dd-bffc-7f0b97bb0a1c" pod="tigera-operator/tigera-operator-5bf8dfcb4-6jg5q" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:59.590510 kubelet[2796]: I0813 01:46:59.590436 2796 kubelet.go:2306] "Pod admission denied" podUID="d2876c83-1b0f-4c83-90d1-c93cd9224358" pod="tigera-operator/tigera-operator-5bf8dfcb4-vmc8v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:59.692337 kubelet[2796]: I0813 01:46:59.692255 2796 kubelet.go:2306] "Pod admission denied" podUID="eba97448-be7c-4bf6-8d9e-17ce6efd9f26" pod="tigera-operator/tigera-operator-5bf8dfcb4-bszkj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:59.791872 kubelet[2796]: I0813 01:46:59.791668 2796 kubelet.go:2306] "Pod admission denied" podUID="33cbad46-56e5-4f8a-a90f-b52fb0069c6a" pod="tigera-operator/tigera-operator-5bf8dfcb4-6kvhn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:59.889898 kubelet[2796]: I0813 01:46:59.889833 2796 kubelet.go:2306] "Pod admission denied" podUID="c91ba73e-d101-4798-987b-79b02166fc9d" pod="tigera-operator/tigera-operator-5bf8dfcb4-6rgpc" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:59.991029 kubelet[2796]: I0813 01:46:59.990955 2796 kubelet.go:2306] "Pod admission denied" podUID="b95f7cb4-3a6e-4df2-b684-d77b37360b8d" pod="tigera-operator/tigera-operator-5bf8dfcb4-fq4pq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:00.041346 kubelet[2796]: I0813 01:47:00.041271 2796 kubelet.go:2306] "Pod admission denied" podUID="b41b2487-0cfb-4cd2-9892-27b5879b9894" pod="tigera-operator/tigera-operator-5bf8dfcb4-dnt6j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:00.144512 kubelet[2796]: I0813 01:47:00.144289 2796 kubelet.go:2306] "Pod admission denied" podUID="f0c774de-a8b4-4e66-adeb-3b89178944ef" pod="tigera-operator/tigera-operator-5bf8dfcb4-wlmp5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:00.243137 kubelet[2796]: I0813 01:47:00.243010 2796 kubelet.go:2306] "Pod admission denied" podUID="55ea4429-9894-4f50-aeb4-81f37e64699f" pod="tigera-operator/tigera-operator-5bf8dfcb4-4kglq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:00.346553 kubelet[2796]: I0813 01:47:00.346453 2796 kubelet.go:2306] "Pod admission denied" podUID="7faf2f2f-a53d-4345-8cda-d35e5d66f8c0" pod="tigera-operator/tigera-operator-5bf8dfcb4-fxl6t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:00.443834 kubelet[2796]: I0813 01:47:00.443311 2796 kubelet.go:2306] "Pod admission denied" podUID="5a48b046-314d-48ad-82b2-96674fece59f" pod="tigera-operator/tigera-operator-5bf8dfcb4-f2kmv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:00.550277 kubelet[2796]: I0813 01:47:00.550199 2796 kubelet.go:2306] "Pod admission denied" podUID="69adcc48-8f9f-4c88-acbd-beae32ad48ad" pod="tigera-operator/tigera-operator-5bf8dfcb4-vxsnc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:00.642608 kubelet[2796]: I0813 01:47:00.642529 2796 kubelet.go:2306] "Pod admission denied" podUID="2d62bd03-f95a-4e6b-8e53-926116c96905" pod="tigera-operator/tigera-operator-5bf8dfcb4-gzddh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:00.743941 kubelet[2796]: I0813 01:47:00.743867 2796 kubelet.go:2306] "Pod admission denied" podUID="fc18dcc8-4933-41d1-89b4-e3bde4d6fbb0" pod="tigera-operator/tigera-operator-5bf8dfcb4-mkt4d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:00.841151 kubelet[2796]: I0813 01:47:00.840703 2796 kubelet.go:2306] "Pod admission denied" podUID="c3ca6d8b-fd8b-4c44-9604-38646dbd85be" pod="tigera-operator/tigera-operator-5bf8dfcb4-gnr2c" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:00.940464 kubelet[2796]: I0813 01:47:00.940387 2796 kubelet.go:2306] "Pod admission denied" podUID="cea8d78c-dd17-4606-a82f-4678f319ed6b" pod="tigera-operator/tigera-operator-5bf8dfcb4-q75j7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:01.043250 kubelet[2796]: I0813 01:47:01.043088 2796 kubelet.go:2306] "Pod admission denied" podUID="444c0116-6bbf-4e4c-9b82-d0b1ffec486e" pod="tigera-operator/tigera-operator-5bf8dfcb4-g6g9b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:01.142289 kubelet[2796]: I0813 01:47:01.142234 2796 kubelet.go:2306] "Pod admission denied" podUID="416e216d-eaeb-4a1e-bf7f-566117421ebc" pod="tigera-operator/tigera-operator-5bf8dfcb4-gr8bh" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:01.345347 kubelet[2796]: I0813 01:47:01.344759 2796 kubelet.go:2306] "Pod admission denied" podUID="916025c7-abaa-449d-854c-5121bafd0a34" pod="tigera-operator/tigera-operator-5bf8dfcb4-6d6b9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:01.442801 kubelet[2796]: I0813 01:47:01.442667 2796 kubelet.go:2306] "Pod admission denied" podUID="10288a45-e2ff-4e49-9a64-fd58e243f343" pod="tigera-operator/tigera-operator-5bf8dfcb4-7mn96" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:01.543801 kubelet[2796]: I0813 01:47:01.543698 2796 kubelet.go:2306] "Pod admission denied" podUID="84c84aec-4644-4c25-a1af-d083ad769b25" pod="tigera-operator/tigera-operator-5bf8dfcb4-bkl75" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:01.743818 kubelet[2796]: I0813 01:47:01.743747 2796 kubelet.go:2306] "Pod admission denied" podUID="e685c00c-d581-4ed3-b54e-0b83a334ccd5" pod="tigera-operator/tigera-operator-5bf8dfcb4-dv2dm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:01.840914 kubelet[2796]: I0813 01:47:01.840845 2796 kubelet.go:2306] "Pod admission denied" podUID="44565558-878d-4f61-b2df-784965b4ff0a" pod="tigera-operator/tigera-operator-5bf8dfcb4-b2d4g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:01.945470 kubelet[2796]: I0813 01:47:01.945392 2796 kubelet.go:2306] "Pod admission denied" podUID="75db361f-b94f-4408-8a09-eb2ba6fe3b30" pod="tigera-operator/tigera-operator-5bf8dfcb4-dw8wk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:02.144992 kubelet[2796]: I0813 01:47:02.144429 2796 kubelet.go:2306] "Pod admission denied" podUID="1214b1a0-462d-461a-985b-be8d319a41cb" pod="tigera-operator/tigera-operator-5bf8dfcb4-znsvg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:02.241232 kubelet[2796]: I0813 01:47:02.241138 2796 kubelet.go:2306] "Pod admission denied" podUID="06ae3c54-a57e-484c-8df3-cc7ecf627b46" pod="tigera-operator/tigera-operator-5bf8dfcb4-nn86t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:02.291746 kubelet[2796]: I0813 01:47:02.291695 2796 kubelet.go:2306] "Pod admission denied" podUID="d9de51f2-75e0-4566-bd52-14ed26a91bba" pod="tigera-operator/tigera-operator-5bf8dfcb4-ld5wh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:02.391151 kubelet[2796]: I0813 01:47:02.391085 2796 kubelet.go:2306] "Pod admission denied" podUID="f1c0d9cf-1096-4c87-b045-8568511f9218" pod="tigera-operator/tigera-operator-5bf8dfcb4-hsrkq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:02.593914 kubelet[2796]: I0813 01:47:02.593849 2796 kubelet.go:2306] "Pod admission denied" podUID="156487aa-4c3a-4f36-99d3-b899db75fbf1" pod="tigera-operator/tigera-operator-5bf8dfcb4-jnnj7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:02.690209 kubelet[2796]: I0813 01:47:02.690134 2796 kubelet.go:2306] "Pod admission denied" podUID="272cfe21-c532-4d70-8f75-a0f77d7aac69" pod="tigera-operator/tigera-operator-5bf8dfcb4-jmkxz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:02.793345 kubelet[2796]: I0813 01:47:02.793264 2796 kubelet.go:2306] "Pod admission denied" podUID="c8d3949c-6125-4249-a633-00e2724858cc" pod="tigera-operator/tigera-operator-5bf8dfcb4-r2q8v" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:02.895736 kubelet[2796]: I0813 01:47:02.895528 2796 kubelet.go:2306] "Pod admission denied" podUID="c5e4bcfb-6abd-4d87-8359-651524e124cf" pod="tigera-operator/tigera-operator-5bf8dfcb4-8nmlq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:02.944429 kubelet[2796]: I0813 01:47:02.944359 2796 kubelet.go:2306] "Pod admission denied" podUID="e5a19454-9b0c-420a-9e4a-ff2d2cff5857" pod="tigera-operator/tigera-operator-5bf8dfcb4-lw7lx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:03.052744 kubelet[2796]: I0813 01:47:03.052655 2796 kubelet.go:2306] "Pod admission denied" podUID="ae5dfead-4740-45f6-bd65-be773d4930b1" pod="tigera-operator/tigera-operator-5bf8dfcb4-bpljt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:03.141359 kubelet[2796]: I0813 01:47:03.141287 2796 kubelet.go:2306] "Pod admission denied" podUID="3430a684-4898-465c-b252-7408a9a4fe88" pod="tigera-operator/tigera-operator-5bf8dfcb4-p2df9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:03.240696 kubelet[2796]: I0813 01:47:03.240610 2796 kubelet.go:2306] "Pod admission denied" podUID="a76e5f84-85ff-4106-9c2b-a70e41d2005b" pod="tigera-operator/tigera-operator-5bf8dfcb4-4wz2k" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:03.398786 kubelet[2796]: E0813 01:47:03.398486 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:47:03.400040 kubelet[2796]: E0813 01:47:03.400016 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\"\"" pod="calico-system/calico-node-8j6cb" podUID="416d9de4-5101-44c9-b974-0fedf790aa67" Aug 13 01:47:03.445944 kubelet[2796]: I0813 01:47:03.445875 2796 kubelet.go:2306] "Pod admission denied" podUID="71811fa1-b105-44a2-8b44-965501bbdbb5" pod="tigera-operator/tigera-operator-5bf8dfcb4-gjs67" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:03.544679 kubelet[2796]: I0813 01:47:03.544156 2796 kubelet.go:2306] "Pod admission denied" podUID="f2717a75-3c3e-49e7-83b9-55fe63e3c029" pod="tigera-operator/tigera-operator-5bf8dfcb4-qp4qf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:03.644945 kubelet[2796]: I0813 01:47:03.644875 2796 kubelet.go:2306] "Pod admission denied" podUID="f830bb01-0ece-44d8-9d49-e2bcf837f04b" pod="tigera-operator/tigera-operator-5bf8dfcb4-lbffw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:03.742345 kubelet[2796]: I0813 01:47:03.742269 2796 kubelet.go:2306] "Pod admission denied" podUID="84c9b474-a08d-463f-a891-0c97097eeea3" pod="tigera-operator/tigera-operator-5bf8dfcb4-cs8kk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:03.845721 kubelet[2796]: I0813 01:47:03.845532 2796 kubelet.go:2306] "Pod admission denied" podUID="580aa3c9-43e6-4a06-9c6a-fe6f7b5b2844" pod="tigera-operator/tigera-operator-5bf8dfcb4-gk9w9" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:04.043178 kubelet[2796]: I0813 01:47:04.043112 2796 kubelet.go:2306] "Pod admission denied" podUID="44e5c2c3-6279-4cd1-acd6-91d43c16df4d" pod="tigera-operator/tigera-operator-5bf8dfcb4-kznz2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:04.142496 kubelet[2796]: I0813 01:47:04.141926 2796 kubelet.go:2306] "Pod admission denied" podUID="7ede3fb6-2149-49d9-9013-84cab21f538a" pod="tigera-operator/tigera-operator-5bf8dfcb4-96ttv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:04.193889 kubelet[2796]: I0813 01:47:04.193851 2796 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:47:04.193889 kubelet[2796]: I0813 01:47:04.193897 2796 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:47:04.196323 kubelet[2796]: I0813 01:47:04.196300 2796 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:47:04.207411 kubelet[2796]: I0813 01:47:04.207380 2796 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:47:04.207559 kubelet[2796]: I0813 01:47:04.207459 2796 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7","kube-system/coredns-7c65d6cfc9-6vrr8","kube-system/coredns-7c65d6cfc9-djvw6","calico-system/csi-node-driver-bk2p6","calico-system/calico-node-8j6cb","calico-system/calico-typha-bf6ccb678-cdfdr","kube-system/kube-controller-manager-172-232-7-32","kube-system/kube-proxy-dmp9l","kube-system/kube-apiserver-172-232-7-32","kube-system/kube-scheduler-172-232-7-32"] Aug 13 01:47:04.207559 kubelet[2796]: E0813 01:47:04.207512 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7" Aug 13 01:47:04.207559 kubelet[2796]: E0813 01:47:04.207529 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-6vrr8" Aug 13 01:47:04.207559 kubelet[2796]: E0813 01:47:04.207541 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-djvw6" Aug 13 01:47:04.207559 kubelet[2796]: E0813 01:47:04.207551 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-bk2p6" Aug 13 01:47:04.207559 kubelet[2796]: E0813 01:47:04.207559 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-8j6cb" Aug 13 01:47:04.207903 kubelet[2796]: E0813 01:47:04.207573 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-bf6ccb678-cdfdr" Aug 13 01:47:04.207903 kubelet[2796]: E0813 01:47:04.207584 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-32" Aug 13 01:47:04.207903 kubelet[2796]: E0813 01:47:04.207595 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-dmp9l" Aug 13 01:47:04.207903 kubelet[2796]: E0813 01:47:04.207603 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-32" Aug 13 01:47:04.207903 kubelet[2796]: E0813 01:47:04.207612 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-32" Aug 13 
01:47:04.207903 kubelet[2796]: I0813 01:47:04.207622 2796 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:47:04.241895 kubelet[2796]: I0813 01:47:04.241834 2796 kubelet.go:2306] "Pod admission denied" podUID="bd04193a-f18a-4a11-9634-5415cfdfa766" pod="tigera-operator/tigera-operator-5bf8dfcb4-smmlh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:04.353792 kubelet[2796]: I0813 01:47:04.352962 2796 kubelet.go:2306] "Pod admission denied" podUID="e65432e2-1f46-47ad-a282-fb2be5562127" pod="tigera-operator/tigera-operator-5bf8dfcb4-nm7mx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:04.399066 containerd[1581]: time="2025-08-13T01:47:04.398428999Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bk2p6,Uid:0e8898c7-a3f5-4010-bb1f-d756673c29b2,Namespace:calico-system,Attempt:0,}" Aug 13 01:47:04.446272 kubelet[2796]: I0813 01:47:04.446217 2796 kubelet.go:2306] "Pod admission denied" podUID="04879ee2-abe0-4e14-96be-ee627119eb8a" pod="tigera-operator/tigera-operator-5bf8dfcb4-87pgt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:04.472409 containerd[1581]: time="2025-08-13T01:47:04.472351528Z" level=error msg="Failed to destroy network for sandbox \"d8c0e569802480ab4ec68c13c5e108ed55295b0aa8c2aaf33c1f35636fe12301\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:04.477707 systemd[1]: run-netns-cni\x2d6e0f1f0f\x2d31e1\x2deb96\x2d154d\x2d2e87ea81bb5c.mount: Deactivated successfully. Aug 13 01:47:04.478114 containerd[1581]: time="2025-08-13T01:47:04.477950220Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bk2p6,Uid:0e8898c7-a3f5-4010-bb1f-d756673c29b2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8c0e569802480ab4ec68c13c5e108ed55295b0aa8c2aaf33c1f35636fe12301\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:04.479531 kubelet[2796]: E0813 01:47:04.479321 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8c0e569802480ab4ec68c13c5e108ed55295b0aa8c2aaf33c1f35636fe12301\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:04.479698 kubelet[2796]: E0813 01:47:04.479670 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8c0e569802480ab4ec68c13c5e108ed55295b0aa8c2aaf33c1f35636fe12301\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bk2p6" Aug 13 01:47:04.479845 kubelet[2796]: E0813 01:47:04.479798 2796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d8c0e569802480ab4ec68c13c5e108ed55295b0aa8c2aaf33c1f35636fe12301\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bk2p6" Aug 13 01:47:04.480246 kubelet[2796]: E0813 01:47:04.480129 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bk2p6_calico-system(0e8898c7-a3f5-4010-bb1f-d756673c29b2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bk2p6_calico-system(0e8898c7-a3f5-4010-bb1f-d756673c29b2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d8c0e569802480ab4ec68c13c5e108ed55295b0aa8c2aaf33c1f35636fe12301\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bk2p6" podUID="0e8898c7-a3f5-4010-bb1f-d756673c29b2" Aug 13 01:47:04.645027 kubelet[2796]: I0813 01:47:04.644939 2796 kubelet.go:2306] "Pod admission denied" podUID="7685c2c9-4c4f-48c0-85be-ebe8a1a6058c" pod="tigera-operator/tigera-operator-5bf8dfcb4-kqlhc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:04.742345 kubelet[2796]: I0813 01:47:04.742252 2796 kubelet.go:2306] "Pod admission denied" podUID="6c1608a5-1537-4b87-8806-c9252e8245b0" pod="tigera-operator/tigera-operator-5bf8dfcb4-g8hxp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:04.844191 kubelet[2796]: I0813 01:47:04.844117 2796 kubelet.go:2306] "Pod admission denied" podUID="ead983ac-2ae9-476f-a8a8-eac4de638db2" pod="tigera-operator/tigera-operator-5bf8dfcb4-24pml" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:04.943476 kubelet[2796]: I0813 01:47:04.943403 2796 kubelet.go:2306] "Pod admission denied" podUID="927ad2ff-36e0-42cf-96c5-efdc641733c8" pod="tigera-operator/tigera-operator-5bf8dfcb4-gqmvs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:05.048903 kubelet[2796]: I0813 01:47:05.048727 2796 kubelet.go:2306] "Pod admission denied" podUID="12bc5683-b114-4ec5-b064-34e2259360b6" pod="tigera-operator/tigera-operator-5bf8dfcb4-w6msz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:05.144806 kubelet[2796]: I0813 01:47:05.144750 2796 kubelet.go:2306] "Pod admission denied" podUID="4ec3fd94-52d8-4d4c-93da-96371ac84e7b" pod="tigera-operator/tigera-operator-5bf8dfcb4-zkhv9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:05.241191 kubelet[2796]: I0813 01:47:05.241120 2796 kubelet.go:2306] "Pod admission denied" podUID="4fe0f170-1e29-4e13-a4cc-fae9d40ba398" pod="tigera-operator/tigera-operator-5bf8dfcb4-7nlhz" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:05.399773 kubelet[2796]: E0813 01:47:05.398453 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:47:05.403197 containerd[1581]: time="2025-08-13T01:47:05.403157028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6vrr8,Uid:cbf6d4b0-f3bc-4a92-9977-6d91de60b65f,Namespace:kube-system,Attempt:0,}" Aug 13 01:47:05.454326 kubelet[2796]: I0813 01:47:05.454274 2796 kubelet.go:2306] "Pod admission denied" podUID="71f6e399-fb3d-47f6-8c9f-184aa09df12e" pod="tigera-operator/tigera-operator-5bf8dfcb4-mdxrn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:05.501901 containerd[1581]: time="2025-08-13T01:47:05.501828429Z" level=error msg="Failed to destroy network for sandbox \"4ec049723c4c5b8cda75081bd0a7d13a6eafcab596c22dcd40333953702cbca1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:05.506391 systemd[1]: run-netns-cni\x2d0ca01ac9\x2d71c5\x2d2e38\x2d13f8\x2d3eba6387f6eb.mount: Deactivated successfully. Aug 13 01:47:05.507853 containerd[1581]: time="2025-08-13T01:47:05.506731586Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6vrr8,Uid:cbf6d4b0-f3bc-4a92-9977-6d91de60b65f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ec049723c4c5b8cda75081bd0a7d13a6eafcab596c22dcd40333953702cbca1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:05.507947 kubelet[2796]: E0813 01:47:05.507329 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ec049723c4c5b8cda75081bd0a7d13a6eafcab596c22dcd40333953702cbca1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:05.507947 kubelet[2796]: E0813 01:47:05.507411 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ec049723c4c5b8cda75081bd0a7d13a6eafcab596c22dcd40333953702cbca1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-6vrr8" Aug 13 01:47:05.507947 kubelet[2796]: E0813 01:47:05.507439 2796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ec049723c4c5b8cda75081bd0a7d13a6eafcab596c22dcd40333953702cbca1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-6vrr8" Aug 13 01:47:05.507947 kubelet[2796]: E0813 01:47:05.507490 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-6vrr8_kube-system(cbf6d4b0-f3bc-4a92-9977-6d91de60b65f)\" with CreatePodSandboxError: 
\"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-6vrr8_kube-system(cbf6d4b0-f3bc-4a92-9977-6d91de60b65f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ec049723c4c5b8cda75081bd0a7d13a6eafcab596c22dcd40333953702cbca1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-6vrr8" podUID="cbf6d4b0-f3bc-4a92-9977-6d91de60b65f" Aug 13 01:47:05.543072 kubelet[2796]: I0813 01:47:05.542966 2796 kubelet.go:2306] "Pod admission denied" podUID="c38fb9e2-dd0b-4491-a605-fa14bf2cb547" pod="tigera-operator/tigera-operator-5bf8dfcb4-hglsd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:05.645338 kubelet[2796]: I0813 01:47:05.645263 2796 kubelet.go:2306] "Pod admission denied" podUID="b4646913-84b2-43cd-bcc6-25c37adb5f1c" pod="tigera-operator/tigera-operator-5bf8dfcb4-ll9qj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:05.743310 kubelet[2796]: I0813 01:47:05.743215 2796 kubelet.go:2306] "Pod admission denied" podUID="b9279bcc-0592-41b5-b2f4-4369fc1d5901" pod="tigera-operator/tigera-operator-5bf8dfcb4-54gxq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:05.853425 kubelet[2796]: I0813 01:47:05.851875 2796 kubelet.go:2306] "Pod admission denied" podUID="acf2c215-8157-4c35-b3d0-f28eaa143b9f" pod="tigera-operator/tigera-operator-5bf8dfcb4-f245g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:06.044481 kubelet[2796]: I0813 01:47:06.044293 2796 kubelet.go:2306] "Pod admission denied" podUID="87ee8976-8dd1-4a92-bb1d-f0b5b51949fd" pod="tigera-operator/tigera-operator-5bf8dfcb4-nr8rv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:06.142607 kubelet[2796]: I0813 01:47:06.142514 2796 kubelet.go:2306] "Pod admission denied" podUID="2c60c896-773d-4b7e-8455-1434557df228" pod="tigera-operator/tigera-operator-5bf8dfcb4-26brc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:06.244126 kubelet[2796]: I0813 01:47:06.244062 2796 kubelet.go:2306] "Pod admission denied" podUID="b4618075-dd96-45e2-9a91-71e43feae16b" pod="tigera-operator/tigera-operator-5bf8dfcb4-gtjvc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:06.342138 kubelet[2796]: I0813 01:47:06.341957 2796 kubelet.go:2306] "Pod admission denied" podUID="28fee73b-a427-46fe-9712-cb668243e198" pod="tigera-operator/tigera-operator-5bf8dfcb4-7s26l" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:06.397106 kubelet[2796]: E0813 01:47:06.397049 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:47:06.398419 containerd[1581]: time="2025-08-13T01:47:06.398097511Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-djvw6,Uid:981696e3-42b0-4ae8-b44b-fa439a03a402,Namespace:kube-system,Attempt:0,}" Aug 13 01:47:06.453271 kubelet[2796]: I0813 01:47:06.453210 2796 kubelet.go:2306] "Pod admission denied" podUID="83f899be-7d3d-46da-9b4f-a64f6e722e8e" pod="tigera-operator/tigera-operator-5bf8dfcb4-8l6ww" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:06.479375 containerd[1581]: time="2025-08-13T01:47:06.479276489Z" level=error msg="Failed to destroy network for sandbox \"479ab73d2589e90394976056cdc809e1f75d8dab1233d0dd98a2d0a57abf4402\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:06.482567 systemd[1]: run-netns-cni\x2d95e32888\x2d7e87\x2ddce2\x2d7487\x2d171a48024581.mount: Deactivated successfully. Aug 13 01:47:06.485225 containerd[1581]: time="2025-08-13T01:47:06.485059692Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-djvw6,Uid:981696e3-42b0-4ae8-b44b-fa439a03a402,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"479ab73d2589e90394976056cdc809e1f75d8dab1233d0dd98a2d0a57abf4402\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:06.485881 kubelet[2796]: E0813 01:47:06.485835 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"479ab73d2589e90394976056cdc809e1f75d8dab1233d0dd98a2d0a57abf4402\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:06.485951 kubelet[2796]: E0813 01:47:06.485913 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"479ab73d2589e90394976056cdc809e1f75d8dab1233d0dd98a2d0a57abf4402\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-djvw6" Aug 13 01:47:06.485951 kubelet[2796]: E0813 01:47:06.485939 2796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"479ab73d2589e90394976056cdc809e1f75d8dab1233d0dd98a2d0a57abf4402\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-djvw6" Aug 13 01:47:06.486033 kubelet[2796]: E0813 01:47:06.485992 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-djvw6_kube-system(981696e3-42b0-4ae8-b44b-fa439a03a402)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-djvw6_kube-system(981696e3-42b0-4ae8-b44b-fa439a03a402)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"479ab73d2589e90394976056cdc809e1f75d8dab1233d0dd98a2d0a57abf4402\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-djvw6" podUID="981696e3-42b0-4ae8-b44b-fa439a03a402" Aug 13 01:47:06.643813 kubelet[2796]: I0813 01:47:06.643230 2796 kubelet.go:2306] "Pod admission denied" podUID="6aa348b0-f88a-4a2d-a269-58d6e0af8b9f" pod="tigera-operator/tigera-operator-5bf8dfcb4-dfh58" reason="Evicted" 
message="The node had condition: [DiskPressure]. " Aug 13 01:47:06.744014 kubelet[2796]: I0813 01:47:06.743940 2796 kubelet.go:2306] "Pod admission denied" podUID="0fbe4d77-9661-4871-8c89-b917a2756dc9" pod="tigera-operator/tigera-operator-5bf8dfcb4-gvkdh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:06.794853 kubelet[2796]: I0813 01:47:06.794776 2796 kubelet.go:2306] "Pod admission denied" podUID="7cfe26fa-9ca6-4e35-9aee-b2af75501300" pod="tigera-operator/tigera-operator-5bf8dfcb4-lnsb5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:06.900165 kubelet[2796]: I0813 01:47:06.898847 2796 kubelet.go:2306] "Pod admission denied" podUID="1450c491-197d-43e6-9be0-fa294ec876cd" pod="tigera-operator/tigera-operator-5bf8dfcb4-zmd5r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:07.092234 kubelet[2796]: I0813 01:47:07.092167 2796 kubelet.go:2306] "Pod admission denied" podUID="f470671f-0236-479b-954e-54919e57b090" pod="tigera-operator/tigera-operator-5bf8dfcb4-gwkb7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:07.194746 kubelet[2796]: I0813 01:47:07.194556 2796 kubelet.go:2306] "Pod admission denied" podUID="d59cb3f9-f585-4b3b-8da5-7a6d0d0d579d" pod="tigera-operator/tigera-operator-5bf8dfcb4-bs6lf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:07.297892 kubelet[2796]: I0813 01:47:07.297400 2796 kubelet.go:2306] "Pod admission denied" podUID="ef551201-c712-4fee-9c4b-7aec4cffa646" pod="tigera-operator/tigera-operator-5bf8dfcb4-tj9mk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:07.393286 kubelet[2796]: I0813 01:47:07.393184 2796 kubelet.go:2306] "Pod admission denied" podUID="301ac164-b08f-47e1-9ba9-ec05afd92e78" pod="tigera-operator/tigera-operator-5bf8dfcb4-vtkjg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:07.444891 kubelet[2796]: I0813 01:47:07.444728 2796 kubelet.go:2306] "Pod admission denied" podUID="c2ff6062-fde3-478c-bd2d-6600ce00f6cb" pod="tigera-operator/tigera-operator-5bf8dfcb4-4db76" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:07.542324 kubelet[2796]: I0813 01:47:07.542257 2796 kubelet.go:2306] "Pod admission denied" podUID="8e9674c6-0d9a-4f07-af85-4a0499e7f279" pod="tigera-operator/tigera-operator-5bf8dfcb4-48htz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:07.643569 kubelet[2796]: I0813 01:47:07.643488 2796 kubelet.go:2306] "Pod admission denied" podUID="bb7e5fcd-8f38-436a-b557-a0a5b1ee941f" pod="tigera-operator/tigera-operator-5bf8dfcb4-z5gvn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:07.743529 kubelet[2796]: I0813 01:47:07.743457 2796 kubelet.go:2306] "Pod admission denied" podUID="cb54fc55-ad57-4aa4-8d28-fe3d16c4bd8b" pod="tigera-operator/tigera-operator-5bf8dfcb4-mcjwj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:07.853168 kubelet[2796]: I0813 01:47:07.852492 2796 kubelet.go:2306] "Pod admission denied" podUID="2249908c-fffc-413f-800c-55a6fd47d8d0" pod="tigera-operator/tigera-operator-5bf8dfcb4-l9z99" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:07.949981 kubelet[2796]: I0813 01:47:07.949910 2796 kubelet.go:2306] "Pod admission denied" podUID="a9ce69d0-0c7c-4f70-be98-cb53167b637d" pod="tigera-operator/tigera-operator-5bf8dfcb4-ccb2r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:08.041953 kubelet[2796]: I0813 01:47:08.041578 2796 kubelet.go:2306] "Pod admission denied" podUID="4865cd88-52bc-4479-aa82-bb151edba575" pod="tigera-operator/tigera-operator-5bf8dfcb4-l8v4m" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:08.141363 kubelet[2796]: I0813 01:47:08.141289 2796 kubelet.go:2306] "Pod admission denied" podUID="88525423-c435-46e5-b593-40b800c7abb2" pod="tigera-operator/tigera-operator-5bf8dfcb4-ffnmj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:08.346493 kubelet[2796]: I0813 01:47:08.346296 2796 kubelet.go:2306] "Pod admission denied" podUID="dd7f0f3c-be65-4989-bf37-a10d95dec1b5" pod="tigera-operator/tigera-operator-5bf8dfcb4-6mg77" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:08.445741 kubelet[2796]: I0813 01:47:08.445672 2796 kubelet.go:2306] "Pod admission denied" podUID="3f8dac31-d9fd-4e0e-8f2c-2ffac7ff6ae9" pod="tigera-operator/tigera-operator-5bf8dfcb4-5dsc9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:08.544480 kubelet[2796]: I0813 01:47:08.544409 2796 kubelet.go:2306] "Pod admission denied" podUID="ddf98db1-b1ea-4759-8a1e-af9b7e97f934" pod="tigera-operator/tigera-operator-5bf8dfcb4-wvl96" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:08.642635 kubelet[2796]: I0813 01:47:08.642167 2796 kubelet.go:2306] "Pod admission denied" podUID="428f2f58-c2b4-498d-9c4c-f5eec5a2ed76" pod="tigera-operator/tigera-operator-5bf8dfcb4-lwhdp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:08.754680 kubelet[2796]: I0813 01:47:08.752336 2796 kubelet.go:2306] "Pod admission denied" podUID="52b31096-a8e9-4b09-bb0d-6fcaf8c4c4ff" pod="tigera-operator/tigera-operator-5bf8dfcb4-t6dmg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:08.842441 kubelet[2796]: I0813 01:47:08.842361 2796 kubelet.go:2306] "Pod admission denied" podUID="64dc4468-2f2b-4488-a493-80613f367314" pod="tigera-operator/tigera-operator-5bf8dfcb4-pnmvp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:08.942737 kubelet[2796]: I0813 01:47:08.941991 2796 kubelet.go:2306] "Pod admission denied" podUID="8b1cceed-4cc3-4975-9436-55a8b18f45e0" pod="tigera-operator/tigera-operator-5bf8dfcb4-tn78d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:09.044920 kubelet[2796]: I0813 01:47:09.044796 2796 kubelet.go:2306] "Pod admission denied" podUID="c390bf7e-6ad2-4287-a673-bf8aa2575273" pod="tigera-operator/tigera-operator-5bf8dfcb4-g5qb2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:09.150671 kubelet[2796]: I0813 01:47:09.150528 2796 kubelet.go:2306] "Pod admission denied" podUID="6335653d-dbdd-437b-afc5-3b2a56ca9509" pod="tigera-operator/tigera-operator-5bf8dfcb4-jzbf4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:09.243642 kubelet[2796]: I0813 01:47:09.243562 2796 kubelet.go:2306] "Pod admission denied" podUID="b7340e78-223e-4b01-b12d-73c826a09c77" pod="tigera-operator/tigera-operator-5bf8dfcb4-hlsrv" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:09.294101 kubelet[2796]: I0813 01:47:09.294033 2796 kubelet.go:2306] "Pod admission denied" podUID="c7fce008-2ece-4212-a896-3843ca89ed4d" pod="tigera-operator/tigera-operator-5bf8dfcb4-bslmw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:09.395515 kubelet[2796]: I0813 01:47:09.395454 2796 kubelet.go:2306] "Pod admission denied" podUID="bfe542a6-bc35-4a20-b16e-ef5c7b1855f7" pod="tigera-operator/tigera-operator-5bf8dfcb4-tdw2q" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:09.495796 kubelet[2796]: I0813 01:47:09.495436 2796 kubelet.go:2306] "Pod admission denied" podUID="64d08462-9f16-4f2a-8f9f-cb5016d01cb5" pod="tigera-operator/tigera-operator-5bf8dfcb4-b6mhs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:09.592784 kubelet[2796]: I0813 01:47:09.592697 2796 kubelet.go:2306] "Pod admission denied" podUID="cbf8e5a6-3037-4247-93f3-dae9d3f7cb53" pod="tigera-operator/tigera-operator-5bf8dfcb4-54s24" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:09.695266 kubelet[2796]: I0813 01:47:09.695190 2796 kubelet.go:2306] "Pod admission denied" podUID="810ac3d5-30e8-4fd9-b064-ae9120905ffc" pod="tigera-operator/tigera-operator-5bf8dfcb4-6wkcg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:09.795962 kubelet[2796]: I0813 01:47:09.795365 2796 kubelet.go:2306] "Pod admission denied" podUID="8d265768-a03b-47a5-af8f-c29efce8f5f2" pod="tigera-operator/tigera-operator-5bf8dfcb4-tsnz5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:09.896172 kubelet[2796]: I0813 01:47:09.896103 2796 kubelet.go:2306] "Pod admission denied" podUID="a07575ca-5676-4069-ae35-6396ff9e1365" pod="tigera-operator/tigera-operator-5bf8dfcb4-lnwvw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:09.994855 kubelet[2796]: I0813 01:47:09.994784 2796 kubelet.go:2306] "Pod admission denied" podUID="7cd5d5db-ade2-4857-bb74-a0cf95b9b237" pod="tigera-operator/tigera-operator-5bf8dfcb4-556f4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:10.095062 kubelet[2796]: I0813 01:47:10.094898 2796 kubelet.go:2306] "Pod admission denied" podUID="6a0e14e6-fc36-4f76-bdf2-4953a7202b6d" pod="tigera-operator/tigera-operator-5bf8dfcb4-zhxcd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:10.192750 kubelet[2796]: I0813 01:47:10.192671 2796 kubelet.go:2306] "Pod admission denied" podUID="a79e1c8d-20ca-441d-b8e8-46136aca8223" pod="tigera-operator/tigera-operator-5bf8dfcb4-vllh9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:10.295233 kubelet[2796]: I0813 01:47:10.295165 2796 kubelet.go:2306] "Pod admission denied" podUID="6c396e24-adb2-4c42-9727-159b642dad19" pod="tigera-operator/tigera-operator-5bf8dfcb4-8hjl5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:10.393030 kubelet[2796]: I0813 01:47:10.392530 2796 kubelet.go:2306] "Pod admission denied" podUID="e69fb97b-9b95-4431-ae30-8ab1d3d7377a" pod="tigera-operator/tigera-operator-5bf8dfcb4-txvfv" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:10.398992 containerd[1581]: time="2025-08-13T01:47:10.398920391Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d5dd9ff6-b6gw7,Uid:4720bedc-4719-4a57-b2ff-e5b21f7acb7f,Namespace:calico-system,Attempt:0,}" Aug 13 01:47:10.464348 containerd[1581]: time="2025-08-13T01:47:10.464279178Z" level=error msg="Failed to destroy network for sandbox \"9b5ff380043e173d3c8eaab885b0bb66860a045bd63e72ccfc1a3373b58ecbc5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:10.467736 containerd[1581]: time="2025-08-13T01:47:10.466302342Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d5dd9ff6-b6gw7,Uid:4720bedc-4719-4a57-b2ff-e5b21f7acb7f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b5ff380043e173d3c8eaab885b0bb66860a045bd63e72ccfc1a3373b58ecbc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:10.468261 kubelet[2796]: E0813 01:47:10.468139 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b5ff380043e173d3c8eaab885b0bb66860a045bd63e72ccfc1a3373b58ecbc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:10.469500 kubelet[2796]: E0813 01:47:10.468782 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b5ff380043e173d3c8eaab885b0bb66860a045bd63e72ccfc1a3373b58ecbc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7" Aug 13 01:47:10.469500 kubelet[2796]: E0813 01:47:10.468815 2796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9b5ff380043e173d3c8eaab885b0bb66860a045bd63e72ccfc1a3373b58ecbc5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7" Aug 13 01:47:10.469500 kubelet[2796]: E0813 01:47:10.468872 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86d5dd9ff6-b6gw7_calico-system(4720bedc-4719-4a57-b2ff-e5b21f7acb7f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86d5dd9ff6-b6gw7_calico-system(4720bedc-4719-4a57-b2ff-e5b21f7acb7f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9b5ff380043e173d3c8eaab885b0bb66860a045bd63e72ccfc1a3373b58ecbc5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7" 
podUID="4720bedc-4719-4a57-b2ff-e5b21f7acb7f" Aug 13 01:47:10.468434 systemd[1]: run-netns-cni\x2d5ccd9753\x2d46ea\x2de368\x2d1cca\x2d871471b37aa3.mount: Deactivated successfully. Aug 13 01:47:10.493136 kubelet[2796]: I0813 01:47:10.493064 2796 kubelet.go:2306] "Pod admission denied" podUID="e3d2e4d7-c8a2-4d78-bfef-e2551b2b0152" pod="tigera-operator/tigera-operator-5bf8dfcb4-5d4d7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:10.544393 kubelet[2796]: I0813 01:47:10.544315 2796 kubelet.go:2306] "Pod admission denied" podUID="738ad006-89af-487f-b37d-f30aabd86d7a" pod="tigera-operator/tigera-operator-5bf8dfcb4-ctg7c" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:10.642919 kubelet[2796]: I0813 01:47:10.642852 2796 kubelet.go:2306] "Pod admission denied" podUID="b80618e5-ca95-4d74-9009-063a29dbaf68" pod="tigera-operator/tigera-operator-5bf8dfcb4-qnrlq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:10.749605 kubelet[2796]: I0813 01:47:10.749528 2796 kubelet.go:2306] "Pod admission denied" podUID="07a78a2d-2d10-441d-99d4-fe7740c0ccee" pod="tigera-operator/tigera-operator-5bf8dfcb4-4gw29" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:10.845684 kubelet[2796]: I0813 01:47:10.845580 2796 kubelet.go:2306] "Pod admission denied" podUID="1d869b95-7a31-4a19-bef8-600431e78fe0" pod="tigera-operator/tigera-operator-5bf8dfcb4-hmrrx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:11.062282 kubelet[2796]: I0813 01:47:11.061178 2796 kubelet.go:2306] "Pod admission denied" podUID="ab94602d-8978-4da1-af94-30f7c7391b62" pod="tigera-operator/tigera-operator-5bf8dfcb4-bz7qg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:11.144302 kubelet[2796]: I0813 01:47:11.144222 2796 kubelet.go:2306] "Pod admission denied" podUID="89735e25-a08a-46f3-a5aa-f15de4f90ff1" pod="tigera-operator/tigera-operator-5bf8dfcb4-p5jdb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:11.259402 kubelet[2796]: I0813 01:47:11.256687 2796 kubelet.go:2306] "Pod admission denied" podUID="af2d8e9c-6d8d-4e8c-82b5-7fb3c39ac2e2" pod="tigera-operator/tigera-operator-5bf8dfcb4-gjw7x" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:11.345521 kubelet[2796]: I0813 01:47:11.345010 2796 kubelet.go:2306] "Pod admission denied" podUID="23cbe443-bbdd-431c-a100-682e25628209" pod="tigera-operator/tigera-operator-5bf8dfcb4-9vrvb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:11.457676 kubelet[2796]: I0813 01:47:11.456217 2796 kubelet.go:2306] "Pod admission denied" podUID="c1af354b-77ea-469b-84ed-91fe60654ea8" pod="tigera-operator/tigera-operator-5bf8dfcb4-5h297" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:11.545252 kubelet[2796]: I0813 01:47:11.545180 2796 kubelet.go:2306] "Pod admission denied" podUID="d0eee09c-9a0e-4cf5-b001-fc6035608441" pod="tigera-operator/tigera-operator-5bf8dfcb4-kwxsb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:11.650511 kubelet[2796]: I0813 01:47:11.649933 2796 kubelet.go:2306] "Pod admission denied" podUID="3de1a2a9-59aa-410f-afdd-31f6f77b12fd" pod="tigera-operator/tigera-operator-5bf8dfcb4-fkx5v" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:11.745287 kubelet[2796]: I0813 01:47:11.745183 2796 kubelet.go:2306] "Pod admission denied" podUID="5717952a-f5bb-4477-af64-14bd4f512b05" pod="tigera-operator/tigera-operator-5bf8dfcb4-qldbp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:11.855981 kubelet[2796]: I0813 01:47:11.855880 2796 kubelet.go:2306] "Pod admission denied" podUID="0056d231-62b4-4406-8f0f-d19402d6da2f" pod="tigera-operator/tigera-operator-5bf8dfcb4-gj2tx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:11.947469 kubelet[2796]: I0813 01:47:11.946840 2796 kubelet.go:2306] "Pod admission denied" podUID="ba03f135-f691-4a2e-94af-b8bb9029977c" pod="tigera-operator/tigera-operator-5bf8dfcb4-x75s8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:12.044455 kubelet[2796]: I0813 01:47:12.044374 2796 kubelet.go:2306] "Pod admission denied" podUID="b367288c-3672-4072-8987-6f0115176814" pod="tigera-operator/tigera-operator-5bf8dfcb4-hbx2k" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:12.243301 kubelet[2796]: I0813 01:47:12.243248 2796 kubelet.go:2306] "Pod admission denied" podUID="ca8c1472-0a20-431a-a89d-157176d6bea4" pod="tigera-operator/tigera-operator-5bf8dfcb4-896gs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:12.343659 kubelet[2796]: I0813 01:47:12.343559 2796 kubelet.go:2306] "Pod admission denied" podUID="b3839cee-f6f2-4ebe-b09e-91b0e9db93b0" pod="tigera-operator/tigera-operator-5bf8dfcb4-4f98f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:12.445047 kubelet[2796]: I0813 01:47:12.444985 2796 kubelet.go:2306] "Pod admission denied" podUID="e3e4bd12-4457-4bf8-ac39-b810d122e3b7" pod="tigera-operator/tigera-operator-5bf8dfcb4-nmjwg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:12.547042 kubelet[2796]: I0813 01:47:12.546473 2796 kubelet.go:2306] "Pod admission denied" podUID="148d6afe-990c-4659-a143-a2d31d7b0e13" pod="tigera-operator/tigera-operator-5bf8dfcb4-qtfb7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:12.663745 kubelet[2796]: I0813 01:47:12.663665 2796 kubelet.go:2306] "Pod admission denied" podUID="9ea1062d-dc63-4057-a992-dbd5e4440cc8" pod="tigera-operator/tigera-operator-5bf8dfcb4-rdfnz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:12.846478 kubelet[2796]: I0813 01:47:12.846289 2796 kubelet.go:2306] "Pod admission denied" podUID="6b6122e3-187f-4e01-84a8-e4b81fd85401" pod="tigera-operator/tigera-operator-5bf8dfcb4-9cqjw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:12.945860 kubelet[2796]: I0813 01:47:12.945786 2796 kubelet.go:2306] "Pod admission denied" podUID="c221da22-1d56-40ef-b4b7-3143ab886679" pod="tigera-operator/tigera-operator-5bf8dfcb4-8492j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:12.994526 kubelet[2796]: I0813 01:47:12.994436 2796 kubelet.go:2306] "Pod admission denied" podUID="812ce9b6-0fe7-4434-9d78-0111aba5bdb8" pod="tigera-operator/tigera-operator-5bf8dfcb4-87znm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:13.113001 kubelet[2796]: I0813 01:47:13.110843 2796 kubelet.go:2306] "Pod admission denied" podUID="15ebc229-87c4-4855-8458-4e57f036cc03" pod="tigera-operator/tigera-operator-5bf8dfcb4-nc2lm" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:13.198985 kubelet[2796]: I0813 01:47:13.198909 2796 kubelet.go:2306] "Pod admission denied" podUID="a263cfc5-24c1-48d9-a312-cb3782f4b56f" pod="tigera-operator/tigera-operator-5bf8dfcb4-kdjvj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:13.294513 kubelet[2796]: I0813 01:47:13.294442 2796 kubelet.go:2306] "Pod admission denied" podUID="68b29765-3d93-4092-bc7d-1de848714062" pod="tigera-operator/tigera-operator-5bf8dfcb4-4mx85" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:13.401062 kubelet[2796]: I0813 01:47:13.400524 2796 kubelet.go:2306] "Pod admission denied" podUID="4c042754-95ed-4b21-9da2-fd412ac86624" pod="tigera-operator/tigera-operator-5bf8dfcb4-wswx9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:13.452158 kubelet[2796]: I0813 01:47:13.452092 2796 kubelet.go:2306] "Pod admission denied" podUID="a5f34bc3-2f31-48de-bd02-ac17169a92ae" pod="tigera-operator/tigera-operator-5bf8dfcb4-nzz62" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:13.542537 kubelet[2796]: I0813 01:47:13.542456 2796 kubelet.go:2306] "Pod admission denied" podUID="a617796d-69c3-4684-a351-89795241268b" pod="tigera-operator/tigera-operator-5bf8dfcb4-zshrd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:13.745888 kubelet[2796]: I0813 01:47:13.745817 2796 kubelet.go:2306] "Pod admission denied" podUID="007e66f3-7476-4dc9-a30a-6dddbcffc7fc" pod="tigera-operator/tigera-operator-5bf8dfcb4-h7xql" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:13.852677 kubelet[2796]: I0813 01:47:13.851942 2796 kubelet.go:2306] "Pod admission denied" podUID="a603c72f-8192-41d6-ae9a-555494e09c47" pod="tigera-operator/tigera-operator-5bf8dfcb4-4cvs2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:13.945862 kubelet[2796]: I0813 01:47:13.945775 2796 kubelet.go:2306] "Pod admission denied" podUID="71cb504d-a96f-468a-8501-09666d3acd85" pod="tigera-operator/tigera-operator-5bf8dfcb4-zhbw5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:14.045023 kubelet[2796]: I0813 01:47:14.044774 2796 kubelet.go:2306] "Pod admission denied" podUID="1e05072c-bbff-4642-a0a0-2d1afcd8bfd6" pod="tigera-operator/tigera-operator-5bf8dfcb4-h8jgc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:14.147521 kubelet[2796]: I0813 01:47:14.147442 2796 kubelet.go:2306] "Pod admission denied" podUID="ff650383-91b5-461a-9a81-3eba20dcfe81" pod="tigera-operator/tigera-operator-5bf8dfcb4-hfkbg" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:14.228703 kubelet[2796]: I0813 01:47:14.228523 2796 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:47:14.228703 kubelet[2796]: I0813 01:47:14.228576 2796 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:47:14.233428 kubelet[2796]: I0813 01:47:14.233406 2796 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:47:14.252558 kubelet[2796]: I0813 01:47:14.252526 2796 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:47:14.252853 kubelet[2796]: I0813 01:47:14.252833 2796 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7","kube-system/coredns-7c65d6cfc9-6vrr8","kube-system/coredns-7c65d6cfc9-djvw6","calico-system/csi-node-driver-bk2p6","calico-system/calico-node-8j6cb","calico-system/calico-typha-bf6ccb678-cdfdr","kube-system/kube-controller-manager-172-232-7-32","kube-system/kube-proxy-dmp9l","kube-system/kube-apiserver-172-232-7-32","kube-system/kube-scheduler-172-232-7-32"] Aug 13 01:47:14.253096 kubelet[2796]: E0813 01:47:14.252975 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7" Aug 13 01:47:14.253096 kubelet[2796]: E0813 01:47:14.252995 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-6vrr8" Aug 13 01:47:14.253096 kubelet[2796]: E0813 01:47:14.253004 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-djvw6" Aug 13 01:47:14.253096 kubelet[2796]: E0813 01:47:14.253012 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-bk2p6" Aug 13 01:47:14.253096 kubelet[2796]: E0813 01:47:14.253021 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-8j6cb" Aug 13 01:47:14.253096 kubelet[2796]: E0813 01:47:14.253035 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-bf6ccb678-cdfdr" Aug 13 01:47:14.253096 kubelet[2796]: E0813 01:47:14.253044 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-32" Aug 13 01:47:14.253096 kubelet[2796]: E0813 01:47:14.253053 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-dmp9l" Aug 13 01:47:14.253096 kubelet[2796]: E0813 01:47:14.253063 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-32" Aug 13 01:47:14.253096 kubelet[2796]: E0813 01:47:14.253071 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-32" Aug 13 01:47:14.253096 kubelet[2796]: I0813 01:47:14.253082 2796 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:47:14.255997 kubelet[2796]: I0813 01:47:14.255928 2796 kubelet.go:2306] "Pod admission denied" podUID="dec89066-abb1-4b7f-83c0-60a3a7a71a12" pod="tigera-operator/tigera-operator-5bf8dfcb4-9s2fr" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:14.344768 kubelet[2796]: I0813 01:47:14.344542 2796 kubelet.go:2306] "Pod admission denied" podUID="ca58f145-c55e-42f1-bcf2-2fa3c4af2486" pod="tigera-operator/tigera-operator-5bf8dfcb4-w9psm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:14.445553 kubelet[2796]: I0813 01:47:14.445469 2796 kubelet.go:2306] "Pod admission denied" podUID="55599eca-96e0-4838-8ea6-2a353f102f5d" pod="tigera-operator/tigera-operator-5bf8dfcb4-dltkb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:14.547152 kubelet[2796]: I0813 01:47:14.547073 2796 kubelet.go:2306] "Pod admission denied" podUID="759c59c9-b2d4-4ff4-b873-8750c88d605c" pod="tigera-operator/tigera-operator-5bf8dfcb4-dq8db" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:14.646217 kubelet[2796]: I0813 01:47:14.646023 2796 kubelet.go:2306] "Pod admission denied" podUID="12e93193-e125-4c5e-9569-7651c1bb2f5c" pod="tigera-operator/tigera-operator-5bf8dfcb4-bl4jh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:14.745112 kubelet[2796]: I0813 01:47:14.745030 2796 kubelet.go:2306] "Pod admission denied" podUID="4b5d6a3a-c530-43e3-b1f8-0edc8a0e8023" pod="tigera-operator/tigera-operator-5bf8dfcb4-zvrcl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:14.852919 kubelet[2796]: I0813 01:47:14.852506 2796 kubelet.go:2306] "Pod admission denied" podUID="9c280729-acf9-4b85-bc77-85c533ec5dd5" pod="tigera-operator/tigera-operator-5bf8dfcb4-6wr9c" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:14.945536 kubelet[2796]: I0813 01:47:14.945188 2796 kubelet.go:2306] "Pod admission denied" podUID="141f90ed-073e-4fa6-b10f-87eeae0f46a4" pod="tigera-operator/tigera-operator-5bf8dfcb4-6q95w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:15.043460 kubelet[2796]: I0813 01:47:15.043366 2796 kubelet.go:2306] "Pod admission denied" podUID="3779ac7b-1166-459f-a92a-fc98ba74e9f4" pod="tigera-operator/tigera-operator-5bf8dfcb4-q8pdm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:15.145980 kubelet[2796]: I0813 01:47:15.145892 2796 kubelet.go:2306] "Pod admission denied" podUID="28078502-0cd6-43ca-83ce-a396b3dec335" pod="tigera-operator/tigera-operator-5bf8dfcb4-lbhhh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:15.355961 kubelet[2796]: I0813 01:47:15.355865 2796 kubelet.go:2306] "Pod admission denied" podUID="a031ed9a-a13d-4334-9810-a33f284e2dc9" pod="tigera-operator/tigera-operator-5bf8dfcb4-8sff4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:15.445459 kubelet[2796]: I0813 01:47:15.445392 2796 kubelet.go:2306] "Pod admission denied" podUID="f8f70b07-9f5d-4700-993f-73aea2bd6a66" pod="tigera-operator/tigera-operator-5bf8dfcb4-fp7s6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:15.542562 kubelet[2796]: I0813 01:47:15.542494 2796 kubelet.go:2306] "Pod admission denied" podUID="c07bf9a9-2e0f-4ffd-acc5-0a9fde49547a" pod="tigera-operator/tigera-operator-5bf8dfcb4-8h2bp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:15.645037 kubelet[2796]: I0813 01:47:15.644396 2796 kubelet.go:2306] "Pod admission denied" podUID="a81596fe-9975-40b1-b598-61fd2fa59187" pod="tigera-operator/tigera-operator-5bf8dfcb4-dbfzs" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:15.762518 kubelet[2796]: I0813 01:47:15.762431 2796 kubelet.go:2306] "Pod admission denied" podUID="33661f6a-1618-4600-af92-e583626e7b0a" pod="tigera-operator/tigera-operator-5bf8dfcb4-ghw5t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:15.845244 kubelet[2796]: I0813 01:47:15.845168 2796 kubelet.go:2306] "Pod admission denied" podUID="2ac1b27c-ca05-456e-ad45-675ffbfacebf" pod="tigera-operator/tigera-operator-5bf8dfcb4-tpj9q" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:15.945487 kubelet[2796]: I0813 01:47:15.945306 2796 kubelet.go:2306] "Pod admission denied" podUID="e3de2995-610b-4378-93dd-d8d71222b038" pod="tigera-operator/tigera-operator-5bf8dfcb4-72jwx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:16.042819 kubelet[2796]: I0813 01:47:16.042738 2796 kubelet.go:2306] "Pod admission denied" podUID="7ae16690-2f2a-470e-9a37-7e850f0bd25a" pod="tigera-operator/tigera-operator-5bf8dfcb4-t4lfd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:16.159693 kubelet[2796]: I0813 01:47:16.158503 2796 kubelet.go:2306] "Pod admission denied" podUID="900906cf-71e2-4683-94c2-2ba2b14a8267" pod="tigera-operator/tigera-operator-5bf8dfcb4-5n578" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:16.244047 kubelet[2796]: I0813 01:47:16.243980 2796 kubelet.go:2306] "Pod admission denied" podUID="9aa74ae6-d77f-4920-bedb-9e2c4d430858" pod="tigera-operator/tigera-operator-5bf8dfcb4-knzp2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:16.344195 kubelet[2796]: I0813 01:47:16.344134 2796 kubelet.go:2306] "Pod admission denied" podUID="94b2efd9-432c-4732-9c24-ee5095c81d73" pod="tigera-operator/tigera-operator-5bf8dfcb4-d2vt7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:16.445617 kubelet[2796]: I0813 01:47:16.445541 2796 kubelet.go:2306] "Pod admission denied" podUID="e6078848-924d-48db-b856-ac4ec86c1692" pod="tigera-operator/tigera-operator-5bf8dfcb4-twhnj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:16.552820 kubelet[2796]: I0813 01:47:16.552405 2796 kubelet.go:2306] "Pod admission denied" podUID="5e2e1bce-b4ea-4a00-a8e0-4e9833ea029b" pod="tigera-operator/tigera-operator-5bf8dfcb4-gv7xl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:16.746942 kubelet[2796]: I0813 01:47:16.746866 2796 kubelet.go:2306] "Pod admission denied" podUID="ebef76ad-3ea6-4283-b247-ecb61c42c84f" pod="tigera-operator/tigera-operator-5bf8dfcb4-d78t2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:16.847991 kubelet[2796]: I0813 01:47:16.847067 2796 kubelet.go:2306] "Pod admission denied" podUID="76ef53aa-275b-4070-b391-0c061f7cd2d8" pod="tigera-operator/tigera-operator-5bf8dfcb4-msb69" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:16.894322 kubelet[2796]: I0813 01:47:16.894251 2796 kubelet.go:2306] "Pod admission denied" podUID="ba137949-04ae-49e8-851c-2355b2ed4bbc" pod="tigera-operator/tigera-operator-5bf8dfcb4-jzftw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:17.009112 kubelet[2796]: I0813 01:47:17.007777 2796 kubelet.go:2306] "Pod admission denied" podUID="c3a67a9a-ff59-479e-9d7f-db5364f6e716" pod="tigera-operator/tigera-operator-5bf8dfcb4-ld2h6" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:17.096930 kubelet[2796]: I0813 01:47:17.096857 2796 kubelet.go:2306] "Pod admission denied" podUID="2d065daf-265c-4672-91a0-e5a7d2a0a58f" pod="tigera-operator/tigera-operator-5bf8dfcb4-cmhh7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:17.146441 kubelet[2796]: I0813 01:47:17.146257 2796 kubelet.go:2306] "Pod admission denied" podUID="99debd08-7a81-4973-beb6-332bc56d499d" pod="tigera-operator/tigera-operator-5bf8dfcb4-hj9vq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:17.246273 kubelet[2796]: I0813 01:47:17.246201 2796 kubelet.go:2306] "Pod admission denied" podUID="019dda51-69fd-469d-bb1c-bb5d012102d5" pod="tigera-operator/tigera-operator-5bf8dfcb4-r7gbx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:17.345961 kubelet[2796]: I0813 01:47:17.345877 2796 kubelet.go:2306] "Pod admission denied" podUID="f380a42a-3f8f-46c0-bcf3-6e2530e3550c" pod="tigera-operator/tigera-operator-5bf8dfcb4-zvd6s" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:17.396195 kubelet[2796]: I0813 01:47:17.396128 2796 kubelet.go:2306] "Pod admission denied" podUID="6c4c8fee-bf10-48ea-acb3-2742f655ed70" pod="tigera-operator/tigera-operator-5bf8dfcb4-g2qvm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:17.401801 kubelet[2796]: E0813 01:47:17.401247 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\"\"" pod="calico-system/calico-node-8j6cb" podUID="416d9de4-5101-44c9-b974-0fedf790aa67" Aug 13 01:47:17.509150 kubelet[2796]: I0813 01:47:17.509084 2796 kubelet.go:2306] "Pod admission denied" podUID="c134f505-4e21-4089-bc72-2118b29301d7" pod="tigera-operator/tigera-operator-5bf8dfcb4-wftc9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:17.595918 kubelet[2796]: I0813 01:47:17.595849 2796 kubelet.go:2306] "Pod admission denied" podUID="2f6192bb-9039-4f02-9fd8-dd74406a583d" pod="tigera-operator/tigera-operator-5bf8dfcb4-cq7gp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:17.697916 kubelet[2796]: I0813 01:47:17.697471 2796 kubelet.go:2306] "Pod admission denied" podUID="0aee6b61-c721-44b2-85e5-1885079e6597" pod="tigera-operator/tigera-operator-5bf8dfcb4-ckmdd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:17.796234 kubelet[2796]: I0813 01:47:17.796147 2796 kubelet.go:2306] "Pod admission denied" podUID="8b537096-a5e9-41a8-a93c-e859c3df8c3c" pod="tigera-operator/tigera-operator-5bf8dfcb4-sx2p4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:17.897278 kubelet[2796]: I0813 01:47:17.897202 2796 kubelet.go:2306] "Pod admission denied" podUID="404e9dfe-b7b5-4772-99c2-462d2dd9b8c4" pod="tigera-operator/tigera-operator-5bf8dfcb4-8mf5s" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:17.997381 kubelet[2796]: I0813 01:47:17.997293 2796 kubelet.go:2306] "Pod admission denied" podUID="a9311c38-58f8-4503-999f-2a3dcb255de9" pod="tigera-operator/tigera-operator-5bf8dfcb4-rr6wm" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:18.096258 kubelet[2796]: I0813 01:47:18.096185 2796 kubelet.go:2306] "Pod admission denied" podUID="61d55564-e747-4381-9fc0-b7e0cb71fa17" pod="tigera-operator/tigera-operator-5bf8dfcb4-4pksb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:18.204982 kubelet[2796]: I0813 01:47:18.204894 2796 kubelet.go:2306] "Pod admission denied" podUID="0ee62af5-4106-46a6-bde6-dbac42174bbf" pod="tigera-operator/tigera-operator-5bf8dfcb4-g76f2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:18.296283 kubelet[2796]: I0813 01:47:18.296083 2796 kubelet.go:2306] "Pod admission denied" podUID="5f2d4d16-14ad-4b81-8260-dd11f1a311a3" pod="tigera-operator/tigera-operator-5bf8dfcb4-mf68n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:18.494604 kubelet[2796]: I0813 01:47:18.494533 2796 kubelet.go:2306] "Pod admission denied" podUID="5dc13262-c270-46c3-ba04-7ee6dd33586a" pod="tigera-operator/tigera-operator-5bf8dfcb4-xr8gc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:18.599195 kubelet[2796]: I0813 01:47:18.598453 2796 kubelet.go:2306] "Pod admission denied" podUID="0df8d184-2ce5-447c-9afd-723dca034f70" pod="tigera-operator/tigera-operator-5bf8dfcb4-bns7g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:18.694677 kubelet[2796]: I0813 01:47:18.694576 2796 kubelet.go:2306] "Pod admission denied" podUID="6a264568-d7c5-4071-a8c9-76097400286e" pod="tigera-operator/tigera-operator-5bf8dfcb4-nck8z" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:18.795807 kubelet[2796]: I0813 01:47:18.795713 2796 kubelet.go:2306] "Pod admission denied" podUID="7b228e8b-e651-4cbe-86d1-4ea7c72a6987" pod="tigera-operator/tigera-operator-5bf8dfcb4-lkn5g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:18.842073 kubelet[2796]: I0813 01:47:18.842003 2796 kubelet.go:2306] "Pod admission denied" podUID="d73e71ea-5963-4942-a3f3-1df7aae6ac15" pod="tigera-operator/tigera-operator-5bf8dfcb4-4n5sw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:18.959464 kubelet[2796]: I0813 01:47:18.958893 2796 kubelet.go:2306] "Pod admission denied" podUID="86af741c-5a34-400a-a945-e802b5df5b3e" pod="tigera-operator/tigera-operator-5bf8dfcb4-pdwgg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:19.046126 kubelet[2796]: I0813 01:47:19.046069 2796 kubelet.go:2306] "Pod admission denied" podUID="f2f903c0-c1a1-4c72-be52-99464acc9726" pod="tigera-operator/tigera-operator-5bf8dfcb4-mrbqm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:19.144607 kubelet[2796]: I0813 01:47:19.144531 2796 kubelet.go:2306] "Pod admission denied" podUID="8e51d4e2-6210-42e4-8ce9-3741bb74fd0e" pod="tigera-operator/tigera-operator-5bf8dfcb4-gkhql" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:19.253820 kubelet[2796]: I0813 01:47:19.253758 2796 kubelet.go:2306] "Pod admission denied" podUID="e74ca0d9-9d54-4cc9-9f1d-3aab536b2d08" pod="tigera-operator/tigera-operator-5bf8dfcb4-nzftd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:19.360727 kubelet[2796]: I0813 01:47:19.360663 2796 kubelet.go:2306] "Pod admission denied" podUID="100b5f4e-c9a8-4e15-9b4b-a61f36b2164f" pod="tigera-operator/tigera-operator-5bf8dfcb4-xgk76" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:19.398489 containerd[1581]: time="2025-08-13T01:47:19.398443000Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bk2p6,Uid:0e8898c7-a3f5-4010-bb1f-d756673c29b2,Namespace:calico-system,Attempt:0,}" Aug 13 01:47:19.447192 kubelet[2796]: I0813 01:47:19.447126 2796 kubelet.go:2306] "Pod admission denied" podUID="4068fa45-2f38-4f0c-aca4-365a83b0a752" pod="tigera-operator/tigera-operator-5bf8dfcb4-gn95n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:19.471004 containerd[1581]: time="2025-08-13T01:47:19.470944695Z" level=error msg="Failed to destroy network for sandbox \"16074e1063bcae52976944c97b2cf7c3a838626b8361d6cddadb2d315cede588\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:19.472828 containerd[1581]: time="2025-08-13T01:47:19.472776734Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bk2p6,Uid:0e8898c7-a3f5-4010-bb1f-d756673c29b2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"16074e1063bcae52976944c97b2cf7c3a838626b8361d6cddadb2d315cede588\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:19.473607 kubelet[2796]: E0813 01:47:19.473569 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16074e1063bcae52976944c97b2cf7c3a838626b8361d6cddadb2d315cede588\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:19.473741 kubelet[2796]: E0813 01:47:19.473632 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16074e1063bcae52976944c97b2cf7c3a838626b8361d6cddadb2d315cede588\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bk2p6" Aug 13 01:47:19.473741 kubelet[2796]: E0813 01:47:19.473679 2796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16074e1063bcae52976944c97b2cf7c3a838626b8361d6cddadb2d315cede588\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bk2p6" Aug 13 01:47:19.474229 systemd[1]: run-netns-cni\x2dbd89d02f\x2defd6\x2d677b\x2db92d\x2d9e208a25b476.mount: Deactivated successfully. 
Aug 13 01:47:19.476079 kubelet[2796]: E0813 01:47:19.474459 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bk2p6_calico-system(0e8898c7-a3f5-4010-bb1f-d756673c29b2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bk2p6_calico-system(0e8898c7-a3f5-4010-bb1f-d756673c29b2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"16074e1063bcae52976944c97b2cf7c3a838626b8361d6cddadb2d315cede588\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bk2p6" podUID="0e8898c7-a3f5-4010-bb1f-d756673c29b2" Aug 13 01:47:19.549113 kubelet[2796]: I0813 01:47:19.548526 2796 kubelet.go:2306] "Pod admission denied" podUID="d931cf0c-f980-4698-bd79-751b35b06094" pod="tigera-operator/tigera-operator-5bf8dfcb4-7tjsn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:19.655574 kubelet[2796]: I0813 01:47:19.654707 2796 kubelet.go:2306] "Pod admission denied" podUID="70a42f95-704f-41b0-baac-154958c70a54" pod="tigera-operator/tigera-operator-5bf8dfcb4-rgqxp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:19.745825 kubelet[2796]: I0813 01:47:19.745734 2796 kubelet.go:2306] "Pod admission denied" podUID="44acbd37-15f1-4541-a6ab-befa8f064cd1" pod="tigera-operator/tigera-operator-5bf8dfcb4-95cvh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:19.947515 kubelet[2796]: I0813 01:47:19.947361 2796 kubelet.go:2306] "Pod admission denied" podUID="66255b3a-41c5-4fac-ba10-eb4a0e2cdc77" pod="tigera-operator/tigera-operator-5bf8dfcb4-cr4vw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:20.045431 kubelet[2796]: I0813 01:47:20.045366 2796 kubelet.go:2306] "Pod admission denied" podUID="e44edcc1-efa8-42c3-ac96-b3ee780c87d3" pod="tigera-operator/tigera-operator-5bf8dfcb4-lmzsm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:20.095264 kubelet[2796]: I0813 01:47:20.095193 2796 kubelet.go:2306] "Pod admission denied" podUID="32ba2c9c-bad9-4939-972f-ba33454f706f" pod="tigera-operator/tigera-operator-5bf8dfcb4-x79mz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:20.198326 kubelet[2796]: I0813 01:47:20.198011 2796 kubelet.go:2306] "Pod admission denied" podUID="95480ac7-b8ff-4a0d-90aa-8b9903b3d14b" pod="tigera-operator/tigera-operator-5bf8dfcb4-nsqcx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:20.296757 kubelet[2796]: I0813 01:47:20.296677 2796 kubelet.go:2306] "Pod admission denied" podUID="0623530e-c787-4c3d-ae47-d24975a248f9" pod="tigera-operator/tigera-operator-5bf8dfcb4-86hz2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:20.404977 kubelet[2796]: I0813 01:47:20.403832 2796 kubelet.go:2306] "Pod admission denied" podUID="de625bf5-f223-4c6d-9412-4eb06a9a3cf1" pod="tigera-operator/tigera-operator-5bf8dfcb4-8g9zz" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:20.404977 kubelet[2796]: E0813 01:47:20.404409 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:47:20.406218 containerd[1581]: time="2025-08-13T01:47:20.406177445Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-djvw6,Uid:981696e3-42b0-4ae8-b44b-fa439a03a402,Namespace:kube-system,Attempt:0,}" Aug 13 01:47:20.406763 kubelet[2796]: E0813 01:47:20.406746 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:47:20.407427 containerd[1581]: time="2025-08-13T01:47:20.407393042Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6vrr8,Uid:cbf6d4b0-f3bc-4a92-9977-6d91de60b65f,Namespace:kube-system,Attempt:0,}" Aug 13 01:47:20.504698 containerd[1581]: time="2025-08-13T01:47:20.504497081Z" level=error msg="Failed to destroy network for sandbox \"de7071151f5e365c637c555794ea7e43243979281d1844ad177110c0e7588196\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:20.507578 systemd[1]: run-netns-cni\x2d47638bb0\x2d984b\x2da926\x2d63c9\x2d92c5cade2763.mount: Deactivated successfully. Aug 13 01:47:20.508460 kubelet[2796]: I0813 01:47:20.508425 2796 kubelet.go:2306] "Pod admission denied" podUID="47255577-7234-433d-8e61-9535989cf57f" pod="tigera-operator/tigera-operator-5bf8dfcb4-tp6xc" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:20.511398 containerd[1581]: time="2025-08-13T01:47:20.511250340Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-djvw6,Uid:981696e3-42b0-4ae8-b44b-fa439a03a402,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"de7071151f5e365c637c555794ea7e43243979281d1844ad177110c0e7588196\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:20.512476 kubelet[2796]: E0813 01:47:20.512303 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de7071151f5e365c637c555794ea7e43243979281d1844ad177110c0e7588196\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:20.512476 kubelet[2796]: E0813 01:47:20.512359 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de7071151f5e365c637c555794ea7e43243979281d1844ad177110c0e7588196\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-djvw6" Aug 13 01:47:20.512476 kubelet[2796]: E0813 01:47:20.512384 2796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de7071151f5e365c637c555794ea7e43243979281d1844ad177110c0e7588196\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-djvw6" Aug 13 01:47:20.512476 kubelet[2796]: E0813 01:47:20.512422 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-djvw6_kube-system(981696e3-42b0-4ae8-b44b-fa439a03a402)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-djvw6_kube-system(981696e3-42b0-4ae8-b44b-fa439a03a402)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de7071151f5e365c637c555794ea7e43243979281d1844ad177110c0e7588196\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-djvw6" podUID="981696e3-42b0-4ae8-b44b-fa439a03a402" Aug 13 01:47:20.525147 containerd[1581]: time="2025-08-13T01:47:20.524719516Z" level=error msg="Failed to destroy network for sandbox \"4b53e8eafc73e145a69adba9dcfc4fa60e21b1574544aad7e7edd0e57258e14d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:20.529525 containerd[1581]: time="2025-08-13T01:47:20.528874298Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6vrr8,Uid:cbf6d4b0-f3bc-4a92-9977-6d91de60b65f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"4b53e8eafc73e145a69adba9dcfc4fa60e21b1574544aad7e7edd0e57258e14d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:20.529278 systemd[1]: run-netns-cni\x2d3401c5f2\x2db7d4\x2d56f4\x2d9844\x2d5d0b2f8eee6f.mount: Deactivated successfully. Aug 13 01:47:20.531842 kubelet[2796]: E0813 01:47:20.529153 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b53e8eafc73e145a69adba9dcfc4fa60e21b1574544aad7e7edd0e57258e14d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:20.531842 kubelet[2796]: E0813 01:47:20.529221 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b53e8eafc73e145a69adba9dcfc4fa60e21b1574544aad7e7edd0e57258e14d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-6vrr8" Aug 13 01:47:20.531842 kubelet[2796]: E0813 01:47:20.529242 2796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b53e8eafc73e145a69adba9dcfc4fa60e21b1574544aad7e7edd0e57258e14d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-6vrr8" Aug 13 01:47:20.531842 kubelet[2796]: E0813 01:47:20.529283 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-6vrr8_kube-system(cbf6d4b0-f3bc-4a92-9977-6d91de60b65f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-6vrr8_kube-system(cbf6d4b0-f3bc-4a92-9977-6d91de60b65f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b53e8eafc73e145a69adba9dcfc4fa60e21b1574544aad7e7edd0e57258e14d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-6vrr8" podUID="cbf6d4b0-f3bc-4a92-9977-6d91de60b65f" Aug 13 01:47:20.595315 kubelet[2796]: I0813 01:47:20.595214 2796 kubelet.go:2306] "Pod admission denied" podUID="72a527ca-d956-4890-8afc-c6005d2c9324" pod="tigera-operator/tigera-operator-5bf8dfcb4-hlxft" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:20.696121 kubelet[2796]: I0813 01:47:20.696043 2796 kubelet.go:2306] "Pod admission denied" podUID="b38be1ff-2a84-489e-9f8b-d25b9707eb19" pod="tigera-operator/tigera-operator-5bf8dfcb4-dlpc2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:20.798221 kubelet[2796]: I0813 01:47:20.797966 2796 kubelet.go:2306] "Pod admission denied" podUID="5db0ea86-c442-4954-aceb-4a8b20c4c0e7" pod="tigera-operator/tigera-operator-5bf8dfcb4-fbsmb" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:20.896258 kubelet[2796]: I0813 01:47:20.896181 2796 kubelet.go:2306] "Pod admission denied" podUID="de564d10-96b4-40a0-8b5a-c8347dd34af9" pod="tigera-operator/tigera-operator-5bf8dfcb4-2btm9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:20.995590 kubelet[2796]: I0813 01:47:20.995527 2796 kubelet.go:2306] "Pod admission denied" podUID="3b17d60b-158e-457b-8b4c-08499485524b" pod="tigera-operator/tigera-operator-5bf8dfcb4-x25bq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:21.102781 kubelet[2796]: I0813 01:47:21.102543 2796 kubelet.go:2306] "Pod admission denied" podUID="65d89ec5-419a-471e-b7e0-0679ec275880" pod="tigera-operator/tigera-operator-5bf8dfcb4-7j4c9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:21.196915 kubelet[2796]: I0813 01:47:21.196840 2796 kubelet.go:2306] "Pod admission denied" podUID="87f0b72d-78a2-407e-8c63-8c67c9dd8d4e" pod="tigera-operator/tigera-operator-5bf8dfcb4-2jk88" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:21.401591 kubelet[2796]: I0813 01:47:21.400926 2796 kubelet.go:2306] "Pod admission denied" podUID="37ac2044-cdf0-4c86-ba3c-c6e998626792" pod="tigera-operator/tigera-operator-5bf8dfcb4-tsc7h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:21.503167 kubelet[2796]: I0813 01:47:21.503078 2796 kubelet.go:2306] "Pod admission denied" podUID="d8a162b9-d910-47e0-9fa2-443de168167a" pod="tigera-operator/tigera-operator-5bf8dfcb4-nn89r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:21.561337 kubelet[2796]: I0813 01:47:21.561264 2796 kubelet.go:2306] "Pod admission denied" podUID="8c056beb-4b50-487f-b91f-14b7efe0a540" pod="tigera-operator/tigera-operator-5bf8dfcb4-wmx67" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:21.646561 kubelet[2796]: I0813 01:47:21.646481 2796 kubelet.go:2306] "Pod admission denied" podUID="fa0cf2a1-21d3-4b2d-862f-ca6db8a8c6f7" pod="tigera-operator/tigera-operator-5bf8dfcb4-qm2lx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:21.745939 kubelet[2796]: I0813 01:47:21.745876 2796 kubelet.go:2306] "Pod admission denied" podUID="91d5ec00-1fe9-4b75-be67-318bbd2bc037" pod="tigera-operator/tigera-operator-5bf8dfcb4-xwdxt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:21.857285 kubelet[2796]: I0813 01:47:21.856617 2796 kubelet.go:2306] "Pod admission denied" podUID="c7dad52d-042e-40d1-85ab-af661c7e27ef" pod="tigera-operator/tigera-operator-5bf8dfcb4-b2hpm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:21.947079 kubelet[2796]: I0813 01:47:21.947018 2796 kubelet.go:2306] "Pod admission denied" podUID="39a86f7d-f3b8-4360-b117-1e0172ea3f64" pod="tigera-operator/tigera-operator-5bf8dfcb4-q72sk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:22.047870 kubelet[2796]: I0813 01:47:22.047690 2796 kubelet.go:2306] "Pod admission denied" podUID="10588745-51b9-468e-b376-62c819f16128" pod="tigera-operator/tigera-operator-5bf8dfcb4-f4cp9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:22.247600 kubelet[2796]: I0813 01:47:22.247530 2796 kubelet.go:2306] "Pod admission denied" podUID="94b3b0ae-4cf7-40b8-86ea-19abacdee8c7" pod="tigera-operator/tigera-operator-5bf8dfcb4-8pchz" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:22.348875 kubelet[2796]: I0813 01:47:22.348330 2796 kubelet.go:2306] "Pod admission denied" podUID="36e9e4a2-c342-42c1-ae39-f9634d8992e3" pod="tigera-operator/tigera-operator-5bf8dfcb4-wn8rh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:22.394122 kubelet[2796]: I0813 01:47:22.394037 2796 kubelet.go:2306] "Pod admission denied" podUID="a5a688f8-e8a3-4e2d-b165-29ee3b6d51cc" pod="tigera-operator/tigera-operator-5bf8dfcb4-6whcn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:22.499183 kubelet[2796]: I0813 01:47:22.499096 2796 kubelet.go:2306] "Pod admission denied" podUID="3390f7c1-bc11-4e8c-9d3c-657cff4f535e" pod="tigera-operator/tigera-operator-5bf8dfcb4-tdwkf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:22.605493 kubelet[2796]: I0813 01:47:22.604407 2796 kubelet.go:2306] "Pod admission denied" podUID="1c325241-6764-43ae-9683-ab307828c35f" pod="tigera-operator/tigera-operator-5bf8dfcb4-j4bfm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:22.647513 kubelet[2796]: I0813 01:47:22.647452 2796 kubelet.go:2306] "Pod admission denied" podUID="bb8fdcc7-a2cb-4220-8f57-7da4603f1ead" pod="tigera-operator/tigera-operator-5bf8dfcb4-89dqn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:22.767362 kubelet[2796]: I0813 01:47:22.767286 2796 kubelet.go:2306] "Pod admission denied" podUID="f2f5dfec-6068-4ef3-8a45-b09369d41f5e" pod="tigera-operator/tigera-operator-5bf8dfcb4-7bqhb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:22.953077 kubelet[2796]: I0813 01:47:22.952696 2796 kubelet.go:2306] "Pod admission denied" podUID="00b19db8-ed6e-4f6f-aac2-c37b583c8b81" pod="tigera-operator/tigera-operator-5bf8dfcb4-wv4qz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:23.047040 kubelet[2796]: I0813 01:47:23.046966 2796 kubelet.go:2306] "Pod admission denied" podUID="97039d72-1b32-46df-9a43-98db78d0fa15" pod="tigera-operator/tigera-operator-5bf8dfcb4-wlzvh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:23.148405 kubelet[2796]: I0813 01:47:23.148316 2796 kubelet.go:2306] "Pod admission denied" podUID="48524a89-d6b1-403f-9e39-590efa35ec6a" pod="tigera-operator/tigera-operator-5bf8dfcb4-lwkb7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:23.245710 kubelet[2796]: I0813 01:47:23.245629 2796 kubelet.go:2306] "Pod admission denied" podUID="9ba036fe-154f-4a35-9f0b-d1cfe1a10a88" pod="tigera-operator/tigera-operator-5bf8dfcb4-clvl9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:23.359767 kubelet[2796]: I0813 01:47:23.358705 2796 kubelet.go:2306] "Pod admission denied" podUID="f3ddf157-1278-4280-a228-773f72752968" pod="tigera-operator/tigera-operator-5bf8dfcb4-9vnq6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:23.447870 kubelet[2796]: I0813 01:47:23.447787 2796 kubelet.go:2306] "Pod admission denied" podUID="39b04c18-f83d-4e4a-a816-26158eb1367d" pod="tigera-operator/tigera-operator-5bf8dfcb4-6hftp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:23.550450 kubelet[2796]: I0813 01:47:23.549925 2796 kubelet.go:2306] "Pod admission denied" podUID="eb05177c-4d60-425a-bfd6-7dba331b5a48" pod="tigera-operator/tigera-operator-5bf8dfcb4-nt64h" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:23.647956 kubelet[2796]: I0813 01:47:23.647861 2796 kubelet.go:2306] "Pod admission denied" podUID="fbc49629-c4f6-4b28-93c0-3d207e7a506f" pod="tigera-operator/tigera-operator-5bf8dfcb4-wrxhj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:23.749240 kubelet[2796]: I0813 01:47:23.749150 2796 kubelet.go:2306] "Pod admission denied" podUID="e73995fe-2b3c-4a8a-93cb-91cbafb97198" pod="tigera-operator/tigera-operator-5bf8dfcb4-rts28" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:23.851035 kubelet[2796]: I0813 01:47:23.850029 2796 kubelet.go:2306] "Pod admission denied" podUID="44119806-cc32-43a2-aeae-69986a16fe58" pod="tigera-operator/tigera-operator-5bf8dfcb4-f8mxv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:23.948490 kubelet[2796]: I0813 01:47:23.948414 2796 kubelet.go:2306] "Pod admission denied" podUID="16016525-4768-47ac-af62-6f11270951aa" pod="tigera-operator/tigera-operator-5bf8dfcb4-5ttdz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:24.055070 kubelet[2796]: I0813 01:47:24.054734 2796 kubelet.go:2306] "Pod admission denied" podUID="b1cbb118-1777-4095-91d6-f28ff0f8d963" pod="tigera-operator/tigera-operator-5bf8dfcb4-f7cg2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:24.147377 kubelet[2796]: I0813 01:47:24.146886 2796 kubelet.go:2306] "Pod admission denied" podUID="34ff8200-f21d-4b16-a4ae-ef5c3ab239d8" pod="tigera-operator/tigera-operator-5bf8dfcb4-45kwq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:24.244902 kubelet[2796]: I0813 01:47:24.244833 2796 kubelet.go:2306] "Pod admission denied" podUID="f820fe6a-5af2-4c3f-b7c1-6c52355a71a9" pod="tigera-operator/tigera-operator-5bf8dfcb4-tk7wd" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:24.273279 kubelet[2796]: I0813 01:47:24.273241 2796 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:47:24.273279 kubelet[2796]: I0813 01:47:24.273289 2796 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:47:24.276898 kubelet[2796]: I0813 01:47:24.276487 2796 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:47:24.295698 kubelet[2796]: I0813 01:47:24.295622 2796 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:47:24.295872 kubelet[2796]: I0813 01:47:24.295761 2796 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7c65d6cfc9-6vrr8","kube-system/coredns-7c65d6cfc9-djvw6","calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7","calico-system/csi-node-driver-bk2p6","calico-system/calico-node-8j6cb","calico-system/calico-typha-bf6ccb678-cdfdr","kube-system/kube-controller-manager-172-232-7-32","kube-system/kube-proxy-dmp9l","kube-system/kube-apiserver-172-232-7-32","kube-system/kube-scheduler-172-232-7-32"] Aug 13 01:47:24.295872 kubelet[2796]: E0813 01:47:24.295804 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-6vrr8" Aug 13 01:47:24.295872 kubelet[2796]: E0813 01:47:24.295815 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-djvw6" Aug 13 01:47:24.295872 kubelet[2796]: E0813 01:47:24.295822 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7" Aug 13 01:47:24.295872 kubelet[2796]: E0813 01:47:24.295829 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-bk2p6" Aug 13 01:47:24.295872 kubelet[2796]: E0813 01:47:24.295837 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-8j6cb" Aug 13 01:47:24.295872 kubelet[2796]: E0813 01:47:24.295849 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-bf6ccb678-cdfdr" Aug 13 01:47:24.295872 kubelet[2796]: E0813 01:47:24.295860 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-32" Aug 13 01:47:24.295872 kubelet[2796]: E0813 01:47:24.295872 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-dmp9l" Aug 13 01:47:24.296146 kubelet[2796]: E0813 01:47:24.295886 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-32" Aug 13 01:47:24.296146 kubelet[2796]: E0813 01:47:24.295897 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-32" Aug 13 01:47:24.296146 kubelet[2796]: I0813 01:47:24.295910 2796 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:47:24.352723 kubelet[2796]: I0813 01:47:24.352630 2796 kubelet.go:2306] "Pod admission denied" podUID="bcb5f388-b6d5-4b73-a442-5ce370572316" pod="tigera-operator/tigera-operator-5bf8dfcb4-trsrw" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:24.445547 kubelet[2796]: I0813 01:47:24.444825 2796 kubelet.go:2306] "Pod admission denied" podUID="5c78d932-b03f-4ff3-b179-81ce7ba8c291" pod="tigera-operator/tigera-operator-5bf8dfcb4-4qfdj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:24.549052 kubelet[2796]: I0813 01:47:24.548949 2796 kubelet.go:2306] "Pod admission denied" podUID="bfb45ef5-4a0f-4cdb-a967-fc2c7e820da9" pod="tigera-operator/tigera-operator-5bf8dfcb4-7m9bs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:24.647595 kubelet[2796]: I0813 01:47:24.647079 2796 kubelet.go:2306] "Pod admission denied" podUID="b8c88b68-5f27-4f3d-926d-b01609dc91ce" pod="tigera-operator/tigera-operator-5bf8dfcb4-9vj8m" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:24.747931 kubelet[2796]: I0813 01:47:24.747835 2796 kubelet.go:2306] "Pod admission denied" podUID="c70325e6-e3c5-4711-b165-39397aba6031" pod="tigera-operator/tigera-operator-5bf8dfcb4-klz2s" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:24.847765 kubelet[2796]: I0813 01:47:24.847665 2796 kubelet.go:2306] "Pod admission denied" podUID="eb21f283-d065-4054-b90b-789b52a3a366" pod="tigera-operator/tigera-operator-5bf8dfcb4-gw4qs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:24.957674 kubelet[2796]: I0813 01:47:24.956925 2796 kubelet.go:2306] "Pod admission denied" podUID="4026814c-c177-4107-85a3-bd4fbfb329c9" pod="tigera-operator/tigera-operator-5bf8dfcb4-ccf68" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:25.149005 kubelet[2796]: I0813 01:47:25.148802 2796 kubelet.go:2306] "Pod admission denied" podUID="4f693ecf-289f-4db7-91f2-2baa47c6a45f" pod="tigera-operator/tigera-operator-5bf8dfcb4-j2mth" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:25.247609 kubelet[2796]: I0813 01:47:25.247534 2796 kubelet.go:2306] "Pod admission denied" podUID="71fb6aa3-68d3-47c8-bbb3-85c5e60f125e" pod="tigera-operator/tigera-operator-5bf8dfcb4-d2n85" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:25.298668 kubelet[2796]: I0813 01:47:25.297896 2796 kubelet.go:2306] "Pod admission denied" podUID="b1420339-ecec-4b7f-bd5f-e73040e8549b" pod="tigera-operator/tigera-operator-5bf8dfcb4-bv6nr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:25.398937 containerd[1581]: time="2025-08-13T01:47:25.398852754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d5dd9ff6-b6gw7,Uid:4720bedc-4719-4a57-b2ff-e5b21f7acb7f,Namespace:calico-system,Attempt:0,}" Aug 13 01:47:25.408678 kubelet[2796]: I0813 01:47:25.406357 2796 kubelet.go:2306] "Pod admission denied" podUID="bd3db8d7-96dc-4f9b-b967-c8b83241fff2" pod="tigera-operator/tigera-operator-5bf8dfcb4-pwqr2" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:25.473486 containerd[1581]: time="2025-08-13T01:47:25.471085410Z" level=error msg="Failed to destroy network for sandbox \"9df4c4bb666aac77dd3147476ac4be536e92c53292ddcbad907b83b58795c1e0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:25.473629 containerd[1581]: time="2025-08-13T01:47:25.473499617Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d5dd9ff6-b6gw7,Uid:4720bedc-4719-4a57-b2ff-e5b21f7acb7f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9df4c4bb666aac77dd3147476ac4be536e92c53292ddcbad907b83b58795c1e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:25.474506 systemd[1]: run-netns-cni\x2df756c811\x2ddf6c\x2d8898\x2d38bd\x2dd80ce637f359.mount: Deactivated successfully. Aug 13 01:47:25.476250 kubelet[2796]: E0813 01:47:25.475516 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9df4c4bb666aac77dd3147476ac4be536e92c53292ddcbad907b83b58795c1e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:25.476250 kubelet[2796]: E0813 01:47:25.475593 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9df4c4bb666aac77dd3147476ac4be536e92c53292ddcbad907b83b58795c1e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7" Aug 13 01:47:25.476250 kubelet[2796]: E0813 01:47:25.475679 2796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9df4c4bb666aac77dd3147476ac4be536e92c53292ddcbad907b83b58795c1e0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7" Aug 13 01:47:25.476250 kubelet[2796]: E0813 01:47:25.475735 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86d5dd9ff6-b6gw7_calico-system(4720bedc-4719-4a57-b2ff-e5b21f7acb7f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86d5dd9ff6-b6gw7_calico-system(4720bedc-4719-4a57-b2ff-e5b21f7acb7f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9df4c4bb666aac77dd3147476ac4be536e92c53292ddcbad907b83b58795c1e0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7" podUID="4720bedc-4719-4a57-b2ff-e5b21f7acb7f" Aug 13 01:47:25.598120 kubelet[2796]: I0813 01:47:25.598035 2796 kubelet.go:2306] "Pod admission denied" 
podUID="c58e6de0-e7db-4e24-b5f0-cf7a584c214d" pod="tigera-operator/tigera-operator-5bf8dfcb4-4vqsw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:25.697871 kubelet[2796]: I0813 01:47:25.697683 2796 kubelet.go:2306] "Pod admission denied" podUID="bce65ef4-7c12-4dda-8536-9ca997d1489c" pod="tigera-operator/tigera-operator-5bf8dfcb4-67qm5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:25.797882 kubelet[2796]: I0813 01:47:25.797808 2796 kubelet.go:2306] "Pod admission denied" podUID="69b3fbc7-c3d7-49a8-b0bc-89dd2f98ea43" pod="tigera-operator/tigera-operator-5bf8dfcb4-vthn6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:25.998054 kubelet[2796]: I0813 01:47:25.997975 2796 kubelet.go:2306] "Pod admission denied" podUID="990eb5c4-0690-41ba-837c-aec59d3780bf" pod="tigera-operator/tigera-operator-5bf8dfcb4-48s4j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:26.095509 kubelet[2796]: I0813 01:47:26.095445 2796 kubelet.go:2306] "Pod admission denied" podUID="6db8b1f0-5a45-46f8-8dec-8ba8d8154adb" pod="tigera-operator/tigera-operator-5bf8dfcb4-kkrwn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:26.199104 kubelet[2796]: I0813 01:47:26.199026 2796 kubelet.go:2306] "Pod admission denied" podUID="9a71d5df-6b45-42f4-b5c3-13f5c1f56f61" pod="tigera-operator/tigera-operator-5bf8dfcb4-ct22m" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:26.297848 kubelet[2796]: I0813 01:47:26.297660 2796 kubelet.go:2306] "Pod admission denied" podUID="694f47f9-d767-4089-aec4-38d7e1d2fa21" pod="tigera-operator/tigera-operator-5bf8dfcb4-4s27f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:26.352324 kubelet[2796]: I0813 01:47:26.352261 2796 kubelet.go:2306] "Pod admission denied" podUID="7bd8e4ff-d6a5-4453-94d1-dac143311d11" pod="tigera-operator/tigera-operator-5bf8dfcb4-4zw2v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:26.448814 kubelet[2796]: I0813 01:47:26.448750 2796 kubelet.go:2306] "Pod admission denied" podUID="a1ea3c33-673f-495f-a471-cb34f804ab32" pod="tigera-operator/tigera-operator-5bf8dfcb4-2mjgp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:26.546881 kubelet[2796]: I0813 01:47:26.546823 2796 kubelet.go:2306] "Pod admission denied" podUID="315df235-2218-49af-a207-d0625c8d52d4" pod="tigera-operator/tigera-operator-5bf8dfcb4-5kc8q" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:26.652576 kubelet[2796]: I0813 01:47:26.651804 2796 kubelet.go:2306] "Pod admission denied" podUID="e19c9f0c-b726-4fdb-8fd7-705fc4e9dd91" pod="tigera-operator/tigera-operator-5bf8dfcb4-7tlpw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:26.748262 kubelet[2796]: I0813 01:47:26.748163 2796 kubelet.go:2306] "Pod admission denied" podUID="3bd1c190-17cd-442e-b7b3-7b92111263a4" pod="tigera-operator/tigera-operator-5bf8dfcb4-trtkt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:26.848024 kubelet[2796]: I0813 01:47:26.847941 2796 kubelet.go:2306] "Pod admission denied" podUID="e4b0d713-d3d6-44c6-bb76-d415ba3271cd" pod="tigera-operator/tigera-operator-5bf8dfcb4-x7w8l" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:26.962529 kubelet[2796]: I0813 01:47:26.962326 2796 kubelet.go:2306] "Pod admission denied" podUID="bdc30194-3450-483a-b8a6-53b8d020448a" pod="tigera-operator/tigera-operator-5bf8dfcb4-b7hpf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:27.003616 kubelet[2796]: I0813 01:47:27.003550 2796 kubelet.go:2306] "Pod admission denied" podUID="f53e6237-9370-47df-a97a-c0af097d837e" pod="tigera-operator/tigera-operator-5bf8dfcb4-gs48r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:27.100118 kubelet[2796]: I0813 01:47:27.100056 2796 kubelet.go:2306] "Pod admission denied" podUID="deea60b7-8e13-44a1-aa0e-9843b0d0a725" pod="tigera-operator/tigera-operator-5bf8dfcb4-zrzsk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:27.205351 kubelet[2796]: I0813 01:47:27.205262 2796 kubelet.go:2306] "Pod admission denied" podUID="ba3fceaf-fe1f-4f09-a46b-ccd9ed0fbf0c" pod="tigera-operator/tigera-operator-5bf8dfcb4-sz8z8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:27.307909 kubelet[2796]: I0813 01:47:27.307817 2796 kubelet.go:2306] "Pod admission denied" podUID="c04407a6-d8aa-4192-936a-9e277ea11fd6" pod="tigera-operator/tigera-operator-5bf8dfcb4-lkhqv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:27.428346 kubelet[2796]: I0813 01:47:27.427037 2796 kubelet.go:2306] "Pod admission denied" podUID="bfaaa3d9-8550-4988-9386-fe4b0e84a1d0" pod="tigera-operator/tigera-operator-5bf8dfcb4-trjhh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:27.549059 kubelet[2796]: I0813 01:47:27.548987 2796 kubelet.go:2306] "Pod admission denied" podUID="92948596-f214-4d40-9bb8-f89e21505dcd" pod="tigera-operator/tigera-operator-5bf8dfcb4-vsfwv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:27.660229 kubelet[2796]: I0813 01:47:27.658317 2796 kubelet.go:2306] "Pod admission denied" podUID="d552b330-f1ca-4fd3-bf6d-5b9366814b63" pod="tigera-operator/tigera-operator-5bf8dfcb4-p24kv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:27.904634 kubelet[2796]: I0813 01:47:27.904562 2796 kubelet.go:2306] "Pod admission denied" podUID="490fec7f-5c51-4f79-9b25-cf19720e43c3" pod="tigera-operator/tigera-operator-5bf8dfcb4-78fct" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:27.999202 kubelet[2796]: I0813 01:47:27.999139 2796 kubelet.go:2306] "Pod admission denied" podUID="d69578bf-f1c0-4f64-a49f-edcb700a8fac" pod="tigera-operator/tigera-operator-5bf8dfcb4-7lsgx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:28.103305 kubelet[2796]: I0813 01:47:28.103203 2796 kubelet.go:2306] "Pod admission denied" podUID="3e2ce58b-c18f-421d-b6ce-3ba6d2548206" pod="tigera-operator/tigera-operator-5bf8dfcb4-bwgv8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:28.215672 kubelet[2796]: I0813 01:47:28.215443 2796 kubelet.go:2306] "Pod admission denied" podUID="09d95b71-f250-4a09-97a2-0893e129dafe" pod="tigera-operator/tigera-operator-5bf8dfcb4-j596q" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:28.408711 kubelet[2796]: I0813 01:47:28.406956 2796 kubelet.go:2306] "Pod admission denied" podUID="de289280-1eec-459e-8824-41a09cae2ae1" pod="tigera-operator/tigera-operator-5bf8dfcb4-gfxcl" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:28.498986 kubelet[2796]: I0813 01:47:28.498903 2796 kubelet.go:2306] "Pod admission denied" podUID="f09a4540-9bf7-4444-8282-f281bddab4ae" pod="tigera-operator/tigera-operator-5bf8dfcb4-ln6mz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:28.602687 kubelet[2796]: I0813 01:47:28.602603 2796 kubelet.go:2306] "Pod admission denied" podUID="b6a44ce4-2d95-41f5-a7cf-724ce1041133" pod="tigera-operator/tigera-operator-5bf8dfcb4-s7d6f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:28.702376 kubelet[2796]: I0813 01:47:28.701049 2796 kubelet.go:2306] "Pod admission denied" podUID="e38a494f-2473-4d83-b2eb-f7fb8f2cb340" pod="tigera-operator/tigera-operator-5bf8dfcb4-f59zm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:28.746794 kubelet[2796]: I0813 01:47:28.746731 2796 kubelet.go:2306] "Pod admission denied" podUID="8937ed2a-d0df-4886-ae16-18a11d81d2ce" pod="tigera-operator/tigera-operator-5bf8dfcb4-zw4rs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:28.857466 kubelet[2796]: I0813 01:47:28.855929 2796 kubelet.go:2306] "Pod admission denied" podUID="6358b163-8f99-4360-9d33-30e3a14e8bd3" pod="tigera-operator/tigera-operator-5bf8dfcb4-g5sbw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:29.047717 kubelet[2796]: I0813 01:47:29.047659 2796 kubelet.go:2306] "Pod admission denied" podUID="dbac17b2-023c-42ed-9c10-888472988f90" pod="tigera-operator/tigera-operator-5bf8dfcb4-dwlzn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:29.148109 kubelet[2796]: I0813 01:47:29.148042 2796 kubelet.go:2306] "Pod admission denied" podUID="4d903f6f-4a7f-445c-98f0-be038b17bf1c" pod="tigera-operator/tigera-operator-5bf8dfcb4-hjfrq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:29.259946 kubelet[2796]: I0813 01:47:29.259306 2796 kubelet.go:2306] "Pod admission denied" podUID="946c98fb-0cea-4cad-b83b-b8578a500f8e" pod="tigera-operator/tigera-operator-5bf8dfcb4-r8wxj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:29.349822 kubelet[2796]: I0813 01:47:29.349486 2796 kubelet.go:2306] "Pod admission denied" podUID="9bf68e86-ae5d-424c-bf28-02474411e3e9" pod="tigera-operator/tigera-operator-5bf8dfcb4-z8zgv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:29.405943 kubelet[2796]: I0813 01:47:29.405868 2796 kubelet.go:2306] "Pod admission denied" podUID="fd1ff310-b894-49b1-882e-160647e92c3d" pod="tigera-operator/tigera-operator-5bf8dfcb4-9mqpl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:29.499164 kubelet[2796]: I0813 01:47:29.499098 2796 kubelet.go:2306] "Pod admission denied" podUID="b2fcce75-1bcf-4f3e-aa91-f2b8bde9610f" pod="tigera-operator/tigera-operator-5bf8dfcb4-dz2zq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:29.600572 kubelet[2796]: I0813 01:47:29.600008 2796 kubelet.go:2306] "Pod admission denied" podUID="bedd91b2-6567-48f7-b593-aee24186f2f5" pod="tigera-operator/tigera-operator-5bf8dfcb4-tk5vs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:29.700984 kubelet[2796]: I0813 01:47:29.700900 2796 kubelet.go:2306] "Pod admission denied" podUID="0b4bc3e8-c506-4f65-9b92-8bc296a14ced" pod="tigera-operator/tigera-operator-5bf8dfcb4-kzzc2" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:29.798621 kubelet[2796]: I0813 01:47:29.798518 2796 kubelet.go:2306] "Pod admission denied" podUID="c3159eb6-4662-42cd-a836-09ea702c94ed" pod="tigera-operator/tigera-operator-5bf8dfcb4-lcwfn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:29.900805 kubelet[2796]: I0813 01:47:29.900592 2796 kubelet.go:2306] "Pod admission denied" podUID="9a378d4b-3450-4d9a-8700-9d750c0f7b12" pod="tigera-operator/tigera-operator-5bf8dfcb4-w5h4s" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:29.999364 kubelet[2796]: I0813 01:47:29.999275 2796 kubelet.go:2306] "Pod admission denied" podUID="34f966e1-062b-4380-8d55-ffd454ffb7cb" pod="tigera-operator/tigera-operator-5bf8dfcb4-pfj54" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:30.100043 kubelet[2796]: I0813 01:47:30.099974 2796 kubelet.go:2306] "Pod admission denied" podUID="7d51f091-8084-489d-8917-04e2152974a8" pod="tigera-operator/tigera-operator-5bf8dfcb4-wwjrx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:30.301350 kubelet[2796]: I0813 01:47:30.301240 2796 kubelet.go:2306] "Pod admission denied" podUID="447231f1-20fe-4dba-805a-5add06a30c4a" pod="tigera-operator/tigera-operator-5bf8dfcb4-d5b4l" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:30.399351 kubelet[2796]: I0813 01:47:30.399271 2796 kubelet.go:2306] "Pod admission denied" podUID="309eb941-f7a1-4599-937a-94187a5bd311" pod="tigera-operator/tigera-operator-5bf8dfcb4-95r6m" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:30.497899 kubelet[2796]: I0813 01:47:30.497822 2796 kubelet.go:2306] "Pod admission denied" podUID="ea626760-617b-47c1-82c8-920cd7229153" pod="tigera-operator/tigera-operator-5bf8dfcb4-hcxxk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:30.700750 kubelet[2796]: I0813 01:47:30.700539 2796 kubelet.go:2306] "Pod admission denied" podUID="41de5c7b-c400-478e-9abe-3807bab094c1" pod="tigera-operator/tigera-operator-5bf8dfcb4-w745g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:30.799184 kubelet[2796]: I0813 01:47:30.799081 2796 kubelet.go:2306] "Pod admission denied" podUID="a2b6b942-4b85-49f3-ac60-519d779eb40f" pod="tigera-operator/tigera-operator-5bf8dfcb4-v6qg2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:30.899464 kubelet[2796]: I0813 01:47:30.899377 2796 kubelet.go:2306] "Pod admission denied" podUID="73cb2208-111f-4d7e-8cd5-7149c9351b13" pod="tigera-operator/tigera-operator-5bf8dfcb4-wtvvj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:30.999970 kubelet[2796]: I0813 01:47:30.999893 2796 kubelet.go:2306] "Pod admission denied" podUID="6c13d025-a6e2-4ddf-87c8-13564a808988" pod="tigera-operator/tigera-operator-5bf8dfcb4-mv8f4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:31.103374 kubelet[2796]: I0813 01:47:31.103295 2796 kubelet.go:2306] "Pod admission denied" podUID="36aff2ba-1d79-471c-8049-214aee1f1197" pod="tigera-operator/tigera-operator-5bf8dfcb4-z8fb8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:31.202202 kubelet[2796]: I0813 01:47:31.202139 2796 kubelet.go:2306] "Pod admission denied" podUID="c09eb36d-66c8-45d0-9e19-fa28edbfaf24" pod="tigera-operator/tigera-operator-5bf8dfcb4-4zllm" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:31.269258 kubelet[2796]: I0813 01:47:31.269011 2796 kubelet.go:2306] "Pod admission denied" podUID="73de3da5-a360-4de3-bf16-d59741251f4b" pod="tigera-operator/tigera-operator-5bf8dfcb4-bbkfr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:31.346668 kubelet[2796]: I0813 01:47:31.346575 2796 kubelet.go:2306] "Pod admission denied" podUID="f23f441c-1246-40a3-9a57-e5af0ded4a74" pod="tigera-operator/tigera-operator-5bf8dfcb4-79v74" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:31.399574 kubelet[2796]: E0813 01:47:31.398233 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:47:31.399574 kubelet[2796]: E0813 01:47:31.399199 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\"\"" pod="calico-system/calico-node-8j6cb" podUID="416d9de4-5101-44c9-b974-0fedf790aa67" Aug 13 01:47:31.399966 containerd[1581]: time="2025-08-13T01:47:31.399189818Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-djvw6,Uid:981696e3-42b0-4ae8-b44b-fa439a03a402,Namespace:kube-system,Attempt:0,}" Aug 13 01:47:31.459115 kubelet[2796]: I0813 01:47:31.459044 2796 kubelet.go:2306] "Pod admission denied" podUID="31363b88-8b57-4d2a-94f8-c49d774d619e" pod="tigera-operator/tigera-operator-5bf8dfcb4-th87v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:31.476822 containerd[1581]: time="2025-08-13T01:47:31.476754030Z" level=error msg="Failed to destroy network for sandbox \"6d3212eae59db61a5e335dff21875bd450dc250d063553391896dbce0b0f79b1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:31.480544 systemd[1]: run-netns-cni\x2d7422f34b\x2d473b\x2d1b27\x2d441a\x2da720eb8267d4.mount: Deactivated successfully. 
Aug 13 01:47:31.481352 containerd[1581]: time="2025-08-13T01:47:31.481210376Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-djvw6,Uid:981696e3-42b0-4ae8-b44b-fa439a03a402,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d3212eae59db61a5e335dff21875bd450dc250d063553391896dbce0b0f79b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:31.482208 kubelet[2796]: E0813 01:47:31.481939 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d3212eae59db61a5e335dff21875bd450dc250d063553391896dbce0b0f79b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:31.482361 kubelet[2796]: E0813 01:47:31.482291 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d3212eae59db61a5e335dff21875bd450dc250d063553391896dbce0b0f79b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-djvw6" Aug 13 01:47:31.482406 kubelet[2796]: E0813 01:47:31.482366 2796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d3212eae59db61a5e335dff21875bd450dc250d063553391896dbce0b0f79b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-djvw6" Aug 13 01:47:31.482511 kubelet[2796]: E0813 01:47:31.482449 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-djvw6_kube-system(981696e3-42b0-4ae8-b44b-fa439a03a402)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-djvw6_kube-system(981696e3-42b0-4ae8-b44b-fa439a03a402)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d3212eae59db61a5e335dff21875bd450dc250d063553391896dbce0b0f79b1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-djvw6" podUID="981696e3-42b0-4ae8-b44b-fa439a03a402" Aug 13 01:47:31.548209 kubelet[2796]: I0813 01:47:31.547190 2796 kubelet.go:2306] "Pod admission denied" podUID="ca5492c7-ea52-4dd4-a030-9f92db958e27" pod="tigera-operator/tigera-operator-5bf8dfcb4-67f8b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:31.749413 kubelet[2796]: I0813 01:47:31.749345 2796 kubelet.go:2306] "Pod admission denied" podUID="f9122a04-97e6-4fda-a87e-afe7bc83837b" pod="tigera-operator/tigera-operator-5bf8dfcb4-fq4rn" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:31.852229 kubelet[2796]: I0813 01:47:31.851861 2796 kubelet.go:2306] "Pod admission denied" podUID="fd16d097-0887-4bf4-a25b-9577bbac0c79" pod="tigera-operator/tigera-operator-5bf8dfcb4-rql4r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:31.905926 kubelet[2796]: I0813 01:47:31.905844 2796 kubelet.go:2306] "Pod admission denied" podUID="12afe099-15e8-4f53-9cd2-9669eccf26e2" pod="tigera-operator/tigera-operator-5bf8dfcb4-jvm2w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:32.000578 kubelet[2796]: I0813 01:47:32.000504 2796 kubelet.go:2306] "Pod admission denied" podUID="57142c48-ca7c-4c75-ad57-f1c797fd2fba" pod="tigera-operator/tigera-operator-5bf8dfcb4-bb6fl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:32.099668 kubelet[2796]: I0813 01:47:32.099603 2796 kubelet.go:2306] "Pod admission denied" podUID="c7b96a79-a937-4ef3-a729-529d7a4df81f" pod="tigera-operator/tigera-operator-5bf8dfcb4-rkddk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:32.168983 kubelet[2796]: I0813 01:47:32.168220 2796 kubelet.go:2306] "Pod admission denied" podUID="36bd2311-e161-4ed5-80ee-0f476b8cc7f2" pod="tigera-operator/tigera-operator-5bf8dfcb4-qn9ff" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:32.251318 kubelet[2796]: I0813 01:47:32.251238 2796 kubelet.go:2306] "Pod admission denied" podUID="c43146ba-5aaf-46e1-bdc7-2f3c56b65fda" pod="tigera-operator/tigera-operator-5bf8dfcb4-57lts" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:32.348933 kubelet[2796]: I0813 01:47:32.348858 2796 kubelet.go:2306] "Pod admission denied" podUID="a23a015e-f89d-415b-9cfa-016f281f4e26" pod="tigera-operator/tigera-operator-5bf8dfcb4-fvhq6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:32.399569 containerd[1581]: time="2025-08-13T01:47:32.398722279Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bk2p6,Uid:0e8898c7-a3f5-4010-bb1f-d756673c29b2,Namespace:calico-system,Attempt:0,}" Aug 13 01:47:32.456250 kubelet[2796]: I0813 01:47:32.455964 2796 kubelet.go:2306] "Pod admission denied" podUID="b4b07c61-ca8c-4e22-a611-96e5750888be" pod="tigera-operator/tigera-operator-5bf8dfcb4-dwffv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:32.486173 containerd[1581]: time="2025-08-13T01:47:32.485976259Z" level=error msg="Failed to destroy network for sandbox \"7c93d8e382b89da797fbfd4986fd86d387936b49a2d90ee3f6ef1d892a8f5eb7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:32.491257 systemd[1]: run-netns-cni\x2d09ca68d6\x2d446d\x2de7d7\x2d17a9\x2da058ec033e71.mount: Deactivated successfully. 
Aug 13 01:47:32.491691 containerd[1581]: time="2025-08-13T01:47:32.491629208Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bk2p6,Uid:0e8898c7-a3f5-4010-bb1f-d756673c29b2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c93d8e382b89da797fbfd4986fd86d387936b49a2d90ee3f6ef1d892a8f5eb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:32.492146 kubelet[2796]: E0813 01:47:32.491986 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c93d8e382b89da797fbfd4986fd86d387936b49a2d90ee3f6ef1d892a8f5eb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:32.492146 kubelet[2796]: E0813 01:47:32.492105 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c93d8e382b89da797fbfd4986fd86d387936b49a2d90ee3f6ef1d892a8f5eb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bk2p6" Aug 13 01:47:32.492146 kubelet[2796]: E0813 01:47:32.492134 2796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c93d8e382b89da797fbfd4986fd86d387936b49a2d90ee3f6ef1d892a8f5eb7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bk2p6" Aug 13 01:47:32.492336 kubelet[2796]: E0813 01:47:32.492287 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bk2p6_calico-system(0e8898c7-a3f5-4010-bb1f-d756673c29b2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bk2p6_calico-system(0e8898c7-a3f5-4010-bb1f-d756673c29b2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c93d8e382b89da797fbfd4986fd86d387936b49a2d90ee3f6ef1d892a8f5eb7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bk2p6" podUID="0e8898c7-a3f5-4010-bb1f-d756673c29b2" Aug 13 01:47:32.654028 kubelet[2796]: I0813 01:47:32.653958 2796 kubelet.go:2306] "Pod admission denied" podUID="6b917ce5-872f-4f3c-9504-444b783311a7" pod="tigera-operator/tigera-operator-5bf8dfcb4-2tckc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:32.755673 kubelet[2796]: I0813 01:47:32.755581 2796 kubelet.go:2306] "Pod admission denied" podUID="7a58b6dd-508f-40b7-bcc9-af63a04f6b41" pod="tigera-operator/tigera-operator-5bf8dfcb4-7whhk" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:32.852781 kubelet[2796]: I0813 01:47:32.852687 2796 kubelet.go:2306] "Pod admission denied" podUID="45030bd2-eb69-4ca1-984d-2bfb202d66ec" pod="tigera-operator/tigera-operator-5bf8dfcb4-xxrk5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:32.955242 kubelet[2796]: I0813 01:47:32.955151 2796 kubelet.go:2306] "Pod admission denied" podUID="bbb46c8e-f2cf-48c5-a19e-87c38095b1e0" pod="tigera-operator/tigera-operator-5bf8dfcb4-6c42k" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:33.011732 kubelet[2796]: I0813 01:47:33.011509 2796 kubelet.go:2306] "Pod admission denied" podUID="c328d1ff-0a84-481f-b1fd-8897773a2876" pod="tigera-operator/tigera-operator-5bf8dfcb4-crqc5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:33.100740 kubelet[2796]: I0813 01:47:33.100675 2796 kubelet.go:2306] "Pod admission denied" podUID="48bc277d-f161-4fda-81cb-c40958f065e1" pod="tigera-operator/tigera-operator-5bf8dfcb4-694bx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:33.206765 kubelet[2796]: I0813 01:47:33.206683 2796 kubelet.go:2306] "Pod admission denied" podUID="4a644ff3-1461-47d5-ac40-7655f66df7bd" pod="tigera-operator/tigera-operator-5bf8dfcb4-n7m9j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:33.253379 kubelet[2796]: I0813 01:47:33.253287 2796 kubelet.go:2306] "Pod admission denied" podUID="186e7dc7-0be0-4beb-b3f8-90e87f3af821" pod="tigera-operator/tigera-operator-5bf8dfcb4-8rn9v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:33.355703 kubelet[2796]: I0813 01:47:33.354002 2796 kubelet.go:2306] "Pod admission denied" podUID="85708c53-cfa6-4f4a-8b60-aa73805e9484" pod="tigera-operator/tigera-operator-5bf8dfcb4-mv6q4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:33.552363 kubelet[2796]: I0813 01:47:33.552242 2796 kubelet.go:2306] "Pod admission denied" podUID="0215214a-5ca2-45a2-8362-6878485ef475" pod="tigera-operator/tigera-operator-5bf8dfcb4-28r8b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:33.649495 kubelet[2796]: I0813 01:47:33.648748 2796 kubelet.go:2306] "Pod admission denied" podUID="71e7c4e9-0cfd-4462-80ca-dc9cda28503e" pod="tigera-operator/tigera-operator-5bf8dfcb4-7vbqk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:33.754013 kubelet[2796]: I0813 01:47:33.753944 2796 kubelet.go:2306] "Pod admission denied" podUID="9845e081-d231-4db6-b189-8ed7c973dad2" pod="tigera-operator/tigera-operator-5bf8dfcb4-tzrqv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:33.851794 kubelet[2796]: I0813 01:47:33.851721 2796 kubelet.go:2306] "Pod admission denied" podUID="ee5f53b0-80f2-4f30-b144-6846225d0830" pod="tigera-operator/tigera-operator-5bf8dfcb4-bdgft" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:33.950960 kubelet[2796]: I0813 01:47:33.950320 2796 kubelet.go:2306] "Pod admission denied" podUID="c2d7de02-8d9c-4dc4-9b2b-53604a02e8a5" pod="tigera-operator/tigera-operator-5bf8dfcb4-smg68" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:34.151283 kubelet[2796]: I0813 01:47:34.151208 2796 kubelet.go:2306] "Pod admission denied" podUID="c2d2abff-7b35-450b-a110-0a6299e6c6cc" pod="tigera-operator/tigera-operator-5bf8dfcb4-9tfc5" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:34.249114 kubelet[2796]: I0813 01:47:34.249039 2796 kubelet.go:2306] "Pod admission denied" podUID="aff66a41-a51b-4e37-b021-710b67f6d2bc" pod="tigera-operator/tigera-operator-5bf8dfcb4-pxqw7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:34.313706 kubelet[2796]: I0813 01:47:34.313667 2796 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:47:34.313706 kubelet[2796]: I0813 01:47:34.313714 2796 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:47:34.315756 kubelet[2796]: I0813 01:47:34.315712 2796 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:47:34.318325 kubelet[2796]: I0813 01:47:34.318283 2796 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" size=18562039 runtimeHandler="" Aug 13 01:47:34.318999 containerd[1581]: time="2025-08-13T01:47:34.318899188Z" level=info msg="RemoveImage \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 01:47:34.320846 containerd[1581]: time="2025-08-13T01:47:34.320704137Z" level=info msg="ImageDelete event name:\"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 01:47:34.321365 containerd[1581]: time="2025-08-13T01:47:34.321331383Z" level=info msg="ImageDelete event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\"" Aug 13 01:47:34.321829 containerd[1581]: time="2025-08-13T01:47:34.321809266Z" level=info msg="RemoveImage \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" returns successfully" Aug 13 01:47:34.321924 containerd[1581]: time="2025-08-13T01:47:34.321907479Z" level=info msg="ImageDelete event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 01:47:34.322230 kubelet[2796]: I0813 01:47:34.322137 2796 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4" size=56909194 runtimeHandler="" Aug 13 01:47:34.322508 containerd[1581]: time="2025-08-13T01:47:34.322461054Z" level=info msg="RemoveImage \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Aug 13 01:47:34.323408 containerd[1581]: time="2025-08-13T01:47:34.323366878Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd:3.5.15-0\"" Aug 13 01:47:34.324101 containerd[1581]: time="2025-08-13T01:47:34.324001255Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd@sha256:a6dc63e6e8cfa0307d7851762fa6b629afb18f28d8aa3fab5a6e91b4af60026a\"" Aug 13 01:47:34.324574 containerd[1581]: time="2025-08-13T01:47:34.324555370Z" level=info msg="RemoveImage \"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\" returns successfully" Aug 13 01:47:34.324761 containerd[1581]: time="2025-08-13T01:47:34.324714175Z" level=info msg="ImageDelete event name:\"sha256:2e96e5913fc06e3d26915af3d0f2ca5048cc4b6327e661e80da792cbf8d8d9d4\"" Aug 13 01:47:34.344157 kubelet[2796]: I0813 01:47:34.343986 2796 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:47:34.344157 kubelet[2796]: I0813 01:47:34.344121 2796 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" 
pods=["calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7","kube-system/coredns-7c65d6cfc9-6vrr8","kube-system/coredns-7c65d6cfc9-djvw6","calico-system/csi-node-driver-bk2p6","calico-system/calico-node-8j6cb","calico-system/calico-typha-bf6ccb678-cdfdr","kube-system/kube-controller-manager-172-232-7-32","kube-system/kube-proxy-dmp9l","kube-system/kube-apiserver-172-232-7-32","kube-system/kube-scheduler-172-232-7-32"] Aug 13 01:47:34.344157 kubelet[2796]: E0813 01:47:34.344162 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7" Aug 13 01:47:34.344426 kubelet[2796]: E0813 01:47:34.344194 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-6vrr8" Aug 13 01:47:34.344426 kubelet[2796]: E0813 01:47:34.344203 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-djvw6" Aug 13 01:47:34.344426 kubelet[2796]: E0813 01:47:34.344210 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-bk2p6" Aug 13 01:47:34.344426 kubelet[2796]: E0813 01:47:34.344217 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-8j6cb" Aug 13 01:47:34.344426 kubelet[2796]: E0813 01:47:34.344231 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-bf6ccb678-cdfdr" Aug 13 01:47:34.344426 kubelet[2796]: E0813 01:47:34.344241 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-32" Aug 13 01:47:34.344426 kubelet[2796]: E0813 01:47:34.344252 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-dmp9l" Aug 13 01:47:34.344426 kubelet[2796]: E0813 01:47:34.344263 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-32" Aug 13 01:47:34.344426 kubelet[2796]: E0813 01:47:34.344273 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-32" Aug 13 01:47:34.344426 kubelet[2796]: I0813 01:47:34.344282 2796 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:47:34.357849 kubelet[2796]: I0813 01:47:34.357787 2796 kubelet.go:2306] "Pod admission denied" podUID="6767af4b-9b93-4b4e-a195-43134c0a870d" pod="tigera-operator/tigera-operator-5bf8dfcb4-zjmrh" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:34.397021 kubelet[2796]: E0813 01:47:34.396964 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:47:34.398695 containerd[1581]: time="2025-08-13T01:47:34.398631886Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6vrr8,Uid:cbf6d4b0-f3bc-4a92-9977-6d91de60b65f,Namespace:kube-system,Attempt:0,}" Aug 13 01:47:34.456809 containerd[1581]: time="2025-08-13T01:47:34.456742991Z" level=error msg="Failed to destroy network for sandbox \"657612f51194ffa6a07f57b33e3f9567b48260c3a81573f771dacccc64ebab9a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:34.460527 systemd[1]: run-netns-cni\x2d7008314d\x2d5057\x2d95f4\x2dbfa8\x2dd2bc7eed3498.mount: Deactivated successfully. Aug 13 01:47:34.462762 containerd[1581]: time="2025-08-13T01:47:34.462701842Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6vrr8,Uid:cbf6d4b0-f3bc-4a92-9977-6d91de60b65f,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"657612f51194ffa6a07f57b33e3f9567b48260c3a81573f771dacccc64ebab9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:34.463093 kubelet[2796]: E0813 01:47:34.463031 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"657612f51194ffa6a07f57b33e3f9567b48260c3a81573f771dacccc64ebab9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:34.463169 kubelet[2796]: E0813 01:47:34.463131 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"657612f51194ffa6a07f57b33e3f9567b48260c3a81573f771dacccc64ebab9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-6vrr8" Aug 13 01:47:34.463169 kubelet[2796]: E0813 01:47:34.463159 2796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"657612f51194ffa6a07f57b33e3f9567b48260c3a81573f771dacccc64ebab9a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-6vrr8" Aug 13 01:47:34.463265 kubelet[2796]: E0813 01:47:34.463227 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-6vrr8_kube-system(cbf6d4b0-f3bc-4a92-9977-6d91de60b65f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-6vrr8_kube-system(cbf6d4b0-f3bc-4a92-9977-6d91de60b65f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"657612f51194ffa6a07f57b33e3f9567b48260c3a81573f771dacccc64ebab9a\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-6vrr8" podUID="cbf6d4b0-f3bc-4a92-9977-6d91de60b65f" Aug 13 01:47:34.555275 kubelet[2796]: I0813 01:47:34.555113 2796 kubelet.go:2306] "Pod admission denied" podUID="115e628a-a4ae-485c-8e42-37e19ff1e237" pod="tigera-operator/tigera-operator-5bf8dfcb4-kt4lc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:34.660609 kubelet[2796]: I0813 01:47:34.660476 2796 kubelet.go:2306] "Pod admission denied" podUID="ea0810aa-4da9-40cf-9efc-1909bc8342ad" pod="tigera-operator/tigera-operator-5bf8dfcb4-5sx5q" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:34.751089 kubelet[2796]: I0813 01:47:34.750962 2796 kubelet.go:2306] "Pod admission denied" podUID="4ccff50b-13dd-41c7-b260-c3884519a7c3" pod="tigera-operator/tigera-operator-5bf8dfcb4-qfjcw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:34.852269 kubelet[2796]: I0813 01:47:34.852078 2796 kubelet.go:2306] "Pod admission denied" podUID="ba5e4576-862f-4bf2-a896-e22d3ce31cad" pod="tigera-operator/tigera-operator-5bf8dfcb4-fm2tx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:34.952518 kubelet[2796]: I0813 01:47:34.952453 2796 kubelet.go:2306] "Pod admission denied" podUID="d614f7d4-951a-41f3-8ec6-c23d01a0b375" pod="tigera-operator/tigera-operator-5bf8dfcb4-snchm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:35.051093 kubelet[2796]: I0813 01:47:35.051013 2796 kubelet.go:2306] "Pod admission denied" podUID="c67ecd41-bdec-4255-b717-aa1c6e7c040f" pod="tigera-operator/tigera-operator-5bf8dfcb4-c55xb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:35.150534 kubelet[2796]: I0813 01:47:35.149982 2796 kubelet.go:2306] "Pod admission denied" podUID="834cafc2-20d6-4a81-8b09-3dece946382d" pod="tigera-operator/tigera-operator-5bf8dfcb4-bz8zd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:35.249869 kubelet[2796]: I0813 01:47:35.249793 2796 kubelet.go:2306] "Pod admission denied" podUID="0d1922f2-1872-419d-ba64-038f2a9a33c3" pod="tigera-operator/tigera-operator-5bf8dfcb4-d7r9t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:35.352470 kubelet[2796]: I0813 01:47:35.352391 2796 kubelet.go:2306] "Pod admission denied" podUID="01336421-5135-45ad-ae4e-41f23d257dca" pod="tigera-operator/tigera-operator-5bf8dfcb4-qq2ht" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:35.451990 kubelet[2796]: I0813 01:47:35.451439 2796 kubelet.go:2306] "Pod admission denied" podUID="a576668b-7fe4-4d4c-a074-8ac5a6e54a58" pod="tigera-operator/tigera-operator-5bf8dfcb4-7lhvq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:35.549963 kubelet[2796]: I0813 01:47:35.549896 2796 kubelet.go:2306] "Pod admission denied" podUID="79f6fcc8-6eb2-4fe8-bc23-f5096cacf769" pod="tigera-operator/tigera-operator-5bf8dfcb4-w5k4h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:35.650987 kubelet[2796]: I0813 01:47:35.650913 2796 kubelet.go:2306] "Pod admission denied" podUID="08698702-01af-4982-8d3c-5af02957ea6f" pod="tigera-operator/tigera-operator-5bf8dfcb4-67jzq" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:35.749384 kubelet[2796]: I0813 01:47:35.749298 2796 kubelet.go:2306] "Pod admission denied" podUID="214be7a5-c57c-4291-94a0-c6fe6a102a5c" pod="tigera-operator/tigera-operator-5bf8dfcb4-nsqbf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:35.852866 kubelet[2796]: I0813 01:47:35.852793 2796 kubelet.go:2306] "Pod admission denied" podUID="a9524e41-cb77-46cb-933a-6d811e8b9ff2" pod="tigera-operator/tigera-operator-5bf8dfcb4-ht9bt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:35.950826 kubelet[2796]: I0813 01:47:35.950756 2796 kubelet.go:2306] "Pod admission denied" podUID="aacbecdd-90f1-4622-be2c-dc03c1cf41e7" pod="tigera-operator/tigera-operator-5bf8dfcb4-rl88q" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:36.050386 kubelet[2796]: I0813 01:47:36.050176 2796 kubelet.go:2306] "Pod admission denied" podUID="435752b8-fdb9-44af-b33a-ca874d5289a8" pod="tigera-operator/tigera-operator-5bf8dfcb4-p67jt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:36.151550 kubelet[2796]: I0813 01:47:36.151463 2796 kubelet.go:2306] "Pod admission denied" podUID="f9c35ce5-cc26-447c-8262-ac8e243c3d78" pod="tigera-operator/tigera-operator-5bf8dfcb4-97mdd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:36.350750 kubelet[2796]: I0813 01:47:36.350297 2796 kubelet.go:2306] "Pod admission denied" podUID="066a1bac-1dd3-4173-a36c-7f9cd448017d" pod="tigera-operator/tigera-operator-5bf8dfcb4-frkgk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:36.397270 kubelet[2796]: E0813 01:47:36.397100 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:47:36.397627 containerd[1581]: time="2025-08-13T01:47:36.397569713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d5dd9ff6-b6gw7,Uid:4720bedc-4719-4a57-b2ff-e5b21f7acb7f,Namespace:calico-system,Attempt:0,}" Aug 13 01:47:36.458841 kubelet[2796]: I0813 01:47:36.458750 2796 kubelet.go:2306] "Pod admission denied" podUID="871b6161-cd25-433a-85a3-6ff2b3f57626" pod="tigera-operator/tigera-operator-5bf8dfcb4-6zh2l" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:36.472150 containerd[1581]: time="2025-08-13T01:47:36.472085681Z" level=error msg="Failed to destroy network for sandbox \"052c7d039f713bca77bf8cb04a2694940ea1e32810bcf10bdd3300303187373e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:36.474322 containerd[1581]: time="2025-08-13T01:47:36.474289092Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d5dd9ff6-b6gw7,Uid:4720bedc-4719-4a57-b2ff-e5b21f7acb7f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"052c7d039f713bca77bf8cb04a2694940ea1e32810bcf10bdd3300303187373e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:36.475085 kubelet[2796]: E0813 01:47:36.474967 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"052c7d039f713bca77bf8cb04a2694940ea1e32810bcf10bdd3300303187373e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:36.476881 kubelet[2796]: E0813 01:47:36.476819 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"052c7d039f713bca77bf8cb04a2694940ea1e32810bcf10bdd3300303187373e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7" Aug 13 01:47:36.476881 kubelet[2796]: E0813 01:47:36.476853 2796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"052c7d039f713bca77bf8cb04a2694940ea1e32810bcf10bdd3300303187373e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7" Aug 13 01:47:36.477099 kubelet[2796]: E0813 01:47:36.477053 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86d5dd9ff6-b6gw7_calico-system(4720bedc-4719-4a57-b2ff-e5b21f7acb7f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86d5dd9ff6-b6gw7_calico-system(4720bedc-4719-4a57-b2ff-e5b21f7acb7f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"052c7d039f713bca77bf8cb04a2694940ea1e32810bcf10bdd3300303187373e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7" podUID="4720bedc-4719-4a57-b2ff-e5b21f7acb7f" Aug 13 01:47:36.478300 systemd[1]: run-netns-cni\x2dadb16c10\x2d0a35\x2db96f\x2d4a0d\x2d30496efc0272.mount: Deactivated successfully. 
Aug 13 01:47:36.580258 kubelet[2796]: I0813 01:47:36.580186 2796 kubelet.go:2306] "Pod admission denied" podUID="241288ab-42f9-470c-9554-8903e81cbdb1" pod="tigera-operator/tigera-operator-5bf8dfcb4-j8shl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:36.752339 kubelet[2796]: I0813 01:47:36.752264 2796 kubelet.go:2306] "Pod admission denied" podUID="da61ee11-9a92-4a37-8357-54e8fc92c5c6" pod="tigera-operator/tigera-operator-5bf8dfcb4-np6qm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:36.852766 kubelet[2796]: I0813 01:47:36.852691 2796 kubelet.go:2306] "Pod admission denied" podUID="cbb97231-41cd-46d1-baf8-e28a6620b267" pod="tigera-operator/tigera-operator-5bf8dfcb4-66hc7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:36.950235 kubelet[2796]: I0813 01:47:36.950171 2796 kubelet.go:2306] "Pod admission denied" podUID="557b84b9-112f-4bc9-befc-dec3a65de3ea" pod="tigera-operator/tigera-operator-5bf8dfcb4-l9g2n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:37.153168 kubelet[2796]: I0813 01:47:37.152961 2796 kubelet.go:2306] "Pod admission denied" podUID="b318ab8c-53e1-4818-abce-25e251df789d" pod="tigera-operator/tigera-operator-5bf8dfcb4-gxnq9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:37.250978 kubelet[2796]: I0813 01:47:37.250884 2796 kubelet.go:2306] "Pod admission denied" podUID="6f90b7a3-30a4-4856-bc6c-e93f0c26f9ed" pod="tigera-operator/tigera-operator-5bf8dfcb4-qhnxg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:37.351061 kubelet[2796]: I0813 01:47:37.350982 2796 kubelet.go:2306] "Pod admission denied" podUID="c68998f7-179a-4f8b-8bb2-b9242a3786ba" pod="tigera-operator/tigera-operator-5bf8dfcb4-22dc8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:37.450249 kubelet[2796]: I0813 01:47:37.450064 2796 kubelet.go:2306] "Pod admission denied" podUID="ca23ba34-5c1c-4546-bcf2-cc0d8f1dce71" pod="tigera-operator/tigera-operator-5bf8dfcb4-g9zjq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:37.548345 kubelet[2796]: I0813 01:47:37.548270 2796 kubelet.go:2306] "Pod admission denied" podUID="b9b7c93e-21b5-486c-9fef-b0122ea34f19" pod="tigera-operator/tigera-operator-5bf8dfcb4-wpkq9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:37.650468 kubelet[2796]: I0813 01:47:37.650392 2796 kubelet.go:2306] "Pod admission denied" podUID="64875670-6150-4f01-909b-6f9adb5eee51" pod="tigera-operator/tigera-operator-5bf8dfcb4-cqjcx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:37.752371 kubelet[2796]: I0813 01:47:37.752284 2796 kubelet.go:2306] "Pod admission denied" podUID="1c762abf-1da8-4626-bf26-50e4ad9597b5" pod="tigera-operator/tigera-operator-5bf8dfcb4-l6qx2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:37.952467 kubelet[2796]: I0813 01:47:37.952407 2796 kubelet.go:2306] "Pod admission denied" podUID="0ee56d50-3f7d-43d4-821a-65b00bcb25af" pod="tigera-operator/tigera-operator-5bf8dfcb4-gz9n4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:38.050980 kubelet[2796]: I0813 01:47:38.050373 2796 kubelet.go:2306] "Pod admission denied" podUID="e8605e68-0c12-4d3d-80e5-928931126df4" pod="tigera-operator/tigera-operator-5bf8dfcb4-jttss" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:38.151539 kubelet[2796]: I0813 01:47:38.151445 2796 kubelet.go:2306] "Pod admission denied" podUID="5eae1f57-8628-4260-95e0-4f67ae6d35c8" pod="tigera-operator/tigera-operator-5bf8dfcb4-nqbc9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:38.254763 kubelet[2796]: I0813 01:47:38.254687 2796 kubelet.go:2306] "Pod admission denied" podUID="eba87d35-215e-429f-be5d-f86c0c2ee996" pod="tigera-operator/tigera-operator-5bf8dfcb4-nz4nc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:38.351449 kubelet[2796]: I0813 01:47:38.351247 2796 kubelet.go:2306] "Pod admission denied" podUID="11e08545-4a91-496f-bf43-c645ba0a13ee" pod="tigera-operator/tigera-operator-5bf8dfcb4-246fj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:38.552146 kubelet[2796]: I0813 01:47:38.552084 2796 kubelet.go:2306] "Pod admission denied" podUID="bea15bf8-054c-44aa-908c-f35b1cdac72e" pod="tigera-operator/tigera-operator-5bf8dfcb4-nnxsq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:38.654241 kubelet[2796]: I0813 01:47:38.653726 2796 kubelet.go:2306] "Pod admission denied" podUID="e63570e4-dadd-480f-b21a-275950601ffc" pod="tigera-operator/tigera-operator-5bf8dfcb4-fw4dc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:38.701846 kubelet[2796]: I0813 01:47:38.701766 2796 kubelet.go:2306] "Pod admission denied" podUID="a9daba35-dec6-4098-b946-b4576c783127" pod="tigera-operator/tigera-operator-5bf8dfcb4-rdh9x" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:38.800791 kubelet[2796]: I0813 01:47:38.800706 2796 kubelet.go:2306] "Pod admission denied" podUID="ff4b7a48-b293-4aca-9a03-069738ae2314" pod="tigera-operator/tigera-operator-5bf8dfcb4-hqcp4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:38.900437 kubelet[2796]: I0813 01:47:38.900355 2796 kubelet.go:2306] "Pod admission denied" podUID="8316c7b3-5d7e-49c8-8329-b1e6da0212f8" pod="tigera-operator/tigera-operator-5bf8dfcb4-drbv4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:38.999150 kubelet[2796]: I0813 01:47:38.999082 2796 kubelet.go:2306] "Pod admission denied" podUID="30d34a99-f9e9-477b-9c79-9c08e1b7c608" pod="tigera-operator/tigera-operator-5bf8dfcb4-ktxw8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:39.201385 kubelet[2796]: I0813 01:47:39.201308 2796 kubelet.go:2306] "Pod admission denied" podUID="e98e88af-e455-42b6-abc2-21b9484f1520" pod="tigera-operator/tigera-operator-5bf8dfcb4-p9hst" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:39.300119 kubelet[2796]: I0813 01:47:39.299911 2796 kubelet.go:2306] "Pod admission denied" podUID="18d1af80-c82e-4cb2-9e9e-55cda7ec73cf" pod="tigera-operator/tigera-operator-5bf8dfcb4-z74tv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:39.402076 kubelet[2796]: I0813 01:47:39.402007 2796 kubelet.go:2306] "Pod admission denied" podUID="95716af1-b36a-46c3-984f-17a914a49ffd" pod="tigera-operator/tigera-operator-5bf8dfcb4-n5s4r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:39.502561 kubelet[2796]: I0813 01:47:39.502457 2796 kubelet.go:2306] "Pod admission denied" podUID="23195cf9-a1bf-4e01-9fa7-06ba57474387" pod="tigera-operator/tigera-operator-5bf8dfcb4-2bxbs" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:39.602519 kubelet[2796]: I0813 01:47:39.602329 2796 kubelet.go:2306] "Pod admission denied" podUID="f8a19db4-7d6d-4252-957c-84e9778a0db5" pod="tigera-operator/tigera-operator-5bf8dfcb4-5bfnr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:39.704954 kubelet[2796]: I0813 01:47:39.704859 2796 kubelet.go:2306] "Pod admission denied" podUID="37d8d60f-16bb-457d-b8ca-b3d926224862" pod="tigera-operator/tigera-operator-5bf8dfcb4-wqxmr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:39.804578 kubelet[2796]: I0813 01:47:39.804500 2796 kubelet.go:2306] "Pod admission denied" podUID="6ef82cba-6dd5-4776-83d7-cfa90428c939" pod="tigera-operator/tigera-operator-5bf8dfcb4-s4zwd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:39.903799 kubelet[2796]: I0813 01:47:39.901894 2796 kubelet.go:2306] "Pod admission denied" podUID="1fedb419-b6d5-4f8e-83d0-9579d7585343" pod="tigera-operator/tigera-operator-5bf8dfcb4-l6qjs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:40.003104 kubelet[2796]: I0813 01:47:40.003020 2796 kubelet.go:2306] "Pod admission denied" podUID="ec9fe37c-8bc1-4f55-a159-779bbf1e7a47" pod="tigera-operator/tigera-operator-5bf8dfcb4-8z5fw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:40.100951 kubelet[2796]: I0813 01:47:40.100866 2796 kubelet.go:2306] "Pod admission denied" podUID="9f4d36c6-972a-4a7e-a3cd-693d41eb999f" pod="tigera-operator/tigera-operator-5bf8dfcb4-7dnmr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:40.207823 kubelet[2796]: I0813 01:47:40.206736 2796 kubelet.go:2306] "Pod admission denied" podUID="464b3631-c4a6-4e4d-8827-3507f33d949e" pod="tigera-operator/tigera-operator-5bf8dfcb4-d98zs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:40.303691 kubelet[2796]: I0813 01:47:40.303581 2796 kubelet.go:2306] "Pod admission denied" podUID="e7b2d86c-f833-4986-8d47-7ce944a54991" pod="tigera-operator/tigera-operator-5bf8dfcb4-x9k2j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:40.360780 kubelet[2796]: I0813 01:47:40.360701 2796 kubelet.go:2306] "Pod admission denied" podUID="edf1090e-f257-4233-bfa0-781a48e3c66a" pod="tigera-operator/tigera-operator-5bf8dfcb4-k5fd6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:40.456946 kubelet[2796]: I0813 01:47:40.456874 2796 kubelet.go:2306] "Pod admission denied" podUID="af4c08ad-6167-4ab4-acf3-4e94e3d056a9" pod="tigera-operator/tigera-operator-5bf8dfcb4-c5hkb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:40.555576 kubelet[2796]: I0813 01:47:40.555495 2796 kubelet.go:2306] "Pod admission denied" podUID="f80b2fde-c74f-40dc-9b9e-10493cea87bd" pod="tigera-operator/tigera-operator-5bf8dfcb4-mb67k" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:40.650627 kubelet[2796]: I0813 01:47:40.650550 2796 kubelet.go:2306] "Pod admission denied" podUID="e05d92dd-5468-41d9-ac83-c5ea020a719c" pod="tigera-operator/tigera-operator-5bf8dfcb4-zjt7t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:40.750557 kubelet[2796]: I0813 01:47:40.750413 2796 kubelet.go:2306] "Pod admission denied" podUID="befa39e4-a91c-484d-9834-ec127babbfab" pod="tigera-operator/tigera-operator-5bf8dfcb4-cpbjv" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:40.851544 kubelet[2796]: I0813 01:47:40.851365 2796 kubelet.go:2306] "Pod admission denied" podUID="7dfd3e0f-e786-408e-9f1a-fe6e7dc7f5b3" pod="tigera-operator/tigera-operator-5bf8dfcb4-p85lh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:40.952564 kubelet[2796]: I0813 01:47:40.952485 2796 kubelet.go:2306] "Pod admission denied" podUID="c8aaf34e-e58c-4ed4-a215-e884832c7d4a" pod="tigera-operator/tigera-operator-5bf8dfcb4-rjn2k" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:41.052931 kubelet[2796]: I0813 01:47:41.052857 2796 kubelet.go:2306] "Pod admission denied" podUID="7df22ba7-8be8-499a-a5d3-8f8a6bfc32e1" pod="tigera-operator/tigera-operator-5bf8dfcb4-jfnt8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:41.254522 kubelet[2796]: I0813 01:47:41.254431 2796 kubelet.go:2306] "Pod admission denied" podUID="208b08f2-5079-4b03-8ffb-054348b0904f" pod="tigera-operator/tigera-operator-5bf8dfcb4-wgrjb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:41.355783 kubelet[2796]: I0813 01:47:41.355700 2796 kubelet.go:2306] "Pod admission denied" podUID="40192100-f6c9-4da3-a12e-ff2bed90b9e0" pod="tigera-operator/tigera-operator-5bf8dfcb4-fwtmw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:41.455992 kubelet[2796]: I0813 01:47:41.455876 2796 kubelet.go:2306] "Pod admission denied" podUID="75493a72-c38c-4976-99d1-7d3c91771d1c" pod="tigera-operator/tigera-operator-5bf8dfcb4-bbmmg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:41.555054 kubelet[2796]: I0813 01:47:41.554885 2796 kubelet.go:2306] "Pod admission denied" podUID="6a42e6dd-c0e8-4496-9c6e-2b3a80b6dac1" pod="tigera-operator/tigera-operator-5bf8dfcb4-tq8fj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:41.651273 kubelet[2796]: I0813 01:47:41.651199 2796 kubelet.go:2306] "Pod admission denied" podUID="598919e4-0a58-47de-9b66-484d54a5589e" pod="tigera-operator/tigera-operator-5bf8dfcb4-vlxpr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:41.752011 kubelet[2796]: I0813 01:47:41.751935 2796 kubelet.go:2306] "Pod admission denied" podUID="68be5a92-8c58-4455-bc7a-bcafda1d415b" pod="tigera-operator/tigera-operator-5bf8dfcb4-w65zd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:41.853912 kubelet[2796]: I0813 01:47:41.853551 2796 kubelet.go:2306] "Pod admission denied" podUID="ff288a92-26d1-4e93-8c5c-60ffd440a022" pod="tigera-operator/tigera-operator-5bf8dfcb4-7szvx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:41.952926 kubelet[2796]: I0813 01:47:41.952853 2796 kubelet.go:2306] "Pod admission denied" podUID="ad17847d-755b-4224-b2bc-efccbf2f866a" pod="tigera-operator/tigera-operator-5bf8dfcb4-n9lzw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:42.011295 kubelet[2796]: I0813 01:47:42.011251 2796 kubelet.go:2306] "Pod admission denied" podUID="521e0b52-3e00-4beb-9507-4da146f90ec3" pod="tigera-operator/tigera-operator-5bf8dfcb4-qcgsj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:42.101240 kubelet[2796]: I0813 01:47:42.101161 2796 kubelet.go:2306] "Pod admission denied" podUID="39199297-8914-4cc3-be38-ce959636b719" pod="tigera-operator/tigera-operator-5bf8dfcb4-xhldl" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:42.205445 kubelet[2796]: I0813 01:47:42.205263 2796 kubelet.go:2306] "Pod admission denied" podUID="c2172b40-a436-4c97-bb79-f585bed3c901" pod="tigera-operator/tigera-operator-5bf8dfcb4-r94gp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:42.304112 kubelet[2796]: I0813 01:47:42.304028 2796 kubelet.go:2306] "Pod admission denied" podUID="400e7613-5c99-49e0-8d2b-1e22bbeabda7" pod="tigera-operator/tigera-operator-5bf8dfcb4-mzkxl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:42.407692 kubelet[2796]: I0813 01:47:42.407581 2796 kubelet.go:2306] "Pod admission denied" podUID="17bf5a29-3e30-457b-9ac8-12221e7c83e5" pod="tigera-operator/tigera-operator-5bf8dfcb4-zt4nt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:42.502059 kubelet[2796]: I0813 01:47:42.501965 2796 kubelet.go:2306] "Pod admission denied" podUID="88f9dbeb-5ef4-4734-9291-4d5391b6a19a" pod="tigera-operator/tigera-operator-5bf8dfcb4-vjbtd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:42.601728 kubelet[2796]: I0813 01:47:42.601655 2796 kubelet.go:2306] "Pod admission denied" podUID="a5e1e87d-bff8-4bf1-b147-8fb3b02d5e62" pod="tigera-operator/tigera-operator-5bf8dfcb4-cdnv7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:42.651040 kubelet[2796]: I0813 01:47:42.650959 2796 kubelet.go:2306] "Pod admission denied" podUID="18cb14cf-8591-4cc7-bef8-e1d3756abf52" pod="tigera-operator/tigera-operator-5bf8dfcb4-gd88s" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:42.751148 kubelet[2796]: I0813 01:47:42.751070 2796 kubelet.go:2306] "Pod admission denied" podUID="4f19dff7-81d3-4cd5-b884-e8b6b58d72f7" pod="tigera-operator/tigera-operator-5bf8dfcb4-4g876" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:42.852627 kubelet[2796]: I0813 01:47:42.852401 2796 kubelet.go:2306] "Pod admission denied" podUID="6ebeb1b1-2228-4548-bc60-536edcd9bba0" pod="tigera-operator/tigera-operator-5bf8dfcb4-29nnp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:42.951914 kubelet[2796]: I0813 01:47:42.951807 2796 kubelet.go:2306] "Pod admission denied" podUID="11a5875a-24f3-4660-85c3-a550e4897cad" pod="tigera-operator/tigera-operator-5bf8dfcb4-s4j5v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:43.052563 kubelet[2796]: I0813 01:47:43.052480 2796 kubelet.go:2306] "Pod admission denied" podUID="6f4c9425-32d9-441f-a776-262d7dacfdbb" pod="tigera-operator/tigera-operator-5bf8dfcb4-g5t8n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:43.150491 kubelet[2796]: I0813 01:47:43.150326 2796 kubelet.go:2306] "Pod admission denied" podUID="6e1e4c12-5fb8-4150-9538-1bacc816a8ba" pod="tigera-operator/tigera-operator-5bf8dfcb4-n985j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:43.252574 kubelet[2796]: I0813 01:47:43.252500 2796 kubelet.go:2306] "Pod admission denied" podUID="b82eabc7-12ff-4e48-9fd7-3db50beb34c4" pod="tigera-operator/tigera-operator-5bf8dfcb4-l26hl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:43.352860 kubelet[2796]: I0813 01:47:43.352781 2796 kubelet.go:2306] "Pod admission denied" podUID="263e2e6c-5c78-4908-8b84-30206d7bc723" pod="tigera-operator/tigera-operator-5bf8dfcb4-k5bsn" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:43.462814 kubelet[2796]: I0813 01:47:43.461825 2796 kubelet.go:2306] "Pod admission denied" podUID="1ee1dcbf-3696-4140-a2ee-cf12a961274c" pod="tigera-operator/tigera-operator-5bf8dfcb4-tzndz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:43.556808 kubelet[2796]: I0813 01:47:43.556730 2796 kubelet.go:2306] "Pod admission denied" podUID="c329bba8-1859-425b-be24-13bd88abef6e" pod="tigera-operator/tigera-operator-5bf8dfcb4-szjrv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:43.760690 kubelet[2796]: I0813 01:47:43.760463 2796 kubelet.go:2306] "Pod admission denied" podUID="874be738-4a24-4069-8448-4a6b5ed3b9f8" pod="tigera-operator/tigera-operator-5bf8dfcb4-wtwm6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:43.852982 kubelet[2796]: I0813 01:47:43.852900 2796 kubelet.go:2306] "Pod admission denied" podUID="222f9513-f480-4c78-a9a0-5f0d66d37848" pod="tigera-operator/tigera-operator-5bf8dfcb4-brxhg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:43.953426 kubelet[2796]: I0813 01:47:43.953328 2796 kubelet.go:2306] "Pod admission denied" podUID="1dd7750d-b93d-48b4-9a60-44aa42f9760f" pod="tigera-operator/tigera-operator-5bf8dfcb4-sn29g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:44.052451 kubelet[2796]: I0813 01:47:44.051959 2796 kubelet.go:2306] "Pod admission denied" podUID="f17a73fb-b982-41c5-9d52-1bdae9814336" pod="tigera-operator/tigera-operator-5bf8dfcb4-tzzfg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:44.149930 kubelet[2796]: I0813 01:47:44.149857 2796 kubelet.go:2306] "Pod admission denied" podUID="27a058d0-4bae-4091-8538-e101001a2beb" pod="tigera-operator/tigera-operator-5bf8dfcb4-bp7m9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:44.378183 kubelet[2796]: I0813 01:47:44.376190 2796 kubelet.go:2306] "Pod admission denied" podUID="b9a029e6-ae67-434f-836d-d458a60252c2" pod="tigera-operator/tigera-operator-5bf8dfcb4-8qt2m" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:44.404385 containerd[1581]: time="2025-08-13T01:47:44.403972650Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 01:47:44.455074 kubelet[2796]: I0813 01:47:44.454987 2796 kubelet.go:2306] "Pod admission denied" podUID="54af0e34-16cf-4d33-ad5a-4c555a8304af" pod="tigera-operator/tigera-operator-5bf8dfcb4-vbdxf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:44.553704 kubelet[2796]: I0813 01:47:44.553619 2796 kubelet.go:2306] "Pod admission denied" podUID="9c124751-bdb9-46ec-b915-1879463f5988" pod="tigera-operator/tigera-operator-5bf8dfcb4-cwrs7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:44.653589 kubelet[2796]: I0813 01:47:44.652633 2796 kubelet.go:2306] "Pod admission denied" podUID="fb3cf6b5-1ceb-4a0a-8558-863d9ab41256" pod="tigera-operator/tigera-operator-5bf8dfcb4-gnpxm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:44.754351 kubelet[2796]: I0813 01:47:44.754284 2796 kubelet.go:2306] "Pod admission denied" podUID="987c04eb-4bfd-49fe-98a7-d84734e69444" pod="tigera-operator/tigera-operator-5bf8dfcb4-mm2lz" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:44.854544 kubelet[2796]: I0813 01:47:44.854486 2796 kubelet.go:2306] "Pod admission denied" podUID="ea9776ea-324a-4089-9887-161bf1ef1bcd" pod="tigera-operator/tigera-operator-5bf8dfcb4-tsjx9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:44.911713 kubelet[2796]: I0813 01:47:44.911537 2796 kubelet.go:2306] "Pod admission denied" podUID="712720bc-89b1-4e24-b8aa-b3ea42ef5f43" pod="tigera-operator/tigera-operator-5bf8dfcb4-fjdn8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:45.004772 kubelet[2796]: I0813 01:47:45.004701 2796 kubelet.go:2306] "Pod admission denied" podUID="6b313c04-ca26-496d-904c-5dc7ae31df03" pod="tigera-operator/tigera-operator-5bf8dfcb4-bz6n2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:45.101436 kubelet[2796]: I0813 01:47:45.101363 2796 kubelet.go:2306] "Pod admission denied" podUID="cf878fdd-954f-4322-9031-21d44406080f" pod="tigera-operator/tigera-operator-5bf8dfcb4-hlt9d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:45.207722 kubelet[2796]: I0813 01:47:45.207193 2796 kubelet.go:2306] "Pod admission denied" podUID="fbc3529a-ba07-41b9-b1b4-659641e7d303" pod="tigera-operator/tigera-operator-5bf8dfcb4-9bx99" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:45.332088 kubelet[2796]: I0813 01:47:45.332033 2796 kubelet.go:2306] "Pod admission denied" podUID="7adcf53f-e5eb-4ab9-afd6-96ab16d80c2f" pod="tigera-operator/tigera-operator-5bf8dfcb4-tg85t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:45.403817 kubelet[2796]: E0813 01:47:45.403779 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:47:45.405902 containerd[1581]: time="2025-08-13T01:47:45.405221961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-djvw6,Uid:981696e3-42b0-4ae8-b44b-fa439a03a402,Namespace:kube-system,Attempt:0,}" Aug 13 01:47:45.462954 kubelet[2796]: I0813 01:47:45.461568 2796 kubelet.go:2306] "Pod admission denied" podUID="30de2669-2fbe-4391-acc5-f649f4a6409a" pod="tigera-operator/tigera-operator-5bf8dfcb4-kcp4t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:45.565860 kubelet[2796]: I0813 01:47:45.565805 2796 kubelet.go:2306] "Pod admission denied" podUID="307687a4-9cf7-46e8-b622-d5652ae85cfa" pod="tigera-operator/tigera-operator-5bf8dfcb4-kgldl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:45.651881 containerd[1581]: time="2025-08-13T01:47:45.651808372Z" level=error msg="Failed to destroy network for sandbox \"33bb4e6b8beb8489d9fd8a71a15af30dd54f440a58d22e08018afcd1702238a6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:45.654544 systemd[1]: run-netns-cni\x2df86f4a2a\x2d479c\x2d4bb3\x2d9f8d\x2d6a845bd80d66.mount: Deactivated successfully. 
Aug 13 01:47:45.655521 containerd[1581]: time="2025-08-13T01:47:45.655477430Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-djvw6,Uid:981696e3-42b0-4ae8-b44b-fa439a03a402,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"33bb4e6b8beb8489d9fd8a71a15af30dd54f440a58d22e08018afcd1702238a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:45.656664 kubelet[2796]: E0813 01:47:45.655898 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33bb4e6b8beb8489d9fd8a71a15af30dd54f440a58d22e08018afcd1702238a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:45.659516 kubelet[2796]: E0813 01:47:45.656004 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33bb4e6b8beb8489d9fd8a71a15af30dd54f440a58d22e08018afcd1702238a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-djvw6" Aug 13 01:47:45.659516 kubelet[2796]: E0813 01:47:45.658727 2796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"33bb4e6b8beb8489d9fd8a71a15af30dd54f440a58d22e08018afcd1702238a6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-7c65d6cfc9-djvw6" Aug 13 01:47:45.659692 kubelet[2796]: E0813 01:47:45.659659 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-7c65d6cfc9-djvw6_kube-system(981696e3-42b0-4ae8-b44b-fa439a03a402)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-7c65d6cfc9-djvw6_kube-system(981696e3-42b0-4ae8-b44b-fa439a03a402)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"33bb4e6b8beb8489d9fd8a71a15af30dd54f440a58d22e08018afcd1702238a6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-7c65d6cfc9-djvw6" podUID="981696e3-42b0-4ae8-b44b-fa439a03a402" Aug 13 01:47:45.717808 kubelet[2796]: I0813 01:47:45.715525 2796 kubelet.go:2306] "Pod admission denied" podUID="1209604f-2182-4388-9c16-379618bfdc73" pod="tigera-operator/tigera-operator-5bf8dfcb4-gl4vr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:45.935672 kubelet[2796]: I0813 01:47:45.935042 2796 kubelet.go:2306] "Pod admission denied" podUID="97dea0f1-b2ee-4975-b275-b5ddb1b84bee" pod="tigera-operator/tigera-operator-5bf8dfcb4-skbls" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:46.398000 containerd[1581]: time="2025-08-13T01:47:46.397932647Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bk2p6,Uid:0e8898c7-a3f5-4010-bb1f-d756673c29b2,Namespace:calico-system,Attempt:0,}" Aug 13 01:47:46.485813 containerd[1581]: time="2025-08-13T01:47:46.485717036Z" level=error msg="Failed to destroy network for sandbox \"2372bb01d3351f8b63b45720b3356e312623a59fd287badaf6dade1c18d658c1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:46.489788 systemd[1]: run-netns-cni\x2d908cc8e4\x2dfb4e\x2d4c9a\x2dda08\x2db2dde2447efb.mount: Deactivated successfully. Aug 13 01:47:46.491581 containerd[1581]: time="2025-08-13T01:47:46.491462896Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bk2p6,Uid:0e8898c7-a3f5-4010-bb1f-d756673c29b2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2372bb01d3351f8b63b45720b3356e312623a59fd287badaf6dade1c18d658c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:46.491980 kubelet[2796]: E0813 01:47:46.491900 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2372bb01d3351f8b63b45720b3356e312623a59fd287badaf6dade1c18d658c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:46.492055 kubelet[2796]: E0813 01:47:46.492008 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2372bb01d3351f8b63b45720b3356e312623a59fd287badaf6dade1c18d658c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bk2p6" Aug 13 01:47:46.492055 kubelet[2796]: E0813 01:47:46.492035 2796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2372bb01d3351f8b63b45720b3356e312623a59fd287badaf6dade1c18d658c1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bk2p6" Aug 13 01:47:46.492171 kubelet[2796]: E0813 01:47:46.492093 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bk2p6_calico-system(0e8898c7-a3f5-4010-bb1f-d756673c29b2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bk2p6_calico-system(0e8898c7-a3f5-4010-bb1f-d756673c29b2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2372bb01d3351f8b63b45720b3356e312623a59fd287badaf6dade1c18d658c1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bk2p6" 
podUID="0e8898c7-a3f5-4010-bb1f-d756673c29b2" Aug 13 01:47:46.805901 systemd[1]: Started sshd@9-172.232.7.32:22-147.75.109.163:34882.service - OpenSSH per-connection server daemon (147.75.109.163:34882). Aug 13 01:47:47.158700 sshd[4791]: Accepted publickey for core from 147.75.109.163 port 34882 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:47:47.162426 sshd-session[4791]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:47:47.173166 systemd-logind[1544]: New session 10 of user core. Aug 13 01:47:47.180522 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 01:47:47.419583 containerd[1581]: time="2025-08-13T01:47:47.419455392Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d5dd9ff6-b6gw7,Uid:4720bedc-4719-4a57-b2ff-e5b21f7acb7f,Namespace:calico-system,Attempt:0,}" Aug 13 01:47:47.531811 containerd[1581]: time="2025-08-13T01:47:47.531741020Z" level=error msg="Failed to destroy network for sandbox \"b1e35efd35073feaa7508cf1e48ec0d8598d066a687a565c4c9862b88152349c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:47.537242 sshd[4793]: Connection closed by 147.75.109.163 port 34882 Aug 13 01:47:47.537897 sshd-session[4791]: pam_unix(sshd:session): session closed for user core Aug 13 01:47:47.539577 systemd[1]: run-netns-cni\x2dd0371dcf\x2d0aa9\x2dbb17\x2d05c6\x2da5ef10796b31.mount: Deactivated successfully. Aug 13 01:47:47.542846 containerd[1581]: time="2025-08-13T01:47:47.542284036Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d5dd9ff6-b6gw7,Uid:4720bedc-4719-4a57-b2ff-e5b21f7acb7f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1e35efd35073feaa7508cf1e48ec0d8598d066a687a565c4c9862b88152349c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:47.543400 kubelet[2796]: E0813 01:47:47.543122 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1e35efd35073feaa7508cf1e48ec0d8598d066a687a565c4c9862b88152349c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:47.543839 kubelet[2796]: E0813 01:47:47.543472 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1e35efd35073feaa7508cf1e48ec0d8598d066a687a565c4c9862b88152349c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7" Aug 13 01:47:47.543839 kubelet[2796]: E0813 01:47:47.543500 2796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b1e35efd35073feaa7508cf1e48ec0d8598d066a687a565c4c9862b88152349c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is 
running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7" Aug 13 01:47:47.543839 kubelet[2796]: E0813 01:47:47.543771 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86d5dd9ff6-b6gw7_calico-system(4720bedc-4719-4a57-b2ff-e5b21f7acb7f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86d5dd9ff6-b6gw7_calico-system(4720bedc-4719-4a57-b2ff-e5b21f7acb7f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b1e35efd35073feaa7508cf1e48ec0d8598d066a687a565c4c9862b88152349c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7" podUID="4720bedc-4719-4a57-b2ff-e5b21f7acb7f" Aug 13 01:47:47.550751 systemd-logind[1544]: Session 10 logged out. Waiting for processes to exit. Aug 13 01:47:47.551806 systemd[1]: sshd@9-172.232.7.32:22-147.75.109.163:34882.service: Deactivated successfully. Aug 13 01:47:47.557133 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 01:47:47.561494 systemd-logind[1544]: Removed session 10. Aug 13 01:47:48.958868 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1707749871.mount: Deactivated successfully. Aug 13 01:47:48.991052 containerd[1581]: time="2025-08-13T01:47:48.990979257Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:47:48.992056 containerd[1581]: time="2025-08-13T01:47:48.991944837Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 01:47:48.992627 containerd[1581]: time="2025-08-13T01:47:48.992596746Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:47:48.994348 containerd[1581]: time="2025-08-13T01:47:48.994317097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:47:48.994885 containerd[1581]: time="2025-08-13T01:47:48.994859874Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 4.590833373s" Aug 13 01:47:48.994969 containerd[1581]: time="2025-08-13T01:47:48.994954187Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 13 01:47:49.009442 containerd[1581]: time="2025-08-13T01:47:49.009363622Z" level=info msg="CreateContainer within sandbox \"0f435f25441e44df4f853443af31ae924eb3be47f3acfdb6f44191e270c281aa\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 01:47:49.016218 containerd[1581]: time="2025-08-13T01:47:49.015157317Z" level=info msg="Container 25e44c63aa87dccfc7e64ca4849fb58d599b9a44e3eb277cd3b525f61d131e3e: CDI devices from CRI Config.CDIDevices: []" Aug 13 
01:47:49.036822 containerd[1581]: time="2025-08-13T01:47:49.036784632Z" level=info msg="CreateContainer within sandbox \"0f435f25441e44df4f853443af31ae924eb3be47f3acfdb6f44191e270c281aa\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"25e44c63aa87dccfc7e64ca4849fb58d599b9a44e3eb277cd3b525f61d131e3e\"" Aug 13 01:47:49.037254 containerd[1581]: time="2025-08-13T01:47:49.037218745Z" level=info msg="StartContainer for \"25e44c63aa87dccfc7e64ca4849fb58d599b9a44e3eb277cd3b525f61d131e3e\"" Aug 13 01:47:49.038872 containerd[1581]: time="2025-08-13T01:47:49.038847934Z" level=info msg="connecting to shim 25e44c63aa87dccfc7e64ca4849fb58d599b9a44e3eb277cd3b525f61d131e3e" address="unix:///run/containerd/s/dc3839d04e2486c075d052b3345cd3c69314ac334e1df61c3c6a200c95ea44c9" protocol=ttrpc version=3 Aug 13 01:47:49.059807 systemd[1]: Started cri-containerd-25e44c63aa87dccfc7e64ca4849fb58d599b9a44e3eb277cd3b525f61d131e3e.scope - libcontainer container 25e44c63aa87dccfc7e64ca4849fb58d599b9a44e3eb277cd3b525f61d131e3e. Aug 13 01:47:49.106942 containerd[1581]: time="2025-08-13T01:47:49.106869323Z" level=info msg="StartContainer for \"25e44c63aa87dccfc7e64ca4849fb58d599b9a44e3eb277cd3b525f61d131e3e\" returns successfully" Aug 13 01:47:49.186153 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 01:47:49.186284 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Aug 13 01:47:49.399411 kubelet[2796]: E0813 01:47:49.399348 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:47:49.404806 containerd[1581]: time="2025-08-13T01:47:49.404739287Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6vrr8,Uid:cbf6d4b0-f3bc-4a92-9977-6d91de60b65f,Namespace:kube-system,Attempt:0,}" Aug 13 01:47:49.509386 systemd-networkd[1477]: calicb18b69f12c: Link UP Aug 13 01:47:49.511177 systemd-networkd[1477]: calicb18b69f12c: Gained carrier Aug 13 01:47:49.533710 containerd[1581]: 2025-08-13 01:47:49.432 [INFO][4895] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 01:47:49.533710 containerd[1581]: 2025-08-13 01:47:49.442 [INFO][4895] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--7--32-k8s-coredns--7c65d6cfc9--6vrr8-eth0 coredns-7c65d6cfc9- kube-system cbf6d4b0-f3bc-4a92-9977-6d91de60b65f 811 0 2025-08-13 01:45:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-232-7-32 coredns-7c65d6cfc9-6vrr8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calicb18b69f12c [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="1f1fe1f5b70806104763baca552558d17d907c08f343692c086755c516134978" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6vrr8" WorkloadEndpoint="172--232--7--32-k8s-coredns--7c65d6cfc9--6vrr8-" Aug 13 01:47:49.533710 containerd[1581]: 2025-08-13 01:47:49.442 [INFO][4895] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1f1fe1f5b70806104763baca552558d17d907c08f343692c086755c516134978" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6vrr8" WorkloadEndpoint="172--232--7--32-k8s-coredns--7c65d6cfc9--6vrr8-eth0" Aug 13 01:47:49.533710 containerd[1581]: 2025-08-13 
01:47:49.465 [INFO][4910] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1f1fe1f5b70806104763baca552558d17d907c08f343692c086755c516134978" HandleID="k8s-pod-network.1f1fe1f5b70806104763baca552558d17d907c08f343692c086755c516134978" Workload="172--232--7--32-k8s-coredns--7c65d6cfc9--6vrr8-eth0" Aug 13 01:47:49.534674 containerd[1581]: 2025-08-13 01:47:49.465 [INFO][4910] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1f1fe1f5b70806104763baca552558d17d907c08f343692c086755c516134978" HandleID="k8s-pod-network.1f1fe1f5b70806104763baca552558d17d907c08f343692c086755c516134978" Workload="172--232--7--32-k8s-coredns--7c65d6cfc9--6vrr8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f130), Attrs:map[string]string{"namespace":"kube-system", "node":"172-232-7-32", "pod":"coredns-7c65d6cfc9-6vrr8", "timestamp":"2025-08-13 01:47:49.465167195 +0000 UTC"}, Hostname:"172-232-7-32", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:47:49.534674 containerd[1581]: 2025-08-13 01:47:49.465 [INFO][4910] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:47:49.534674 containerd[1581]: 2025-08-13 01:47:49.465 [INFO][4910] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:47:49.534674 containerd[1581]: 2025-08-13 01:47:49.466 [INFO][4910] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-7-32' Aug 13 01:47:49.534674 containerd[1581]: 2025-08-13 01:47:49.472 [INFO][4910] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1f1fe1f5b70806104763baca552558d17d907c08f343692c086755c516134978" host="172-232-7-32" Aug 13 01:47:49.534674 containerd[1581]: 2025-08-13 01:47:49.476 [INFO][4910] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-7-32" Aug 13 01:47:49.534674 containerd[1581]: 2025-08-13 01:47:49.480 [INFO][4910] ipam/ipam.go 511: Trying affinity for 192.168.103.192/26 host="172-232-7-32" Aug 13 01:47:49.534674 containerd[1581]: 2025-08-13 01:47:49.481 [INFO][4910] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.192/26 host="172-232-7-32" Aug 13 01:47:49.534674 containerd[1581]: 2025-08-13 01:47:49.483 [INFO][4910] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.192/26 host="172-232-7-32" Aug 13 01:47:49.534674 containerd[1581]: 2025-08-13 01:47:49.483 [INFO][4910] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.103.192/26 handle="k8s-pod-network.1f1fe1f5b70806104763baca552558d17d907c08f343692c086755c516134978" host="172-232-7-32" Aug 13 01:47:49.534940 containerd[1581]: 2025-08-13 01:47:49.484 [INFO][4910] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1f1fe1f5b70806104763baca552558d17d907c08f343692c086755c516134978 Aug 13 01:47:49.534940 containerd[1581]: 2025-08-13 01:47:49.489 [INFO][4910] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.103.192/26 handle="k8s-pod-network.1f1fe1f5b70806104763baca552558d17d907c08f343692c086755c516134978" host="172-232-7-32" Aug 13 01:47:49.534940 containerd[1581]: 2025-08-13 01:47:49.494 [INFO][4910] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.103.193/26] block=192.168.103.192/26 handle="k8s-pod-network.1f1fe1f5b70806104763baca552558d17d907c08f343692c086755c516134978" host="172-232-7-32" Aug 13 01:47:49.534940 containerd[1581]: 
2025-08-13 01:47:49.494 [INFO][4910] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.193/26] handle="k8s-pod-network.1f1fe1f5b70806104763baca552558d17d907c08f343692c086755c516134978" host="172-232-7-32" Aug 13 01:47:49.534940 containerd[1581]: 2025-08-13 01:47:49.494 [INFO][4910] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:47:49.534940 containerd[1581]: 2025-08-13 01:47:49.494 [INFO][4910] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.103.193/26] IPv6=[] ContainerID="1f1fe1f5b70806104763baca552558d17d907c08f343692c086755c516134978" HandleID="k8s-pod-network.1f1fe1f5b70806104763baca552558d17d907c08f343692c086755c516134978" Workload="172--232--7--32-k8s-coredns--7c65d6cfc9--6vrr8-eth0" Aug 13 01:47:49.535074 containerd[1581]: 2025-08-13 01:47:49.499 [INFO][4895] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1f1fe1f5b70806104763baca552558d17d907c08f343692c086755c516134978" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6vrr8" WorkloadEndpoint="172--232--7--32-k8s-coredns--7c65d6cfc9--6vrr8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--7--32-k8s-coredns--7c65d6cfc9--6vrr8-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"cbf6d4b0-f3bc-4a92-9977-6d91de60b65f", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 45, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-7-32", ContainerID:"", Pod:"coredns-7c65d6cfc9-6vrr8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.103.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicb18b69f12c", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:47:49.535074 containerd[1581]: 2025-08-13 01:47:49.500 [INFO][4895] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.193/32] ContainerID="1f1fe1f5b70806104763baca552558d17d907c08f343692c086755c516134978" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6vrr8" WorkloadEndpoint="172--232--7--32-k8s-coredns--7c65d6cfc9--6vrr8-eth0" Aug 13 01:47:49.535074 containerd[1581]: 2025-08-13 01:47:49.500 [INFO][4895] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calicb18b69f12c ContainerID="1f1fe1f5b70806104763baca552558d17d907c08f343692c086755c516134978" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6vrr8" 
WorkloadEndpoint="172--232--7--32-k8s-coredns--7c65d6cfc9--6vrr8-eth0" Aug 13 01:47:49.535074 containerd[1581]: 2025-08-13 01:47:49.511 [INFO][4895] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1f1fe1f5b70806104763baca552558d17d907c08f343692c086755c516134978" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6vrr8" WorkloadEndpoint="172--232--7--32-k8s-coredns--7c65d6cfc9--6vrr8-eth0" Aug 13 01:47:49.535074 containerd[1581]: 2025-08-13 01:47:49.512 [INFO][4895] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1f1fe1f5b70806104763baca552558d17d907c08f343692c086755c516134978" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6vrr8" WorkloadEndpoint="172--232--7--32-k8s-coredns--7c65d6cfc9--6vrr8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--7--32-k8s-coredns--7c65d6cfc9--6vrr8-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"cbf6d4b0-f3bc-4a92-9977-6d91de60b65f", ResourceVersion:"811", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 45, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-7-32", ContainerID:"1f1fe1f5b70806104763baca552558d17d907c08f343692c086755c516134978", Pod:"coredns-7c65d6cfc9-6vrr8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.103.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calicb18b69f12c", MAC:"92:3f:9d:5e:3e:85", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:47:49.535074 containerd[1581]: 2025-08-13 01:47:49.525 [INFO][4895] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1f1fe1f5b70806104763baca552558d17d907c08f343692c086755c516134978" Namespace="kube-system" Pod="coredns-7c65d6cfc9-6vrr8" WorkloadEndpoint="172--232--7--32-k8s-coredns--7c65d6cfc9--6vrr8-eth0" Aug 13 01:47:49.550788 containerd[1581]: time="2025-08-13T01:47:49.550708454Z" level=info msg="connecting to shim 1f1fe1f5b70806104763baca552558d17d907c08f343692c086755c516134978" address="unix:///run/containerd/s/9e238394f7147fd701c5238f8892b9508b65b206a7dd2432ef025407ad0973c6" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:47:49.581810 systemd[1]: Started cri-containerd-1f1fe1f5b70806104763baca552558d17d907c08f343692c086755c516134978.scope - libcontainer container 1f1fe1f5b70806104763baca552558d17d907c08f343692c086755c516134978. 
Aug 13 01:47:49.634489 containerd[1581]: time="2025-08-13T01:47:49.634430767Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-6vrr8,Uid:cbf6d4b0-f3bc-4a92-9977-6d91de60b65f,Namespace:kube-system,Attempt:0,} returns sandbox id \"1f1fe1f5b70806104763baca552558d17d907c08f343692c086755c516134978\"" Aug 13 01:47:49.636620 kubelet[2796]: E0813 01:47:49.636586 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:47:49.639205 containerd[1581]: time="2025-08-13T01:47:49.638954834Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 01:47:50.063638 kubelet[2796]: I0813 01:47:50.063551 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-8j6cb" podStartSLOduration=2.01898362 podStartE2EDuration="2m1.063519643s" podCreationTimestamp="2025-08-13 01:45:49 +0000 UTC" firstStartedPulling="2025-08-13 01:45:49.951519888 +0000 UTC m=+22.706088835" lastFinishedPulling="2025-08-13 01:47:48.996055931 +0000 UTC m=+141.750624858" observedRunningTime="2025-08-13 01:47:50.052455586 +0000 UTC m=+142.807024513" watchObservedRunningTime="2025-08-13 01:47:50.063519643 +0000 UTC m=+142.818088570" Aug 13 01:47:50.139757 containerd[1581]: time="2025-08-13T01:47:50.139610909Z" level=info msg="TaskExit event in podsandbox handler container_id:\"25e44c63aa87dccfc7e64ca4849fb58d599b9a44e3eb277cd3b525f61d131e3e\" id:\"446753ef5bb56f29208ae33871bb9c21b4975b0b6691682f9fd11afa20861459\" pid:4983 exit_status:1 exited_at:{seconds:1755049670 nanos:139084573}" Aug 13 01:47:50.385467 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3646947436.mount: Deactivated successfully. 
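The pod_startup_latency_tracker entry above can be reproduced from the timestamps it logs: podStartE2EDuration is observedRunningTime minus podCreationTimestamp, and subtracting the image-pull window (lastFinishedPulling minus firstStartedPulling) from that lines up, to rounding, with the logged podStartSLOduration. A small Go check of the arithmetic, using the values exactly as logged (the formula itself is inferred from those numbers, not from kubelet source):

    package main

    import (
        "fmt"
        "time"
    )

    // mustParse handles the kubelet's logged timestamp form,
    // e.g. "2025-08-13 01:45:49.951519888 +0000 UTC".
    func mustParse(v string) time.Time {
        t, err := time.Parse("2006-01-02 15:04:05 -0700 MST", v)
        if err != nil {
            panic(err)
        }
        return t
    }

    func main() {
        created := mustParse("2025-08-13 01:45:49 +0000 UTC")
        firstPull := mustParse("2025-08-13 01:45:49.951519888 +0000 UTC")
        lastPull := mustParse("2025-08-13 01:47:48.996055931 +0000 UTC")
        running := mustParse("2025-08-13 01:47:50.063519643 +0000 UTC")

        e2e := running.Sub(created)          // 2m1.063519643s, as logged
        slo := e2e - lastPull.Sub(firstPull) // ~2.0189836s, matching podStartSLOduration=2.01898362
        fmt.Println("podStartE2EDuration:", e2e)
        fmt.Println("podStartSLOduration:", slo)
    }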
Aug 13 01:47:51.057951 systemd-networkd[1477]: calicb18b69f12c: Gained IPv6LL Aug 13 01:47:51.714720 containerd[1581]: time="2025-08-13T01:47:51.714625697Z" level=info msg="TaskExit event in podsandbox handler container_id:\"25e44c63aa87dccfc7e64ca4849fb58d599b9a44e3eb277cd3b525f61d131e3e\" id:\"7569784c1f88ac6957be040951fbccdaef7251de5ca367568fe94724ad3af6f8\" pid:5106 exit_status:1 exited_at:{seconds:1755049671 nanos:713991197}" Aug 13 01:47:52.215858 systemd-networkd[1477]: vxlan.calico: Link UP Aug 13 01:47:52.215878 systemd-networkd[1477]: vxlan.calico: Gained carrier Aug 13 01:47:52.403792 containerd[1581]: time="2025-08-13T01:47:52.403663973Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:47:52.405460 containerd[1581]: time="2025-08-13T01:47:52.405438919Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Aug 13 01:47:52.406672 containerd[1581]: time="2025-08-13T01:47:52.406409948Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:47:52.408990 containerd[1581]: time="2025-08-13T01:47:52.408970487Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:47:52.410581 containerd[1581]: time="2025-08-13T01:47:52.410560166Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 2.771537089s" Aug 13 01:47:52.411195 containerd[1581]: time="2025-08-13T01:47:52.411168405Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 01:47:52.422468 containerd[1581]: time="2025-08-13T01:47:52.422186653Z" level=info msg="CreateContainer within sandbox \"1f1fe1f5b70806104763baca552558d17d907c08f343692c086755c516134978\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 01:47:52.432691 containerd[1581]: time="2025-08-13T01:47:52.432213191Z" level=info msg="Container 2b7c135ce75cd0eb2350f0ef734fcd56975d26583313b5afba7969381c280774: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:47:52.438691 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount324267572.mount: Deactivated successfully. 
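The two image pulls logged so far also give a rough sense of registry throughput: coredns v1.11.3 is 18,562,039 bytes pulled in 2.771537089s (about 6.7 MB/s), while calico/node v3.30.2 earlier was 158,500,025 bytes in 4.590833373s (about 34.5 MB/s). A trivial Go check of those rates, using only the numbers as logged:

    package main

    import "fmt"

    func main() {
        // Values copied from the containerd PullImage log entries.
        pulls := []struct {
            ref     string
            bytes   float64
            seconds float64
        }{
            {"ghcr.io/flatcar/calico/node:v3.30.2", 158500025, 4.590833373},
            {"registry.k8s.io/coredns/coredns:v1.11.3", 18562039, 2.771537089},
        }
        for _, p := range pulls {
            fmt.Printf("%s: %.1f MB in %.2fs (%.1f MB/s)\n",
                p.ref, p.bytes/1e6, p.seconds, p.bytes/1e6/p.seconds)
        }
    }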
Aug 13 01:47:52.459759 containerd[1581]: time="2025-08-13T01:47:52.459671716Z" level=info msg="CreateContainer within sandbox \"1f1fe1f5b70806104763baca552558d17d907c08f343692c086755c516134978\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"2b7c135ce75cd0eb2350f0ef734fcd56975d26583313b5afba7969381c280774\"" Aug 13 01:47:52.461280 containerd[1581]: time="2025-08-13T01:47:52.461144642Z" level=info msg="StartContainer for \"2b7c135ce75cd0eb2350f0ef734fcd56975d26583313b5afba7969381c280774\"" Aug 13 01:47:52.462966 containerd[1581]: time="2025-08-13T01:47:52.462945747Z" level=info msg="connecting to shim 2b7c135ce75cd0eb2350f0ef734fcd56975d26583313b5afba7969381c280774" address="unix:///run/containerd/s/9e238394f7147fd701c5238f8892b9508b65b206a7dd2432ef025407ad0973c6" protocol=ttrpc version=3 Aug 13 01:47:52.522414 systemd[1]: Started cri-containerd-2b7c135ce75cd0eb2350f0ef734fcd56975d26583313b5afba7969381c280774.scope - libcontainer container 2b7c135ce75cd0eb2350f0ef734fcd56975d26583313b5afba7969381c280774. Aug 13 01:47:52.575849 containerd[1581]: time="2025-08-13T01:47:52.575776477Z" level=info msg="StartContainer for \"2b7c135ce75cd0eb2350f0ef734fcd56975d26583313b5afba7969381c280774\" returns successfully" Aug 13 01:47:52.600123 systemd[1]: Started sshd@10-172.232.7.32:22-147.75.109.163:50478.service - OpenSSH per-connection server daemon (147.75.109.163:50478). Aug 13 01:47:52.956695 sshd[5277]: Accepted publickey for core from 147.75.109.163 port 50478 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:47:52.958664 sshd-session[5277]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:47:52.964414 systemd-logind[1544]: New session 11 of user core. Aug 13 01:47:52.971865 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 13 01:47:53.067862 kubelet[2796]: E0813 01:47:53.067788 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:47:53.109665 kubelet[2796]: I0813 01:47:53.109560 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-6vrr8" podStartSLOduration=138.33312471 podStartE2EDuration="2m21.10953944s" podCreationTimestamp="2025-08-13 01:45:32 +0000 UTC" firstStartedPulling="2025-08-13 01:47:49.638721287 +0000 UTC m=+142.393290214" lastFinishedPulling="2025-08-13 01:47:52.415136017 +0000 UTC m=+145.169704944" observedRunningTime="2025-08-13 01:47:53.079876713 +0000 UTC m=+145.834445640" watchObservedRunningTime="2025-08-13 01:47:53.10953944 +0000 UTC m=+145.864108367" Aug 13 01:47:53.284054 sshd[5303]: Connection closed by 147.75.109.163 port 50478 Aug 13 01:47:53.284966 sshd-session[5277]: pam_unix(sshd:session): session closed for user core Aug 13 01:47:53.289619 systemd[1]: sshd@10-172.232.7.32:22-147.75.109.163:50478.service: Deactivated successfully. Aug 13 01:47:53.292095 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 01:47:53.292939 systemd-logind[1544]: Session 11 logged out. Waiting for processes to exit. Aug 13 01:47:53.295032 systemd-logind[1544]: Removed session 11. 
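The recurring kubelet warning "Nameserver limits exceeded" reflects the classic resolv.conf cap of three nameservers: the host apparently has more configured, and the kubelet applies only the first three (172.232.0.21, 172.232.0.13, 172.232.0.22) when building pod DNS config. A minimal Go sketch of that trimming behaviour, under the assumption that the limit is three as the logged "applied nameserver line" suggests (the extra address below is hypothetical, since the log does not show the omitted entries):

    package main

    import "fmt"

    // maxNameservers mirrors the three-nameserver resolv.conf limit the
    // warning above is enforcing; the constant name is ours, not kubelet's.
    const maxNameservers = 3

    func main() {
        // Hypothetical host list; only the surviving three appear in the log.
        nameservers := []string{"172.232.0.21", "172.232.0.13", "172.232.0.22", "10.0.0.53"}
        if len(nameservers) > maxNameservers {
            fmt.Printf("omitting %d nameserver(s)\n", len(nameservers)-maxNameservers)
            nameservers = nameservers[:maxNameservers]
        }
        fmt.Println("applied nameserver line:", nameservers)
    }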
Aug 13 01:47:53.617074 systemd-networkd[1477]: vxlan.calico: Gained IPv6LL Aug 13 01:47:54.069337 kubelet[2796]: E0813 01:47:54.069287 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:47:54.399337 kubelet[2796]: I0813 01:47:54.399140 2796 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:47:54.399337 kubelet[2796]: I0813 01:47:54.399226 2796 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:47:54.401804 kubelet[2796]: I0813 01:47:54.401784 2796 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:47:54.415744 kubelet[2796]: I0813 01:47:54.415699 2796 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:47:54.415913 kubelet[2796]: I0813 01:47:54.415800 2796 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-7c65d6cfc9-djvw6","calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7","calico-system/csi-node-driver-bk2p6","calico-system/calico-typha-bf6ccb678-cdfdr","kube-system/coredns-7c65d6cfc9-6vrr8","kube-system/kube-controller-manager-172-232-7-32","calico-system/calico-node-8j6cb","kube-system/kube-proxy-dmp9l","kube-system/kube-apiserver-172-232-7-32","kube-system/kube-scheduler-172-232-7-32"] Aug 13 01:47:54.415913 kubelet[2796]: E0813 01:47:54.415835 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-djvw6" Aug 13 01:47:54.415913 kubelet[2796]: E0813 01:47:54.415845 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7" Aug 13 01:47:54.415913 kubelet[2796]: E0813 01:47:54.415853 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-bk2p6" Aug 13 01:47:54.415913 kubelet[2796]: E0813 01:47:54.415865 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-bf6ccb678-cdfdr" Aug 13 01:47:54.415913 kubelet[2796]: E0813 01:47:54.415874 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-6vrr8" Aug 13 01:47:54.415913 kubelet[2796]: E0813 01:47:54.415884 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-32" Aug 13 01:47:54.415913 kubelet[2796]: E0813 01:47:54.415894 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-8j6cb" Aug 13 01:47:54.415913 kubelet[2796]: E0813 01:47:54.415902 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-dmp9l" Aug 13 01:47:54.416395 kubelet[2796]: E0813 01:47:54.415911 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-32" Aug 13 01:47:54.416395 kubelet[2796]: E0813 01:47:54.415931 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-32" Aug 13 01:47:54.416395 kubelet[2796]: I0813 01:47:54.415941 2796 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:47:55.071056 kubelet[2796]: E0813 01:47:55.070932 2796 dns.go:153] "Nameserver 
limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:47:58.350441 systemd[1]: Started sshd@11-172.232.7.32:22-147.75.109.163:50128.service - OpenSSH per-connection server daemon (147.75.109.163:50128). Aug 13 01:47:58.399076 containerd[1581]: time="2025-08-13T01:47:58.399003525Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bk2p6,Uid:0e8898c7-a3f5-4010-bb1f-d756673c29b2,Namespace:calico-system,Attempt:0,}" Aug 13 01:47:58.550056 systemd-networkd[1477]: cali3aad0158744: Link UP Aug 13 01:47:58.552092 systemd-networkd[1477]: cali3aad0158744: Gained carrier Aug 13 01:47:58.577570 containerd[1581]: 2025-08-13 01:47:58.460 [INFO][5338] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--7--32-k8s-csi--node--driver--bk2p6-eth0 csi-node-driver- calico-system 0e8898c7-a3f5-4010-bb1f-d756673c29b2 709 0 2025-08-13 01:45:49 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:57bd658777 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-232-7-32 csi-node-driver-bk2p6 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali3aad0158744 [] [] }} ContainerID="174c8053891cbceaa33519ddbd44ab0ee12dc7221e1e362f181330ca7df784a7" Namespace="calico-system" Pod="csi-node-driver-bk2p6" WorkloadEndpoint="172--232--7--32-k8s-csi--node--driver--bk2p6-" Aug 13 01:47:58.577570 containerd[1581]: 2025-08-13 01:47:58.461 [INFO][5338] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="174c8053891cbceaa33519ddbd44ab0ee12dc7221e1e362f181330ca7df784a7" Namespace="calico-system" Pod="csi-node-driver-bk2p6" WorkloadEndpoint="172--232--7--32-k8s-csi--node--driver--bk2p6-eth0" Aug 13 01:47:58.577570 containerd[1581]: 2025-08-13 01:47:58.505 [INFO][5350] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="174c8053891cbceaa33519ddbd44ab0ee12dc7221e1e362f181330ca7df784a7" HandleID="k8s-pod-network.174c8053891cbceaa33519ddbd44ab0ee12dc7221e1e362f181330ca7df784a7" Workload="172--232--7--32-k8s-csi--node--driver--bk2p6-eth0" Aug 13 01:47:58.577570 containerd[1581]: 2025-08-13 01:47:58.505 [INFO][5350] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="174c8053891cbceaa33519ddbd44ab0ee12dc7221e1e362f181330ca7df784a7" HandleID="k8s-pod-network.174c8053891cbceaa33519ddbd44ab0ee12dc7221e1e362f181330ca7df784a7" Workload="172--232--7--32-k8s-csi--node--driver--bk2p6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00011fb70), Attrs:map[string]string{"namespace":"calico-system", "node":"172-232-7-32", "pod":"csi-node-driver-bk2p6", "timestamp":"2025-08-13 01:47:58.505313017 +0000 UTC"}, Hostname:"172-232-7-32", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:47:58.577570 containerd[1581]: 2025-08-13 01:47:58.505 [INFO][5350] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:47:58.577570 containerd[1581]: 2025-08-13 01:47:58.505 [INFO][5350] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 01:47:58.577570 containerd[1581]: 2025-08-13 01:47:58.505 [INFO][5350] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-7-32' Aug 13 01:47:58.577570 containerd[1581]: 2025-08-13 01:47:58.513 [INFO][5350] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.174c8053891cbceaa33519ddbd44ab0ee12dc7221e1e362f181330ca7df784a7" host="172-232-7-32" Aug 13 01:47:58.577570 containerd[1581]: 2025-08-13 01:47:58.519 [INFO][5350] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-7-32" Aug 13 01:47:58.577570 containerd[1581]: 2025-08-13 01:47:58.524 [INFO][5350] ipam/ipam.go 511: Trying affinity for 192.168.103.192/26 host="172-232-7-32" Aug 13 01:47:58.577570 containerd[1581]: 2025-08-13 01:47:58.526 [INFO][5350] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.192/26 host="172-232-7-32" Aug 13 01:47:58.577570 containerd[1581]: 2025-08-13 01:47:58.529 [INFO][5350] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.192/26 host="172-232-7-32" Aug 13 01:47:58.577570 containerd[1581]: 2025-08-13 01:47:58.529 [INFO][5350] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.103.192/26 handle="k8s-pod-network.174c8053891cbceaa33519ddbd44ab0ee12dc7221e1e362f181330ca7df784a7" host="172-232-7-32" Aug 13 01:47:58.577570 containerd[1581]: 2025-08-13 01:47:58.531 [INFO][5350] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.174c8053891cbceaa33519ddbd44ab0ee12dc7221e1e362f181330ca7df784a7 Aug 13 01:47:58.577570 containerd[1581]: 2025-08-13 01:47:58.535 [INFO][5350] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.103.192/26 handle="k8s-pod-network.174c8053891cbceaa33519ddbd44ab0ee12dc7221e1e362f181330ca7df784a7" host="172-232-7-32" Aug 13 01:47:58.577570 containerd[1581]: 2025-08-13 01:47:58.539 [INFO][5350] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.103.194/26] block=192.168.103.192/26 handle="k8s-pod-network.174c8053891cbceaa33519ddbd44ab0ee12dc7221e1e362f181330ca7df784a7" host="172-232-7-32" Aug 13 01:47:58.577570 containerd[1581]: 2025-08-13 01:47:58.540 [INFO][5350] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.194/26] handle="k8s-pod-network.174c8053891cbceaa33519ddbd44ab0ee12dc7221e1e362f181330ca7df784a7" host="172-232-7-32" Aug 13 01:47:58.577570 containerd[1581]: 2025-08-13 01:47:58.540 [INFO][5350] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
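Both IPAM walks so far (for coredns-7c65d6cfc9-6vrr8 and csi-node-driver-bk2p6) resolve the same host-affine block, 192.168.103.192/26, and hand out consecutive addresses from it: .193 and .194 here, with .195 going to calico-kube-controllers further down. A small stdlib-only Go sketch confirming those assignments sit inside the affine block, using the CIDRs exactly as logged:

    package main

    import (
        "fmt"
        "net/netip"
    )

    func main() {
        // Block and addresses as they appear in the ipam log entries above.
        block := netip.MustParsePrefix("192.168.103.192/26")
        assigned := []string{"192.168.103.193", "192.168.103.194", "192.168.103.195"}
        for _, a := range assigned {
            addr := netip.MustParseAddr(a)
            fmt.Printf("%s in %s: %v\n", addr, block, block.Contains(addr))
        }
        // A /26 gives this host's affine block 64 addresses to allocate from.
        fmt.Println("block size:", 1<<(32-block.Bits()))
    }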
Aug 13 01:47:58.577570 containerd[1581]: 2025-08-13 01:47:58.540 [INFO][5350] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.103.194/26] IPv6=[] ContainerID="174c8053891cbceaa33519ddbd44ab0ee12dc7221e1e362f181330ca7df784a7" HandleID="k8s-pod-network.174c8053891cbceaa33519ddbd44ab0ee12dc7221e1e362f181330ca7df784a7" Workload="172--232--7--32-k8s-csi--node--driver--bk2p6-eth0" Aug 13 01:47:58.578286 containerd[1581]: 2025-08-13 01:47:58.545 [INFO][5338] cni-plugin/k8s.go 418: Populated endpoint ContainerID="174c8053891cbceaa33519ddbd44ab0ee12dc7221e1e362f181330ca7df784a7" Namespace="calico-system" Pod="csi-node-driver-bk2p6" WorkloadEndpoint="172--232--7--32-k8s-csi--node--driver--bk2p6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--7--32-k8s-csi--node--driver--bk2p6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0e8898c7-a3f5-4010-bb1f-d756673c29b2", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 45, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-7-32", ContainerID:"", Pod:"csi-node-driver-bk2p6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.103.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3aad0158744", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:47:58.578286 containerd[1581]: 2025-08-13 01:47:58.545 [INFO][5338] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.194/32] ContainerID="174c8053891cbceaa33519ddbd44ab0ee12dc7221e1e362f181330ca7df784a7" Namespace="calico-system" Pod="csi-node-driver-bk2p6" WorkloadEndpoint="172--232--7--32-k8s-csi--node--driver--bk2p6-eth0" Aug 13 01:47:58.578286 containerd[1581]: 2025-08-13 01:47:58.545 [INFO][5338] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3aad0158744 ContainerID="174c8053891cbceaa33519ddbd44ab0ee12dc7221e1e362f181330ca7df784a7" Namespace="calico-system" Pod="csi-node-driver-bk2p6" WorkloadEndpoint="172--232--7--32-k8s-csi--node--driver--bk2p6-eth0" Aug 13 01:47:58.578286 containerd[1581]: 2025-08-13 01:47:58.551 [INFO][5338] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="174c8053891cbceaa33519ddbd44ab0ee12dc7221e1e362f181330ca7df784a7" Namespace="calico-system" Pod="csi-node-driver-bk2p6" WorkloadEndpoint="172--232--7--32-k8s-csi--node--driver--bk2p6-eth0" Aug 13 01:47:58.578286 containerd[1581]: 2025-08-13 01:47:58.553 [INFO][5338] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="174c8053891cbceaa33519ddbd44ab0ee12dc7221e1e362f181330ca7df784a7" Namespace="calico-system" 
Pod="csi-node-driver-bk2p6" WorkloadEndpoint="172--232--7--32-k8s-csi--node--driver--bk2p6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--7--32-k8s-csi--node--driver--bk2p6-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"0e8898c7-a3f5-4010-bb1f-d756673c29b2", ResourceVersion:"709", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 45, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"57bd658777", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-7-32", ContainerID:"174c8053891cbceaa33519ddbd44ab0ee12dc7221e1e362f181330ca7df784a7", Pod:"csi-node-driver-bk2p6", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.103.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali3aad0158744", MAC:"7e:f7:5d:63:42:a6", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:47:58.578286 containerd[1581]: 2025-08-13 01:47:58.566 [INFO][5338] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="174c8053891cbceaa33519ddbd44ab0ee12dc7221e1e362f181330ca7df784a7" Namespace="calico-system" Pod="csi-node-driver-bk2p6" WorkloadEndpoint="172--232--7--32-k8s-csi--node--driver--bk2p6-eth0" Aug 13 01:47:58.644123 containerd[1581]: time="2025-08-13T01:47:58.643962710Z" level=info msg="connecting to shim 174c8053891cbceaa33519ddbd44ab0ee12dc7221e1e362f181330ca7df784a7" address="unix:///run/containerd/s/8921b1e78ff2e6d0c5c3cb8331646bec238d0a2b7801955170a4cddb79b4306b" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:47:58.682892 systemd[1]: Started cri-containerd-174c8053891cbceaa33519ddbd44ab0ee12dc7221e1e362f181330ca7df784a7.scope - libcontainer container 174c8053891cbceaa33519ddbd44ab0ee12dc7221e1e362f181330ca7df784a7. Aug 13 01:47:58.703845 sshd[5335]: Accepted publickey for core from 147.75.109.163 port 50128 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:47:58.705028 sshd-session[5335]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:47:58.717501 systemd-logind[1544]: New session 12 of user core. Aug 13 01:47:58.722872 systemd[1]: Started session-12.scope - Session 12 of User core. 
Aug 13 01:47:58.731412 containerd[1581]: time="2025-08-13T01:47:58.730193766Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bk2p6,Uid:0e8898c7-a3f5-4010-bb1f-d756673c29b2,Namespace:calico-system,Attempt:0,} returns sandbox id \"174c8053891cbceaa33519ddbd44ab0ee12dc7221e1e362f181330ca7df784a7\"" Aug 13 01:47:58.735116 containerd[1581]: time="2025-08-13T01:47:58.734734330Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 01:47:59.039755 sshd[5412]: Connection closed by 147.75.109.163 port 50128 Aug 13 01:47:59.040779 sshd-session[5335]: pam_unix(sshd:session): session closed for user core Aug 13 01:47:59.046550 systemd[1]: sshd@11-172.232.7.32:22-147.75.109.163:50128.service: Deactivated successfully. Aug 13 01:47:59.050413 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 01:47:59.051529 systemd-logind[1544]: Session 12 logged out. Waiting for processes to exit. Aug 13 01:47:59.054085 systemd-logind[1544]: Removed session 12. Aug 13 01:47:59.105008 systemd[1]: Started sshd@12-172.232.7.32:22-147.75.109.163:50144.service - OpenSSH per-connection server daemon (147.75.109.163:50144). Aug 13 01:47:59.399291 containerd[1581]: time="2025-08-13T01:47:59.398798914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d5dd9ff6-b6gw7,Uid:4720bedc-4719-4a57-b2ff-e5b21f7acb7f,Namespace:calico-system,Attempt:0,}" Aug 13 01:47:59.400193 kubelet[2796]: E0813 01:47:59.400028 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:47:59.401144 containerd[1581]: time="2025-08-13T01:47:59.401051175Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-djvw6,Uid:981696e3-42b0-4ae8-b44b-fa439a03a402,Namespace:kube-system,Attempt:0,}" Aug 13 01:47:59.451435 sshd[5425]: Accepted publickey for core from 147.75.109.163 port 50144 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:47:59.456567 sshd-session[5425]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:47:59.467419 systemd-logind[1544]: New session 13 of user core. Aug 13 01:47:59.471225 systemd[1]: Started session-13.scope - Session 13 of User core. 
Aug 13 01:47:59.581210 systemd-networkd[1477]: cali5e452bb064d: Link UP Aug 13 01:47:59.581755 systemd-networkd[1477]: cali5e452bb064d: Gained carrier Aug 13 01:47:59.612242 containerd[1581]: 2025-08-13 01:47:59.486 [INFO][5427] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--7--32-k8s-calico--kube--controllers--86d5dd9ff6--b6gw7-eth0 calico-kube-controllers-86d5dd9ff6- calico-system 4720bedc-4719-4a57-b2ff-e5b21f7acb7f 820 0 2025-08-13 01:45:49 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:86d5dd9ff6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-232-7-32 calico-kube-controllers-86d5dd9ff6-b6gw7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5e452bb064d [] [] }} ContainerID="1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549" Namespace="calico-system" Pod="calico-kube-controllers-86d5dd9ff6-b6gw7" WorkloadEndpoint="172--232--7--32-k8s-calico--kube--controllers--86d5dd9ff6--b6gw7-" Aug 13 01:47:59.612242 containerd[1581]: 2025-08-13 01:47:59.487 [INFO][5427] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549" Namespace="calico-system" Pod="calico-kube-controllers-86d5dd9ff6-b6gw7" WorkloadEndpoint="172--232--7--32-k8s-calico--kube--controllers--86d5dd9ff6--b6gw7-eth0" Aug 13 01:47:59.612242 containerd[1581]: 2025-08-13 01:47:59.521 [INFO][5458] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549" HandleID="k8s-pod-network.1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549" Workload="172--232--7--32-k8s-calico--kube--controllers--86d5dd9ff6--b6gw7-eth0" Aug 13 01:47:59.612242 containerd[1581]: 2025-08-13 01:47:59.521 [INFO][5458] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549" HandleID="k8s-pod-network.1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549" Workload="172--232--7--32-k8s-calico--kube--controllers--86d5dd9ff6--b6gw7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4f30), Attrs:map[string]string{"namespace":"calico-system", "node":"172-232-7-32", "pod":"calico-kube-controllers-86d5dd9ff6-b6gw7", "timestamp":"2025-08-13 01:47:59.521450678 +0000 UTC"}, Hostname:"172-232-7-32", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:47:59.612242 containerd[1581]: 2025-08-13 01:47:59.521 [INFO][5458] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:47:59.612242 containerd[1581]: 2025-08-13 01:47:59.522 [INFO][5458] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 01:47:59.612242 containerd[1581]: 2025-08-13 01:47:59.522 [INFO][5458] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-7-32' Aug 13 01:47:59.612242 containerd[1581]: 2025-08-13 01:47:59.538 [INFO][5458] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549" host="172-232-7-32" Aug 13 01:47:59.612242 containerd[1581]: 2025-08-13 01:47:59.544 [INFO][5458] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-7-32" Aug 13 01:47:59.612242 containerd[1581]: 2025-08-13 01:47:59.550 [INFO][5458] ipam/ipam.go 511: Trying affinity for 192.168.103.192/26 host="172-232-7-32" Aug 13 01:47:59.612242 containerd[1581]: 2025-08-13 01:47:59.553 [INFO][5458] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.192/26 host="172-232-7-32" Aug 13 01:47:59.612242 containerd[1581]: 2025-08-13 01:47:59.555 [INFO][5458] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.192/26 host="172-232-7-32" Aug 13 01:47:59.612242 containerd[1581]: 2025-08-13 01:47:59.555 [INFO][5458] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.103.192/26 handle="k8s-pod-network.1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549" host="172-232-7-32" Aug 13 01:47:59.612242 containerd[1581]: 2025-08-13 01:47:59.557 [INFO][5458] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549 Aug 13 01:47:59.612242 containerd[1581]: 2025-08-13 01:47:59.563 [INFO][5458] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.103.192/26 handle="k8s-pod-network.1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549" host="172-232-7-32" Aug 13 01:47:59.612242 containerd[1581]: 2025-08-13 01:47:59.571 [INFO][5458] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.103.195/26] block=192.168.103.192/26 handle="k8s-pod-network.1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549" host="172-232-7-32" Aug 13 01:47:59.612242 containerd[1581]: 2025-08-13 01:47:59.571 [INFO][5458] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.195/26] handle="k8s-pod-network.1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549" host="172-232-7-32" Aug 13 01:47:59.612242 containerd[1581]: 2025-08-13 01:47:59.571 [INFO][5458] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 01:47:59.612242 containerd[1581]: 2025-08-13 01:47:59.571 [INFO][5458] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.103.195/26] IPv6=[] ContainerID="1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549" HandleID="k8s-pod-network.1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549" Workload="172--232--7--32-k8s-calico--kube--controllers--86d5dd9ff6--b6gw7-eth0" Aug 13 01:47:59.615043 containerd[1581]: 2025-08-13 01:47:59.575 [INFO][5427] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549" Namespace="calico-system" Pod="calico-kube-controllers-86d5dd9ff6-b6gw7" WorkloadEndpoint="172--232--7--32-k8s-calico--kube--controllers--86d5dd9ff6--b6gw7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--7--32-k8s-calico--kube--controllers--86d5dd9ff6--b6gw7-eth0", GenerateName:"calico-kube-controllers-86d5dd9ff6-", Namespace:"calico-system", SelfLink:"", UID:"4720bedc-4719-4a57-b2ff-e5b21f7acb7f", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 45, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86d5dd9ff6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-7-32", ContainerID:"", Pod:"calico-kube-controllers-86d5dd9ff6-b6gw7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.103.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5e452bb064d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:47:59.615043 containerd[1581]: 2025-08-13 01:47:59.575 [INFO][5427] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.195/32] ContainerID="1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549" Namespace="calico-system" Pod="calico-kube-controllers-86d5dd9ff6-b6gw7" WorkloadEndpoint="172--232--7--32-k8s-calico--kube--controllers--86d5dd9ff6--b6gw7-eth0" Aug 13 01:47:59.615043 containerd[1581]: 2025-08-13 01:47:59.575 [INFO][5427] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5e452bb064d ContainerID="1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549" Namespace="calico-system" Pod="calico-kube-controllers-86d5dd9ff6-b6gw7" WorkloadEndpoint="172--232--7--32-k8s-calico--kube--controllers--86d5dd9ff6--b6gw7-eth0" Aug 13 01:47:59.615043 containerd[1581]: 2025-08-13 01:47:59.580 [INFO][5427] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549" Namespace="calico-system" Pod="calico-kube-controllers-86d5dd9ff6-b6gw7" WorkloadEndpoint="172--232--7--32-k8s-calico--kube--controllers--86d5dd9ff6--b6gw7-eth0" Aug 13 01:47:59.615043 containerd[1581]: 2025-08-13 01:47:59.583 
[INFO][5427] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549" Namespace="calico-system" Pod="calico-kube-controllers-86d5dd9ff6-b6gw7" WorkloadEndpoint="172--232--7--32-k8s-calico--kube--controllers--86d5dd9ff6--b6gw7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--7--32-k8s-calico--kube--controllers--86d5dd9ff6--b6gw7-eth0", GenerateName:"calico-kube-controllers-86d5dd9ff6-", Namespace:"calico-system", SelfLink:"", UID:"4720bedc-4719-4a57-b2ff-e5b21f7acb7f", ResourceVersion:"820", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 45, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86d5dd9ff6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-7-32", ContainerID:"1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549", Pod:"calico-kube-controllers-86d5dd9ff6-b6gw7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.103.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5e452bb064d", MAC:"96:78:aa:36:53:8a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:47:59.615043 containerd[1581]: 2025-08-13 01:47:59.603 [INFO][5427] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549" Namespace="calico-system" Pod="calico-kube-controllers-86d5dd9ff6-b6gw7" WorkloadEndpoint="172--232--7--32-k8s-calico--kube--controllers--86d5dd9ff6--b6gw7-eth0" Aug 13 01:47:59.666106 containerd[1581]: time="2025-08-13T01:47:59.665611484Z" level=info msg="connecting to shim 1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549" address="unix:///run/containerd/s/7072e1b3180206b76520407f6378f43cfc8b3027dead01fdf58559050d07828b" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:47:59.720211 systemd[1]: Started cri-containerd-1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549.scope - libcontainer container 1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549. 
Aug 13 01:47:59.725786 containerd[1581]: time="2025-08-13T01:47:59.725692311Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/9d264e5a926b6ebde73e8aae5e882ad1826cb0e6776be9851ad98dd74dac7412/data: no space left on device" Aug 13 01:47:59.725942 containerd[1581]: time="2025-08-13T01:47:59.725753243Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=2101173" Aug 13 01:47:59.726123 kubelet[2796]: E0813 01:47:59.726044 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/9d264e5a926b6ebde73e8aae5e882ad1826cb0e6776be9851ad98dd74dac7412/data: no space left on device" image="ghcr.io/flatcar/calico/csi:v3.30.2" Aug 13 01:47:59.726222 kubelet[2796]: E0813 01:47:59.726139 2796 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/9d264e5a926b6ebde73e8aae5e882ad1826cb0e6776be9851ad98dd74dac7412/data: no space left on device" image="ghcr.io/flatcar/calico/csi:v3.30.2" Aug 13 01:47:59.727074 kubelet[2796]: E0813 01:47:59.726711 2796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.2,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tg4l9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bk2p6_calico-system(0e8898c7-a3f5-4010-bb1f-d756673c29b2): ErrImagePull: failed to pull and 
unpack image \"ghcr.io/flatcar/calico/csi:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/9d264e5a926b6ebde73e8aae5e882ad1826cb0e6776be9851ad98dd74dac7412/data: no space left on device" logger="UnhandledError" Aug 13 01:47:59.732665 containerd[1581]: time="2025-08-13T01:47:59.730306947Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 01:47:59.733075 containerd[1581]: time="2025-08-13T01:47:59.733023174Z" level=error msg="Fail to write \"stdout\" log to log file \"/var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log\"" error="write /var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log: no space left on device" Aug 13 01:47:59.737672 containerd[1581]: time="2025-08-13T01:47:59.734478839Z" level=error msg="Fail to write \"stdout\" log to log file \"/var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log\"" error="write /var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log: no space left on device" Aug 13 01:47:59.737672 containerd[1581]: time="2025-08-13T01:47:59.734607234Z" level=error msg="Fail to write \"stdout\" log to log file \"/var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log\"" error="write /var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log: no space left on device" Aug 13 01:47:59.739860 containerd[1581]: time="2025-08-13T01:47:59.739807909Z" level=error msg="Fail to write \"stdout\" log to log file \"/var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log\"" error="write /var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log: no space left on device" Aug 13 01:47:59.746835 containerd[1581]: time="2025-08-13T01:47:59.746796101Z" level=error msg="Fail to write \"stdout\" log to log file \"/var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log\"" error="write /var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log: no space left on device" Aug 13 01:47:59.747593 containerd[1581]: time="2025-08-13T01:47:59.747543565Z" level=error msg="Fail to write \"stdout\" log to log file \"/var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log\"" error="write /var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log: no space left on device" Aug 13 01:47:59.747768 containerd[1581]: time="2025-08-13T01:47:59.747696380Z" level=error msg="Fail to write \"stdout\" log to log file \"/var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log\"" error="write /var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log: no space left on device" Aug 13 01:47:59.747864 containerd[1581]: time="2025-08-13T01:47:59.747841714Z" level=error msg="Fail to write \"stdout\" log to log file \"/var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log\"" error="write 
/var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log: no space left on device" Aug 13 01:47:59.747954 containerd[1581]: time="2025-08-13T01:47:59.747930157Z" level=error msg="Fail to write \"stdout\" log to log file \"/var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log\"" error="write /var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log: no space left on device" Aug 13 01:47:59.748075 containerd[1581]: time="2025-08-13T01:47:59.748051160Z" level=error msg="Fail to write \"stdout\" log to log file \"/var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log\"" error="write /var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log: no space left on device" Aug 13 01:47:59.763484 containerd[1581]: time="2025-08-13T01:47:59.761141576Z" level=error msg="Fail to write \"stdout\" log to log file \"/var/log/pods/calico-system_calico-node-8j6cb_416d9de4-5101-44c9-b974-0fedf790aa67/calico-node/0.log\"" error="write /var/log/pods/calico-system_calico-node-8j6cb_416d9de4-5101-44c9-b974-0fedf790aa67/calico-node/0.log: no space left on device" Aug 13 01:47:59.763484 containerd[1581]: time="2025-08-13T01:47:59.761278020Z" level=error msg="Fail to write \"stdout\" log to log file \"/var/log/pods/calico-system_calico-node-8j6cb_416d9de4-5101-44c9-b974-0fedf790aa67/calico-node/0.log\"" error="write /var/log/pods/calico-system_calico-node-8j6cb_416d9de4-5101-44c9-b974-0fedf790aa67/calico-node/0.log: no space left on device" Aug 13 01:47:59.763484 containerd[1581]: time="2025-08-13T01:47:59.761319572Z" level=error msg="Fail to write \"stdout\" log to log file \"/var/log/pods/calico-system_calico-node-8j6cb_416d9de4-5101-44c9-b974-0fedf790aa67/calico-node/0.log\"" error="write /var/log/pods/calico-system_calico-node-8j6cb_416d9de4-5101-44c9-b974-0fedf790aa67/calico-node/0.log: no space left on device" Aug 13 01:47:59.763484 containerd[1581]: time="2025-08-13T01:47:59.761365803Z" level=error msg="Fail to write \"stdout\" log to log file \"/var/log/pods/calico-system_calico-node-8j6cb_416d9de4-5101-44c9-b974-0fedf790aa67/calico-node/0.log\"" error="write /var/log/pods/calico-system_calico-node-8j6cb_416d9de4-5101-44c9-b974-0fedf790aa67/calico-node/0.log: no space left on device" Aug 13 01:47:59.763484 containerd[1581]: time="2025-08-13T01:47:59.761388294Z" level=error msg="Fail to write \"stdout\" log to log file \"/var/log/pods/calico-system_calico-node-8j6cb_416d9de4-5101-44c9-b974-0fedf790aa67/calico-node/0.log\"" error="write /var/log/pods/calico-system_calico-node-8j6cb_416d9de4-5101-44c9-b974-0fedf790aa67/calico-node/0.log: no space left on device" Aug 13 01:47:59.763484 containerd[1581]: time="2025-08-13T01:47:59.761430405Z" level=error msg="Fail to write \"stdout\" log to log file \"/var/log/pods/calico-system_calico-node-8j6cb_416d9de4-5101-44c9-b974-0fedf790aa67/calico-node/0.log\"" error="write /var/log/pods/calico-system_calico-node-8j6cb_416d9de4-5101-44c9-b974-0fedf790aa67/calico-node/0.log: no space left on device" Aug 13 01:47:59.763484 containerd[1581]: time="2025-08-13T01:47:59.761455366Z" level=error msg="Fail to write \"stdout\" log to log file \"/var/log/pods/calico-system_calico-node-8j6cb_416d9de4-5101-44c9-b974-0fedf790aa67/calico-node/0.log\"" error="write 
/var/log/pods/calico-system_calico-node-8j6cb_416d9de4-5101-44c9-b974-0fedf790aa67/calico-node/0.log: no space left on device" Aug 13 01:47:59.763484 containerd[1581]: time="2025-08-13T01:47:59.761480537Z" level=error msg="Fail to write \"stdout\" log to log file \"/var/log/pods/calico-system_calico-node-8j6cb_416d9de4-5101-44c9-b974-0fedf790aa67/calico-node/0.log\"" error="write /var/log/pods/calico-system_calico-node-8j6cb_416d9de4-5101-44c9-b974-0fedf790aa67/calico-node/0.log: no space left on device" Aug 13 01:47:59.763484 containerd[1581]: time="2025-08-13T01:47:59.761526398Z" level=error msg="Fail to write \"stdout\" log to log file \"/var/log/pods/calico-system_calico-node-8j6cb_416d9de4-5101-44c9-b974-0fedf790aa67/calico-node/0.log\"" error="write /var/log/pods/calico-system_calico-node-8j6cb_416d9de4-5101-44c9-b974-0fedf790aa67/calico-node/0.log: no space left on device" Aug 13 01:47:59.761953 systemd[1]: cri-containerd-1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549.scope: Deactivated successfully. Aug 13 01:47:59.764800 containerd[1581]: time="2025-08-13T01:47:59.763789930Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/calico-system_calico-node-8j6cb_416d9de4-5101-44c9-b974-0fedf790aa67/calico-node/0.log\"" error="write /var/log/pods/calico-system_calico-node-8j6cb_416d9de4-5101-44c9-b974-0fedf790aa67/calico-node/0.log: no space left on device" Aug 13 01:47:59.767795 containerd[1581]: time="2025-08-13T01:47:59.767591001Z" level=error msg="Fail to write \"stdout\" log to log file \"/var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log\"" error="write /var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log: no space left on device" Aug 13 01:47:59.776377 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549-rootfs.mount: Deactivated successfully. 
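Every failure above, the aborted calico/csi pull and the dropped calico-typha and calico-node container logs, points at the same root cause: the filesystem backing /var/lib/containerd and /var/log is full. A minimal diagnostic sketch in Python, using paths taken from the entries above (they exist on the logging node, not necessarily wherever this is run); the 10% free-space threshold is an arbitrary illustration, not a containerd or kubelet default:

import shutil

# Paths that appear in the ENOSPC errors above.
PATHS = ["/var/lib/containerd", "/var/log/pods", "/var/log/calico"]

def report(path: str, min_free_ratio: float = 0.10) -> None:
    total, used, free = shutil.disk_usage(path)   # bytes
    ratio = free / total
    status = "OK" if ratio >= min_free_ratio else "LOW"
    print(f"{path}: {free / 2**30:.2f} GiB free ({ratio:.1%}) -> {status}")

if __name__ == "__main__":
    for p in PATHS:
        report(p)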
Aug 13 01:47:59.787443 systemd-networkd[1477]: cali87921f67839: Link UP Aug 13 01:47:59.793675 systemd-networkd[1477]: cali87921f67839: Gained carrier Aug 13 01:47:59.797401 containerd[1581]: time="2025-08-13T01:47:59.796303562Z" level=error msg="Fail to write \"stdout\" log to log file \"/var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log\"" error="write /var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log: no space left on device" Aug 13 01:47:59.797401 containerd[1581]: time="2025-08-13T01:47:59.796380644Z" level=error msg="Fail to write \"stdout\" log to log file \"/var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log\"" error="write /var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log: no space left on device" Aug 13 01:47:59.797401 containerd[1581]: time="2025-08-13T01:47:59.796417235Z" level=error msg="Fail to write \"stdout\" log to log file \"/var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log\"" error="write /var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log: no space left on device" Aug 13 01:47:59.797401 containerd[1581]: time="2025-08-13T01:47:59.796454377Z" level=error msg="Fail to write \"stdout\" log to log file \"/var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log\"" error="write /var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log: no space left on device" Aug 13 01:47:59.799861 containerd[1581]: time="2025-08-13T01:47:59.799789183Z" level=error msg="Fail to write \"stdout\" log to log file \"/var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log\"" error="write /var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log: no space left on device" Aug 13 01:47:59.800680 containerd[1581]: time="2025-08-13T01:47:59.799978848Z" level=error msg="Fail to write \"stdout\" log to log file \"/var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log\"" error="write /var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log: no space left on device" Aug 13 01:47:59.800973 containerd[1581]: time="2025-08-13T01:47:59.800899248Z" level=error msg="Fail to write \"stdout\" log to log file \"/var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log\"" error="write /var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log: no space left on device" Aug 13 01:47:59.802974 containerd[1581]: time="2025-08-13T01:47:59.802895511Z" level=info msg="shim disconnected" id=1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549 namespace=k8s.io Aug 13 01:47:59.804233 containerd[1581]: time="2025-08-13T01:47:59.804054748Z" level=warning msg="cleaning up after shim disconnected" id=1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549 namespace=k8s.io Aug 13 01:47:59.806243 containerd[1581]: time="2025-08-13T01:47:59.804075838Z" level=info msg="cleaning up dead shim" namespace=k8s.io 
Aug 13 01:47:59.809746 containerd[1581]: time="2025-08-13T01:47:59.808756497Z" level=error msg="Fail to write \"stdout\" log to log file \"/var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log\"" error="write /var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log: no space left on device" Aug 13 01:47:59.809746 containerd[1581]: time="2025-08-13T01:47:59.809015066Z" level=error msg="Fail to write \"stdout\" log to log file \"/var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log\"" error="write /var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log: no space left on device" Aug 13 01:47:59.809746 containerd[1581]: time="2025-08-13T01:47:59.809446089Z" level=error msg="Fail to write \"stdout\" log to log file \"/var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log\"" error="write /var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log: no space left on device" Aug 13 01:47:59.810350 containerd[1581]: time="2025-08-13T01:47:59.809875493Z" level=error msg="Fail to write \"stdout\" log to log file \"/var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log\"" error="write /var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log: no space left on device" Aug 13 01:47:59.810350 containerd[1581]: time="2025-08-13T01:47:59.809943314Z" level=error msg="Fail to write \"stdout\" log to log file \"/var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log\"" error="write /var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log: no space left on device" Aug 13 01:47:59.810350 containerd[1581]: time="2025-08-13T01:47:59.809973685Z" level=error msg="Fail to write \"stdout\" log to log file \"/var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log\"" error="write /var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log: no space left on device" Aug 13 01:47:59.810350 containerd[1581]: time="2025-08-13T01:47:59.809997616Z" level=error msg="Fail to write \"stdout\" log to log file \"/var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log\"" error="write /var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log: no space left on device" Aug 13 01:47:59.810350 containerd[1581]: time="2025-08-13T01:47:59.810026337Z" level=error msg="Fail to write \"stdout\" log to log file \"/var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log\"" error="write /var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log: no space left on device" Aug 13 01:47:59.811192 containerd[1581]: time="2025-08-13T01:47:59.810887245Z" level=error msg="Fail to write \"stdout\" log to log file \"/var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log\"" error="write 
/var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log: no space left on device" Aug 13 01:47:59.814537 containerd[1581]: time="2025-08-13T01:47:59.814196090Z" level=error msg="Fail to write \"stdout\" log to log file \"/var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log\"" error="write /var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log: no space left on device" Aug 13 01:47:59.816583 containerd[1581]: time="2025-08-13T01:47:59.816377509Z" level=error msg="Fail to write \"stdout\" log to log file \"/var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log\"" error="write /var/log/pods/calico-system_calico-typha-bf6ccb678-cdfdr_bf7dc0d3-6ff8-4c0e-929e-6b31c9f35674/calico-typha/0.log: no space left on device" Aug 13 01:47:59.837494 containerd[1581]: time="2025-08-13T01:47:59.837348465Z" level=warning msg="cleanup warnings time=\"2025-08-13T01:47:59Z\" level=warning msg=\"failed to read init pid file\" error=\"open /run/containerd/io.containerd.runtime.v2.task/k8s.io/1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549/init.pid: no such file or directory\" runtime=io.containerd.runc.v2\n" namespace=k8s.io Aug 13 01:47:59.839206 containerd[1581]: time="2025-08-13T01:47:59.838982767Z" level=error msg="copy shim log" error="read /proc/self/fd/120: file already closed" namespace=k8s.io Aug 13 01:47:59.853290 containerd[1581]: 2025-08-13 01:47:59.461 [INFO][5429] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--7--32-k8s-coredns--7c65d6cfc9--djvw6-eth0 coredns-7c65d6cfc9- kube-system 981696e3-42b0-4ae8-b44b-fa439a03a402 822 0 2025-08-13 01:45:32 +0000 UTC map[k8s-app:kube-dns pod-template-hash:7c65d6cfc9 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-232-7-32 coredns-7c65d6cfc9-djvw6 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali87921f67839 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="e717cb35a2b9e6c35ad8290f04b7c84a57058168256411a7cefc002451eb7911" Namespace="kube-system" Pod="coredns-7c65d6cfc9-djvw6" WorkloadEndpoint="172--232--7--32-k8s-coredns--7c65d6cfc9--djvw6-" Aug 13 01:47:59.853290 containerd[1581]: 2025-08-13 01:47:59.463 [INFO][5429] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e717cb35a2b9e6c35ad8290f04b7c84a57058168256411a7cefc002451eb7911" Namespace="kube-system" Pod="coredns-7c65d6cfc9-djvw6" WorkloadEndpoint="172--232--7--32-k8s-coredns--7c65d6cfc9--djvw6-eth0" Aug 13 01:47:59.853290 containerd[1581]: 2025-08-13 01:47:59.531 [INFO][5453] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e717cb35a2b9e6c35ad8290f04b7c84a57058168256411a7cefc002451eb7911" HandleID="k8s-pod-network.e717cb35a2b9e6c35ad8290f04b7c84a57058168256411a7cefc002451eb7911" Workload="172--232--7--32-k8s-coredns--7c65d6cfc9--djvw6-eth0" Aug 13 01:47:59.853290 containerd[1581]: 2025-08-13 01:47:59.531 [INFO][5453] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="e717cb35a2b9e6c35ad8290f04b7c84a57058168256411a7cefc002451eb7911" HandleID="k8s-pod-network.e717cb35a2b9e6c35ad8290f04b7c84a57058168256411a7cefc002451eb7911" Workload="172--232--7--32-k8s-coredns--7c65d6cfc9--djvw6-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002df960), Attrs:map[string]string{"namespace":"kube-system", "node":"172-232-7-32", "pod":"coredns-7c65d6cfc9-djvw6", "timestamp":"2025-08-13 01:47:59.531817637 +0000 UTC"}, Hostname:"172-232-7-32", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:47:59.853290 containerd[1581]: 2025-08-13 01:47:59.532 [INFO][5453] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:47:59.853290 containerd[1581]: 2025-08-13 01:47:59.571 [INFO][5453] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:47:59.853290 containerd[1581]: 2025-08-13 01:47:59.572 [INFO][5453] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-7-32' Aug 13 01:47:59.853290 containerd[1581]: 2025-08-13 01:47:59.642 [INFO][5453] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e717cb35a2b9e6c35ad8290f04b7c84a57058168256411a7cefc002451eb7911" host="172-232-7-32" Aug 13 01:47:59.853290 containerd[1581]: 2025-08-13 01:47:59.655 [INFO][5453] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-7-32" Aug 13 01:47:59.853290 containerd[1581]: 2025-08-13 01:47:59.665 [INFO][5453] ipam/ipam.go 511: Trying affinity for 192.168.103.192/26 host="172-232-7-32" Aug 13 01:47:59.853290 containerd[1581]: 2025-08-13 01:47:59.670 [INFO][5453] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.192/26 host="172-232-7-32" Aug 13 01:47:59.853290 containerd[1581]: 2025-08-13 01:47:59.678 [INFO][5453] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.192/26 host="172-232-7-32" Aug 13 01:47:59.853290 containerd[1581]: 2025-08-13 01:47:59.682 [INFO][5453] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.103.192/26 handle="k8s-pod-network.e717cb35a2b9e6c35ad8290f04b7c84a57058168256411a7cefc002451eb7911" host="172-232-7-32" Aug 13 01:47:59.853290 containerd[1581]: 2025-08-13 01:47:59.688 [INFO][5453] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.e717cb35a2b9e6c35ad8290f04b7c84a57058168256411a7cefc002451eb7911 Aug 13 01:47:59.853290 containerd[1581]: 2025-08-13 01:47:59.715 [INFO][5453] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.103.192/26 handle="k8s-pod-network.e717cb35a2b9e6c35ad8290f04b7c84a57058168256411a7cefc002451eb7911" host="172-232-7-32" Aug 13 01:47:59.853290 containerd[1581]: 2025-08-13 01:47:59.753 [INFO][5453] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.103.196/26] block=192.168.103.192/26 handle="k8s-pod-network.e717cb35a2b9e6c35ad8290f04b7c84a57058168256411a7cefc002451eb7911" host="172-232-7-32" Aug 13 01:47:59.853290 containerd[1581]: 2025-08-13 01:47:59.754 [INFO][5453] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.196/26] handle="k8s-pod-network.e717cb35a2b9e6c35ad8290f04b7c84a57058168256411a7cefc002451eb7911" host="172-232-7-32" Aug 13 01:47:59.853290 containerd[1581]: 2025-08-13 01:47:59.755 [INFO][5453] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
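The IPAM trace above follows Calico's usual sequence: acquire the host-wide lock, confirm the host's block affinity (192.168.103.192/26 here), then claim the next free address from that block (192.168.103.196/26). A sketch of just the address arithmetic, assuming a hypothetical in_use set; the datastore access, locking and handle bookkeeping that the plugin logs are left out:

import ipaddress

block = ipaddress.ip_network("192.168.103.192/26")   # affine block from the log
# Hypothetical addresses already claimed by earlier pods on this host.
in_use = {ipaddress.ip_address(f"192.168.103.{x}") for x in (193, 194, 195)}

def next_free(block, in_use):
    for host in block.hosts():        # skips the network and broadcast addresses
        if host not in in_use:
            return host
    raise RuntimeError("block exhausted")

ip = next_free(block, in_use)
assert ip in block
print(ip)   # 192.168.103.196, matching the address claimed in the entry above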
Aug 13 01:47:59.853290 containerd[1581]: 2025-08-13 01:47:59.756 [INFO][5453] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.103.196/26] IPv6=[] ContainerID="e717cb35a2b9e6c35ad8290f04b7c84a57058168256411a7cefc002451eb7911" HandleID="k8s-pod-network.e717cb35a2b9e6c35ad8290f04b7c84a57058168256411a7cefc002451eb7911" Workload="172--232--7--32-k8s-coredns--7c65d6cfc9--djvw6-eth0" Aug 13 01:47:59.857940 containerd[1581]: 2025-08-13 01:47:59.772 [INFO][5429] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e717cb35a2b9e6c35ad8290f04b7c84a57058168256411a7cefc002451eb7911" Namespace="kube-system" Pod="coredns-7c65d6cfc9-djvw6" WorkloadEndpoint="172--232--7--32-k8s-coredns--7c65d6cfc9--djvw6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--7--32-k8s-coredns--7c65d6cfc9--djvw6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"981696e3-42b0-4ae8-b44b-fa439a03a402", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 45, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-7-32", ContainerID:"", Pod:"coredns-7c65d6cfc9-djvw6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.103.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali87921f67839", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:47:59.857940 containerd[1581]: Failed to write to log, write /var/log/calico/cni/cni.log: no space left on device Aug 13 01:47:59.857940 containerd[1581]: 2025-08-13 01:47:59.773 [INFO][5429] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.196/32] ContainerID="e717cb35a2b9e6c35ad8290f04b7c84a57058168256411a7cefc002451eb7911" Namespace="kube-system" Pod="coredns-7c65d6cfc9-djvw6" WorkloadEndpoint="172--232--7--32-k8s-coredns--7c65d6cfc9--djvw6-eth0" Aug 13 01:47:59.857940 containerd[1581]: Failed to write to log, write /var/log/calico/cni/cni.log: no space left on device Aug 13 01:47:59.857940 containerd[1581]: 2025-08-13 01:47:59.773 [INFO][5429] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali87921f67839 ContainerID="e717cb35a2b9e6c35ad8290f04b7c84a57058168256411a7cefc002451eb7911" Namespace="kube-system" Pod="coredns-7c65d6cfc9-djvw6" WorkloadEndpoint="172--232--7--32-k8s-coredns--7c65d6cfc9--djvw6-eth0" Aug 13 01:47:59.857940 containerd[1581]: Failed to write to log, write 
/var/log/calico/cni/cni.log: no space left on device Aug 13 01:47:59.857940 containerd[1581]: 2025-08-13 01:47:59.803 [INFO][5429] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e717cb35a2b9e6c35ad8290f04b7c84a57058168256411a7cefc002451eb7911" Namespace="kube-system" Pod="coredns-7c65d6cfc9-djvw6" WorkloadEndpoint="172--232--7--32-k8s-coredns--7c65d6cfc9--djvw6-eth0" Aug 13 01:47:59.857940 containerd[1581]: Failed to write to log, write /var/log/calico/cni/cni.log: no space left on device Aug 13 01:47:59.857940 containerd[1581]: 2025-08-13 01:47:59.814 [INFO][5429] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e717cb35a2b9e6c35ad8290f04b7c84a57058168256411a7cefc002451eb7911" Namespace="kube-system" Pod="coredns-7c65d6cfc9-djvw6" WorkloadEndpoint="172--232--7--32-k8s-coredns--7c65d6cfc9--djvw6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--7--32-k8s-coredns--7c65d6cfc9--djvw6-eth0", GenerateName:"coredns-7c65d6cfc9-", Namespace:"kube-system", SelfLink:"", UID:"981696e3-42b0-4ae8-b44b-fa439a03a402", ResourceVersion:"822", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 45, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"7c65d6cfc9", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-7-32", ContainerID:"e717cb35a2b9e6c35ad8290f04b7c84a57058168256411a7cefc002451eb7911", Pod:"coredns-7c65d6cfc9-djvw6", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.103.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali87921f67839", MAC:"ce:e1:fb:af:7c:2a", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:47:59.857940 containerd[1581]: Failed to write to log, write /var/log/calico/cni/cni.log: no space left on device Aug 13 01:47:59.857940 containerd[1581]: 2025-08-13 01:47:59.839 [INFO][5429] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e717cb35a2b9e6c35ad8290f04b7c84a57058168256411a7cefc002451eb7911" Namespace="kube-system" Pod="coredns-7c65d6cfc9-djvw6" WorkloadEndpoint="172--232--7--32-k8s-coredns--7c65d6cfc9--djvw6-eth0" Aug 13 01:47:59.931455 sshd[5451]: Connection closed by 147.75.109.163 port 50144 Aug 13 01:47:59.932497 sshd-session[5425]: pam_unix(sshd:session): session closed for user core Aug 13 01:47:59.942217 containerd[1581]: time="2025-08-13T01:47:59.941470310Z" level=info msg="connecting to shim 
e717cb35a2b9e6c35ad8290f04b7c84a57058168256411a7cefc002451eb7911" address="unix:///run/containerd/s/b73568002e9e7b2434331e0c83a6234f5f470041dee314e87eb650c65e451ed4" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:47:59.947139 systemd[1]: sshd@12-172.232.7.32:22-147.75.109.163:50144.service: Deactivated successfully. Aug 13 01:47:59.953577 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 01:47:59.955986 systemd-logind[1544]: Session 13 logged out. Waiting for processes to exit. Aug 13 01:47:59.958360 systemd-logind[1544]: Removed session 13. Aug 13 01:47:59.983549 systemd-networkd[1477]: cali5e452bb064d: Link DOWN Aug 13 01:47:59.983561 systemd-networkd[1477]: cali5e452bb064d: Lost carrier Aug 13 01:47:59.988838 systemd[1]: Started cri-containerd-e717cb35a2b9e6c35ad8290f04b7c84a57058168256411a7cefc002451eb7911.scope - libcontainer container e717cb35a2b9e6c35ad8290f04b7c84a57058168256411a7cefc002451eb7911. Aug 13 01:48:00.002905 systemd[1]: Started sshd@13-172.232.7.32:22-147.75.109.163:50158.service - OpenSSH per-connection server daemon (147.75.109.163:50158). Aug 13 01:48:00.151718 containerd[1581]: time="2025-08-13T01:48:00.151617530Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-7c65d6cfc9-djvw6,Uid:981696e3-42b0-4ae8-b44b-fa439a03a402,Namespace:kube-system,Attempt:0,} returns sandbox id \"e717cb35a2b9e6c35ad8290f04b7c84a57058168256411a7cefc002451eb7911\"" Aug 13 01:48:00.157311 kubelet[2796]: E0813 01:48:00.155482 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:48:00.162743 containerd[1581]: time="2025-08-13T01:48:00.162710493Z" level=info msg="CreateContainer within sandbox \"e717cb35a2b9e6c35ad8290f04b7c84a57058168256411a7cefc002451eb7911\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 01:48:00.173258 containerd[1581]: time="2025-08-13T01:48:00.173197767Z" level=info msg="Container d950cd0f9ef586f07a4d22546817a8720a46f1d4b9b6de0a2c54c1158fb3ca33: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:48:00.184058 containerd[1581]: 2025-08-13 01:47:59.981 [INFO][5553] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549" Aug 13 01:48:00.184058 containerd[1581]: 2025-08-13 01:47:59.981 [INFO][5553] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549" iface="eth0" netns="/var/run/netns/cni-c28f23a1-4ddd-0b8b-3390-068a285538d5" Aug 13 01:48:00.184058 containerd[1581]: 2025-08-13 01:47:59.982 [INFO][5553] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549" iface="eth0" netns="/var/run/netns/cni-c28f23a1-4ddd-0b8b-3390-068a285538d5" Aug 13 01:48:00.184058 containerd[1581]: 2025-08-13 01:47:59.991 [INFO][5553] cni-plugin/dataplane_linux.go 604: Deleted device in netns. 
ContainerID="1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549" after=9.305915ms iface="eth0" netns="/var/run/netns/cni-c28f23a1-4ddd-0b8b-3390-068a285538d5" Aug 13 01:48:00.184058 containerd[1581]: 2025-08-13 01:47:59.991 [INFO][5553] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549" Aug 13 01:48:00.184058 containerd[1581]: 2025-08-13 01:47:59.991 [INFO][5553] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549" Aug 13 01:48:00.184058 containerd[1581]: 2025-08-13 01:48:00.084 [INFO][5597] ipam/ipam_plugin.go 412: Releasing address using handleID ContainerID="1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549" HandleID="k8s-pod-network.1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549" Workload="172--232--7--32-k8s-calico--kube--controllers--86d5dd9ff6--b6gw7-eth0" Aug 13 01:48:00.184058 containerd[1581]: 2025-08-13 01:48:00.085 [INFO][5597] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:48:00.184058 containerd[1581]: 2025-08-13 01:48:00.085 [INFO][5597] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:48:00.184058 containerd[1581]: 2025-08-13 01:48:00.166 [INFO][5597] ipam/ipam_plugin.go 431: Released address using handleID ContainerID="1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549" HandleID="k8s-pod-network.1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549" Workload="172--232--7--32-k8s-calico--kube--controllers--86d5dd9ff6--b6gw7-eth0" Aug 13 01:48:00.184058 containerd[1581]: 2025-08-13 01:48:00.166 [INFO][5597] ipam/ipam_plugin.go 440: Releasing address using workloadID ContainerID="1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549" HandleID="k8s-pod-network.1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549" Workload="172--232--7--32-k8s-calico--kube--controllers--86d5dd9ff6--b6gw7-eth0" Aug 13 01:48:00.184058 containerd[1581]: 2025-08-13 01:48:00.167 [INFO][5597] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:48:00.184058 containerd[1581]: 2025-08-13 01:48:00.174 [INFO][5553] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549" Aug 13 01:48:00.187197 containerd[1581]: time="2025-08-13T01:48:00.186986547Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d5dd9ff6-b6gw7,Uid:4720bedc-4719-4a57-b2ff-e5b21f7acb7f,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to start sandbox \"1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549\": failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting \"proc\" to rootfs at \"/proc\": mkdirat /run/containerd/io.containerd.runtime.v2.task/k8s.io/1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549/rootfs/proc: no space left on device" Aug 13 01:48:00.190519 kubelet[2796]: E0813 01:48:00.188122 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to start sandbox \"1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549\": failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting \"proc\" to rootfs at \"/proc\": mkdirat /run/containerd/io.containerd.runtime.v2.task/k8s.io/1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549/rootfs/proc: no space left on device" Aug 13 01:48:00.190519 kubelet[2796]: E0813 01:48:00.188241 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to start sandbox \"1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549\": failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting \"proc\" to rootfs at \"/proc\": mkdirat /run/containerd/io.containerd.runtime.v2.task/k8s.io/1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549/rootfs/proc: no space left on device" pod="calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7" Aug 13 01:48:00.190519 kubelet[2796]: E0813 01:48:00.188282 2796 kuberuntime_manager.go:1170] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to start sandbox \"1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549\": failed to create containerd task: failed to create shim task: OCI runtime create failed: runc create failed: unable to start container process: error during container init: error mounting \"proc\" to rootfs at \"/proc\": mkdirat /run/containerd/io.containerd.runtime.v2.task/k8s.io/1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549/rootfs/proc: no space left on device" pod="calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7" Aug 13 01:48:00.190519 kubelet[2796]: E0813 01:48:00.188366 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-86d5dd9ff6-b6gw7_calico-system(4720bedc-4719-4a57-b2ff-e5b21f7acb7f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-86d5dd9ff6-b6gw7_calico-system(4720bedc-4719-4a57-b2ff-e5b21f7acb7f)\\\": rpc error: code = Unknown desc = failed to start sandbox \\\"1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549\\\": failed to create containerd task: failed to create shim task: OCI runtime create 
failed: runc create failed: unable to start container process: error during container init: error mounting \\\"proc\\\" to rootfs at \\\"/proc\\\": mkdirat /run/containerd/io.containerd.runtime.v2.task/k8s.io/1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549/rootfs/proc: no space left on device\"" pod="calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7" podUID="4720bedc-4719-4a57-b2ff-e5b21f7acb7f" Aug 13 01:48:00.193333 containerd[1581]: time="2025-08-13T01:48:00.193114342Z" level=info msg="CreateContainer within sandbox \"e717cb35a2b9e6c35ad8290f04b7c84a57058168256411a7cefc002451eb7911\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d950cd0f9ef586f07a4d22546817a8720a46f1d4b9b6de0a2c54c1158fb3ca33\"" Aug 13 01:48:00.195745 containerd[1581]: time="2025-08-13T01:48:00.195327383Z" level=info msg="StartContainer for \"d950cd0f9ef586f07a4d22546817a8720a46f1d4b9b6de0a2c54c1158fb3ca33\"" Aug 13 01:48:00.198748 containerd[1581]: time="2025-08-13T01:48:00.198604997Z" level=info msg="connecting to shim d950cd0f9ef586f07a4d22546817a8720a46f1d4b9b6de0a2c54c1158fb3ca33" address="unix:///run/containerd/s/b73568002e9e7b2434331e0c83a6234f5f470041dee314e87eb650c65e451ed4" protocol=ttrpc version=3 Aug 13 01:48:00.230844 systemd[1]: Started cri-containerd-d950cd0f9ef586f07a4d22546817a8720a46f1d4b9b6de0a2c54c1158fb3ca33.scope - libcontainer container d950cd0f9ef586f07a4d22546817a8720a46f1d4b9b6de0a2c54c1158fb3ca33. Aug 13 01:48:00.297696 containerd[1581]: time="2025-08-13T01:48:00.296966181Z" level=info msg="StartContainer for \"d950cd0f9ef586f07a4d22546817a8720a46f1d4b9b6de0a2c54c1158fb3ca33\" returns successfully" Aug 13 01:48:00.337302 systemd-networkd[1477]: cali3aad0158744: Gained IPv6LL Aug 13 01:48:00.378674 sshd[5601]: Accepted publickey for core from 147.75.109.163 port 50158 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:48:00.380536 sshd-session[5601]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:48:00.388819 systemd-logind[1544]: New session 14 of user core. Aug 13 01:48:00.391827 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 13 01:48:00.399917 kubelet[2796]: E0813 01:48:00.399778 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:48:00.426801 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549-shm.mount: Deactivated successfully. Aug 13 01:48:00.426953 systemd[1]: run-netns-cni\x2dc28f23a1\x2d4ddd\x2d0b8b\x2d3390\x2d068a285538d5.mount: Deactivated successfully. 
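The dns.go:153 warning above is the kubelet trimming the resolv.conf it hands to pods: it applies at most three nameservers and logs the line it kept (172.232.0.21 172.232.0.13 172.232.0.22). A rough sketch of that trimming, assuming a plain resolv.conf as input; the fourth entry below is hypothetical, since the log does not show which server was omitted, and this mirrors the reported behaviour rather than the kubelet's actual parser:

MAX_NAMESERVERS = 3  # the per-pod limit the kubelet enforces

def applied_nameservers(resolv_conf_text: str, limit: int = MAX_NAMESERVERS):
    servers = []
    for line in resolv_conf_text.splitlines():
        parts = line.split()
        if len(parts) >= 2 and parts[0] == "nameserver":
            servers.append(parts[1])
    return servers[:limit], servers[limit:]

applied, omitted = applied_nameservers(
    "nameserver 172.232.0.21\n"
    "nameserver 172.232.0.13\n"
    "nameserver 172.232.0.22\n"
    "nameserver 10.0.0.53\n"          # hypothetical extra entry that gets dropped
)
print("applied:", " ".join(applied))
print("omitted:", " ".join(omitted))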
Aug 13 01:48:00.495058 containerd[1581]: time="2025-08-13T01:48:00.494954140Z" level=error msg="failed to cleanup \"extract-257339729-8592 sha256:a6200c63e2a03c9e19bca689383dae051e67c8fbd246c7e3961b6330b68b8256\"" error="write /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db: no space left on device" Aug 13 01:48:00.495863 containerd[1581]: time="2025-08-13T01:48:00.495814547Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/ae6bd1a6716632c7ab980fa864a57306719e1a677f8205329efeaf092032e0a4/data: no space left on device" Aug 13 01:48:00.495943 containerd[1581]: time="2025-08-13T01:48:00.495922731Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=1052980" Aug 13 01:48:00.496227 kubelet[2796]: E0813 01:48:00.496153 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/ae6bd1a6716632c7ab980fa864a57306719e1a677f8205329efeaf092032e0a4/data: no space left on device" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2" Aug 13 01:48:00.496227 kubelet[2796]: E0813 01:48:00.496232 2796 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/ae6bd1a6716632c7ab980fa864a57306719e1a677f8205329efeaf092032e0a4/data: no space left on device" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2" Aug 13 01:48:00.496879 kubelet[2796]: E0813 01:48:00.496383 2796 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tg4l9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-bk2p6_calico-system(0e8898c7-a3f5-4010-bb1f-d756673c29b2): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/ae6bd1a6716632c7ab980fa864a57306719e1a677f8205329efeaf092032e0a4/data: no space left on device" logger="UnhandledError" Aug 13 01:48:00.498860 kubelet[2796]: E0813 01:48:00.498755 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/9d264e5a926b6ebde73e8aae5e882ad1826cb0e6776be9851ad98dd74dac7412/data: no space left on device\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/ae6bd1a6716632c7ab980fa864a57306719e1a677f8205329efeaf092032e0a4/data: no space left on device\"]" pod="calico-system/csi-node-driver-bk2p6" podUID="0e8898c7-a3f5-4010-bb1f-d756673c29b2" Aug 13 01:48:00.707054 sshd[5658]: Connection closed by 147.75.109.163 port 50158 Aug 13 01:48:00.707474 sshd-session[5601]: pam_unix(sshd:session): session closed for user core Aug 13 01:48:00.713439 systemd[1]: sshd@13-172.232.7.32:22-147.75.109.163:50158.service: Deactivated successfully. Aug 13 01:48:00.716883 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 01:48:00.718067 systemd-logind[1544]: Session 14 logged out. Waiting for processes to exit. Aug 13 01:48:00.720202 systemd-logind[1544]: Removed session 14. 
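After repeated ErrImagePull failures like the one above, the kubelet stops retrying immediately and the pod moves to ImagePullBackOff (visible in the entries just below), with the retry delay growing until it hits a cap. A toy model of that schedule; the 10 s base and 300 s cap are the commonly cited kubelet defaults, assumed here rather than read from this node's configuration:

def backoff_schedule(base: int = 10, cap: int = 300, attempts: int = 8):
    """Yield successive back-off delays (seconds) between image pull retries."""
    delay = base
    for _ in range(attempts):
        yield min(delay, cap)
        delay *= 2

print(list(backoff_schedule()))   # [10, 20, 40, 80, 160, 300, 300, 300]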
Aug 13 01:48:01.090057 kubelet[2796]: E0813 01:48:01.090000 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:48:01.092639 containerd[1581]: time="2025-08-13T01:48:01.092524433Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d5dd9ff6-b6gw7,Uid:4720bedc-4719-4a57-b2ff-e5b21f7acb7f,Namespace:calico-system,Attempt:0,}" Aug 13 01:48:01.096590 kubelet[2796]: E0813 01:48:01.096464 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.2\\\"\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\\\"\"]" pod="calico-system/csi-node-driver-bk2p6" podUID="0e8898c7-a3f5-4010-bb1f-d756673c29b2" Aug 13 01:48:01.169786 kubelet[2796]: I0813 01:48:01.169454 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7c65d6cfc9-djvw6" podStartSLOduration=149.169351541 podStartE2EDuration="2m29.169351541s" podCreationTimestamp="2025-08-13 01:45:32 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:48:01.138508384 +0000 UTC m=+153.893077331" watchObservedRunningTime="2025-08-13 01:48:01.169351541 +0000 UTC m=+153.923920468" Aug 13 01:48:01.288355 systemd-networkd[1477]: cali5e452bb064d: Link UP Aug 13 01:48:01.289155 systemd-networkd[1477]: cali5e452bb064d: Gained carrier Aug 13 01:48:01.310096 containerd[1581]: 2025-08-13 01:48:01.205 [INFO][5669] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--7--32-k8s-calico--kube--controllers--86d5dd9ff6--b6gw7-eth0 calico-kube-controllers-86d5dd9ff6- calico-system 4720bedc-4719-4a57-b2ff-e5b21f7acb7f 5417 0 2025-08-13 01:45:49 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:86d5dd9ff6 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-232-7-32 1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549 calico-kube-controllers-86d5dd9ff6-b6gw7 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5e452bb064d [] [] }} ContainerID="96702cc8fce30c5f12688ebd296d7c614b2c0016fb8705bdb161127dbc386850" Namespace="calico-system" Pod="calico-kube-controllers-86d5dd9ff6-b6gw7" WorkloadEndpoint="172--232--7--32-k8s-calico--kube--controllers--86d5dd9ff6--b6gw7-" Aug 13 01:48:01.310096 containerd[1581]: 2025-08-13 01:48:01.205 [INFO][5669] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="96702cc8fce30c5f12688ebd296d7c614b2c0016fb8705bdb161127dbc386850" Namespace="calico-system" Pod="calico-kube-controllers-86d5dd9ff6-b6gw7" WorkloadEndpoint="172--232--7--32-k8s-calico--kube--controllers--86d5dd9ff6--b6gw7-eth0" Aug 13 01:48:01.310096 containerd[1581]: 2025-08-13 01:48:01.239 [INFO][5683] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="96702cc8fce30c5f12688ebd296d7c614b2c0016fb8705bdb161127dbc386850" 
HandleID="k8s-pod-network.96702cc8fce30c5f12688ebd296d7c614b2c0016fb8705bdb161127dbc386850" Workload="172--232--7--32-k8s-calico--kube--controllers--86d5dd9ff6--b6gw7-eth0" Aug 13 01:48:01.310096 containerd[1581]: 2025-08-13 01:48:01.239 [INFO][5683] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="96702cc8fce30c5f12688ebd296d7c614b2c0016fb8705bdb161127dbc386850" HandleID="k8s-pod-network.96702cc8fce30c5f12688ebd296d7c614b2c0016fb8705bdb161127dbc386850" Workload="172--232--7--32-k8s-calico--kube--controllers--86d5dd9ff6--b6gw7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d4ff0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-232-7-32", "pod":"calico-kube-controllers-86d5dd9ff6-b6gw7", "timestamp":"2025-08-13 01:48:01.239623839 +0000 UTC"}, Hostname:"172-232-7-32", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:48:01.310096 containerd[1581]: 2025-08-13 01:48:01.240 [INFO][5683] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:48:01.310096 containerd[1581]: 2025-08-13 01:48:01.240 [INFO][5683] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:48:01.310096 containerd[1581]: 2025-08-13 01:48:01.240 [INFO][5683] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-7-32' Aug 13 01:48:01.310096 containerd[1581]: 2025-08-13 01:48:01.248 [INFO][5683] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.96702cc8fce30c5f12688ebd296d7c614b2c0016fb8705bdb161127dbc386850" host="172-232-7-32" Aug 13 01:48:01.310096 containerd[1581]: 2025-08-13 01:48:01.253 [INFO][5683] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-7-32" Aug 13 01:48:01.310096 containerd[1581]: 2025-08-13 01:48:01.258 [INFO][5683] ipam/ipam.go 511: Trying affinity for 192.168.103.192/26 host="172-232-7-32" Aug 13 01:48:01.310096 containerd[1581]: 2025-08-13 01:48:01.261 [INFO][5683] ipam/ipam.go 158: Attempting to load block cidr=192.168.103.192/26 host="172-232-7-32" Aug 13 01:48:01.310096 containerd[1581]: 2025-08-13 01:48:01.264 [INFO][5683] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.103.192/26 host="172-232-7-32" Aug 13 01:48:01.310096 containerd[1581]: 2025-08-13 01:48:01.264 [INFO][5683] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.103.192/26 handle="k8s-pod-network.96702cc8fce30c5f12688ebd296d7c614b2c0016fb8705bdb161127dbc386850" host="172-232-7-32" Aug 13 01:48:01.310096 containerd[1581]: 2025-08-13 01:48:01.266 [INFO][5683] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.96702cc8fce30c5f12688ebd296d7c614b2c0016fb8705bdb161127dbc386850 Aug 13 01:48:01.310096 containerd[1581]: 2025-08-13 01:48:01.270 [INFO][5683] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.103.192/26 handle="k8s-pod-network.96702cc8fce30c5f12688ebd296d7c614b2c0016fb8705bdb161127dbc386850" host="172-232-7-32" Aug 13 01:48:01.310096 containerd[1581]: 2025-08-13 01:48:01.279 [INFO][5683] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.103.197/26] block=192.168.103.192/26 handle="k8s-pod-network.96702cc8fce30c5f12688ebd296d7c614b2c0016fb8705bdb161127dbc386850" host="172-232-7-32" Aug 13 01:48:01.310096 containerd[1581]: 2025-08-13 01:48:01.279 [INFO][5683] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.103.197/26] 
handle="k8s-pod-network.96702cc8fce30c5f12688ebd296d7c614b2c0016fb8705bdb161127dbc386850" host="172-232-7-32" Aug 13 01:48:01.310096 containerd[1581]: 2025-08-13 01:48:01.279 [INFO][5683] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:48:01.310096 containerd[1581]: 2025-08-13 01:48:01.279 [INFO][5683] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.103.197/26] IPv6=[] ContainerID="96702cc8fce30c5f12688ebd296d7c614b2c0016fb8705bdb161127dbc386850" HandleID="k8s-pod-network.96702cc8fce30c5f12688ebd296d7c614b2c0016fb8705bdb161127dbc386850" Workload="172--232--7--32-k8s-calico--kube--controllers--86d5dd9ff6--b6gw7-eth0" Aug 13 01:48:01.312404 containerd[1581]: 2025-08-13 01:48:01.283 [INFO][5669] cni-plugin/k8s.go 418: Populated endpoint ContainerID="96702cc8fce30c5f12688ebd296d7c614b2c0016fb8705bdb161127dbc386850" Namespace="calico-system" Pod="calico-kube-controllers-86d5dd9ff6-b6gw7" WorkloadEndpoint="172--232--7--32-k8s-calico--kube--controllers--86d5dd9ff6--b6gw7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--7--32-k8s-calico--kube--controllers--86d5dd9ff6--b6gw7-eth0", GenerateName:"calico-kube-controllers-86d5dd9ff6-", Namespace:"calico-system", SelfLink:"", UID:"4720bedc-4719-4a57-b2ff-e5b21f7acb7f", ResourceVersion:"5417", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 45, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86d5dd9ff6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-7-32", ContainerID:"1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549", Pod:"calico-kube-controllers-86d5dd9ff6-b6gw7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.103.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5e452bb064d", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:48:01.312404 containerd[1581]: 2025-08-13 01:48:01.283 [INFO][5669] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.103.197/32] ContainerID="96702cc8fce30c5f12688ebd296d7c614b2c0016fb8705bdb161127dbc386850" Namespace="calico-system" Pod="calico-kube-controllers-86d5dd9ff6-b6gw7" WorkloadEndpoint="172--232--7--32-k8s-calico--kube--controllers--86d5dd9ff6--b6gw7-eth0" Aug 13 01:48:01.312404 containerd[1581]: 2025-08-13 01:48:01.283 [INFO][5669] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5e452bb064d ContainerID="96702cc8fce30c5f12688ebd296d7c614b2c0016fb8705bdb161127dbc386850" Namespace="calico-system" Pod="calico-kube-controllers-86d5dd9ff6-b6gw7" WorkloadEndpoint="172--232--7--32-k8s-calico--kube--controllers--86d5dd9ff6--b6gw7-eth0" Aug 13 01:48:01.312404 containerd[1581]: 2025-08-13 01:48:01.287 [INFO][5669] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="96702cc8fce30c5f12688ebd296d7c614b2c0016fb8705bdb161127dbc386850" Namespace="calico-system" Pod="calico-kube-controllers-86d5dd9ff6-b6gw7" WorkloadEndpoint="172--232--7--32-k8s-calico--kube--controllers--86d5dd9ff6--b6gw7-eth0" Aug 13 01:48:01.312404 containerd[1581]: 2025-08-13 01:48:01.287 [INFO][5669] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="96702cc8fce30c5f12688ebd296d7c614b2c0016fb8705bdb161127dbc386850" Namespace="calico-system" Pod="calico-kube-controllers-86d5dd9ff6-b6gw7" WorkloadEndpoint="172--232--7--32-k8s-calico--kube--controllers--86d5dd9ff6--b6gw7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--7--32-k8s-calico--kube--controllers--86d5dd9ff6--b6gw7-eth0", GenerateName:"calico-kube-controllers-86d5dd9ff6-", Namespace:"calico-system", SelfLink:"", UID:"4720bedc-4719-4a57-b2ff-e5b21f7acb7f", ResourceVersion:"5417", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 45, 49, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"86d5dd9ff6", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-7-32", ContainerID:"96702cc8fce30c5f12688ebd296d7c614b2c0016fb8705bdb161127dbc386850", Pod:"calico-kube-controllers-86d5dd9ff6-b6gw7", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.103.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5e452bb064d", MAC:"ee:94:95:0c:e9:ed", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:48:01.312404 containerd[1581]: 2025-08-13 01:48:01.303 [INFO][5669] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="96702cc8fce30c5f12688ebd296d7c614b2c0016fb8705bdb161127dbc386850" Namespace="calico-system" Pod="calico-kube-controllers-86d5dd9ff6-b6gw7" WorkloadEndpoint="172--232--7--32-k8s-calico--kube--controllers--86d5dd9ff6--b6gw7-eth0" Aug 13 01:48:01.354173 containerd[1581]: time="2025-08-13T01:48:01.354008497Z" level=info msg="connecting to shim 96702cc8fce30c5f12688ebd296d7c614b2c0016fb8705bdb161127dbc386850" address="unix:///run/containerd/s/45f5d5d50ded3031a634220702ea9e43d6da95909c4a478be0997f5606403d1f" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:48:01.398900 systemd[1]: Started cri-containerd-96702cc8fce30c5f12688ebd296d7c614b2c0016fb8705bdb161127dbc386850.scope - libcontainer container 96702cc8fce30c5f12688ebd296d7c614b2c0016fb8705bdb161127dbc386850. 
Aug 13 01:48:01.457752 containerd[1581]: time="2025-08-13T01:48:01.457524259Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-86d5dd9ff6-b6gw7,Uid:4720bedc-4719-4a57-b2ff-e5b21f7acb7f,Namespace:calico-system,Attempt:0,} returns sandbox id \"96702cc8fce30c5f12688ebd296d7c614b2c0016fb8705bdb161127dbc386850\"" Aug 13 01:48:01.460357 containerd[1581]: time="2025-08-13T01:48:01.460299717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 01:48:01.617386 systemd-networkd[1477]: cali87921f67839: Gained IPv6LL Aug 13 01:48:02.036165 containerd[1581]: time="2025-08-13T01:48:02.036069218Z" level=error msg="failed to cleanup \"extract-953196354-eTdc sha256:8c887db5e1c1509bbe47d7287572f140b60a8c0adc0202d6183f3ae0c5f0b532\"" error="write /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db: no space left on device" Aug 13 01:48:02.036866 containerd[1581]: time="2025-08-13T01:48:02.036817603Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" Aug 13 01:48:02.036993 containerd[1581]: time="2025-08-13T01:48:02.036918216Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=1052906" Aug 13 01:48:02.037394 kubelet[2796]: E0813 01:48:02.037319 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 01:48:02.037869 kubelet[2796]: E0813 01:48:02.037837 2796 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 01:48:02.038120 kubelet[2796]: E0813 01:48:02.038044 2796 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gsskp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-86d5dd9ff6-b6gw7_calico-system(4720bedc-4719-4a57-b2ff-e5b21f7acb7f): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" logger="UnhandledError" Aug 13 01:48:02.039765 kubelet[2796]: E0813 01:48:02.039715 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device\"" pod="calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7" podUID="4720bedc-4719-4a57-b2ff-e5b21f7acb7f" Aug 13 01:48:02.093941 kubelet[2796]: E0813 01:48:02.093864 2796 dns.go:153] "Nameserver limits exceeded" 
err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:48:02.095207 kubelet[2796]: E0813 01:48:02.095117 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\"\"" pod="calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7" podUID="4720bedc-4719-4a57-b2ff-e5b21f7acb7f" Aug 13 01:48:02.385460 systemd-networkd[1477]: cali5e452bb064d: Gained IPv6LL Aug 13 01:48:02.907253 kubelet[2796]: W0813 01:48:02.907107 2796 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4720bedc_4719_4a57_b2ff_e5b21f7acb7f.slice/cri-containerd-1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549.scope WatchSource:0}: container "1754821e48edf53610d43b102f5d078dc304e9db655e0f20c8821dbeaaa51549" in namespace "k8s.io": not found Aug 13 01:48:03.097255 kubelet[2796]: E0813 01:48:03.097189 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22" Aug 13 01:48:03.099022 kubelet[2796]: E0813 01:48:03.098957 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\"\"" pod="calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7" podUID="4720bedc-4719-4a57-b2ff-e5b21f7acb7f" Aug 13 01:48:04.440171 kubelet[2796]: I0813 01:48:04.440117 2796 eviction_manager.go:369] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:48:04.440171 kubelet[2796]: I0813 01:48:04.440173 2796 container_gc.go:88] "Attempting to delete unused containers" Aug 13 01:48:04.443511 kubelet[2796]: I0813 01:48:04.443490 2796 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:48:04.463110 kubelet[2796]: I0813 01:48:04.463057 2796 eviction_manager.go:380] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:48:04.463313 kubelet[2796]: I0813 01:48:04.463262 2796 eviction_manager.go:398] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7","calico-system/csi-node-driver-bk2p6","calico-system/calico-typha-bf6ccb678-cdfdr","kube-system/coredns-7c65d6cfc9-6vrr8","kube-system/coredns-7c65d6cfc9-djvw6","calico-system/calico-node-8j6cb","kube-system/kube-controller-manager-172-232-7-32","kube-system/kube-proxy-dmp9l","kube-system/kube-apiserver-172-232-7-32","kube-system/kube-scheduler-172-232-7-32"] Aug 13 01:48:04.463313 kubelet[2796]: E0813 01:48:04.463305 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-86d5dd9ff6-b6gw7" Aug 13 01:48:04.463439 kubelet[2796]: E0813 01:48:04.463319 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-bk2p6" Aug 13 01:48:04.463439 kubelet[2796]: E0813 01:48:04.463335 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-bf6ccb678-cdfdr" Aug 13 01:48:04.463439 kubelet[2796]: E0813 01:48:04.463348 2796 eviction_manager.go:598] "Eviction manager: cannot 
evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-6vrr8" Aug 13 01:48:04.463439 kubelet[2796]: E0813 01:48:04.463359 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-7c65d6cfc9-djvw6" Aug 13 01:48:04.463439 kubelet[2796]: E0813 01:48:04.463369 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-8j6cb" Aug 13 01:48:04.463439 kubelet[2796]: E0813 01:48:04.463381 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-32" Aug 13 01:48:04.463439 kubelet[2796]: E0813 01:48:04.463392 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-dmp9l" Aug 13 01:48:04.463439 kubelet[2796]: E0813 01:48:04.463403 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-32" Aug 13 01:48:04.463439 kubelet[2796]: E0813 01:48:04.463412 2796 eviction_manager.go:598] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-32" Aug 13 01:48:04.463439 kubelet[2796]: I0813 01:48:04.463425 2796 eviction_manager.go:427] "Eviction manager: unable to evict any pods from the node" Aug 13 01:48:05.772518 systemd[1]: Started sshd@14-172.232.7.32:22-147.75.109.163:50174.service - OpenSSH per-connection server daemon (147.75.109.163:50174). Aug 13 01:48:05.878252 containerd[1581]: time="2025-08-13T01:48:05.878199345Z" level=info msg="TaskExit event in podsandbox handler container_id:\"25e44c63aa87dccfc7e64ca4849fb58d599b9a44e3eb277cd3b525f61d131e3e\" id:\"1a7687ed3a7aa79cb3c208c5aa55f88aabb403fb0d1cca26d7830f33f5576383\" pid:5769 exited_at:{seconds:1755049685 nanos:877405289}" Aug 13 01:48:06.127003 sshd[5764]: Accepted publickey for core from 147.75.109.163 port 50174 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:48:06.131014 sshd-session[5764]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:48:06.137760 systemd-logind[1544]: New session 15 of user core. Aug 13 01:48:06.141860 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 13 01:48:06.454620 sshd[5781]: Connection closed by 147.75.109.163 port 50174 Aug 13 01:48:06.455814 sshd-session[5764]: pam_unix(sshd:session): session closed for user core Aug 13 01:48:06.460868 systemd[1]: sshd@14-172.232.7.32:22-147.75.109.163:50174.service: Deactivated successfully. Aug 13 01:48:06.463825 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 01:48:06.468592 systemd-logind[1544]: Session 15 logged out. Waiting for processes to exit. Aug 13 01:48:06.470558 systemd-logind[1544]: Removed session 15. Aug 13 01:48:06.525247 systemd[1]: Started sshd@15-172.232.7.32:22-147.75.109.163:50188.service - OpenSSH per-connection server daemon (147.75.109.163:50188). Aug 13 01:48:06.894942 sshd[5795]: Accepted publickey for core from 147.75.109.163 port 50188 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:48:06.896705 sshd-session[5795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:48:06.902325 systemd-logind[1544]: New session 16 of user core. Aug 13 01:48:06.909845 systemd[1]: Started session-16.scope - Session 16 of User core. 
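The pull failure recorded above ("no space left on device" while writing under /var/lib/containerd) is what triggers the kubelet ephemeral-storage eviction pass that follows it, and that pass gives up because every ranked pod is critical. Below is a hedged operator-side sketch, not part of the log, of the kind of free-space check implied by those records; it assumes a host where /var/lib/containerd exists, reuses the byte count from the aborted pull, and uses an arbitrary 10% threshold that is not a kubelet or containerd default.

# Illustrative sketch (not part of the log): inspect free space on the
# containerd directory that the failed pull above was writing into.
# The path and the bytes-read figure come from the log; the 10% threshold
# is an example value, not a real kubelet/containerd setting.
import shutil

CONTAINERD_ROOT = "/var/lib/containerd"  # directory named in the pull error above
BYTES_READ_BEFORE_FAILURE = 1_052_906    # "bytes read=1052906" from the log

usage = shutil.disk_usage(CONTAINERD_ROOT)
free_ratio = usage.free / usage.total

print(f"{CONTAINERD_ROOT}: {usage.free} of {usage.total} bytes free ({free_ratio:.1%})")
if usage.free < BYTES_READ_BEFORE_FAILURE or free_ratio < 0.10:
    print("Filesystem is (nearly) full; pulls will fail with ENOSPC,")
    print("matching the ErrImagePull and eviction-manager records above.")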
Aug 13 01:48:07.222155 sshd[5797]: Connection closed by 147.75.109.163 port 50188 Aug 13 01:48:07.223314 sshd-session[5795]: pam_unix(sshd:session): session closed for user core Aug 13 01:48:07.230492 systemd[1]: sshd@15-172.232.7.32:22-147.75.109.163:50188.service: Deactivated successfully. Aug 13 01:48:07.234097 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 01:48:07.236910 systemd-logind[1544]: Session 16 logged out. Waiting for processes to exit. Aug 13 01:48:07.238816 systemd-logind[1544]: Removed session 16. Aug 13 01:48:08.398071 kubelet[2796]: E0813 01:48:08.398003 2796 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.21 172.232.0.13 172.232.0.22"
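The kubelet dns.go warning repeated through this section means the node's resolv.conf lists more nameservers than the resolver limit, so only the first three are applied: 172.232.0.21, 172.232.0.13, 172.232.0.22. The sketch below, not part of the log, mirrors that truncation; the three applied addresses are taken from the warning, the cap of 3 reflects the classic resolv.conf limit, and the fourth address is a made-up example of an entry that would be omitted.

# Illustrative sketch (not part of the log): reproduce the truncation behind the
# "Nameserver limits exceeded" warning above. The fourth address is hypothetical.
MAX_NAMESERVERS = 3  # classic resolver limit on applied nameservers

def applied_nameservers(configured: list[str]) -> list[str]:
    """Return only the nameservers that would actually be applied."""
    return configured[:MAX_NAMESERVERS]

configured = ["172.232.0.21", "172.232.0.13", "172.232.0.22", "192.0.2.53"]
applied = applied_nameservers(configured)
if len(configured) > MAX_NAMESERVERS:
    print("Nameserver limits exceeded; applied nameserver line:", " ".join(applied))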