Aug 13 01:44:47.985580 kernel: Linux version 6.12.40-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 21:42:48 -00 2025 Aug 13 01:44:47.985621 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21 Aug 13 01:44:47.985631 kernel: BIOS-provided physical RAM map: Aug 13 01:44:47.985640 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable Aug 13 01:44:47.985646 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved Aug 13 01:44:47.985652 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Aug 13 01:44:47.985659 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable Aug 13 01:44:47.985670 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved Aug 13 01:44:47.985676 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Aug 13 01:44:47.985682 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Aug 13 01:44:47.985689 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Aug 13 01:44:47.985695 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Aug 13 01:44:47.985704 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable Aug 13 01:44:47.985711 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Aug 13 01:44:47.985718 kernel: NX (Execute Disable) protection: active Aug 13 01:44:47.985725 kernel: APIC: Static calls initialized Aug 13 01:44:47.985732 kernel: SMBIOS 2.8 present. 
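For reference, the "usable" BIOS-e820 ranges above are the memory the kernel can hand to the page allocator; summing the three usable ranges gives roughly 4 GiB, which lines up with the Memory: line later in the log. A minimal, hypothetical Go sketch of that sum (the regex and helper are illustrative only, not part of any Flatcar tooling):

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
)

// Illustrative only: sums the "usable" ranges from e820 lines like the
// ones in the log above.
var e820Re = regexp.MustCompile(`\[mem 0x([0-9a-f]+)-0x([0-9a-f]+)\] (\w+)`)

func usableBytes(lines []string) uint64 {
	var total uint64
	for _, l := range lines {
		m := e820Re.FindStringSubmatch(l)
		if m == nil || m[3] != "usable" {
			continue
		}
		start, _ := strconv.ParseUint(m[1], 16, 64)
		end, _ := strconv.ParseUint(m[2], 16, 64)
		total += end - start + 1 // e820 ranges are inclusive
	}
	return total
}

func main() {
	lines := []string{
		"BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable",
		"BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable",
		"BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable",
	}
	u := usableBytes(lines)
	fmt.Printf("usable: %d bytes (~%.1f MiB)\n", u, float64(u)/(1<<20))
}
```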
Aug 13 01:44:47.985741 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified Aug 13 01:44:47.985748 kernel: DMI: Memory slots populated: 1/1 Aug 13 01:44:47.985755 kernel: Hypervisor detected: KVM Aug 13 01:44:47.985762 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Aug 13 01:44:47.985769 kernel: kvm-clock: using sched offset of 9084844480 cycles Aug 13 01:44:47.985776 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Aug 13 01:44:47.985784 kernel: tsc: Detected 2000.000 MHz processor Aug 13 01:44:47.985791 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 13 01:44:47.985798 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 13 01:44:47.985805 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000 Aug 13 01:44:47.985814 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Aug 13 01:44:47.985821 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 13 01:44:47.985828 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Aug 13 01:44:47.985835 kernel: Using GB pages for direct mapping Aug 13 01:44:47.985842 kernel: ACPI: Early table checksum verification disabled Aug 13 01:44:47.985849 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS ) Aug 13 01:44:47.985857 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 01:44:47.985864 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 01:44:47.985871 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 01:44:47.985880 kernel: ACPI: FACS 0x000000007FFE0000 000040 Aug 13 01:44:47.985887 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 01:44:47.985894 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 01:44:47.985901 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 01:44:47.985912 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 01:44:47.985919 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea] Aug 13 01:44:47.985929 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6] Aug 13 01:44:47.985936 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Aug 13 01:44:47.985943 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a] Aug 13 01:44:47.985966 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2] Aug 13 01:44:47.985974 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de] Aug 13 01:44:47.985981 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306] Aug 13 01:44:47.985989 kernel: No NUMA configuration found Aug 13 01:44:47.985996 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff] Aug 13 01:44:47.986006 kernel: NODE_DATA(0) allocated [mem 0x17fff8dc0-0x17fffffff] Aug 13 01:44:47.986014 kernel: Zone ranges: Aug 13 01:44:47.986021 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 13 01:44:47.986029 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Aug 13 01:44:47.986036 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff] Aug 13 01:44:47.986043 kernel: Device empty Aug 13 01:44:47.986050 kernel: Movable zone start for each node Aug 13 01:44:47.986058 kernel: Early memory node ranges Aug 13 01:44:47.986065 kernel: 
node 0: [mem 0x0000000000001000-0x000000000009efff] Aug 13 01:44:47.986075 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] Aug 13 01:44:47.986082 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff] Aug 13 01:44:47.986090 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff] Aug 13 01:44:47.986097 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 13 01:44:47.986104 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Aug 13 01:44:47.986112 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Aug 13 01:44:47.986119 kernel: ACPI: PM-Timer IO Port: 0x608 Aug 13 01:44:47.986133 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Aug 13 01:44:47.986141 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Aug 13 01:44:47.986150 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Aug 13 01:44:47.986158 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Aug 13 01:44:47.986165 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 13 01:44:47.986173 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Aug 13 01:44:47.986180 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Aug 13 01:44:47.986187 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 13 01:44:47.986195 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Aug 13 01:44:47.986202 kernel: TSC deadline timer available Aug 13 01:44:47.986209 kernel: CPU topo: Max. logical packages: 1 Aug 13 01:44:47.986220 kernel: CPU topo: Max. logical dies: 1 Aug 13 01:44:47.986228 kernel: CPU topo: Max. dies per package: 1 Aug 13 01:44:47.986235 kernel: CPU topo: Max. threads per core: 1 Aug 13 01:44:47.986243 kernel: CPU topo: Num. cores per package: 2 Aug 13 01:44:47.986250 kernel: CPU topo: Num. threads per package: 2 Aug 13 01:44:47.986257 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Aug 13 01:44:47.986264 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Aug 13 01:44:47.986272 kernel: kvm-guest: KVM setup pv remote TLB flush Aug 13 01:44:47.986279 kernel: kvm-guest: setup PV sched yield Aug 13 01:44:47.986286 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Aug 13 01:44:47.986296 kernel: Booting paravirtualized kernel on KVM Aug 13 01:44:47.986303 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 13 01:44:47.986311 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Aug 13 01:44:47.986318 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Aug 13 01:44:47.986325 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Aug 13 01:44:47.986333 kernel: pcpu-alloc: [0] 0 1 Aug 13 01:44:47.986340 kernel: kvm-guest: PV spinlocks enabled Aug 13 01:44:47.986347 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Aug 13 01:44:47.986355 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21 Aug 13 01:44:47.986366 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Aug 13 01:44:47.986373 kernel: random: crng init done Aug 13 01:44:47.986380 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Aug 13 01:44:47.986388 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 13 01:44:47.986395 kernel: Fallback order for Node 0: 0 Aug 13 01:44:47.986402 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443 Aug 13 01:44:47.986410 kernel: Policy zone: Normal Aug 13 01:44:47.986417 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 13 01:44:47.986427 kernel: software IO TLB: area num 2. Aug 13 01:44:47.986434 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 13 01:44:47.986441 kernel: ftrace: allocating 40098 entries in 157 pages Aug 13 01:44:47.986449 kernel: ftrace: allocated 157 pages with 5 groups Aug 13 01:44:47.986456 kernel: Dynamic Preempt: voluntary Aug 13 01:44:47.986463 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 13 01:44:47.986471 kernel: rcu: RCU event tracing is enabled. Aug 13 01:44:47.986479 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Aug 13 01:44:47.986486 kernel: Trampoline variant of Tasks RCU enabled. Aug 13 01:44:47.986495 kernel: Rude variant of Tasks RCU enabled. Aug 13 01:44:47.986503 kernel: Tracing variant of Tasks RCU enabled. Aug 13 01:44:47.986510 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Aug 13 01:44:47.986517 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 13 01:44:47.986524 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 01:44:47.986539 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 01:44:47.986549 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 01:44:47.986556 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Aug 13 01:44:47.986564 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Aug 13 01:44:47.986571 kernel: Console: colour VGA+ 80x25 Aug 13 01:44:47.986579 kernel: printk: legacy console [tty0] enabled Aug 13 01:44:47.986586 kernel: printk: legacy console [ttyS0] enabled Aug 13 01:44:47.986596 kernel: ACPI: Core revision 20240827 Aug 13 01:44:47.986604 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Aug 13 01:44:47.986611 kernel: APIC: Switch to symmetric I/O mode setup Aug 13 01:44:47.986619 kernel: x2apic enabled Aug 13 01:44:47.986626 kernel: APIC: Switched APIC routing to: physical x2apic Aug 13 01:44:47.986636 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Aug 13 01:44:47.986644 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Aug 13 01:44:47.986651 kernel: kvm-guest: setup PV IPIs Aug 13 01:44:47.986658 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Aug 13 01:44:47.986666 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns Aug 13 01:44:47.986674 kernel: Calibrating delay loop (skipped) preset value.. 
4000.00 BogoMIPS (lpj=2000000) Aug 13 01:44:47.986681 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Aug 13 01:44:47.986688 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Aug 13 01:44:47.986698 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Aug 13 01:44:47.986706 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 13 01:44:47.986714 kernel: Spectre V2 : Mitigation: Retpolines Aug 13 01:44:47.986897 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Aug 13 01:44:47.986905 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Aug 13 01:44:47.986912 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Aug 13 01:44:47.986920 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Aug 13 01:44:47.986927 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Aug 13 01:44:47.986935 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Aug 13 01:44:47.986946 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Aug 13 01:44:47.987487 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Aug 13 01:44:47.987496 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Aug 13 01:44:47.987504 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 13 01:44:47.987511 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 13 01:44:47.987519 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 13 01:44:47.987526 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Aug 13 01:44:47.987534 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 13 01:44:47.987546 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8 Aug 13 01:44:47.987554 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format. Aug 13 01:44:47.987561 kernel: Freeing SMP alternatives memory: 32K Aug 13 01:44:47.987569 kernel: pid_max: default: 32768 minimum: 301 Aug 13 01:44:47.987576 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Aug 13 01:44:47.987584 kernel: landlock: Up and running. Aug 13 01:44:47.987591 kernel: SELinux: Initializing. Aug 13 01:44:47.987599 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 13 01:44:47.987606 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 13 01:44:47.987616 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Aug 13 01:44:47.987624 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Aug 13 01:44:47.987632 kernel: ... version: 0 Aug 13 01:44:47.987639 kernel: ... bit width: 48 Aug 13 01:44:47.987647 kernel: ... generic registers: 6 Aug 13 01:44:47.987654 kernel: ... value mask: 0000ffffffffffff Aug 13 01:44:47.987662 kernel: ... max period: 00007fffffffffff Aug 13 01:44:47.987669 kernel: ... fixed-purpose events: 0 Aug 13 01:44:47.987677 kernel: ... event mask: 000000000000003f Aug 13 01:44:47.987684 kernel: signal: max sigframe size: 3376 Aug 13 01:44:47.987694 kernel: rcu: Hierarchical SRCU implementation. Aug 13 01:44:47.987702 kernel: rcu: Max phase no-delay instances is 400. 
Aug 13 01:44:47.987710 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Aug 13 01:44:47.987717 kernel: smp: Bringing up secondary CPUs ... Aug 13 01:44:47.987762 kernel: smpboot: x86: Booting SMP configuration: Aug 13 01:44:47.987770 kernel: .... node #0, CPUs: #1 Aug 13 01:44:47.987778 kernel: smp: Brought up 1 node, 2 CPUs Aug 13 01:44:47.987786 kernel: smpboot: Total of 2 processors activated (8000.00 BogoMIPS) Aug 13 01:44:47.987793 kernel: Memory: 3961808K/4193772K available (14336K kernel code, 2430K rwdata, 9960K rodata, 54444K init, 2524K bss, 227288K reserved, 0K cma-reserved) Aug 13 01:44:47.987805 kernel: devtmpfs: initialized Aug 13 01:44:47.987813 kernel: x86/mm: Memory block size: 128MB Aug 13 01:44:47.987820 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 13 01:44:47.987828 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 13 01:44:47.987836 kernel: pinctrl core: initialized pinctrl subsystem Aug 13 01:44:47.987843 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 13 01:44:47.987851 kernel: audit: initializing netlink subsys (disabled) Aug 13 01:44:47.987858 kernel: audit: type=2000 audit(1755049483.736:1): state=initialized audit_enabled=0 res=1 Aug 13 01:44:47.987866 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 13 01:44:47.987876 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 13 01:44:47.987883 kernel: cpuidle: using governor menu Aug 13 01:44:47.987891 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 13 01:44:47.987898 kernel: dca service started, version 1.12.1 Aug 13 01:44:47.987906 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Aug 13 01:44:47.987913 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Aug 13 01:44:47.987921 kernel: PCI: Using configuration type 1 for base access Aug 13 01:44:47.987929 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Aug 13 01:44:47.987939 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 13 01:44:47.987946 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Aug 13 01:44:47.990695 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 01:44:47.990706 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Aug 13 01:44:47.990713 kernel: ACPI: Added _OSI(Module Device) Aug 13 01:44:47.990721 kernel: ACPI: Added _OSI(Processor Device) Aug 13 01:44:47.990729 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 01:44:47.990737 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 13 01:44:47.990745 kernel: ACPI: Interpreter enabled Aug 13 01:44:47.990758 kernel: ACPI: PM: (supports S0 S3 S5) Aug 13 01:44:47.990766 kernel: ACPI: Using IOAPIC for interrupt routing Aug 13 01:44:47.990774 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 13 01:44:47.990782 kernel: PCI: Using E820 reservations for host bridge windows Aug 13 01:44:47.990790 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Aug 13 01:44:47.990798 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Aug 13 01:44:47.991081 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Aug 13 01:44:47.991222 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Aug 13 01:44:47.991358 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Aug 13 01:44:47.991368 kernel: PCI host bridge to bus 0000:00 Aug 13 01:44:47.991521 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Aug 13 01:44:47.991644 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Aug 13 01:44:47.991760 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Aug 13 01:44:47.991875 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Aug 13 01:44:47.992027 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Aug 13 01:44:47.992155 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window] Aug 13 01:44:47.992271 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Aug 13 01:44:47.992436 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Aug 13 01:44:47.992602 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Aug 13 01:44:47.992735 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Aug 13 01:44:47.992861 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Aug 13 01:44:47.993089 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Aug 13 01:44:47.993220 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Aug 13 01:44:47.993364 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint Aug 13 01:44:47.993493 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f] Aug 13 01:44:47.993620 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Aug 13 01:44:47.993746 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Aug 13 01:44:47.993883 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Aug 13 01:44:47.996433 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f] Aug 13 01:44:47.996572 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff] Aug 13 01:44:47.996700 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit 
pref] Aug 13 01:44:47.996826 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref] Aug 13 01:44:47.997014 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Aug 13 01:44:47.997149 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Aug 13 01:44:47.997285 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Aug 13 01:44:47.997420 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df] Aug 13 01:44:47.997545 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff] Aug 13 01:44:47.997764 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Aug 13 01:44:47.997900 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Aug 13 01:44:47.997912 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Aug 13 01:44:47.997920 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Aug 13 01:44:47.997928 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Aug 13 01:44:47.997940 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Aug 13 01:44:47.997964 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Aug 13 01:44:47.998227 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Aug 13 01:44:47.998236 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Aug 13 01:44:47.998244 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Aug 13 01:44:47.998252 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Aug 13 01:44:47.998260 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Aug 13 01:44:47.998268 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Aug 13 01:44:47.998275 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Aug 13 01:44:47.998291 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Aug 13 01:44:47.998302 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Aug 13 01:44:47.998312 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Aug 13 01:44:47.998320 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Aug 13 01:44:47.998328 kernel: iommu: Default domain type: Translated Aug 13 01:44:47.998336 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 13 01:44:47.998344 kernel: PCI: Using ACPI for IRQ routing Aug 13 01:44:47.998352 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 13 01:44:47.998360 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff] Aug 13 01:44:47.998371 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Aug 13 01:44:47.999535 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Aug 13 01:44:47.999720 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Aug 13 01:44:47.999854 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Aug 13 01:44:47.999864 kernel: vgaarb: loaded Aug 13 01:44:47.999873 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Aug 13 01:44:47.999881 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Aug 13 01:44:47.999888 kernel: clocksource: Switched to clocksource kvm-clock Aug 13 01:44:47.999902 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 01:44:47.999909 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 01:44:47.999917 kernel: pnp: PnP ACPI init Aug 13 01:44:48.001139 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Aug 13 01:44:48.001155 kernel: pnp: PnP ACPI: found 5 devices Aug 13 01:44:48.001164 kernel: clocksource: acpi_pm: mask: 0xffffff 
max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 13 01:44:48.001172 kernel: NET: Registered PF_INET protocol family Aug 13 01:44:48.001181 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 13 01:44:48.001189 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Aug 13 01:44:48.001202 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 01:44:48.001210 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 13 01:44:48.001218 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Aug 13 01:44:48.001226 kernel: TCP: Hash tables configured (established 32768 bind 32768) Aug 13 01:44:48.001234 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 13 01:44:48.001242 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 13 01:44:48.001250 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 01:44:48.001258 kernel: NET: Registered PF_XDP protocol family Aug 13 01:44:48.001386 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Aug 13 01:44:48.001504 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Aug 13 01:44:48.001620 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Aug 13 01:44:48.001736 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Aug 13 01:44:48.001853 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Aug 13 01:44:48.001992 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window] Aug 13 01:44:48.005005 kernel: PCI: CLS 0 bytes, default 64 Aug 13 01:44:48.005018 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Aug 13 01:44:48.005026 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB) Aug 13 01:44:48.005040 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85c9bff6, max_idle_ns: 881590591483 ns Aug 13 01:44:48.005047 kernel: Initialise system trusted keyrings Aug 13 01:44:48.005056 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Aug 13 01:44:48.005064 kernel: Key type asymmetric registered Aug 13 01:44:48.005072 kernel: Asymmetric key parser 'x509' registered Aug 13 01:44:48.005080 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Aug 13 01:44:48.005087 kernel: io scheduler mq-deadline registered Aug 13 01:44:48.005096 kernel: io scheduler kyber registered Aug 13 01:44:48.005104 kernel: io scheduler bfq registered Aug 13 01:44:48.005114 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 13 01:44:48.005123 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Aug 13 01:44:48.005131 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Aug 13 01:44:48.005139 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 01:44:48.005147 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 13 01:44:48.005155 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Aug 13 01:44:48.005163 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Aug 13 01:44:48.005171 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Aug 13 01:44:48.005360 kernel: rtc_cmos 00:03: RTC can wake from S4 Aug 13 01:44:48.005382 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1 Aug 13 01:44:48.005528 kernel: rtc_cmos 00:03: registered as rtc0 Aug 13 01:44:48.005688 kernel: rtc_cmos 00:03: setting system clock to 
2025-08-13T01:44:47 UTC (1755049487) Aug 13 01:44:48.005919 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Aug 13 01:44:48.005930 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Aug 13 01:44:48.005939 kernel: NET: Registered PF_INET6 protocol family Aug 13 01:44:48.005947 kernel: Segment Routing with IPv6 Aug 13 01:44:48.005973 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 01:44:48.005986 kernel: NET: Registered PF_PACKET protocol family Aug 13 01:44:48.005994 kernel: Key type dns_resolver registered Aug 13 01:44:48.006002 kernel: IPI shorthand broadcast: enabled Aug 13 01:44:48.006011 kernel: sched_clock: Marking stable (4890004130, 220418270)->(5197228270, -86805870) Aug 13 01:44:48.006019 kernel: registered taskstats version 1 Aug 13 01:44:48.006027 kernel: Loading compiled-in X.509 certificates Aug 13 01:44:48.006036 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.40-flatcar: dee0b464d3f7f8d09744a2392f69dde258bc95c0' Aug 13 01:44:48.006044 kernel: Demotion targets for Node 0: null Aug 13 01:44:48.006052 kernel: Key type .fscrypt registered Aug 13 01:44:48.006062 kernel: Key type fscrypt-provisioning registered Aug 13 01:44:48.006071 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 13 01:44:48.006079 kernel: ima: Allocated hash algorithm: sha1 Aug 13 01:44:48.006087 kernel: ima: No architecture policies found Aug 13 01:44:48.006095 kernel: clk: Disabling unused clocks Aug 13 01:44:48.006103 kernel: Warning: unable to open an initial console. Aug 13 01:44:48.006112 kernel: Freeing unused kernel image (initmem) memory: 54444K Aug 13 01:44:48.006120 kernel: Write protecting the kernel read-only data: 24576k Aug 13 01:44:48.006131 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K Aug 13 01:44:48.006139 kernel: Run /init as init process Aug 13 01:44:48.006147 kernel: with arguments: Aug 13 01:44:48.006155 kernel: /init Aug 13 01:44:48.006163 kernel: with environment: Aug 13 01:44:48.006171 kernel: HOME=/ Aug 13 01:44:48.006233 kernel: TERM=linux Aug 13 01:44:48.006244 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 01:44:48.006261 systemd[1]: Successfully made /usr/ read-only. Aug 13 01:44:48.006275 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 13 01:44:48.006285 systemd[1]: Detected virtualization kvm. Aug 13 01:44:48.006294 systemd[1]: Detected architecture x86-64. Aug 13 01:44:48.006302 systemd[1]: Running in initrd. Aug 13 01:44:48.006311 systemd[1]: No hostname configured, using default hostname. Aug 13 01:44:48.006320 systemd[1]: Hostname set to . Aug 13 01:44:48.006329 systemd[1]: Initializing machine ID from random generator. Aug 13 01:44:48.006341 systemd[1]: Queued start job for default target initrd.target. Aug 13 01:44:48.006349 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 01:44:48.006358 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 01:44:48.006368 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Aug 13 01:44:48.006378 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 01:44:48.006387 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 13 01:44:48.006396 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 13 01:44:48.006409 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 13 01:44:48.006418 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 13 01:44:48.006427 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 01:44:48.006436 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 01:44:48.006445 systemd[1]: Reached target paths.target - Path Units. Aug 13 01:44:48.006454 systemd[1]: Reached target slices.target - Slice Units. Aug 13 01:44:48.006462 systemd[1]: Reached target swap.target - Swaps. Aug 13 01:44:48.006471 systemd[1]: Reached target timers.target - Timer Units. Aug 13 01:44:48.006482 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 01:44:48.006491 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 01:44:48.006499 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 13 01:44:48.006508 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Aug 13 01:44:48.006517 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 01:44:48.006525 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 01:44:48.006534 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 01:44:48.006543 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 01:44:48.006556 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 13 01:44:48.006565 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 01:44:48.006573 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 13 01:44:48.006582 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Aug 13 01:44:48.006591 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 01:44:48.006600 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 01:44:48.006611 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 01:44:48.006620 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 01:44:48.006661 systemd-journald[206]: Collecting audit messages is disabled. Aug 13 01:44:48.006683 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 13 01:44:48.006724 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 01:44:48.006734 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 01:44:48.006743 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 01:44:48.006753 systemd-journald[206]: Journal started Aug 13 01:44:48.006779 systemd-journald[206]: Runtime Journal (/run/log/journal/080411225167491db892c0be1ae81fb6) is 8M, max 78.5M, 70.5M free. 
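The dev-disk-by\x2dlabel-*.device unit names above are systemd's escaped form of udev paths such as /dev/disk/by-label/EFI-SYSTEM: the leading slash is dropped, the remaining slashes become dashes, and other special bytes are hex-escaped. A simplified sketch of that escaping, assuming only the subset of rules these names need (real systemd-escape handles more edge cases):

```go
package main

import (
	"fmt"
	"strings"
)

// Simplified sketch of systemd's device unit naming: strip the leading "/",
// replace path separators with "-", and hex-escape bytes outside
// [A-Za-z0-9:_.] as \xNN.
func deviceUnit(path string) string {
	p := strings.TrimPrefix(path, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String() + ".device"
}

func main() {
	fmt.Println(deviceUnit("/dev/disk/by-label/EFI-SYSTEM"))
	// prints dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device, as in the journal above
}
```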
Aug 13 01:44:47.998841 systemd-modules-load[208]: Inserted module 'overlay' Aug 13 01:44:48.015980 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 01:44:48.048100 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 01:44:48.062101 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Aug 13 01:44:48.082006 systemd-tmpfiles[220]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Aug 13 01:44:48.094480 kernel: Bridge firewalling registered Aug 13 01:44:48.094849 systemd-modules-load[208]: Inserted module 'br_netfilter' Aug 13 01:44:48.097223 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 01:44:48.150391 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 01:44:48.151242 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 01:44:48.152516 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 01:44:48.155992 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 01:44:48.159066 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 01:44:48.168883 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 01:44:48.178721 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 01:44:48.185348 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 01:44:48.187447 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 01:44:48.196081 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 01:44:48.199350 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 13 01:44:48.220514 dracut-cmdline[246]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21 Aug 13 01:44:48.232509 systemd-resolved[239]: Positive Trust Anchors: Aug 13 01:44:48.232541 systemd-resolved[239]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 01:44:48.232568 systemd-resolved[239]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 01:44:48.239922 systemd-resolved[239]: Defaulting to hostname 'linux'. Aug 13 01:44:48.241547 systemd[1]: Started systemd-resolved.service - Network Name Resolution. 
Aug 13 01:44:48.244190 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 01:44:48.319007 kernel: SCSI subsystem initialized Aug 13 01:44:48.327977 kernel: Loading iSCSI transport class v2.0-870. Aug 13 01:44:48.338982 kernel: iscsi: registered transport (tcp) Aug 13 01:44:48.361309 kernel: iscsi: registered transport (qla4xxx) Aug 13 01:44:48.361391 kernel: QLogic iSCSI HBA Driver Aug 13 01:44:48.385453 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 01:44:48.405096 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 01:44:48.408457 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 01:44:48.476663 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 13 01:44:48.478689 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 13 01:44:48.526987 kernel: raid6: avx2x4 gen() 29463 MB/s Aug 13 01:44:48.544991 kernel: raid6: avx2x2 gen() 29393 MB/s Aug 13 01:44:48.563405 kernel: raid6: avx2x1 gen() 17980 MB/s Aug 13 01:44:48.563457 kernel: raid6: using algorithm avx2x4 gen() 29463 MB/s Aug 13 01:44:48.582439 kernel: raid6: .... xor() 4455 MB/s, rmw enabled Aug 13 01:44:48.582500 kernel: raid6: using avx2x2 recovery algorithm Aug 13 01:44:48.618046 kernel: xor: automatically using best checksumming function avx Aug 13 01:44:48.772991 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 13 01:44:48.781231 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 13 01:44:48.783828 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 01:44:48.814915 systemd-udevd[454]: Using default interface naming scheme 'v255'. Aug 13 01:44:48.821110 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 01:44:48.826491 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 13 01:44:48.855939 dracut-pre-trigger[460]: rd.md=0: removing MD RAID activation Aug 13 01:44:48.891771 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 01:44:48.895066 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 01:44:48.985740 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 01:44:48.989316 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 13 01:44:49.073981 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 01:44:49.077983 kernel: libata version 3.00 loaded. Aug 13 01:44:49.080048 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues Aug 13 01:44:49.089990 kernel: scsi host0: Virtio SCSI HBA Aug 13 01:44:49.101008 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Aug 13 01:44:49.144156 kernel: AES CTR mode by8 optimization enabled Aug 13 01:44:49.158991 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 01:44:49.259147 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 01:44:49.294773 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 01:44:49.296976 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input3 Aug 13 01:44:49.309092 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Aug 13 01:44:49.313406 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 13 01:44:49.324912 kernel: ahci 0000:00:1f.2: version 3.0 Aug 13 01:44:49.325277 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Aug 13 01:44:49.336680 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Aug 13 01:44:49.336914 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Aug 13 01:44:49.337106 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Aug 13 01:44:49.342471 kernel: sd 0:0:0:0: Power-on or device reset occurred Aug 13 01:44:49.347059 kernel: sd 0:0:0:0: [sda] 9297920 512-byte logical blocks: (4.76 GB/4.43 GiB) Aug 13 01:44:49.347323 kernel: sd 0:0:0:0: [sda] Write Protect is off Aug 13 01:44:49.347524 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Aug 13 01:44:49.347693 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Aug 13 01:44:49.347849 kernel: scsi host1: ahci Aug 13 01:44:49.350976 kernel: scsi host2: ahci Aug 13 01:44:49.351975 kernel: scsi host3: ahci Aug 13 01:44:49.352988 kernel: scsi host4: ahci Aug 13 01:44:49.353191 kernel: scsi host5: ahci Aug 13 01:44:49.354217 kernel: scsi host6: ahci Aug 13 01:44:49.354547 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 lpm-pol 0 Aug 13 01:44:49.354565 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 lpm-pol 0 Aug 13 01:44:49.354577 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 lpm-pol 0 Aug 13 01:44:49.354596 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 lpm-pol 0 Aug 13 01:44:49.354607 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 lpm-pol 0 Aug 13 01:44:49.354618 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 lpm-pol 0 Aug 13 01:44:49.369930 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 13 01:44:49.369997 kernel: GPT:9289727 != 9297919 Aug 13 01:44:49.370011 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 13 01:44:49.370023 kernel: GPT:9289727 != 9297919 Aug 13 01:44:49.370034 kernel: GPT: Use GNU Parted to correct GPT errors. Aug 13 01:44:49.370054 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 01:44:49.370980 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Aug 13 01:44:49.448182 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 01:44:49.667421 kernel: ata2: SATA link down (SStatus 0 SControl 300) Aug 13 01:44:49.667537 kernel: ata5: SATA link down (SStatus 0 SControl 300) Aug 13 01:44:49.667553 kernel: ata1: SATA link down (SStatus 0 SControl 300) Aug 13 01:44:49.670579 kernel: ata4: SATA link down (SStatus 0 SControl 300) Aug 13 01:44:49.672112 kernel: ata3: SATA link down (SStatus 0 SControl 300) Aug 13 01:44:49.672983 kernel: ata6: SATA link down (SStatus 0 SControl 300) Aug 13 01:44:49.744481 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Aug 13 01:44:49.753598 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Aug 13 01:44:49.761091 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Aug 13 01:44:49.761731 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Aug 13 01:44:49.763349 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. 
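The GPT warnings above ("GPT:9289727 != 9297919", alternate header not at the end of the disk) only mean the provisioned disk is larger than the image whose GPT it carries; the gap works out to exactly 4 MiB. A small illustrative calculation using the figures from the log:

```go
package main

import "fmt"

func main() {
	const sectorSize = 512
	const diskLastLBA = 9297919 // from "[sda] 9297920 512-byte logical blocks"
	const gptLastLBA = 9289727  // where the image's GPT expects its backup header
	gap := (diskLastLBA - gptLastLBA) * sectorSize
	fmt.Printf("backup GPT header sits %d bytes (%.0f MiB) short of the disk end\n",
		gap, float64(gap)/(1<<20))
	// i.e. the disk is 4 MiB larger than the original image; tools such as
	// GNU Parted (as the kernel suggests) or sgdisk can relocate the backup header.
}
```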
Aug 13 01:44:49.773459 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Aug 13 01:44:49.775800 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 01:44:49.776505 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 01:44:49.778116 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 01:44:49.780445 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 13 01:44:49.784129 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 13 01:44:49.805726 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 13 01:44:49.810176 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 01:44:49.810251 disk-uuid[631]: Primary Header is updated. Aug 13 01:44:49.810251 disk-uuid[631]: Secondary Entries is updated. Aug 13 01:44:49.810251 disk-uuid[631]: Secondary Header is updated. Aug 13 01:44:50.832269 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 01:44:50.833027 disk-uuid[639]: The operation has completed successfully. Aug 13 01:44:50.894413 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 01:44:50.894553 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 13 01:44:50.929183 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 13 01:44:50.938580 sh[653]: Success Aug 13 01:44:50.959266 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 01:44:50.959318 kernel: device-mapper: uevent: version 1.0.3 Aug 13 01:44:50.962406 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Aug 13 01:44:50.975021 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Aug 13 01:44:51.027297 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 13 01:44:51.030046 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 13 01:44:51.054482 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. Aug 13 01:44:51.068389 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Aug 13 01:44:51.068470 kernel: BTRFS: device fsid 0c0338fb-9434-41c1-99a2-737cbe2351c4 devid 1 transid 44 /dev/mapper/usr (254:0) scanned by mount (665) Aug 13 01:44:51.074279 kernel: BTRFS info (device dm-0): first mount of filesystem 0c0338fb-9434-41c1-99a2-737cbe2351c4 Aug 13 01:44:51.074319 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 13 01:44:51.076084 kernel: BTRFS info (device dm-0): using free-space-tree Aug 13 01:44:51.086328 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 13 01:44:51.088089 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Aug 13 01:44:51.089413 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 13 01:44:51.090430 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 13 01:44:51.094097 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... 
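The verity.usrhash value on the kernel command line is the root of a hash tree over the read-only /usr image, which verity-setup.service and device-mapper check at mount time (the log shows sha256 in use). A conceptual Go sketch of the hash-tree idea, assuming a simplified layout with no salt and no fixed on-disk node format, purely to show why one short hash can authenticate the whole image:

```go
package main

import (
	"crypto/sha256"
	"fmt"
)

// Conceptual sketch only: hash every data block, then hash groups of block
// hashes level by level until a single root remains. Real dm-verity adds a
// salt and a fixed 4 KiB hash-block layout; this illustrates the principle.
func rootHash(blocks [][]byte) [32]byte {
	level := make([][]byte, len(blocks)) // assumes at least one block
	for i, b := range blocks {
		h := sha256.Sum256(b)
		level[i] = h[:]
	}
	for len(level) > 1 {
		var next [][]byte
		for i := 0; i < len(level); i += 128 { // up to 128 child hashes per node
			end := i + 128
			if end > len(level) {
				end = len(level)
			}
			h := sha256.Sum256(joined(level[i:end]))
			next = append(next, h[:])
		}
		level = next
	}
	var root [32]byte
	copy(root[:], level[0])
	return root
}

func joined(hs [][]byte) []byte {
	var out []byte
	for _, h := range hs {
		out = append(out, h...)
	}
	return out
}

func main() {
	fmt.Printf("%x\n", rootHash([][]byte{[]byte("block0"), []byte("block1")}))
}
```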
Aug 13 01:44:51.132069 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (699) Aug 13 01:44:51.132123 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 01:44:51.135569 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 01:44:51.135608 kernel: BTRFS info (device sda6): using free-space-tree Aug 13 01:44:51.144980 kernel: BTRFS info (device sda6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 01:44:51.145822 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 13 01:44:51.149106 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 13 01:44:51.216174 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 01:44:51.220116 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 01:44:51.362698 systemd-networkd[835]: lo: Link UP Aug 13 01:44:51.363511 systemd-networkd[835]: lo: Gained carrier Aug 13 01:44:51.389874 systemd-networkd[835]: Enumeration completed Aug 13 01:44:51.392314 systemd-networkd[835]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 01:44:51.392319 systemd-networkd[835]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 01:44:51.394886 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 01:44:51.396301 systemd[1]: Reached target network.target - Network. Aug 13 01:44:51.407293 systemd-networkd[835]: eth0: Link UP Aug 13 01:44:51.407969 systemd-networkd[835]: eth0: Gained carrier Aug 13 01:44:51.407989 systemd-networkd[835]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 01:44:51.630826 ignition[760]: Ignition 2.21.0 Aug 13 01:44:51.630845 ignition[760]: Stage: fetch-offline Aug 13 01:44:51.630882 ignition[760]: no configs at "/usr/lib/ignition/base.d" Aug 13 01:44:51.630892 ignition[760]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:44:51.631018 ignition[760]: parsed url from cmdline: "" Aug 13 01:44:51.631023 ignition[760]: no config URL provided Aug 13 01:44:51.631028 ignition[760]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 01:44:51.634220 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 01:44:51.631038 ignition[760]: no config at "/usr/lib/ignition/user.ign" Aug 13 01:44:51.636029 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Aug 13 01:44:51.631043 ignition[760]: failed to fetch config: resource requires networking Aug 13 01:44:51.631534 ignition[760]: Ignition finished successfully Aug 13 01:44:51.711639 ignition[844]: Ignition 2.21.0 Aug 13 01:44:51.711656 ignition[844]: Stage: fetch Aug 13 01:44:51.711856 ignition[844]: no configs at "/usr/lib/ignition/base.d" Aug 13 01:44:51.711868 ignition[844]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:44:51.711982 ignition[844]: parsed url from cmdline: "" Aug 13 01:44:51.711986 ignition[844]: no config URL provided Aug 13 01:44:51.711992 ignition[844]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 01:44:51.712002 ignition[844]: no config at "/usr/lib/ignition/user.ign" Aug 13 01:44:51.712032 ignition[844]: PUT http://169.254.169.254/v1/token: attempt #1 Aug 13 01:44:51.712363 ignition[844]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Aug 13 01:44:51.913322 ignition[844]: PUT http://169.254.169.254/v1/token: attempt #2 Aug 13 01:44:51.913811 ignition[844]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Aug 13 01:44:51.964098 systemd-networkd[835]: eth0: DHCPv4 address 172.232.7.67/24, gateway 172.232.7.1 acquired from 23.215.118.19 Aug 13 01:44:52.313924 ignition[844]: PUT http://169.254.169.254/v1/token: attempt #3 Aug 13 01:44:52.423548 ignition[844]: PUT result: OK Aug 13 01:44:52.423619 ignition[844]: GET http://169.254.169.254/v1/user-data: attempt #1 Aug 13 01:44:52.553708 ignition[844]: GET result: OK Aug 13 01:44:52.553978 ignition[844]: parsing config with SHA512: b7b5636a825bb143b465ef99b4e0c91f372e043f8eea44c4c4f225842a6221b5d3b82873bdacb90b5b84e670986f3b1cf840b76bb9269817e9b58bc8008b053c Aug 13 01:44:52.559267 unknown[844]: fetched base config from "system" Aug 13 01:44:52.559281 unknown[844]: fetched base config from "system" Aug 13 01:44:52.559690 ignition[844]: fetch: fetch complete Aug 13 01:44:52.559289 unknown[844]: fetched user config from "akamai" Aug 13 01:44:52.559697 ignition[844]: fetch: fetch passed Aug 13 01:44:52.559752 ignition[844]: Ignition finished successfully Aug 13 01:44:52.564281 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 13 01:44:52.588337 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 13 01:44:52.648262 ignition[851]: Ignition 2.21.0 Aug 13 01:44:52.648281 ignition[851]: Stage: kargs Aug 13 01:44:52.648478 ignition[851]: no configs at "/usr/lib/ignition/base.d" Aug 13 01:44:52.648491 ignition[851]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:44:52.650178 ignition[851]: kargs: kargs passed Aug 13 01:44:52.650459 ignition[851]: Ignition finished successfully Aug 13 01:44:52.653298 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 13 01:44:52.656443 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Aug 13 01:44:52.684871 ignition[857]: Ignition 2.21.0 Aug 13 01:44:52.684888 ignition[857]: Stage: disks Aug 13 01:44:52.685036 ignition[857]: no configs at "/usr/lib/ignition/base.d" Aug 13 01:44:52.688111 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 13 01:44:52.685047 ignition[857]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:44:52.688932 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. 
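The ignition[844] lines above show the Akamai/Linode fetch flow: a PUT to http://169.254.169.254/v1/token (which fails with "network is unreachable" until DHCP completes), followed by a GET of /v1/user-data with the resulting token. A minimal Go sketch of that two-step fetch; the header names are assumptions for illustration, since the log records only the URLs and methods:

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"strings"
)

// Minimal sketch of the two-step metadata fetch seen in the ignition[844]
// lines: PUT /v1/token for a short-lived token, then GET /v1/user-data.
// The header names below are assumed for illustration.
func fetchUserData() (string, error) {
	req, _ := http.NewRequest(http.MethodPut, "http://169.254.169.254/v1/token", strings.NewReader(""))
	req.Header.Set("Metadata-Token-Expiry-Seconds", "300") // assumed header
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return "", err // e.g. "network is unreachable" before DHCP, as in the log
	}
	tok, _ := io.ReadAll(resp.Body)
	resp.Body.Close()

	req, _ = http.NewRequest(http.MethodGet, "http://169.254.169.254/v1/user-data", nil)
	req.Header.Set("Metadata-Token", strings.TrimSpace(string(tok))) // assumed header
	resp, err = http.DefaultClient.Do(req)
	if err != nil {
		return "", err
	}
	defer resp.Body.Close()
	body, err := io.ReadAll(resp.Body)
	return string(body), err
}

func main() {
	data, err := fetchUserData()
	if err != nil {
		fmt.Println("fetch failed:", err)
		return
	}
	fmt.Println(len(data), "bytes of user-data")
}
```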
Aug 13 01:44:52.685600 ignition[857]: disks: disks passed Aug 13 01:44:52.689904 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 13 01:44:52.685641 ignition[857]: Ignition finished successfully Aug 13 01:44:52.691161 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 01:44:52.692588 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 01:44:52.694107 systemd[1]: Reached target basic.target - Basic System. Aug 13 01:44:52.696208 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 13 01:44:52.730971 systemd-fsck[866]: ROOT: clean, 15/553520 files, 52789/553472 blocks Aug 13 01:44:52.733276 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 13 01:44:52.737065 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 13 01:44:52.874014 kernel: EXT4-fs (sda9): mounted filesystem 069caac6-7833-4acd-8940-01a7ff7d1281 r/w with ordered data mode. Quota mode: none. Aug 13 01:44:52.874285 systemd-networkd[835]: eth0: Gained IPv6LL Aug 13 01:44:52.876913 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 13 01:44:52.878490 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 13 01:44:52.881318 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 01:44:52.885031 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 13 01:44:52.886511 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Aug 13 01:44:52.886557 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 01:44:52.886610 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 01:44:52.896723 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 13 01:44:52.900282 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 13 01:44:52.904026 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (874) Aug 13 01:44:52.908998 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 01:44:52.909083 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 01:44:52.910120 kernel: BTRFS info (device sda6): using free-space-tree Aug 13 01:44:52.917448 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 01:44:52.962052 initrd-setup-root[898]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 01:44:52.968016 initrd-setup-root[905]: cut: /sysroot/etc/group: No such file or directory Aug 13 01:44:52.974015 initrd-setup-root[912]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 01:44:52.980416 initrd-setup-root[919]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 01:44:53.097656 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Aug 13 01:44:53.100091 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 13 01:44:53.101616 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 13 01:44:53.134142 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 13 01:44:53.138249 kernel: BTRFS info (device sda6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 01:44:53.160809 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. 
Aug 13 01:44:53.217136 ignition[987]: INFO : Ignition 2.21.0 Aug 13 01:44:53.217136 ignition[987]: INFO : Stage: mount Aug 13 01:44:53.219480 ignition[987]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 01:44:53.219480 ignition[987]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:44:53.219480 ignition[987]: INFO : mount: mount passed Aug 13 01:44:53.219480 ignition[987]: INFO : Ignition finished successfully Aug 13 01:44:53.220673 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 13 01:44:53.225063 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 13 01:44:53.880530 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 01:44:53.931212 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (998) Aug 13 01:44:53.944308 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 01:44:53.944401 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 01:44:53.944427 kernel: BTRFS info (device sda6): using free-space-tree Aug 13 01:44:53.965707 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 01:44:54.100323 ignition[1015]: INFO : Ignition 2.21.0 Aug 13 01:44:54.100323 ignition[1015]: INFO : Stage: files Aug 13 01:44:54.102732 ignition[1015]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 01:44:54.102732 ignition[1015]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:44:54.102732 ignition[1015]: DEBUG : files: compiled without relabeling support, skipping Aug 13 01:44:54.112922 ignition[1015]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 01:44:54.112922 ignition[1015]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 01:44:54.119835 ignition[1015]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 01:44:54.121196 ignition[1015]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 01:44:54.123147 ignition[1015]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 01:44:54.121794 unknown[1015]: wrote ssh authorized keys file for user: core Aug 13 01:44:54.129429 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Aug 13 01:44:54.129429 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-amd64.tar.gz: attempt #1 Aug 13 01:44:54.282088 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 13 01:44:56.293053 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-amd64.tar.gz" Aug 13 01:44:56.293053 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Aug 13 01:44:56.317312 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 01:44:56.317312 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 01:44:56.317312 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 01:44:56.317312 ignition[1015]: INFO : 
files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 01:44:56.317312 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 01:44:56.317312 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 01:44:56.317312 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 01:44:56.334878 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 01:44:56.338087 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 01:44:56.338087 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 01:44:56.338087 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 01:44:56.338087 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 01:44:56.338087 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-x86-64.raw: attempt #1 Aug 13 01:44:57.010439 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Aug 13 01:44:58.370268 ignition[1015]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-x86-64.raw" Aug 13 01:44:58.370268 ignition[1015]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Aug 13 01:44:58.372709 ignition[1015]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 01:44:58.372709 ignition[1015]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 01:44:58.374775 ignition[1015]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Aug 13 01:44:58.374775 ignition[1015]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Aug 13 01:44:58.374775 ignition[1015]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Aug 13 01:44:58.374775 ignition[1015]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Aug 13 01:44:58.374775 ignition[1015]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Aug 13 01:44:58.374775 ignition[1015]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Aug 13 01:44:58.374775 ignition[1015]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 01:44:58.374775 ignition[1015]: INFO : files: createResultFile: createFiles: op(10): [started] 
writing file "/sysroot/etc/.ignition-result.json" Aug 13 01:44:58.374775 ignition[1015]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 01:44:58.374775 ignition[1015]: INFO : files: files passed Aug 13 01:44:58.374775 ignition[1015]: INFO : Ignition finished successfully Aug 13 01:44:58.376877 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 13 01:44:58.381619 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 13 01:44:58.386571 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 13 01:44:58.403559 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 01:44:58.403717 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 13 01:44:58.411686 initrd-setup-root-after-ignition[1045]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 01:44:58.411686 initrd-setup-root-after-ignition[1045]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 13 01:44:58.414634 initrd-setup-root-after-ignition[1049]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 01:44:58.416073 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 01:44:58.417355 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 13 01:44:58.418753 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 13 01:44:58.464770 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 01:44:58.464911 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 13 01:44:58.466288 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 13 01:44:58.467270 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 13 01:44:58.468661 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 13 01:44:58.469628 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 13 01:44:58.511812 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 01:44:58.514538 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 13 01:44:58.532975 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 13 01:44:58.535263 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 01:44:58.536690 systemd[1]: Stopped target timers.target - Timer Units. Aug 13 01:44:58.537369 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 01:44:58.537562 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 01:44:58.538394 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 13 01:44:58.539032 systemd[1]: Stopped target basic.target - Basic System. Aug 13 01:44:58.540249 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 13 01:44:58.541338 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 01:44:58.542763 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 13 01:44:58.543905 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Aug 13 01:44:58.545224 systemd[1]: Stopped target remote-fs.target - Remote File Systems. 
Aug 13 01:44:58.546397 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 01:44:58.547888 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 13 01:44:58.549137 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 13 01:44:58.550456 systemd[1]: Stopped target swap.target - Swaps. Aug 13 01:44:58.551562 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 01:44:58.551703 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 13 01:44:58.553439 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 13 01:44:58.554234 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 01:44:58.555395 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 13 01:44:58.555550 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 01:44:58.556428 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 01:44:58.556545 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 13 01:44:58.558127 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 01:44:58.558292 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 01:44:58.559624 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 01:44:58.559724 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 13 01:44:58.563142 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 13 01:44:58.566142 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 13 01:44:58.567567 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 01:44:58.569117 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 01:44:58.570978 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 01:44:58.571153 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 01:44:58.576422 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 01:44:58.576547 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 13 01:44:58.599648 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 01:44:58.602633 ignition[1069]: INFO : Ignition 2.21.0 Aug 13 01:44:58.604366 ignition[1069]: INFO : Stage: umount Aug 13 01:44:58.604366 ignition[1069]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 01:44:58.604366 ignition[1069]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:44:58.607992 ignition[1069]: INFO : umount: umount passed Aug 13 01:44:58.607992 ignition[1069]: INFO : Ignition finished successfully Aug 13 01:44:58.604790 systemd[1]: sysroot-boot.service: Deactivated successfully. Aug 13 01:44:58.604972 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 13 01:44:58.607657 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 01:44:58.607779 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 13 01:44:58.608886 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 01:44:58.609002 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 13 01:44:58.631598 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 01:44:58.631676 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). 
Aug 13 01:44:58.632611 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 13 01:44:58.632704 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 13 01:44:58.633694 systemd[1]: Stopped target network.target - Network. Aug 13 01:44:58.634800 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 01:44:58.634862 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 01:44:58.635967 systemd[1]: Stopped target paths.target - Path Units. Aug 13 01:44:58.636901 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 01:44:58.636984 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 01:44:58.638275 systemd[1]: Stopped target slices.target - Slice Units. Aug 13 01:44:58.639476 systemd[1]: Stopped target sockets.target - Socket Units. Aug 13 01:44:58.640541 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 01:44:58.640600 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 01:44:58.641579 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 01:44:58.641630 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 01:44:58.642632 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 01:44:58.642694 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 13 01:44:58.643687 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 13 01:44:58.643745 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 13 01:44:58.644938 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 01:44:58.645031 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 01:44:58.646549 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 13 01:44:58.647970 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 13 01:44:58.656472 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 01:44:58.656656 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 13 01:44:58.662485 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Aug 13 01:44:58.662917 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 01:44:58.663181 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 13 01:44:58.665169 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Aug 13 01:44:58.665976 systemd[1]: Stopped target network-pre.target - Preparation for Network. Aug 13 01:44:58.666548 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 01:44:58.666609 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 13 01:44:58.668651 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 13 01:44:58.670522 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 01:44:58.670587 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 01:44:58.672880 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 01:44:58.673497 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 01:44:58.675144 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 01:44:58.675199 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. 
Aug 13 01:44:58.677222 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 13 01:44:58.677994 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 01:44:58.680129 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 01:44:58.682570 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 13 01:44:58.682726 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Aug 13 01:44:58.693694 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 01:44:58.694642 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 01:44:58.696386 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 01:44:58.697249 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 01:44:58.698607 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 01:44:58.698681 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 13 01:44:58.699338 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 01:44:58.699389 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 01:44:58.700454 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 01:44:58.700510 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 13 01:44:58.702137 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 01:44:58.702188 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 13 01:44:58.703308 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 01:44:58.703383 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 01:44:58.706077 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 13 01:44:58.708075 systemd[1]: systemd-network-generator.service: Deactivated successfully. Aug 13 01:44:58.708144 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 01:44:58.711145 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 01:44:58.711207 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 01:44:58.713036 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Aug 13 01:44:58.713089 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 01:44:58.714179 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 01:44:58.714234 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 01:44:58.715114 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 01:44:58.715166 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 01:44:58.719419 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Aug 13 01:44:58.719482 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. Aug 13 01:44:58.719531 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Aug 13 01:44:58.719632 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. 
Aug 13 01:44:58.726530 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 01:44:58.726702 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 13 01:44:58.728185 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 13 01:44:58.729831 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 01:44:58.749467 systemd[1]: Switching root. Aug 13 01:44:58.790930 systemd-journald[206]: Journal stopped Aug 13 01:45:00.573402 systemd-journald[206]: Received SIGTERM from PID 1 (systemd). Aug 13 01:45:00.573463 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 01:45:00.573486 kernel: SELinux: policy capability open_perms=1 Aug 13 01:45:00.573510 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 01:45:00.573524 kernel: SELinux: policy capability always_check_network=0 Aug 13 01:45:00.573539 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 01:45:00.573556 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 01:45:00.573572 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 01:45:00.573587 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 01:45:00.573602 kernel: SELinux: policy capability userspace_initial_context=0 Aug 13 01:45:00.573621 kernel: audit: type=1403 audit(1755049499.005:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 01:45:00.573638 systemd[1]: Successfully loaded SELinux policy in 88.196ms. Aug 13 01:45:00.573656 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 32.681ms. Aug 13 01:45:00.573675 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 13 01:45:00.573694 systemd[1]: Detected virtualization kvm. Aug 13 01:45:00.573714 systemd[1]: Detected architecture x86-64. Aug 13 01:45:00.573731 systemd[1]: Detected first boot. Aug 13 01:45:00.573749 systemd[1]: Initializing machine ID from random generator. Aug 13 01:45:00.573765 zram_generator::config[1114]: No configuration found. Aug 13 01:45:00.573784 kernel: Guest personality initialized and is inactive Aug 13 01:45:00.573799 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Aug 13 01:45:00.573815 kernel: Initialized host personality Aug 13 01:45:00.573833 kernel: NET: Registered PF_VSOCK protocol family Aug 13 01:45:00.573849 systemd[1]: Populated /etc with preset unit settings. Aug 13 01:45:00.573869 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Aug 13 01:45:00.573886 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 13 01:45:00.573902 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 13 01:45:00.573919 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Aug 13 01:45:00.573935 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 13 01:45:00.575995 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 13 01:45:00.576024 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 13 01:45:00.576042 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. 
Aug 13 01:45:00.576059 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 13 01:45:00.576077 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 13 01:45:00.576094 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 13 01:45:00.576111 systemd[1]: Created slice user.slice - User and Session Slice. Aug 13 01:45:00.576135 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 01:45:00.576153 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 01:45:00.576169 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 13 01:45:00.576188 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 13 01:45:00.576214 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 13 01:45:00.576233 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 01:45:00.576250 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 13 01:45:00.576267 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 01:45:00.576288 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 01:45:00.576307 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 13 01:45:00.576324 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 13 01:45:00.576341 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 13 01:45:00.576358 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 13 01:45:00.576376 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 01:45:00.576393 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 01:45:00.576410 systemd[1]: Reached target slices.target - Slice Units. Aug 13 01:45:00.576430 systemd[1]: Reached target swap.target - Swaps. Aug 13 01:45:00.576448 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 13 01:45:00.576465 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 13 01:45:00.576482 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Aug 13 01:45:00.576500 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 01:45:00.576522 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 01:45:00.576539 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 01:45:00.576557 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 13 01:45:00.576574 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 13 01:45:00.576591 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Aug 13 01:45:00.576608 systemd[1]: Mounting media.mount - External Media Directory... Aug 13 01:45:00.576626 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:45:00.576643 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 13 01:45:00.576663 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... 
Aug 13 01:45:00.576680 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 13 01:45:00.576699 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 01:45:00.576717 systemd[1]: Reached target machines.target - Containers. Aug 13 01:45:00.576736 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 13 01:45:00.576754 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 01:45:00.576772 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 01:45:00.576789 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 13 01:45:00.576811 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 01:45:00.576829 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 01:45:00.576846 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 01:45:00.576864 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 13 01:45:00.576881 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 01:45:00.576899 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 01:45:00.576916 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 13 01:45:00.576936 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 13 01:45:00.576975 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 13 01:45:00.576997 systemd[1]: Stopped systemd-fsck-usr.service. Aug 13 01:45:00.577015 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 01:45:00.577032 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 01:45:00.577049 kernel: loop: module loaded Aug 13 01:45:00.577067 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 01:45:00.577083 kernel: fuse: init (API version 7.41) Aug 13 01:45:00.577100 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 01:45:00.577117 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 13 01:45:00.577138 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Aug 13 01:45:00.577155 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 01:45:00.577173 systemd[1]: verity-setup.service: Deactivated successfully. Aug 13 01:45:00.577190 systemd[1]: Stopped verity-setup.service. Aug 13 01:45:00.577208 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:45:00.577225 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 13 01:45:00.577242 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 13 01:45:00.577259 systemd[1]: Mounted media.mount - External Media Directory. 
Aug 13 01:45:00.577279 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 13 01:45:00.577296 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 13 01:45:00.577313 kernel: ACPI: bus type drm_connector registered Aug 13 01:45:00.577329 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 13 01:45:00.577347 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 13 01:45:00.577365 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 01:45:00.577382 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 01:45:00.577400 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 13 01:45:00.577417 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 01:45:00.577438 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 01:45:00.577455 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 01:45:00.577472 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 01:45:00.577489 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 01:45:00.577507 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 01:45:00.577524 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 01:45:00.577541 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 13 01:45:00.577558 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 01:45:00.577575 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 01:45:00.577595 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 01:45:00.577613 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 13 01:45:00.577637 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 13 01:45:00.577658 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 13 01:45:00.577676 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 01:45:00.577694 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 01:45:00.577715 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Aug 13 01:45:00.577733 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 13 01:45:00.577809 systemd-journald[1198]: Collecting audit messages is disabled. Aug 13 01:45:00.577864 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 01:45:00.577889 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 13 01:45:00.577907 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 01:45:00.577925 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 13 01:45:00.577943 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 01:45:00.579993 systemd-journald[1198]: Journal started Aug 13 01:45:00.580034 systemd-journald[1198]: Runtime Journal (/run/log/journal/f6162ac0bcda4aadb5f0f6863e1e25af) is 8M, max 78.5M, 70.5M free. 
Aug 13 01:44:59.903690 systemd[1]: Queued start job for default target multi-user.target. Aug 13 01:44:59.924372 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Aug 13 01:44:59.925043 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 13 01:45:00.608748 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 01:45:00.608834 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 13 01:45:00.626991 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 01:45:00.637007 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 01:45:00.641927 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 01:45:00.644515 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Aug 13 01:45:00.645314 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 13 01:45:00.646567 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 13 01:45:00.670052 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 13 01:45:00.700620 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 13 01:45:00.705331 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 01:45:00.707925 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 13 01:45:00.755432 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Aug 13 01:45:00.777054 kernel: loop0: detected capacity change from 0 to 146240 Aug 13 01:45:00.805744 systemd-journald[1198]: Time spent on flushing to /var/log/journal/f6162ac0bcda4aadb5f0f6863e1e25af is 43.291ms for 1004 entries. Aug 13 01:45:00.805744 systemd-journald[1198]: System Journal (/var/log/journal/f6162ac0bcda4aadb5f0f6863e1e25af) is 8M, max 195.6M, 187.6M free. Aug 13 01:45:00.902090 systemd-journald[1198]: Received client request to flush runtime journal. Aug 13 01:45:00.902133 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 01:45:00.809970 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 01:45:00.810865 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Aug 13 01:45:00.841059 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 01:45:00.858084 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 01:45:00.905053 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 13 01:45:00.913004 kernel: loop1: detected capacity change from 0 to 8 Aug 13 01:45:00.911782 systemd-tmpfiles[1221]: ACLs are not supported, ignoring. Aug 13 01:45:00.911805 systemd-tmpfiles[1221]: ACLs are not supported, ignoring. Aug 13 01:45:00.919996 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 01:45:00.932045 kernel: loop2: detected capacity change from 0 to 229808 Aug 13 01:45:00.931421 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 13 01:45:00.989990 kernel: loop3: detected capacity change from 0 to 113872 Aug 13 01:45:01.017085 systemd[1]: Finished systemd-sysusers.service - Create System Users. 
Aug 13 01:45:01.027430 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 01:45:01.042001 kernel: loop4: detected capacity change from 0 to 146240 Aug 13 01:45:01.109000 kernel: loop5: detected capacity change from 0 to 8 Aug 13 01:45:01.115159 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. Aug 13 01:45:01.115178 systemd-tmpfiles[1262]: ACLs are not supported, ignoring. Aug 13 01:45:01.131222 kernel: loop6: detected capacity change from 0 to 229808 Aug 13 01:45:01.138419 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 01:45:01.162009 kernel: loop7: detected capacity change from 0 to 113872 Aug 13 01:45:01.178002 (sd-merge)[1263]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. Aug 13 01:45:01.180315 (sd-merge)[1263]: Merged extensions into '/usr'. Aug 13 01:45:01.211908 systemd[1]: Reload requested from client PID 1220 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 01:45:01.212091 systemd[1]: Reloading... Aug 13 01:45:01.480051 zram_generator::config[1302]: No configuration found. Aug 13 01:45:01.550253 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:45:01.654416 systemd[1]: Reloading finished in 441 ms. Aug 13 01:45:01.705394 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 01:45:01.738723 systemd[1]: Starting ensure-sysext.service... Aug 13 01:45:01.744321 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 01:45:01.849192 systemd[1]: Reload requested from client PID 1333 ('systemctl') (unit ensure-sysext.service)... Aug 13 01:45:01.849360 systemd[1]: Reloading... Aug 13 01:45:01.974683 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Aug 13 01:45:01.976562 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Aug 13 01:45:01.979126 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 01:45:01.979401 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 01:45:01.981632 systemd-tmpfiles[1334]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 01:45:01.983299 systemd-tmpfiles[1334]: ACLs are not supported, ignoring. Aug 13 01:45:01.983376 systemd-tmpfiles[1334]: ACLs are not supported, ignoring. Aug 13 01:45:02.114989 zram_generator::config[1361]: No configuration found. Aug 13 01:45:02.156677 systemd-tmpfiles[1334]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 01:45:02.156885 systemd-tmpfiles[1334]: Skipping /boot Aug 13 01:45:02.231850 ldconfig[1216]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 01:45:02.249622 systemd-tmpfiles[1334]: Detected autofs mount point /boot during canonicalization of boot. 
Aug 13 01:45:02.249831 systemd-tmpfiles[1334]: Skipping /boot Aug 13 01:45:02.364265 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:45:02.447228 systemd[1]: Reloading finished in 597 ms. Aug 13 01:45:02.463107 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 01:45:02.478201 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 01:45:02.489146 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 01:45:02.501122 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 01:45:02.504725 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 01:45:02.511308 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 01:45:02.514692 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 01:45:02.519993 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 13 01:45:02.532080 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 01:45:02.542632 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 01:45:02.552110 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:45:02.552345 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 01:45:02.554882 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 01:45:02.558567 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 01:45:02.562583 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 01:45:02.563370 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 01:45:02.563469 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 01:45:02.563553 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:45:02.564726 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 01:45:02.573726 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:45:02.573949 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 01:45:02.574179 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 01:45:02.574267 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Aug 13 01:45:02.574345 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:45:02.581086 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:45:02.581313 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 01:45:02.588746 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 01:45:02.591172 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 01:45:02.591311 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 01:45:02.591467 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:45:02.592911 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 01:45:02.629462 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 01:45:02.636014 systemd[1]: Finished ensure-sysext.service. Aug 13 01:45:02.640048 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 13 01:45:02.641782 systemd-udevd[1413]: Using default interface naming scheme 'v255'. Aug 13 01:45:02.667246 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 01:45:02.667855 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 01:45:02.670282 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 01:45:02.671645 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 01:45:02.675070 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 01:45:02.675342 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 01:45:02.680127 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 01:45:02.680218 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 01:45:02.683162 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 01:45:02.683454 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 01:45:02.699917 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 01:45:02.713140 augenrules[1449]: No rules Aug 13 01:45:02.715601 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 01:45:02.716075 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 01:45:02.720604 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 01:45:02.722009 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 01:45:02.724237 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Aug 13 01:45:02.729620 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 01:45:02.736884 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 01:45:03.050939 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 13 01:45:03.257066 systemd-networkd[1462]: lo: Link UP Aug 13 01:45:03.257084 systemd-networkd[1462]: lo: Gained carrier Aug 13 01:45:03.261187 systemd-networkd[1462]: Enumeration completed Aug 13 01:45:03.261329 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 01:45:03.265478 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Aug 13 01:45:03.273665 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 01:45:03.275509 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 13 01:45:03.277122 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 01:45:03.284201 systemd-networkd[1462]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 01:45:03.284218 systemd-networkd[1462]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 01:45:03.289842 systemd-networkd[1462]: eth0: Link UP Aug 13 01:45:03.290194 systemd-networkd[1462]: eth0: Gained carrier Aug 13 01:45:03.290986 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 01:45:03.290218 systemd-networkd[1462]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 01:45:03.316405 systemd-resolved[1410]: Positive Trust Anchors: Aug 13 01:45:03.316433 systemd-resolved[1410]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 01:45:03.316469 systemd-resolved[1410]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 01:45:03.325025 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input4 Aug 13 01:45:03.328104 systemd-resolved[1410]: Defaulting to hostname 'linux'. Aug 13 01:45:03.331055 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Aug 13 01:45:03.333945 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 01:45:03.335665 systemd[1]: Reached target network.target - Network. Aug 13 01:45:03.336296 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 01:45:03.338048 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 01:45:03.338771 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 13 01:45:03.340073 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 01:45:03.340728 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. 
Aug 13 01:45:03.342573 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 13 01:45:03.343523 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 01:45:03.345340 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 01:45:03.346085 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 01:45:03.346128 systemd[1]: Reached target paths.target - Path Units. Aug 13 01:45:03.348033 systemd[1]: Reached target timers.target - Timer Units. Aug 13 01:45:03.350156 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 13 01:45:03.353556 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 01:45:03.359285 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Aug 13 01:45:03.361396 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Aug 13 01:45:03.363468 systemd[1]: Reached target ssh-access.target - SSH Access Available. Aug 13 01:45:03.372998 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 13 01:45:03.375665 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Aug 13 01:45:03.377773 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 01:45:03.379968 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 01:45:03.380987 kernel: ACPI: button: Power Button [PWRF] Aug 13 01:45:03.381801 systemd[1]: Reached target basic.target - Basic System. Aug 13 01:45:03.384444 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Aug 13 01:45:03.384486 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 13 01:45:03.385738 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 01:45:03.392933 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 13 01:45:03.398836 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Aug 13 01:45:03.399131 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Aug 13 01:45:03.398224 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 13 01:45:03.403208 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 13 01:45:03.407078 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 01:45:03.409658 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 01:45:03.412022 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 01:45:03.414742 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Aug 13 01:45:03.428201 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 01:45:03.435293 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 13 01:45:03.454630 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 13 01:45:03.474020 jq[1512]: false Aug 13 01:45:03.542169 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 01:45:03.553658 systemd[1]: Starting systemd-logind.service - User Login Management... 
Aug 13 01:45:03.557207 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 01:45:03.557832 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 01:45:03.562178 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 01:45:03.567907 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 01:45:03.577906 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 01:45:03.578845 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 01:45:03.580142 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 01:45:03.622508 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 01:45:03.622845 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 13 01:45:03.675362 systemd[1]: motdgen.service: Deactivated successfully. Aug 13 01:45:03.675018 oslogin_cache_refresh[1514]: Refreshing passwd entry cache Aug 13 01:45:03.677409 jq[1539]: true Aug 13 01:45:03.677622 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Refreshing passwd entry cache Aug 13 01:45:03.675644 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 13 01:45:03.679989 extend-filesystems[1513]: Found /dev/sda6 Aug 13 01:45:03.683564 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Failure getting users, quitting Aug 13 01:45:03.683564 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Aug 13 01:45:03.683564 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Refreshing group entry cache Aug 13 01:45:03.681379 oslogin_cache_refresh[1514]: Failure getting users, quitting Aug 13 01:45:03.681398 oslogin_cache_refresh[1514]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Aug 13 01:45:03.681445 oslogin_cache_refresh[1514]: Refreshing group entry cache Aug 13 01:45:03.686131 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Failure getting groups, quitting Aug 13 01:45:03.686131 google_oslogin_nss_cache[1514]: oslogin_cache_refresh[1514]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 13 01:45:03.684042 oslogin_cache_refresh[1514]: Failure getting groups, quitting Aug 13 01:45:03.684054 oslogin_cache_refresh[1514]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 13 01:45:03.694681 update_engine[1537]: I20250813 01:45:03.694599 1537 main.cc:92] Flatcar Update Engine starting Aug 13 01:45:03.700355 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Aug 13 01:45:03.703338 extend-filesystems[1513]: Found /dev/sda9 Aug 13 01:45:03.705365 (ntainerd)[1556]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 01:45:03.705454 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Aug 13 01:45:03.733193 dbus-daemon[1509]: [system] SELinux support is enabled Aug 13 01:45:03.733527 tar[1542]: linux-amd64/LICENSE Aug 13 01:45:03.733527 tar[1542]: linux-amd64/helm Aug 13 01:45:03.734977 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Aug 13 01:45:03.740057 extend-filesystems[1513]: Checking size of /dev/sda9 Aug 13 01:45:03.739434 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 01:45:03.739467 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 01:45:03.742158 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 01:45:03.742178 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 01:45:03.835062 systemd[1]: Started update-engine.service - Update Engine. Aug 13 01:45:03.839375 update_engine[1537]: I20250813 01:45:03.839022 1537 update_check_scheduler.cc:74] Next update check in 5m33s Aug 13 01:45:03.875860 jq[1559]: true Aug 13 01:45:03.946109 extend-filesystems[1513]: Resized partition /dev/sda9 Aug 13 01:45:03.997097 extend-filesystems[1571]: resize2fs 1.47.2 (1-Jan-2025) Aug 13 01:45:03.989758 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 13 01:45:03.997943 coreos-metadata[1508]: Aug 13 01:45:03.968 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Aug 13 01:45:04.058455 sshd_keygen[1558]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 01:45:04.061980 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 555003 blocks Aug 13 01:45:04.068987 kernel: EXT4-fs (sda9): resized filesystem to 555003 Aug 13 01:45:04.101750 extend-filesystems[1571]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Aug 13 01:45:04.101750 extend-filesystems[1571]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 13 01:45:04.101750 extend-filesystems[1571]: The filesystem on /dev/sda9 is now 555003 (4k) blocks long. Aug 13 01:45:04.104230 extend-filesystems[1513]: Resized filesystem in /dev/sda9 Aug 13 01:45:04.106095 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 01:45:04.106441 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 01:45:04.329297 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 01:45:04.346386 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 01:45:04.351095 bash[1596]: Updated "/home/core/.ssh/authorized_keys" Aug 13 01:45:04.357688 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Aug 13 01:45:04.369845 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 01:45:04.398425 systemd[1]: Starting sshkeys.service... Aug 13 01:45:04.409224 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 13 01:45:04.431974 kernel: EDAC MC: Ver: 3.0.0 Aug 13 01:45:04.447175 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 01:45:04.480553 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 01:45:04.480904 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 01:45:04.485238 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 01:45:04.593684 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. 
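[Editor's note] The extend-filesystems run above grows the root filesystem on /dev/sda9 on-line from 553472 to 555003 blocks at the logged 4 KiB block size. A quick Python check of what that resize amounts to, using only the numbers printed by resize2fs and the EXT4 messages:

```python
# Size delta implied by the resize2fs output above: 553472 -> 555003 blocks
# at 4 KiB per block. All values are copied from the journal entries.
BLOCK_SIZE = 4096          # "(4k) blocks" in the EXT4 messages
OLD_BLOCKS = 553_472
NEW_BLOCKS = 555_003

def blocks_to_mib(blocks: int, block_size: int = BLOCK_SIZE) -> float:
    return blocks * block_size / 2**20

print(f"old size : {blocks_to_mib(OLD_BLOCKS):8.1f} MiB")
print(f"new size : {blocks_to_mib(NEW_BLOCKS):8.1f} MiB")
print(f"grown by : {blocks_to_mib(NEW_BLOCKS - OLD_BLOCKS):8.1f} MiB "
      f"({NEW_BLOCKS - OLD_BLOCKS} blocks)")
```

The growth is small (about 6 MiB) because the partition was only slightly larger than the filesystem shipped in the image; the point of the unit is simply to make the two match on first boot.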
Aug 13 01:45:04.682202 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Aug 13 01:45:04.703655 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 01:45:04.707361 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 01:45:04.712279 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 13 01:45:04.713133 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 01:45:04.723121 locksmithd[1568]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 01:45:04.887869 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 01:45:04.983079 systemd-networkd[1462]: eth0: DHCPv4 address 172.232.7.67/24, gateway 172.232.7.1 acquired from 23.215.118.19 Aug 13 01:45:04.983295 dbus-daemon[1509]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1462 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Aug 13 01:45:04.990020 systemd-timesyncd[1430]: Network configuration changed, trying to establish connection. Aug 13 01:45:04.997111 coreos-metadata[1508]: Aug 13 01:45:04.996 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Aug 13 01:45:04.999380 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Aug 13 01:45:05.046315 coreos-metadata[1617]: Aug 13 01:45:05.046 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Aug 13 01:45:05.103048 systemd-networkd[1462]: eth0: Gained IPv6LL Aug 13 01:45:05.109828 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 01:45:05.111628 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 01:45:05.137066 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:45:05.140121 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 01:45:05.164575 coreos-metadata[1508]: Aug 13 01:45:05.164 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Aug 13 01:45:05.198133 coreos-metadata[1617]: Aug 13 01:45:05.197 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Aug 13 01:45:05.739732 systemd-resolved[1410]: Clock change detected. Flushing caches. Aug 13 01:45:05.740239 systemd-timesyncd[1430]: Contacted time server 198.71.50.75:123 (0.flatcar.pool.ntp.org). Aug 13 01:45:05.740304 systemd-timesyncd[1430]: Initial clock synchronization to Wed 2025-08-13 01:45:05.739655 UTC. Aug 13 01:45:05.870167 coreos-metadata[1617]: Aug 13 01:45:05.869 INFO Fetch successful Aug 13 01:45:05.892883 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 01:45:05.902895 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 01:45:05.915792 containerd[1556]: time="2025-08-13T01:45:05Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Aug 13 01:45:05.922714 update-ssh-keys[1650]: Updated "/home/core/.ssh/authorized_keys" Aug 13 01:45:05.925390 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 13 01:45:05.932548 systemd[1]: Finished sshkeys.service. 
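[Editor's note] The two coreos-metadata agents above first PUT to http://169.254.169.254/v1/token and then fetch /v1/instance and /v1/ssh-keys, retrying ("Attempt #1", "Attempt #2") until the link-local metadata service answers. A rough Python sketch of that token-then-fetch pattern follows; the header names and expiry value are assumptions about the Akamai/Linode metadata API, not values taken from this log.

```python
# Sketch of the PUT-token-then-GET flow the coreos-metadata entries describe.
# The header names ("Metadata-Token-Expiry-Seconds", "Metadata-Token") are an
# assumption about the Linode/Akamai metadata service, not read from the log.
import time
import urllib.request

METADATA = "http://169.254.169.254/v1"

def get_token(expiry_seconds: int = 3600) -> str:
    req = urllib.request.Request(
        f"{METADATA}/token",
        method="PUT",
        headers={"Metadata-Token-Expiry-Seconds": str(expiry_seconds)},
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read().decode().strip()

def fetch(path: str, token: str) -> str:
    req = urllib.request.Request(
        f"{METADATA}/{path}", headers={"Metadata-Token": token}
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    # Retry like the agent's "Attempt #N" lines: the metadata service may not
    # be reachable until networking has fully come up.
    for attempt in range(1, 4):
        try:
            token = get_token()
            print(fetch("instance", token))
            print(fetch("ssh-keys", token))
            break
        except OSError as err:
            print(f"attempt #{attempt} failed: {err}")
            time.sleep(2)
```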
Aug 13 01:45:05.939794 containerd[1556]: time="2025-08-13T01:45:05.938069550Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Aug 13 01:45:05.942141 coreos-metadata[1508]: Aug 13 01:45:05.941 INFO Fetch successful Aug 13 01:45:05.942141 coreos-metadata[1508]: Aug 13 01:45:05.942 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Aug 13 01:45:06.013096 containerd[1556]: time="2025-08-13T01:45:06.012986620Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.51µs" Aug 13 01:45:06.013737 containerd[1556]: time="2025-08-13T01:45:06.013200240Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Aug 13 01:45:06.013737 containerd[1556]: time="2025-08-13T01:45:06.013228830Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Aug 13 01:45:06.013737 containerd[1556]: time="2025-08-13T01:45:06.013471080Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Aug 13 01:45:06.013737 containerd[1556]: time="2025-08-13T01:45:06.013488270Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Aug 13 01:45:06.013737 containerd[1556]: time="2025-08-13T01:45:06.013523780Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 13 01:45:06.013737 containerd[1556]: time="2025-08-13T01:45:06.013603790Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 13 01:45:06.013737 containerd[1556]: time="2025-08-13T01:45:06.013618170Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 13 01:45:06.034590 containerd[1556]: time="2025-08-13T01:45:06.033256930Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 13 01:45:06.034590 containerd[1556]: time="2025-08-13T01:45:06.033298690Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 13 01:45:06.034590 containerd[1556]: time="2025-08-13T01:45:06.033315520Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 13 01:45:06.034590 containerd[1556]: time="2025-08-13T01:45:06.033345060Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Aug 13 01:45:06.034590 containerd[1556]: time="2025-08-13T01:45:06.033557560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Aug 13 01:45:06.034590 containerd[1556]: time="2025-08-13T01:45:06.033872980Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 13 01:45:06.034590 containerd[1556]: time="2025-08-13T01:45:06.033922680Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 13 
01:45:06.034590 containerd[1556]: time="2025-08-13T01:45:06.033935560Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Aug 13 01:45:06.034590 containerd[1556]: time="2025-08-13T01:45:06.033980070Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Aug 13 01:45:06.034590 containerd[1556]: time="2025-08-13T01:45:06.034205960Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Aug 13 01:45:06.034590 containerd[1556]: time="2025-08-13T01:45:06.034281310Z" level=info msg="metadata content store policy set" policy=shared Aug 13 01:45:06.056482 containerd[1556]: time="2025-08-13T01:45:06.054452020Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Aug 13 01:45:06.056482 containerd[1556]: time="2025-08-13T01:45:06.054916980Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Aug 13 01:45:06.056482 containerd[1556]: time="2025-08-13T01:45:06.054958450Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Aug 13 01:45:06.056482 containerd[1556]: time="2025-08-13T01:45:06.054974280Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Aug 13 01:45:06.056482 containerd[1556]: time="2025-08-13T01:45:06.054992120Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Aug 13 01:45:06.056482 containerd[1556]: time="2025-08-13T01:45:06.055006860Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Aug 13 01:45:06.056482 containerd[1556]: time="2025-08-13T01:45:06.055021460Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Aug 13 01:45:06.056482 containerd[1556]: time="2025-08-13T01:45:06.055033540Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Aug 13 01:45:06.056482 containerd[1556]: time="2025-08-13T01:45:06.055048310Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Aug 13 01:45:06.056482 containerd[1556]: time="2025-08-13T01:45:06.055063620Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Aug 13 01:45:06.056482 containerd[1556]: time="2025-08-13T01:45:06.055090400Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Aug 13 01:45:06.056482 containerd[1556]: time="2025-08-13T01:45:06.055107850Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Aug 13 01:45:06.056482 containerd[1556]: time="2025-08-13T01:45:06.055279030Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Aug 13 01:45:06.056482 containerd[1556]: time="2025-08-13T01:45:06.055309040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Aug 13 01:45:06.056820 containerd[1556]: time="2025-08-13T01:45:06.055327660Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Aug 13 01:45:06.056820 containerd[1556]: time="2025-08-13T01:45:06.055341890Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Aug 13 
01:45:06.056820 containerd[1556]: time="2025-08-13T01:45:06.055355380Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Aug 13 01:45:06.056820 containerd[1556]: time="2025-08-13T01:45:06.055366830Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Aug 13 01:45:06.056820 containerd[1556]: time="2025-08-13T01:45:06.055382660Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Aug 13 01:45:06.056820 containerd[1556]: time="2025-08-13T01:45:06.055409200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Aug 13 01:45:06.056820 containerd[1556]: time="2025-08-13T01:45:06.055432440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Aug 13 01:45:06.056820 containerd[1556]: time="2025-08-13T01:45:06.055446070Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Aug 13 01:45:06.056820 containerd[1556]: time="2025-08-13T01:45:06.055463110Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Aug 13 01:45:06.056820 containerd[1556]: time="2025-08-13T01:45:06.055550870Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Aug 13 01:45:06.056820 containerd[1556]: time="2025-08-13T01:45:06.055572240Z" level=info msg="Start snapshots syncer" Aug 13 01:45:06.056820 containerd[1556]: time="2025-08-13T01:45:06.055596290Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Aug 13 01:45:06.057063 containerd[1556]: time="2025-08-13T01:45:06.055980060Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Aug 13 01:45:06.057063 
containerd[1556]: time="2025-08-13T01:45:06.056030780Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Aug 13 01:45:06.120609 containerd[1556]: time="2025-08-13T01:45:06.112011170Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Aug 13 01:45:06.120609 containerd[1556]: time="2025-08-13T01:45:06.119400780Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Aug 13 01:45:06.120609 containerd[1556]: time="2025-08-13T01:45:06.119457150Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Aug 13 01:45:06.120609 containerd[1556]: time="2025-08-13T01:45:06.119471500Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Aug 13 01:45:06.120609 containerd[1556]: time="2025-08-13T01:45:06.119482220Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Aug 13 01:45:06.120609 containerd[1556]: time="2025-08-13T01:45:06.119494830Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Aug 13 01:45:06.120609 containerd[1556]: time="2025-08-13T01:45:06.119505490Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Aug 13 01:45:06.120609 containerd[1556]: time="2025-08-13T01:45:06.119537040Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Aug 13 01:45:06.120609 containerd[1556]: time="2025-08-13T01:45:06.119569180Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Aug 13 01:45:06.120609 containerd[1556]: time="2025-08-13T01:45:06.119579930Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Aug 13 01:45:06.120609 containerd[1556]: time="2025-08-13T01:45:06.119609910Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Aug 13 01:45:06.120609 containerd[1556]: time="2025-08-13T01:45:06.119646920Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 13 01:45:06.120609 containerd[1556]: time="2025-08-13T01:45:06.119661620Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 13 01:45:06.120609 containerd[1556]: time="2025-08-13T01:45:06.119687780Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 13 01:45:06.120985 containerd[1556]: time="2025-08-13T01:45:06.119697530Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 13 01:45:06.120985 containerd[1556]: time="2025-08-13T01:45:06.119705540Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Aug 13 01:45:06.120985 containerd[1556]: time="2025-08-13T01:45:06.119714590Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Aug 13 01:45:06.120985 containerd[1556]: time="2025-08-13T01:45:06.119726210Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Aug 13 01:45:06.120985 
containerd[1556]: time="2025-08-13T01:45:06.119776510Z" level=info msg="runtime interface created" Aug 13 01:45:06.120985 containerd[1556]: time="2025-08-13T01:45:06.119783910Z" level=info msg="created NRI interface" Aug 13 01:45:06.120985 containerd[1556]: time="2025-08-13T01:45:06.119792710Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Aug 13 01:45:06.120985 containerd[1556]: time="2025-08-13T01:45:06.119825430Z" level=info msg="Connect containerd service" Aug 13 01:45:06.120985 containerd[1556]: time="2025-08-13T01:45:06.119892750Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 01:45:06.122702 containerd[1556]: time="2025-08-13T01:45:06.122037130Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 01:45:06.263352 coreos-metadata[1508]: Aug 13 01:45:06.261 INFO Fetch successful Aug 13 01:45:06.440447 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 01:45:06.480476 systemd[1]: Started sshd@0-172.232.7.67:22-147.75.109.163:48482.service - OpenSSH per-connection server daemon (147.75.109.163:48482). Aug 13 01:45:06.520883 systemd-logind[1532]: Watching system buttons on /dev/input/event2 (Power Button) Aug 13 01:45:06.520936 systemd-logind[1532]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 01:45:06.521366 systemd-logind[1532]: New seat seat0. Aug 13 01:45:06.528194 systemd[1]: Started systemd-logind.service - User Login Management. Aug 13 01:45:06.741438 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Aug 13 01:45:06.775253 dbus-daemon[1509]: [system] Successfully activated service 'org.freedesktop.hostname1' Aug 13 01:45:06.776168 dbus-daemon[1509]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.8' (uid=0 pid=1631 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Aug 13 01:45:06.782305 systemd[1]: Starting polkit.service - Authorization Manager... Aug 13 01:45:07.048845 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 13 01:45:07.067544 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 01:45:07.083407 containerd[1556]: time="2025-08-13T01:45:07.083363270Z" level=info msg="Start subscribing containerd event" Aug 13 01:45:07.083944 containerd[1556]: time="2025-08-13T01:45:07.083899630Z" level=info msg="Start recovering state" Aug 13 01:45:07.084121 containerd[1556]: time="2025-08-13T01:45:07.084104040Z" level=info msg="Start event monitor" Aug 13 01:45:07.084184 containerd[1556]: time="2025-08-13T01:45:07.084170280Z" level=info msg="Start cni network conf syncer for default" Aug 13 01:45:07.084324 containerd[1556]: time="2025-08-13T01:45:07.084251470Z" level=info msg="Start streaming server" Aug 13 01:45:07.084390 containerd[1556]: time="2025-08-13T01:45:07.084376910Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Aug 13 01:45:07.084439 containerd[1556]: time="2025-08-13T01:45:07.084428220Z" level=info msg="runtime interface starting up..." Aug 13 01:45:07.084493 containerd[1556]: time="2025-08-13T01:45:07.084482060Z" level=info msg="starting plugins..." 
Aug 13 01:45:07.084568 containerd[1556]: time="2025-08-13T01:45:07.084535280Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Aug 13 01:45:07.085300 containerd[1556]: time="2025-08-13T01:45:07.085234730Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 01:45:07.085611 containerd[1556]: time="2025-08-13T01:45:07.085591850Z" level=info msg=serving... address=/run/containerd/containerd.sock Aug 13 01:45:07.086824 containerd[1556]: time="2025-08-13T01:45:07.086805470Z" level=info msg="containerd successfully booted in 1.171844s" Aug 13 01:45:07.087568 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 01:45:07.247142 polkitd[1679]: Started polkitd version 126 Aug 13 01:45:07.256513 polkitd[1679]: Loading rules from directory /etc/polkit-1/rules.d Aug 13 01:45:07.257589 polkitd[1679]: Loading rules from directory /run/polkit-1/rules.d Aug 13 01:45:07.257710 polkitd[1679]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Aug 13 01:45:07.258779 polkitd[1679]: Loading rules from directory /usr/local/share/polkit-1/rules.d Aug 13 01:45:07.258872 polkitd[1679]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Aug 13 01:45:07.258979 polkitd[1679]: Loading rules from directory /usr/share/polkit-1/rules.d Aug 13 01:45:07.265169 polkitd[1679]: Finished loading, compiling and executing 2 rules Aug 13 01:45:07.276141 sshd[1674]: Accepted publickey for core from 147.75.109.163 port 48482 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:45:07.265533 systemd[1]: Started polkit.service - Authorization Manager. Aug 13 01:45:07.268335 dbus-daemon[1509]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Aug 13 01:45:07.271408 polkitd[1679]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Aug 13 01:45:07.273166 sshd-session[1674]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:45:07.285455 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 01:45:07.303085 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 01:45:07.362444 systemd-logind[1532]: New session 1 of user core. Aug 13 01:45:07.366570 systemd-hostnamed[1631]: Hostname set to <172-232-7-67> (transient) Aug 13 01:45:07.378227 systemd-resolved[1410]: System hostname changed to '172-232-7-67'. Aug 13 01:45:07.388772 tar[1542]: linux-amd64/README.md Aug 13 01:45:07.446488 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 01:45:07.463278 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 01:45:07.469249 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 01:45:07.562568 (systemd)[1707]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:45:07.567401 systemd-logind[1532]: New session c1 of user core. Aug 13 01:45:07.704176 systemd[1707]: Queued start job for default target default.target. Aug 13 01:45:07.714531 systemd[1707]: Created slice app.slice - User Application Slice. Aug 13 01:45:07.714798 systemd[1707]: Reached target paths.target - Paths. Aug 13 01:45:07.714854 systemd[1707]: Reached target timers.target - Timers. Aug 13 01:45:07.717873 systemd[1707]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Aug 13 01:45:07.730453 systemd[1707]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 01:45:07.730592 systemd[1707]: Reached target sockets.target - Sockets. Aug 13 01:45:07.730640 systemd[1707]: Reached target basic.target - Basic System. Aug 13 01:45:07.730686 systemd[1707]: Reached target default.target - Main User Target. Aug 13 01:45:07.730721 systemd[1707]: Startup finished in 152ms. Aug 13 01:45:07.731208 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 01:45:07.738887 systemd[1]: Started session-1.scope - Session 1 of User core. Aug 13 01:45:07.997138 systemd[1]: Started sshd@1-172.232.7.67:22-147.75.109.163:48490.service - OpenSSH per-connection server daemon (147.75.109.163:48490). Aug 13 01:45:08.361398 sshd[1718]: Accepted publickey for core from 147.75.109.163 port 48490 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:45:08.375973 sshd-session[1718]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:45:08.394788 systemd-logind[1532]: New session 2 of user core. Aug 13 01:45:08.401962 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 01:45:08.695608 sshd[1720]: Connection closed by 147.75.109.163 port 48490 Aug 13 01:45:08.696334 sshd-session[1718]: pam_unix(sshd:session): session closed for user core Aug 13 01:45:08.701177 systemd-logind[1532]: Session 2 logged out. Waiting for processes to exit. Aug 13 01:45:08.702129 systemd[1]: sshd@1-172.232.7.67:22-147.75.109.163:48490.service: Deactivated successfully. Aug 13 01:45:08.704568 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 01:45:08.708276 systemd-logind[1532]: Removed session 2. Aug 13 01:45:08.805024 systemd[1]: Started sshd@2-172.232.7.67:22-147.75.109.163:45176.service - OpenSSH per-connection server daemon (147.75.109.163:45176). Aug 13 01:45:09.217894 sshd[1726]: Accepted publickey for core from 147.75.109.163 port 45176 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:45:09.214344 sshd-session[1726]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:45:09.222792 systemd-logind[1532]: New session 3 of user core. Aug 13 01:45:09.252084 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 01:45:09.477807 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:45:09.482854 sshd[1728]: Connection closed by 147.75.109.163 port 45176 Aug 13 01:45:09.483604 sshd-session[1726]: pam_unix(sshd:session): session closed for user core Aug 13 01:45:09.484707 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 01:45:09.497519 (kubelet)[1734]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 01:45:09.532360 systemd[1]: Startup finished in 5.016s (kernel) + 11.264s (initrd) + 10.142s (userspace) = 26.423s. Aug 13 01:45:09.537420 systemd[1]: sshd@2-172.232.7.67:22-147.75.109.163:45176.service: Deactivated successfully. Aug 13 01:45:09.557265 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 01:45:09.565668 systemd-logind[1532]: Session 3 logged out. Waiting for processes to exit. Aug 13 01:45:09.569378 systemd-logind[1532]: Removed session 3. 
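[Editor's note] The recurring "Accepted publickey for core ... RSA SHA256:cBmf..." entries identify the client key by its OpenSSH-style fingerprint: the unpadded base64 encoding of the SHA-256 digest of the raw key blob. A small Python sketch that reproduces that fingerprint format from an authorized_keys line; the sample key below is a placeholder, not the key used on this host.

```python
# Compute an OpenSSH-style "SHA256:..." fingerprint from an authorized_keys
# entry: base64-decode the key blob, SHA-256 it, and base64-encode the digest
# without '=' padding. The example key is a placeholder, not the host's key.
import base64
import hashlib

def ssh_fingerprint(authorized_keys_line: str) -> str:
    # authorized_keys format: "<key-type> <base64-blob> [comment]"
    blob_b64 = authorized_keys_line.split()[1]
    digest = hashlib.sha256(base64.b64decode(blob_b64)).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

if __name__ == "__main__":
    example = ("ssh-ed25519 "
               "AAAAC3NzaC1lZDI1NTE5AAAAIKkG3P1u0tBxP3pDl8wHvYwFZ0W0z0H5kX3M9mXz0a7C "
               "core@example")
    print(ssh_fingerprint(example))
```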
Aug 13 01:45:10.754871 kubelet[1734]: E0813 01:45:10.754804 1734 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:45:10.759096 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:45:10.759368 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 01:45:10.760116 systemd[1]: kubelet.service: Consumed 3.222s CPU time, 266.1M memory peak. Aug 13 01:45:19.547242 systemd[1]: Started sshd@3-172.232.7.67:22-147.75.109.163:60728.service - OpenSSH per-connection server daemon (147.75.109.163:60728). Aug 13 01:45:19.903921 sshd[1750]: Accepted publickey for core from 147.75.109.163 port 60728 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:45:19.905712 sshd-session[1750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:45:19.911656 systemd-logind[1532]: New session 4 of user core. Aug 13 01:45:19.922927 systemd[1]: Started session-4.scope - Session 4 of User core. Aug 13 01:45:20.156197 sshd[1752]: Connection closed by 147.75.109.163 port 60728 Aug 13 01:45:20.156953 sshd-session[1750]: pam_unix(sshd:session): session closed for user core Aug 13 01:45:20.160843 systemd-logind[1532]: Session 4 logged out. Waiting for processes to exit. Aug 13 01:45:20.161353 systemd[1]: sshd@3-172.232.7.67:22-147.75.109.163:60728.service: Deactivated successfully. Aug 13 01:45:20.164160 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 01:45:20.166027 systemd-logind[1532]: Removed session 4. Aug 13 01:45:20.217426 systemd[1]: Started sshd@4-172.232.7.67:22-147.75.109.163:60734.service - OpenSSH per-connection server daemon (147.75.109.163:60734). Aug 13 01:45:20.570974 sshd[1758]: Accepted publickey for core from 147.75.109.163 port 60734 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:45:20.572472 sshd-session[1758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:45:20.578532 systemd-logind[1532]: New session 5 of user core. Aug 13 01:45:20.587908 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 01:45:20.814374 sshd[1760]: Connection closed by 147.75.109.163 port 60734 Aug 13 01:45:20.815163 sshd-session[1758]: pam_unix(sshd:session): session closed for user core Aug 13 01:45:20.819432 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 01:45:20.820164 systemd[1]: sshd@4-172.232.7.67:22-147.75.109.163:60734.service: Deactivated successfully. Aug 13 01:45:20.822480 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 01:45:20.824136 systemd-logind[1532]: Session 5 logged out. Waiting for processes to exit. Aug 13 01:45:20.826715 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:45:20.828392 systemd-logind[1532]: Removed session 5. Aug 13 01:45:20.879220 systemd[1]: Started sshd@5-172.232.7.67:22-147.75.109.163:60736.service - OpenSSH per-connection server daemon (147.75.109.163:60736). Aug 13 01:45:21.017252 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
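[Editor's note] The kubelet exits with status 1 above because /var/lib/kubelet/config.yaml does not exist yet; that file is normally written during cluster bootstrap (for example by kubeadm), so the failures are expected on a node that has not joined a cluster. systemd's restart policy then reschedules the unit, which is why "Scheduled restart job, restart counter is at N" lines recur roughly every ten seconds in the entries that follow. A hedged pre-flight sketch in the same spirit; the path comes from the log, while the retry count and 10 s interval are assumptions inferred from the timestamps, not read from the unit file.

```python
# Pre-flight check mirroring the failure in the kubelet entries above: the
# config file simply is not there yet, so the process exits non-zero and the
# service manager retries. Retry count and interval are assumptions, not the
# unit's actual Restart/RestartSec settings.
import pathlib
import sys
import time

CONFIG = pathlib.Path("/var/lib/kubelet/config.yaml")

def wait_for_config(retries: int = 3, interval: float = 10.0) -> bool:
    for attempt in range(1, retries + 1):
        if CONFIG.is_file():
            print(f"found {CONFIG} on attempt {attempt}")
            return True
        print(f"attempt {attempt}: {CONFIG} missing, retrying in {interval}s")
        time.sleep(interval)
    return False

if __name__ == "__main__":
    # Exit 1 like the kubelet does, letting systemd schedule another restart.
    sys.exit(0 if wait_for_config() else 1)
```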
Aug 13 01:45:21.031239 (kubelet)[1776]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 01:45:21.164195 kubelet[1776]: E0813 01:45:21.164049 1776 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:45:21.169644 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:45:21.169876 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 01:45:21.170279 systemd[1]: kubelet.service: Consumed 333ms CPU time, 109.1M memory peak. Aug 13 01:45:21.229315 sshd[1769]: Accepted publickey for core from 147.75.109.163 port 60736 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:45:21.230970 sshd-session[1769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:45:21.236646 systemd-logind[1532]: New session 6 of user core. Aug 13 01:45:21.242879 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 13 01:45:21.478799 sshd[1784]: Connection closed by 147.75.109.163 port 60736 Aug 13 01:45:21.479433 sshd-session[1769]: pam_unix(sshd:session): session closed for user core Aug 13 01:45:21.484111 systemd-logind[1532]: Session 6 logged out. Waiting for processes to exit. Aug 13 01:45:21.485006 systemd[1]: sshd@5-172.232.7.67:22-147.75.109.163:60736.service: Deactivated successfully. Aug 13 01:45:21.487180 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 01:45:21.489247 systemd-logind[1532]: Removed session 6. Aug 13 01:45:21.537989 systemd[1]: Started sshd@6-172.232.7.67:22-147.75.109.163:60738.service - OpenSSH per-connection server daemon (147.75.109.163:60738). Aug 13 01:45:21.877143 sshd[1790]: Accepted publickey for core from 147.75.109.163 port 60738 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:45:21.879006 sshd-session[1790]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:45:21.885390 systemd-logind[1532]: New session 7 of user core. Aug 13 01:45:21.891004 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 01:45:22.083925 sudo[1793]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 01:45:22.084351 sudo[1793]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 01:45:22.100012 sudo[1793]: pam_unix(sudo:session): session closed for user root Aug 13 01:45:22.150432 sshd[1792]: Connection closed by 147.75.109.163 port 60738 Aug 13 01:45:22.151737 sshd-session[1790]: pam_unix(sshd:session): session closed for user core Aug 13 01:45:22.157399 systemd[1]: sshd@6-172.232.7.67:22-147.75.109.163:60738.service: Deactivated successfully. Aug 13 01:45:22.159583 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 01:45:22.160419 systemd-logind[1532]: Session 7 logged out. Waiting for processes to exit. Aug 13 01:45:22.162498 systemd-logind[1532]: Removed session 7. Aug 13 01:45:22.224500 systemd[1]: Started sshd@7-172.232.7.67:22-147.75.109.163:60750.service - OpenSSH per-connection server daemon (147.75.109.163:60750). 
Aug 13 01:45:22.566963 sshd[1799]: Accepted publickey for core from 147.75.109.163 port 60750 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:45:22.568695 sshd-session[1799]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:45:22.574470 systemd-logind[1532]: New session 8 of user core. Aug 13 01:45:22.579876 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 13 01:45:22.762960 sudo[1803]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 01:45:22.763293 sudo[1803]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 01:45:22.768674 sudo[1803]: pam_unix(sudo:session): session closed for user root Aug 13 01:45:22.775898 sudo[1802]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Aug 13 01:45:22.776280 sudo[1802]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 01:45:22.787875 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 01:45:22.832250 augenrules[1825]: No rules Aug 13 01:45:22.833993 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 01:45:22.834291 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 01:45:22.835985 sudo[1802]: pam_unix(sudo:session): session closed for user root Aug 13 01:45:22.886316 sshd[1801]: Connection closed by 147.75.109.163 port 60750 Aug 13 01:45:22.886980 sshd-session[1799]: pam_unix(sshd:session): session closed for user core Aug 13 01:45:22.891538 systemd[1]: sshd@7-172.232.7.67:22-147.75.109.163:60750.service: Deactivated successfully. Aug 13 01:45:22.893560 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 01:45:22.894678 systemd-logind[1532]: Session 8 logged out. Waiting for processes to exit. Aug 13 01:45:22.896545 systemd-logind[1532]: Removed session 8. Aug 13 01:45:22.948214 systemd[1]: Started sshd@8-172.232.7.67:22-147.75.109.163:60760.service - OpenSSH per-connection server daemon (147.75.109.163:60760). Aug 13 01:45:23.298342 sshd[1834]: Accepted publickey for core from 147.75.109.163 port 60760 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:45:23.300040 sshd-session[1834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:45:23.304807 systemd-logind[1532]: New session 9 of user core. Aug 13 01:45:23.311894 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 01:45:23.495287 sudo[1837]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 01:45:23.495768 sudo[1837]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 01:45:24.559791 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 13 01:45:24.602473 (dockerd)[1855]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 01:45:25.665643 dockerd[1855]: time="2025-08-13T01:45:25.665494630Z" level=info msg="Starting up" Aug 13 01:45:25.667242 dockerd[1855]: time="2025-08-13T01:45:25.667213770Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Aug 13 01:45:25.767154 dockerd[1855]: time="2025-08-13T01:45:25.767106340Z" level=info msg="Loading containers: start." 
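[Editor's note] With docker.service starting up here, the daemon will serve the Engine API on the unix socket that the earlier docker.socket unit exposes. As a hedged illustration of talking to that API, a tiny Python client over /run/docker.sock using the stable GET /version endpoint; this is only a sketch for inspection, not something any unit in this log actually runs.

```python
# Tiny client for the Docker Engine API over the unix socket exposed by the
# docker.socket unit (/run/docker.sock). GET /version is a stable Engine API
# endpoint; this is an illustrative sketch, not part of the boot sequence.
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """http.client connection that dials an AF_UNIX socket instead of TCP."""

    def __init__(self, path: str):
        super().__init__("localhost")
        self._path = path

    def connect(self) -> None:
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self._path)
        self.sock = sock

if __name__ == "__main__":
    conn = UnixHTTPConnection("/run/docker.sock")
    conn.request("GET", "/version")
    body = json.loads(conn.getresponse().read())
    print(body.get("Version"), body.get("ApiVersion"))
```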
Aug 13 01:45:25.780793 kernel: Initializing XFRM netlink socket Aug 13 01:45:26.103313 systemd-networkd[1462]: docker0: Link UP Aug 13 01:45:26.107845 dockerd[1855]: time="2025-08-13T01:45:26.107805230Z" level=info msg="Loading containers: done." Aug 13 01:45:26.139810 dockerd[1855]: time="2025-08-13T01:45:26.139187610Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 01:45:26.139810 dockerd[1855]: time="2025-08-13T01:45:26.139301230Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Aug 13 01:45:26.139810 dockerd[1855]: time="2025-08-13T01:45:26.139459160Z" level=info msg="Initializing buildkit" Aug 13 01:45:26.139629 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1325000421-merged.mount: Deactivated successfully. Aug 13 01:45:26.178786 dockerd[1855]: time="2025-08-13T01:45:26.178673710Z" level=info msg="Completed buildkit initialization" Aug 13 01:45:26.183288 dockerd[1855]: time="2025-08-13T01:45:26.182987430Z" level=info msg="Daemon has completed initialization" Aug 13 01:45:26.183288 dockerd[1855]: time="2025-08-13T01:45:26.183082790Z" level=info msg="API listen on /run/docker.sock" Aug 13 01:45:26.183180 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 01:45:27.625353 containerd[1556]: time="2025-08-13T01:45:27.625267110Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\"" Aug 13 01:45:28.562237 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount876939067.mount: Deactivated successfully. Aug 13 01:45:31.017255 containerd[1556]: time="2025-08-13T01:45:31.016524670Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:31.017962 containerd[1556]: time="2025-08-13T01:45:31.017934750Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.3: active requests=0, bytes read=30078237" Aug 13 01:45:31.018445 containerd[1556]: time="2025-08-13T01:45:31.018401740Z" level=info msg="ImageCreate event name:\"sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:31.021543 containerd[1556]: time="2025-08-13T01:45:31.021510480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:31.022828 containerd[1556]: time="2025-08-13T01:45:31.022761820Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.3\" with image id \"sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:125a8b488def5ea24e2de5682ab1abf063163aae4d89ce21811a45f3ecf23816\", size \"30075037\" in 3.39740751s" Aug 13 01:45:31.022897 containerd[1556]: time="2025-08-13T01:45:31.022850440Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.3\" returns image reference \"sha256:a92b4b92a991677d355596cc4aa9b0b12cbc38e8cbdc1e476548518ae045bc4a\"" Aug 13 01:45:31.024249 containerd[1556]: time="2025-08-13T01:45:31.024220830Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\"" Aug 13 
01:45:31.211583 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Aug 13 01:45:31.213867 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:45:31.506283 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:45:31.517122 (kubelet)[2116]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 01:45:31.640720 kubelet[2116]: E0813 01:45:31.640656 2116 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:45:31.644627 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:45:31.644853 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 01:45:31.645353 systemd[1]: kubelet.service: Consumed 384ms CPU time, 110.7M memory peak. Aug 13 01:45:33.455548 containerd[1556]: time="2025-08-13T01:45:33.455479970Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:33.456588 containerd[1556]: time="2025-08-13T01:45:33.456446430Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.3: active requests=0, bytes read=26019361" Aug 13 01:45:33.457316 containerd[1556]: time="2025-08-13T01:45:33.457279750Z" level=info msg="ImageCreate event name:\"sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:33.459835 containerd[1556]: time="2025-08-13T01:45:33.459799460Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:33.460670 containerd[1556]: time="2025-08-13T01:45:33.460644530Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.3\" with image id \"sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:96091626e37c5d5920ee6c3203b783cc01a08f287ec0713aeb7809bb62ccea90\", size \"27646922\" in 2.43639502s" Aug 13 01:45:33.460771 containerd[1556]: time="2025-08-13T01:45:33.460736540Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.3\" returns image reference \"sha256:bf97fadcef43049604abcf0caf4f35229fbee25bd0cdb6fdc1d2bbb4f03d9660\"" Aug 13 01:45:33.461597 containerd[1556]: time="2025-08-13T01:45:33.461569530Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\"" Aug 13 01:45:35.438220 containerd[1556]: time="2025-08-13T01:45:35.437291520Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:35.438220 containerd[1556]: time="2025-08-13T01:45:35.438186270Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.3: active requests=0, bytes read=20155013" Aug 13 01:45:35.438684 containerd[1556]: time="2025-08-13T01:45:35.438661200Z" level=info msg="ImageCreate event 
name:\"sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:35.440637 containerd[1556]: time="2025-08-13T01:45:35.440610150Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:35.441679 containerd[1556]: time="2025-08-13T01:45:35.441645940Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.3\" with image id \"sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f3a2ffdd7483168205236f7762e9a1933f17dd733bc0188b52bddab9c0762868\", size \"21782592\" in 1.98004595s" Aug 13 01:45:35.441802 containerd[1556]: time="2025-08-13T01:45:35.441783660Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.3\" returns image reference \"sha256:41376797d5122e388663ab6d0ad583e58cff63e1a0f1eebfb31d615d8f1c1c87\"" Aug 13 01:45:35.442469 containerd[1556]: time="2025-08-13T01:45:35.442425680Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\"" Aug 13 01:45:37.078116 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount717204542.mount: Deactivated successfully. Aug 13 01:45:37.395304 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Aug 13 01:45:38.056434 containerd[1556]: time="2025-08-13T01:45:38.056368579Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:38.057587 containerd[1556]: time="2025-08-13T01:45:38.057388137Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.3: active requests=0, bytes read=31892666" Aug 13 01:45:38.058235 containerd[1556]: time="2025-08-13T01:45:38.058188367Z" level=info msg="ImageCreate event name:\"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:38.059727 containerd[1556]: time="2025-08-13T01:45:38.059698269Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:38.061767 containerd[1556]: time="2025-08-13T01:45:38.060226293Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.3\" with image id \"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\", repo tag \"registry.k8s.io/kube-proxy:v1.33.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:c69929cfba9e38305eb1e20ca859aeb90e0d2a7326eab9bb1e8298882fe626cd\", size \"31891685\" in 2.617769973s" Aug 13 01:45:38.061767 containerd[1556]: time="2025-08-13T01:45:38.060266893Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.3\" returns image reference \"sha256:af855adae796077ff822e22c0102f686b2ca7b7c51948889b1825388eaac9234\"" Aug 13 01:45:38.062491 containerd[1556]: time="2025-08-13T01:45:38.062434377Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Aug 13 01:45:38.772994 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1415684458.mount: Deactivated successfully. 
Aug 13 01:45:40.209900 containerd[1556]: time="2025-08-13T01:45:40.209842047Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:40.210837 containerd[1556]: time="2025-08-13T01:45:40.210733947Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Aug 13 01:45:40.211341 containerd[1556]: time="2025-08-13T01:45:40.211312721Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:40.213777 containerd[1556]: time="2025-08-13T01:45:40.213476829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:40.214624 containerd[1556]: time="2025-08-13T01:45:40.214512128Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 2.152036871s" Aug 13 01:45:40.214624 containerd[1556]: time="2025-08-13T01:45:40.214541038Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Aug 13 01:45:40.215644 containerd[1556]: time="2025-08-13T01:45:40.215620996Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 01:45:40.940697 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2050078718.mount: Deactivated successfully. 
Aug 13 01:45:40.946811 containerd[1556]: time="2025-08-13T01:45:40.946699825Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 01:45:40.947516 containerd[1556]: time="2025-08-13T01:45:40.947470557Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Aug 13 01:45:40.948143 containerd[1556]: time="2025-08-13T01:45:40.948083191Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 01:45:40.949810 containerd[1556]: time="2025-08-13T01:45:40.949736883Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 01:45:40.950418 containerd[1556]: time="2025-08-13T01:45:40.950380777Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 734.734991ms" Aug 13 01:45:40.950418 containerd[1556]: time="2025-08-13T01:45:40.950416386Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 01:45:40.951183 containerd[1556]: time="2025-08-13T01:45:40.951144409Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Aug 13 01:45:41.662843 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3586195567.mount: Deactivated successfully. Aug 13 01:45:41.664430 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Aug 13 01:45:41.666998 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:45:42.069805 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:45:42.096174 (kubelet)[2219]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 01:45:42.351339 kubelet[2219]: E0813 01:45:42.350876 2219 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:45:42.355604 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:45:42.355891 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 01:45:42.356436 systemd[1]: kubelet.service: Consumed 430ms CPU time, 110.3M memory peak. 
Aug 13 01:45:44.468438 containerd[1556]: time="2025-08-13T01:45:44.467730437Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:44.469094 containerd[1556]: time="2025-08-13T01:45:44.469039286Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=58247175" Aug 13 01:45:44.469623 containerd[1556]: time="2025-08-13T01:45:44.469588961Z" level=info msg="ImageCreate event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:44.472726 containerd[1556]: time="2025-08-13T01:45:44.472687296Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:44.474065 containerd[1556]: time="2025-08-13T01:45:44.474028206Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"58938593\" in 3.522850907s" Aug 13 01:45:44.474117 containerd[1556]: time="2025-08-13T01:45:44.474093155Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Aug 13 01:45:48.042815 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:45:48.043166 systemd[1]: kubelet.service: Consumed 430ms CPU time, 110.3M memory peak. Aug 13 01:45:48.046848 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:45:48.080522 systemd[1]: Reload requested from client PID 2300 ('systemctl') (unit session-9.scope)... Aug 13 01:45:48.080569 systemd[1]: Reloading... Aug 13 01:45:48.247790 zram_generator::config[2340]: No configuration found. Aug 13 01:45:48.457252 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:45:48.579525 systemd[1]: Reloading finished in 498 ms. Aug 13 01:45:48.650287 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 01:45:48.650411 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 01:45:48.650771 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:45:48.650818 systemd[1]: kubelet.service: Consumed 266ms CPU time, 98.3M memory peak. Aug 13 01:45:48.653193 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:45:48.823682 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:45:48.831098 (kubelet)[2397]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 01:45:48.958214 kubelet[2397]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:45:48.958214 kubelet[2397]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Aug 13 01:45:48.958214 kubelet[2397]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:45:48.958649 kubelet[2397]: I0813 01:45:48.958315 2397 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 01:45:49.121961 update_engine[1537]: I20250813 01:45:49.120685 1537 update_attempter.cc:509] Updating boot flags... Aug 13 01:45:49.411839 kubelet[2397]: I0813 01:45:49.410061 2397 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Aug 13 01:45:49.411839 kubelet[2397]: I0813 01:45:49.410101 2397 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 01:45:49.411839 kubelet[2397]: I0813 01:45:49.410470 2397 server.go:956] "Client rotation is on, will bootstrap in background" Aug 13 01:45:49.469057 kubelet[2397]: E0813 01:45:49.469008 2397 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.232.7.67:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.232.7.67:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Aug 13 01:45:49.472163 kubelet[2397]: I0813 01:45:49.472141 2397 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 01:45:49.479812 kubelet[2397]: I0813 01:45:49.479692 2397 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 13 01:45:49.489259 kubelet[2397]: I0813 01:45:49.489118 2397 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 01:45:49.490906 kubelet[2397]: I0813 01:45:49.490781 2397 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 01:45:49.491066 kubelet[2397]: I0813 01:45:49.490828 2397 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-232-7-67","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 01:45:49.491313 kubelet[2397]: I0813 01:45:49.491088 2397 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 01:45:49.491313 kubelet[2397]: I0813 01:45:49.491104 2397 container_manager_linux.go:303] "Creating device plugin manager" Aug 13 01:45:49.492168 kubelet[2397]: I0813 01:45:49.492105 2397 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:45:49.494994 kubelet[2397]: I0813 01:45:49.494552 2397 kubelet.go:480] "Attempting to sync node with API server" Aug 13 01:45:49.494994 kubelet[2397]: I0813 01:45:49.494588 2397 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 01:45:49.494994 kubelet[2397]: I0813 01:45:49.494636 2397 kubelet.go:386] "Adding apiserver pod source" Aug 13 01:45:49.494994 kubelet[2397]: I0813 01:45:49.494671 2397 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 01:45:49.503794 kubelet[2397]: E0813 01:45:49.502833 2397 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.232.7.67:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-232-7-67&limit=500&resourceVersion=0\": dial tcp 172.232.7.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Aug 13 01:45:49.503794 kubelet[2397]: I0813 01:45:49.503162 2397 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Aug 13 01:45:49.504047 kubelet[2397]: I0813 01:45:49.504011 2397 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Aug 
13 01:45:49.505816 kubelet[2397]: W0813 01:45:49.505045 2397 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Aug 13 01:45:49.511061 kubelet[2397]: I0813 01:45:49.510836 2397 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 01:45:49.511150 kubelet[2397]: I0813 01:45:49.511137 2397 server.go:1289] "Started kubelet" Aug 13 01:45:49.528972 kubelet[2397]: I0813 01:45:49.528842 2397 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 01:45:49.529765 kubelet[2397]: I0813 01:45:49.529726 2397 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 01:45:49.530038 kubelet[2397]: I0813 01:45:49.530012 2397 server.go:317] "Adding debug handlers to kubelet server" Aug 13 01:45:49.534659 kubelet[2397]: I0813 01:45:49.534594 2397 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 01:45:49.535003 kubelet[2397]: I0813 01:45:49.534975 2397 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 01:45:49.535234 kubelet[2397]: E0813 01:45:49.535201 2397 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.232.7.67:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.232.7.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Aug 13 01:45:49.536437 kubelet[2397]: E0813 01:45:49.535292 2397 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.232.7.67:6443/api/v1/namespaces/default/events\": dial tcp 172.232.7.67:6443: connect: connection refused" event="&Event{ObjectMeta:{172-232-7-67.185b304441d0ed86 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-232-7-67,UID:172-232-7-67,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-232-7-67,},FirstTimestamp:2025-08-13 01:45:49.51106087 +0000 UTC m=+0.657757083,LastTimestamp:2025-08-13 01:45:49.51106087 +0000 UTC m=+0.657757083,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-232-7-67,}" Aug 13 01:45:49.539390 kubelet[2397]: I0813 01:45:49.539347 2397 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 01:45:49.541205 kubelet[2397]: I0813 01:45:49.540803 2397 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 01:45:49.541205 kubelet[2397]: E0813 01:45:49.541108 2397 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-232-7-67\" not found" Aug 13 01:45:49.543348 kubelet[2397]: E0813 01:45:49.543309 2397 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.7.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-7-67?timeout=10s\": dial tcp 172.232.7.67:6443: connect: connection refused" interval="200ms" Aug 13 01:45:49.544599 kubelet[2397]: E0813 01:45:49.544560 2397 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 01:45:49.545995 kubelet[2397]: I0813 01:45:49.545978 2397 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 01:45:49.546132 kubelet[2397]: I0813 01:45:49.546103 2397 factory.go:223] Registration of the containerd container factory successfully Aug 13 01:45:49.546132 kubelet[2397]: I0813 01:45:49.546124 2397 factory.go:223] Registration of the systemd container factory successfully Aug 13 01:45:49.546263 kubelet[2397]: I0813 01:45:49.546236 2397 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 01:45:49.546313 kubelet[2397]: I0813 01:45:49.546300 2397 reconciler.go:26] "Reconciler: start to sync state" Aug 13 01:45:49.556127 kubelet[2397]: I0813 01:45:49.556099 2397 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Aug 13 01:45:49.557585 kubelet[2397]: I0813 01:45:49.557569 2397 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Aug 13 01:45:49.557696 kubelet[2397]: I0813 01:45:49.557684 2397 status_manager.go:230] "Starting to sync pod status with apiserver" Aug 13 01:45:49.557900 kubelet[2397]: I0813 01:45:49.557883 2397 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Aug 13 01:45:49.557992 kubelet[2397]: I0813 01:45:49.557980 2397 kubelet.go:2436] "Starting kubelet main sync loop" Aug 13 01:45:49.558110 kubelet[2397]: E0813 01:45:49.558087 2397 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 01:45:49.565582 kubelet[2397]: E0813 01:45:49.565539 2397 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.232.7.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.232.7.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Aug 13 01:45:49.565721 kubelet[2397]: E0813 01:45:49.565646 2397 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.232.7.67:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.232.7.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Aug 13 01:45:49.575441 kubelet[2397]: I0813 01:45:49.575381 2397 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 01:45:49.575441 kubelet[2397]: I0813 01:45:49.575396 2397 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 01:45:49.575441 kubelet[2397]: I0813 01:45:49.575418 2397 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:45:49.576898 kubelet[2397]: I0813 01:45:49.576882 2397 policy_none.go:49] "None policy: Start" Aug 13 01:45:49.576997 kubelet[2397]: I0813 01:45:49.576985 2397 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 01:45:49.577142 kubelet[2397]: I0813 01:45:49.577094 2397 state_mem.go:35] "Initializing new in-memory state store" Aug 13 01:45:49.584212 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Aug 13 01:45:49.605924 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 01:45:49.623379 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 13 01:45:49.632225 kubelet[2397]: E0813 01:45:49.632196 2397 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Aug 13 01:45:49.633228 kubelet[2397]: I0813 01:45:49.633194 2397 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 01:45:49.633294 kubelet[2397]: I0813 01:45:49.633246 2397 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 01:45:49.634314 kubelet[2397]: I0813 01:45:49.634276 2397 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 01:45:49.639733 kubelet[2397]: E0813 01:45:49.639709 2397 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 01:45:49.639899 kubelet[2397]: E0813 01:45:49.639884 2397 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-232-7-67\" not found" Aug 13 01:45:49.674597 systemd[1]: Created slice kubepods-burstable-pod2f18c7ad9d9abdb87b917c85995ab2f1.slice - libcontainer container kubepods-burstable-pod2f18c7ad9d9abdb87b917c85995ab2f1.slice. Aug 13 01:45:49.683858 kubelet[2397]: E0813 01:45:49.683816 2397 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-7-67\" not found" node="172-232-7-67" Aug 13 01:45:49.687782 systemd[1]: Created slice kubepods-burstable-podd075cdf63b258a635b5feef3e2046613.slice - libcontainer container kubepods-burstable-podd075cdf63b258a635b5feef3e2046613.slice. Aug 13 01:45:49.714561 kubelet[2397]: E0813 01:45:49.714490 2397 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-7-67\" not found" node="172-232-7-67" Aug 13 01:45:49.719705 systemd[1]: Created slice kubepods-burstable-pod79aedd0d8e040180fbedc6c210d4f0d2.slice - libcontainer container kubepods-burstable-pod79aedd0d8e040180fbedc6c210d4f0d2.slice. 
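The kubepods*.slice units created above follow the systemd cgroup driver's naming scheme: pods are nested under kubepods.slice by QoS class, and dashes in the pod UID are escaped as underscores (visible later in the log for the BestEffort pod). A small sketch of that convention; the helper and the dashed example UIDs are illustrative, not kubelet source:

```go
package main

import (
	"fmt"
	"strings"
)

// podSliceName sketches the systemd cgroup driver naming seen in the log:
// an intermediate slice per QoS class (Guaranteed pods sit directly under
// kubepods.slice) and pod UIDs with "-" escaped to "_".
func podSliceName(qosClass, podUID string) string {
	uid := strings.ReplaceAll(podUID, "-", "_")
	if qosClass == "Guaranteed" {
		return fmt.Sprintf("kubepods-pod%s.slice", uid)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", strings.ToLower(qosClass), uid)
}

func main() {
	fmt.Println(podSliceName("BestEffort", "67d61509-353a-43af-907e-0ee2cfd68dc7"))
	// kubepods-besteffort-pod67d61509_353a_43af_907e_0ee2cfd68dc7.slice
	fmt.Println(podSliceName("Burstable", "2f18c7ad9d9abdb87b917c85995ab2f1"))
	// kubepods-burstable-pod2f18c7ad9d9abdb87b917c85995ab2f1.slice
}
```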
Aug 13 01:45:49.722674 kubelet[2397]: E0813 01:45:49.722640 2397 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-7-67\" not found" node="172-232-7-67" Aug 13 01:45:49.735216 kubelet[2397]: I0813 01:45:49.735184 2397 kubelet_node_status.go:75] "Attempting to register node" node="172-232-7-67" Aug 13 01:45:49.735609 kubelet[2397]: E0813 01:45:49.735585 2397 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.232.7.67:6443/api/v1/nodes\": dial tcp 172.232.7.67:6443: connect: connection refused" node="172-232-7-67" Aug 13 01:45:49.744270 kubelet[2397]: E0813 01:45:49.744222 2397 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.7.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-7-67?timeout=10s\": dial tcp 172.232.7.67:6443: connect: connection refused" interval="400ms" Aug 13 01:45:49.847114 kubelet[2397]: I0813 01:45:49.847033 2397 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2f18c7ad9d9abdb87b917c85995ab2f1-kubeconfig\") pod \"kube-scheduler-172-232-7-67\" (UID: \"2f18c7ad9d9abdb87b917c85995ab2f1\") " pod="kube-system/kube-scheduler-172-232-7-67" Aug 13 01:45:49.847114 kubelet[2397]: I0813 01:45:49.847088 2397 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d075cdf63b258a635b5feef3e2046613-ca-certs\") pod \"kube-apiserver-172-232-7-67\" (UID: \"d075cdf63b258a635b5feef3e2046613\") " pod="kube-system/kube-apiserver-172-232-7-67" Aug 13 01:45:49.847114 kubelet[2397]: I0813 01:45:49.847111 2397 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d075cdf63b258a635b5feef3e2046613-usr-share-ca-certificates\") pod \"kube-apiserver-172-232-7-67\" (UID: \"d075cdf63b258a635b5feef3e2046613\") " pod="kube-system/kube-apiserver-172-232-7-67" Aug 13 01:45:49.847114 kubelet[2397]: I0813 01:45:49.847132 2397 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/79aedd0d8e040180fbedc6c210d4f0d2-ca-certs\") pod \"kube-controller-manager-172-232-7-67\" (UID: \"79aedd0d8e040180fbedc6c210d4f0d2\") " pod="kube-system/kube-controller-manager-172-232-7-67" Aug 13 01:45:49.847407 kubelet[2397]: I0813 01:45:49.847153 2397 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/79aedd0d8e040180fbedc6c210d4f0d2-flexvolume-dir\") pod \"kube-controller-manager-172-232-7-67\" (UID: \"79aedd0d8e040180fbedc6c210d4f0d2\") " pod="kube-system/kube-controller-manager-172-232-7-67" Aug 13 01:45:49.847407 kubelet[2397]: I0813 01:45:49.847196 2397 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/79aedd0d8e040180fbedc6c210d4f0d2-k8s-certs\") pod \"kube-controller-manager-172-232-7-67\" (UID: \"79aedd0d8e040180fbedc6c210d4f0d2\") " pod="kube-system/kube-controller-manager-172-232-7-67" Aug 13 01:45:49.847407 kubelet[2397]: I0813 01:45:49.847213 2397 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" 
(UniqueName: \"kubernetes.io/host-path/d075cdf63b258a635b5feef3e2046613-k8s-certs\") pod \"kube-apiserver-172-232-7-67\" (UID: \"d075cdf63b258a635b5feef3e2046613\") " pod="kube-system/kube-apiserver-172-232-7-67" Aug 13 01:45:49.847407 kubelet[2397]: I0813 01:45:49.847229 2397 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/79aedd0d8e040180fbedc6c210d4f0d2-kubeconfig\") pod \"kube-controller-manager-172-232-7-67\" (UID: \"79aedd0d8e040180fbedc6c210d4f0d2\") " pod="kube-system/kube-controller-manager-172-232-7-67" Aug 13 01:45:49.847407 kubelet[2397]: I0813 01:45:49.847251 2397 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/79aedd0d8e040180fbedc6c210d4f0d2-usr-share-ca-certificates\") pod \"kube-controller-manager-172-232-7-67\" (UID: \"79aedd0d8e040180fbedc6c210d4f0d2\") " pod="kube-system/kube-controller-manager-172-232-7-67" Aug 13 01:45:49.938671 kubelet[2397]: I0813 01:45:49.938490 2397 kubelet_node_status.go:75] "Attempting to register node" node="172-232-7-67" Aug 13 01:45:49.939202 kubelet[2397]: E0813 01:45:49.939175 2397 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.232.7.67:6443/api/v1/nodes\": dial tcp 172.232.7.67:6443: connect: connection refused" node="172-232-7-67" Aug 13 01:45:49.985803 kubelet[2397]: E0813 01:45:49.985722 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:45:49.987176 containerd[1556]: time="2025-08-13T01:45:49.986591035Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-232-7-67,Uid:2f18c7ad9d9abdb87b917c85995ab2f1,Namespace:kube-system,Attempt:0,}" Aug 13 01:45:50.016556 kubelet[2397]: E0813 01:45:50.016250 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:45:50.017295 containerd[1556]: time="2025-08-13T01:45:50.017265722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-232-7-67,Uid:d075cdf63b258a635b5feef3e2046613,Namespace:kube-system,Attempt:0,}" Aug 13 01:45:50.023573 kubelet[2397]: E0813 01:45:50.023548 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:45:50.024428 containerd[1556]: time="2025-08-13T01:45:50.024402913Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-232-7-67,Uid:79aedd0d8e040180fbedc6c210d4f0d2,Namespace:kube-system,Attempt:0,}" Aug 13 01:45:50.049108 containerd[1556]: time="2025-08-13T01:45:50.048715910Z" level=info msg="connecting to shim 4fddfd637777d66d603483af1c8de6c7a6dc7638d6086e55764e9a22040f8ca4" address="unix:///run/containerd/s/d11e12c99e6ee4bca200933d9e48db3a2a9e55c11e5fee2588fab5f367b3f6cd" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:45:50.152072 kubelet[2397]: E0813 01:45:50.151833 2397 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.232.7.67:6443/api/v1/namespaces/default/events\": dial tcp 172.232.7.67:6443: connect: connection refused" 
event="&Event{ObjectMeta:{172-232-7-67.185b304441d0ed86 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-232-7-67,UID:172-232-7-67,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-232-7-67,},FirstTimestamp:2025-08-13 01:45:49.51106087 +0000 UTC m=+0.657757083,LastTimestamp:2025-08-13 01:45:49.51106087 +0000 UTC m=+0.657757083,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-232-7-67,}" Aug 13 01:45:50.152072 kubelet[2397]: E0813 01:45:50.151970 2397 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.7.67:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-7-67?timeout=10s\": dial tcp 172.232.7.67:6443: connect: connection refused" interval="800ms" Aug 13 01:45:50.166038 containerd[1556]: time="2025-08-13T01:45:50.165926197Z" level=info msg="connecting to shim 82ad7c8e7965f9f675f02efc4ef995213b33ab2cc1a63e6c47a15e7ef03c4350" address="unix:///run/containerd/s/e6fd060b36971b6fb3188e3eb188354e11ddd94dc8138808ee0da7d327e5d3ff" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:45:50.169191 containerd[1556]: time="2025-08-13T01:45:50.169162219Z" level=info msg="connecting to shim 07d1d0391857b928080a0b255dc5d0cc2f4f3767c606ccc25fa823c79046d812" address="unix:///run/containerd/s/1ad20f449de19a47dba48f5064529b5f651d36ff36c32c4426e0e05fba932fc9" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:45:50.250249 systemd[1]: Started cri-containerd-4fddfd637777d66d603483af1c8de6c7a6dc7638d6086e55764e9a22040f8ca4.scope - libcontainer container 4fddfd637777d66d603483af1c8de6c7a6dc7638d6086e55764e9a22040f8ca4. Aug 13 01:45:50.263533 systemd[1]: Started cri-containerd-82ad7c8e7965f9f675f02efc4ef995213b33ab2cc1a63e6c47a15e7ef03c4350.scope - libcontainer container 82ad7c8e7965f9f675f02efc4ef995213b33ab2cc1a63e6c47a15e7ef03c4350. Aug 13 01:45:50.331115 systemd[1]: Started cri-containerd-07d1d0391857b928080a0b255dc5d0cc2f4f3767c606ccc25fa823c79046d812.scope - libcontainer container 07d1d0391857b928080a0b255dc5d0cc2f4f3767c606ccc25fa823c79046d812. 
Aug 13 01:45:50.359463 kubelet[2397]: I0813 01:45:50.350814 2397 kubelet_node_status.go:75] "Attempting to register node" node="172-232-7-67" Aug 13 01:45:50.359463 kubelet[2397]: E0813 01:45:50.351258 2397 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.232.7.67:6443/api/v1/nodes\": dial tcp 172.232.7.67:6443: connect: connection refused" node="172-232-7-67" Aug 13 01:45:50.404542 containerd[1556]: time="2025-08-13T01:45:50.404487461Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-232-7-67,Uid:2f18c7ad9d9abdb87b917c85995ab2f1,Namespace:kube-system,Attempt:0,} returns sandbox id \"4fddfd637777d66d603483af1c8de6c7a6dc7638d6086e55764e9a22040f8ca4\"" Aug 13 01:45:50.418734 kubelet[2397]: E0813 01:45:50.408771 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:45:50.432932 containerd[1556]: time="2025-08-13T01:45:50.432887615Z" level=info msg="CreateContainer within sandbox \"4fddfd637777d66d603483af1c8de6c7a6dc7638d6086e55764e9a22040f8ca4\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 01:45:50.440469 containerd[1556]: time="2025-08-13T01:45:50.440425374Z" level=info msg="Container 2d101dcb31f49c538452eb0456a3edf189b30f2a97b14d874d92a20a2c6c54ec: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:45:50.449267 containerd[1556]: time="2025-08-13T01:45:50.449215516Z" level=info msg="CreateContainer within sandbox \"4fddfd637777d66d603483af1c8de6c7a6dc7638d6086e55764e9a22040f8ca4\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"2d101dcb31f49c538452eb0456a3edf189b30f2a97b14d874d92a20a2c6c54ec\"" Aug 13 01:45:50.462129 containerd[1556]: time="2025-08-13T01:45:50.462090745Z" level=info msg="StartContainer for \"2d101dcb31f49c538452eb0456a3edf189b30f2a97b14d874d92a20a2c6c54ec\"" Aug 13 01:45:50.463458 containerd[1556]: time="2025-08-13T01:45:50.463432608Z" level=info msg="connecting to shim 2d101dcb31f49c538452eb0456a3edf189b30f2a97b14d874d92a20a2c6c54ec" address="unix:///run/containerd/s/d11e12c99e6ee4bca200933d9e48db3a2a9e55c11e5fee2588fab5f367b3f6cd" protocol=ttrpc version=3 Aug 13 01:45:50.469846 containerd[1556]: time="2025-08-13T01:45:50.469819513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-232-7-67,Uid:79aedd0d8e040180fbedc6c210d4f0d2,Namespace:kube-system,Attempt:0,} returns sandbox id \"82ad7c8e7965f9f675f02efc4ef995213b33ab2cc1a63e6c47a15e7ef03c4350\"" Aug 13 01:45:50.471568 kubelet[2397]: E0813 01:45:50.471546 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:45:50.478686 containerd[1556]: time="2025-08-13T01:45:50.478654415Z" level=info msg="CreateContainer within sandbox \"82ad7c8e7965f9f675f02efc4ef995213b33ab2cc1a63e6c47a15e7ef03c4350\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 01:45:50.488363 containerd[1556]: time="2025-08-13T01:45:50.488337462Z" level=info msg="Container 82bd2ae4a8f0759932f3b8b4744d9ea8052155a62594e8a9307d5e495a955593: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:45:50.489257 systemd[1]: Started cri-containerd-2d101dcb31f49c538452eb0456a3edf189b30f2a97b14d874d92a20a2c6c54ec.scope - libcontainer container 
2d101dcb31f49c538452eb0456a3edf189b30f2a97b14d874d92a20a2c6c54ec. Aug 13 01:45:50.493002 containerd[1556]: time="2025-08-13T01:45:50.492911106Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-232-7-67,Uid:d075cdf63b258a635b5feef3e2046613,Namespace:kube-system,Attempt:0,} returns sandbox id \"07d1d0391857b928080a0b255dc5d0cc2f4f3767c606ccc25fa823c79046d812\"" Aug 13 01:45:50.494274 kubelet[2397]: E0813 01:45:50.494238 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:45:50.494729 containerd[1556]: time="2025-08-13T01:45:50.494702817Z" level=info msg="CreateContainer within sandbox \"82ad7c8e7965f9f675f02efc4ef995213b33ab2cc1a63e6c47a15e7ef03c4350\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"82bd2ae4a8f0759932f3b8b4744d9ea8052155a62594e8a9307d5e495a955593\"" Aug 13 01:45:50.497187 containerd[1556]: time="2025-08-13T01:45:50.497145433Z" level=info msg="StartContainer for \"82bd2ae4a8f0759932f3b8b4744d9ea8052155a62594e8a9307d5e495a955593\"" Aug 13 01:45:50.499770 containerd[1556]: time="2025-08-13T01:45:50.499700349Z" level=info msg="CreateContainer within sandbox \"07d1d0391857b928080a0b255dc5d0cc2f4f3767c606ccc25fa823c79046d812\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 01:45:50.501318 containerd[1556]: time="2025-08-13T01:45:50.501204571Z" level=info msg="connecting to shim 82bd2ae4a8f0759932f3b8b4744d9ea8052155a62594e8a9307d5e495a955593" address="unix:///run/containerd/s/e6fd060b36971b6fb3188e3eb188354e11ddd94dc8138808ee0da7d327e5d3ff" protocol=ttrpc version=3 Aug 13 01:45:50.510821 containerd[1556]: time="2025-08-13T01:45:50.510779149Z" level=info msg="Container 7bf5ef9179c1930d03bf50b33265eb181ceb25c1bc93bf8026e6beed853e02db: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:45:50.522371 containerd[1556]: time="2025-08-13T01:45:50.522333705Z" level=info msg="CreateContainer within sandbox \"07d1d0391857b928080a0b255dc5d0cc2f4f3767c606ccc25fa823c79046d812\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"7bf5ef9179c1930d03bf50b33265eb181ceb25c1bc93bf8026e6beed853e02db\"" Aug 13 01:45:50.525787 containerd[1556]: time="2025-08-13T01:45:50.524805772Z" level=info msg="StartContainer for \"7bf5ef9179c1930d03bf50b33265eb181ceb25c1bc93bf8026e6beed853e02db\"" Aug 13 01:45:50.528447 containerd[1556]: time="2025-08-13T01:45:50.528286893Z" level=info msg="connecting to shim 7bf5ef9179c1930d03bf50b33265eb181ceb25c1bc93bf8026e6beed853e02db" address="unix:///run/containerd/s/1ad20f449de19a47dba48f5064529b5f651d36ff36c32c4426e0e05fba932fc9" protocol=ttrpc version=3 Aug 13 01:45:50.530004 kubelet[2397]: E0813 01:45:50.529594 2397 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.232.7.67:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-232-7-67&limit=500&resourceVersion=0\": dial tcp 172.232.7.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Aug 13 01:45:50.534078 systemd[1]: Started cri-containerd-82bd2ae4a8f0759932f3b8b4744d9ea8052155a62594e8a9307d5e495a955593.scope - libcontainer container 82bd2ae4a8f0759932f3b8b4744d9ea8052155a62594e8a9307d5e495a955593. 
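The recurring dns.go "Nameserver limits exceeded" warnings mean the host's resolv.conf lists more nameservers than the kubelet will propagate into pods: it keeps the first three and drops the rest, which is why only 172.232.0.20, 172.232.0.15 and 172.232.0.18 appear in the applied line. A tiny sketch of that truncation; the extra nameservers in the example are hypothetical:

```go
package main

import (
	"fmt"
	"strings"
)

// maxNameservers mirrors the kubelet's limit on resolv.conf nameserver entries;
// everything beyond the first three is dropped, producing the warnings above.
const maxNameservers = 3

func applyNameserverLimit(servers []string) []string {
	if len(servers) > maxNameservers {
		return servers[:maxNameservers]
	}
	return servers
}

func main() {
	resolv := []string{"172.232.0.20", "172.232.0.15", "172.232.0.18", "8.8.8.8", "1.1.1.1"}
	fmt.Println(strings.Join(applyNameserverLimit(resolv), " "))
	// 172.232.0.20 172.232.0.15 172.232.0.18
}
```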
Aug 13 01:45:50.548815 kubelet[2397]: E0813 01:45:50.548733 2397 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.232.7.67:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.232.7.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Aug 13 01:45:50.551341 kubelet[2397]: E0813 01:45:50.551296 2397 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.232.7.67:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.232.7.67:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Aug 13 01:45:50.564282 systemd[1]: Started cri-containerd-7bf5ef9179c1930d03bf50b33265eb181ceb25c1bc93bf8026e6beed853e02db.scope - libcontainer container 7bf5ef9179c1930d03bf50b33265eb181ceb25c1bc93bf8026e6beed853e02db. Aug 13 01:45:50.631800 containerd[1556]: time="2025-08-13T01:45:50.631354529Z" level=info msg="StartContainer for \"2d101dcb31f49c538452eb0456a3edf189b30f2a97b14d874d92a20a2c6c54ec\" returns successfully" Aug 13 01:45:50.651812 containerd[1556]: time="2025-08-13T01:45:50.651766367Z" level=info msg="StartContainer for \"82bd2ae4a8f0759932f3b8b4744d9ea8052155a62594e8a9307d5e495a955593\" returns successfully" Aug 13 01:45:50.725448 containerd[1556]: time="2025-08-13T01:45:50.725388114Z" level=info msg="StartContainer for \"7bf5ef9179c1930d03bf50b33265eb181ceb25c1bc93bf8026e6beed853e02db\" returns successfully" Aug 13 01:45:51.156358 kubelet[2397]: I0813 01:45:51.156200 2397 kubelet_node_status.go:75] "Attempting to register node" node="172-232-7-67" Aug 13 01:45:51.630850 kubelet[2397]: E0813 01:45:51.630544 2397 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-7-67\" not found" node="172-232-7-67" Aug 13 01:45:51.630850 kubelet[2397]: E0813 01:45:51.630693 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:45:51.632663 kubelet[2397]: E0813 01:45:51.632425 2397 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-7-67\" not found" node="172-232-7-67" Aug 13 01:45:51.632663 kubelet[2397]: E0813 01:45:51.632520 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:45:51.637426 kubelet[2397]: E0813 01:45:51.637409 2397 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-7-67\" not found" node="172-232-7-67" Aug 13 01:45:51.637609 kubelet[2397]: E0813 01:45:51.637595 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:45:52.644249 kubelet[2397]: E0813 01:45:52.643390 2397 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-7-67\" not found" node="172-232-7-67" Aug 13 01:45:52.644249 kubelet[2397]: E0813 01:45:52.643645 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were 
exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:45:52.644249 kubelet[2397]: E0813 01:45:52.644019 2397 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-7-67\" not found" node="172-232-7-67" Aug 13 01:45:52.644249 kubelet[2397]: E0813 01:45:52.644174 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:45:52.645051 kubelet[2397]: E0813 01:45:52.644999 2397 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-7-67\" not found" node="172-232-7-67" Aug 13 01:45:52.645242 kubelet[2397]: E0813 01:45:52.645221 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:45:53.644175 kubelet[2397]: E0813 01:45:53.642869 2397 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-7-67\" not found" node="172-232-7-67" Aug 13 01:45:53.644175 kubelet[2397]: E0813 01:45:53.643018 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:45:53.644175 kubelet[2397]: E0813 01:45:53.643919 2397 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-7-67\" not found" node="172-232-7-67" Aug 13 01:45:53.644175 kubelet[2397]: E0813 01:45:53.644072 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:45:53.881276 kubelet[2397]: E0813 01:45:53.881179 2397 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-232-7-67\" not found" node="172-232-7-67" Aug 13 01:45:53.954836 kubelet[2397]: I0813 01:45:53.954668 2397 kubelet_node_status.go:78] "Successfully registered node" node="172-232-7-67" Aug 13 01:45:53.954836 kubelet[2397]: E0813 01:45:53.954735 2397 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172-232-7-67\": node \"172-232-7-67\" not found" Aug 13 01:45:54.043122 kubelet[2397]: I0813 01:45:54.043068 2397 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-232-7-67" Aug 13 01:45:54.053371 kubelet[2397]: E0813 01:45:54.053324 2397 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-232-7-67\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-232-7-67" Aug 13 01:45:54.053371 kubelet[2397]: I0813 01:45:54.053377 2397 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-232-7-67" Aug 13 01:45:54.060581 kubelet[2397]: E0813 01:45:54.060395 2397 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-232-7-67\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-232-7-67" Aug 13 01:45:54.060581 kubelet[2397]: I0813 01:45:54.060422 2397 kubelet.go:3309] "Creating a mirror 
pod for static pod" pod="kube-system/kube-controller-manager-172-232-7-67" Aug 13 01:45:54.062949 kubelet[2397]: E0813 01:45:54.062920 2397 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-232-7-67\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-232-7-67" Aug 13 01:45:54.526026 kubelet[2397]: I0813 01:45:54.525976 2397 apiserver.go:52] "Watching apiserver" Aug 13 01:45:54.546413 kubelet[2397]: I0813 01:45:54.546348 2397 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 01:45:54.643205 kubelet[2397]: I0813 01:45:54.643161 2397 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-232-7-67" Aug 13 01:45:54.645304 kubelet[2397]: E0813 01:45:54.645261 2397 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-232-7-67\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-232-7-67" Aug 13 01:45:54.645672 kubelet[2397]: E0813 01:45:54.645436 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:45:55.744184 kubelet[2397]: I0813 01:45:55.744130 2397 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-232-7-67" Aug 13 01:45:55.750554 kubelet[2397]: E0813 01:45:55.750512 2397 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:45:55.799627 systemd[1]: Reload requested from client PID 2696 ('systemctl') (unit session-9.scope)... Aug 13 01:45:55.799656 systemd[1]: Reloading... Aug 13 01:45:55.953782 zram_generator::config[2735]: No configuration found. Aug 13 01:45:56.092498 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:45:56.253612 systemd[1]: Reloading finished in 453 ms. Aug 13 01:45:56.278723 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:45:56.303263 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 01:45:56.303583 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:45:56.303636 systemd[1]: kubelet.service: Consumed 1.117s CPU time, 129.5M memory peak. Aug 13 01:45:56.309994 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:45:56.509293 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:45:56.520336 (kubelet)[2790]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 01:45:56.621797 kubelet[2790]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:45:56.621797 kubelet[2790]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Aug 13 01:45:56.621797 kubelet[2790]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:45:56.622766 kubelet[2790]: I0813 01:45:56.621715 2790 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 01:45:56.629336 kubelet[2790]: I0813 01:45:56.629298 2790 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Aug 13 01:45:56.629336 kubelet[2790]: I0813 01:45:56.629332 2790 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 01:45:56.629785 kubelet[2790]: I0813 01:45:56.629761 2790 server.go:956] "Client rotation is on, will bootstrap in background" Aug 13 01:45:56.631934 kubelet[2790]: I0813 01:45:56.631905 2790 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Aug 13 01:45:56.637195 kubelet[2790]: I0813 01:45:56.636854 2790 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 01:45:56.642709 kubelet[2790]: I0813 01:45:56.642690 2790 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 13 01:45:56.646858 kubelet[2790]: I0813 01:45:56.646837 2790 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Aug 13 01:45:56.647273 kubelet[2790]: I0813 01:45:56.647236 2790 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 01:45:56.647461 kubelet[2790]: I0813 01:45:56.647335 2790 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-232-7-67","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 01:45:56.647711 kubelet[2790]: I0813 01:45:56.647606 2790 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 01:45:56.647711 
kubelet[2790]: I0813 01:45:56.647621 2790 container_manager_linux.go:303] "Creating device plugin manager" Aug 13 01:45:56.647711 kubelet[2790]: I0813 01:45:56.647676 2790 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:45:56.648018 kubelet[2790]: I0813 01:45:56.648002 2790 kubelet.go:480] "Attempting to sync node with API server" Aug 13 01:45:56.648098 kubelet[2790]: I0813 01:45:56.648086 2790 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 01:45:56.648162 kubelet[2790]: I0813 01:45:56.648152 2790 kubelet.go:386] "Adding apiserver pod source" Aug 13 01:45:56.648227 kubelet[2790]: I0813 01:45:56.648216 2790 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 01:45:56.653887 kubelet[2790]: I0813 01:45:56.653860 2790 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Aug 13 01:45:56.654401 kubelet[2790]: I0813 01:45:56.654351 2790 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Aug 13 01:45:56.664471 kubelet[2790]: I0813 01:45:56.663598 2790 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 01:45:56.664471 kubelet[2790]: I0813 01:45:56.663648 2790 server.go:1289] "Started kubelet" Aug 13 01:45:56.671060 kubelet[2790]: I0813 01:45:56.671039 2790 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 01:45:56.672718 kubelet[2790]: I0813 01:45:56.672698 2790 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 01:45:56.675772 kubelet[2790]: I0813 01:45:56.671365 2790 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 01:45:56.676768 kubelet[2790]: I0813 01:45:56.676069 2790 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 01:45:56.679345 kubelet[2790]: I0813 01:45:56.679133 2790 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 01:45:56.683200 kubelet[2790]: I0813 01:45:56.671324 2790 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 01:45:56.684233 kubelet[2790]: I0813 01:45:56.684215 2790 server.go:317] "Adding debug handlers to kubelet server" Aug 13 01:45:56.685962 kubelet[2790]: I0813 01:45:56.685943 2790 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 01:45:56.686395 kubelet[2790]: I0813 01:45:56.686380 2790 reconciler.go:26] "Reconciler: start to sync state" Aug 13 01:45:56.688403 kubelet[2790]: E0813 01:45:56.688371 2790 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 01:45:56.691450 kubelet[2790]: I0813 01:45:56.691411 2790 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Aug 13 01:45:56.693088 kubelet[2790]: I0813 01:45:56.693069 2790 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Aug 13 01:45:56.693167 kubelet[2790]: I0813 01:45:56.693155 2790 status_manager.go:230] "Starting to sync pod status with apiserver" Aug 13 01:45:56.693247 kubelet[2790]: I0813 01:45:56.693235 2790 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
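The nodeConfig dump above carries the default hard eviction thresholds: memory.available < 100Mi plus percentage thresholds for nodefs/imagefs space and inodes. As a rough sketch of how such a threshold is evaluated (a quantity compares against absolute bytes, a percentage against the free fraction of capacity; this is an illustration, not the eviction manager's code):

```go
package main

import "fmt"

// thresholdExceeded sketches a hard eviction check: quantity thresholds compare
// absolute free bytes, percentage thresholds compare the free fraction of capacity.
func thresholdExceeded(availableBytes, capacityBytes, quantityBytes int64, percentage float64) bool {
	if quantityBytes > 0 {
		return availableBytes < quantityBytes
	}
	return float64(availableBytes) < percentage*float64(capacityBytes)
}

func main() {
	const mi = int64(1024 * 1024)
	const gi = 1024 * mi

	// memory.available < 100Mi with only 80Mi free: start evicting.
	fmt.Println(thresholdExceeded(80*mi, 4*gi, 100*mi, 0))
	// nodefs.available < 10% with 30Gi of 80Gi free: healthy.
	fmt.Println(thresholdExceeded(30*gi, 80*gi, 0, 0.10))
}
```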
Aug 13 01:45:56.693494 kubelet[2790]: I0813 01:45:56.693284 2790 kubelet.go:2436] "Starting kubelet main sync loop" Aug 13 01:45:56.693494 kubelet[2790]: E0813 01:45:56.693328 2790 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 01:45:56.693617 kubelet[2790]: I0813 01:45:56.693522 2790 factory.go:223] Registration of the containerd container factory successfully Aug 13 01:45:56.693617 kubelet[2790]: I0813 01:45:56.693537 2790 factory.go:223] Registration of the systemd container factory successfully Aug 13 01:45:56.693617 kubelet[2790]: I0813 01:45:56.693603 2790 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 01:45:56.773572 kubelet[2790]: I0813 01:45:56.773432 2790 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 01:45:56.773572 kubelet[2790]: I0813 01:45:56.773480 2790 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 01:45:56.773572 kubelet[2790]: I0813 01:45:56.773505 2790 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:45:56.773855 kubelet[2790]: I0813 01:45:56.773702 2790 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 01:45:56.774947 kubelet[2790]: I0813 01:45:56.773722 2790 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 01:45:56.775002 kubelet[2790]: I0813 01:45:56.774951 2790 policy_none.go:49] "None policy: Start" Aug 13 01:45:56.775002 kubelet[2790]: I0813 01:45:56.774970 2790 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 01:45:56.775002 kubelet[2790]: I0813 01:45:56.774986 2790 state_mem.go:35] "Initializing new in-memory state store" Aug 13 01:45:56.775406 kubelet[2790]: I0813 01:45:56.775136 2790 state_mem.go:75] "Updated machine memory state" Aug 13 01:45:56.784633 kubelet[2790]: E0813 01:45:56.784410 2790 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Aug 13 01:45:56.784825 kubelet[2790]: I0813 01:45:56.784807 2790 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 01:45:56.785126 kubelet[2790]: I0813 01:45:56.785044 2790 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 01:45:56.786090 kubelet[2790]: I0813 01:45:56.786073 2790 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 01:45:56.789566 kubelet[2790]: E0813 01:45:56.789535 2790 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Aug 13 01:45:56.798316 kubelet[2790]: I0813 01:45:56.798288 2790 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-232-7-67" Aug 13 01:45:56.804474 kubelet[2790]: I0813 01:45:56.804260 2790 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-232-7-67" Aug 13 01:45:56.805731 kubelet[2790]: I0813 01:45:56.805685 2790 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-232-7-67" Aug 13 01:45:56.836625 kubelet[2790]: E0813 01:45:56.836541 2790 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-232-7-67\" already exists" pod="kube-system/kube-apiserver-172-232-7-67" Aug 13 01:45:56.888760 kubelet[2790]: I0813 01:45:56.888690 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/d075cdf63b258a635b5feef3e2046613-ca-certs\") pod \"kube-apiserver-172-232-7-67\" (UID: \"d075cdf63b258a635b5feef3e2046613\") " pod="kube-system/kube-apiserver-172-232-7-67" Aug 13 01:45:56.889081 kubelet[2790]: I0813 01:45:56.889042 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/d075cdf63b258a635b5feef3e2046613-k8s-certs\") pod \"kube-apiserver-172-232-7-67\" (UID: \"d075cdf63b258a635b5feef3e2046613\") " pod="kube-system/kube-apiserver-172-232-7-67" Aug 13 01:45:56.889343 kubelet[2790]: I0813 01:45:56.889273 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/d075cdf63b258a635b5feef3e2046613-usr-share-ca-certificates\") pod \"kube-apiserver-172-232-7-67\" (UID: \"d075cdf63b258a635b5feef3e2046613\") " pod="kube-system/kube-apiserver-172-232-7-67" Aug 13 01:45:56.889510 kubelet[2790]: I0813 01:45:56.889488 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/79aedd0d8e040180fbedc6c210d4f0d2-flexvolume-dir\") pod \"kube-controller-manager-172-232-7-67\" (UID: \"79aedd0d8e040180fbedc6c210d4f0d2\") " pod="kube-system/kube-controller-manager-172-232-7-67" Aug 13 01:45:56.889683 kubelet[2790]: I0813 01:45:56.889634 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/79aedd0d8e040180fbedc6c210d4f0d2-k8s-certs\") pod \"kube-controller-manager-172-232-7-67\" (UID: \"79aedd0d8e040180fbedc6c210d4f0d2\") " pod="kube-system/kube-controller-manager-172-232-7-67" Aug 13 01:45:56.889823 kubelet[2790]: I0813 01:45:56.889805 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/79aedd0d8e040180fbedc6c210d4f0d2-kubeconfig\") pod \"kube-controller-manager-172-232-7-67\" (UID: \"79aedd0d8e040180fbedc6c210d4f0d2\") " pod="kube-system/kube-controller-manager-172-232-7-67" Aug 13 01:45:56.889962 kubelet[2790]: I0813 01:45:56.889944 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/79aedd0d8e040180fbedc6c210d4f0d2-ca-certs\") pod \"kube-controller-manager-172-232-7-67\" (UID: \"79aedd0d8e040180fbedc6c210d4f0d2\") " 
pod="kube-system/kube-controller-manager-172-232-7-67" Aug 13 01:45:56.890140 kubelet[2790]: I0813 01:45:56.890119 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/79aedd0d8e040180fbedc6c210d4f0d2-usr-share-ca-certificates\") pod \"kube-controller-manager-172-232-7-67\" (UID: \"79aedd0d8e040180fbedc6c210d4f0d2\") " pod="kube-system/kube-controller-manager-172-232-7-67" Aug 13 01:45:56.890277 kubelet[2790]: I0813 01:45:56.890258 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2f18c7ad9d9abdb87b917c85995ab2f1-kubeconfig\") pod \"kube-scheduler-172-232-7-67\" (UID: \"2f18c7ad9d9abdb87b917c85995ab2f1\") " pod="kube-system/kube-scheduler-172-232-7-67" Aug 13 01:45:56.927246 kubelet[2790]: I0813 01:45:56.927193 2790 kubelet_node_status.go:75] "Attempting to register node" node="172-232-7-67" Aug 13 01:45:56.940135 kubelet[2790]: I0813 01:45:56.940041 2790 kubelet_node_status.go:124] "Node was previously registered" node="172-232-7-67" Aug 13 01:45:56.940564 kubelet[2790]: I0813 01:45:56.940526 2790 kubelet_node_status.go:78] "Successfully registered node" node="172-232-7-67" Aug 13 01:45:57.138436 kubelet[2790]: E0813 01:45:57.137226 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:45:57.138436 kubelet[2790]: E0813 01:45:57.137404 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:45:57.138436 kubelet[2790]: E0813 01:45:57.137505 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:45:57.652984 kubelet[2790]: I0813 01:45:57.652932 2790 apiserver.go:52] "Watching apiserver" Aug 13 01:45:57.700140 kubelet[2790]: I0813 01:45:57.699948 2790 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 01:45:57.729051 kubelet[2790]: E0813 01:45:57.727883 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:45:57.729628 kubelet[2790]: E0813 01:45:57.729598 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:45:57.729910 kubelet[2790]: I0813 01:45:57.729893 2790 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-232-7-67" Aug 13 01:45:57.745142 kubelet[2790]: E0813 01:45:57.745028 2790 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-232-7-67\" already exists" pod="kube-system/kube-apiserver-172-232-7-67" Aug 13 01:45:57.745429 kubelet[2790]: E0813 01:45:57.745388 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:45:57.822485 kubelet[2790]: I0813 01:45:57.822414 2790 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-232-7-67" podStartSLOduration=1.8223416000000001 podStartE2EDuration="1.8223416s" podCreationTimestamp="2025-08-13 01:45:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:45:57.805041311 +0000 UTC m=+1.259809676" watchObservedRunningTime="2025-08-13 01:45:57.8223416 +0000 UTC m=+1.277109955" Aug 13 01:45:57.834081 kubelet[2790]: I0813 01:45:57.834013 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-232-7-67" podStartSLOduration=2.83399732 podStartE2EDuration="2.83399732s" podCreationTimestamp="2025-08-13 01:45:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:45:57.823154818 +0000 UTC m=+1.277923183" watchObservedRunningTime="2025-08-13 01:45:57.83399732 +0000 UTC m=+1.288765675" Aug 13 01:45:57.834246 kubelet[2790]: I0813 01:45:57.834098 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-232-7-67" podStartSLOduration=1.83409339 podStartE2EDuration="1.83409339s" podCreationTimestamp="2025-08-13 01:45:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:45:57.83391534 +0000 UTC m=+1.288683695" watchObservedRunningTime="2025-08-13 01:45:57.83409339 +0000 UTC m=+1.288861745" Aug 13 01:45:58.729891 kubelet[2790]: E0813 01:45:58.729849 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:45:58.730683 kubelet[2790]: E0813 01:45:58.730442 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:45:59.884592 kubelet[2790]: E0813 01:45:59.884493 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:46:01.142907 kubelet[2790]: I0813 01:46:01.142849 2790 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 01:46:01.143410 kubelet[2790]: I0813 01:46:01.143384 2790 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 01:46:01.143449 containerd[1556]: time="2025-08-13T01:46:01.143223245Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 01:46:01.943823 systemd[1]: Created slice kubepods-besteffort-pod67d61509_353a_43af_907e_0ee2cfd68dc7.slice - libcontainer container kubepods-besteffort-pod67d61509_353a_43af_907e_0ee2cfd68dc7.slice. 
Aug 13 01:46:01.947526 kubelet[2790]: W0813 01:46:01.947467 2790 helpers.go:245] readString: Failed to read "/sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67d61509_353a_43af_907e_0ee2cfd68dc7.slice/cpuset.cpus.effective": read /sys/fs/cgroup/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod67d61509_353a_43af_907e_0ee2cfd68dc7.slice/cpuset.cpus.effective: no such device Aug 13 01:46:01.977025 kubelet[2790]: I0813 01:46:01.976949 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/67d61509-353a-43af-907e-0ee2cfd68dc7-kube-proxy\") pod \"kube-proxy-mjdwx\" (UID: \"67d61509-353a-43af-907e-0ee2cfd68dc7\") " pod="kube-system/kube-proxy-mjdwx" Aug 13 01:46:01.977276 kubelet[2790]: I0813 01:46:01.977221 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hskmq\" (UniqueName: \"kubernetes.io/projected/67d61509-353a-43af-907e-0ee2cfd68dc7-kube-api-access-hskmq\") pod \"kube-proxy-mjdwx\" (UID: \"67d61509-353a-43af-907e-0ee2cfd68dc7\") " pod="kube-system/kube-proxy-mjdwx" Aug 13 01:46:01.977429 kubelet[2790]: I0813 01:46:01.977314 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/67d61509-353a-43af-907e-0ee2cfd68dc7-xtables-lock\") pod \"kube-proxy-mjdwx\" (UID: \"67d61509-353a-43af-907e-0ee2cfd68dc7\") " pod="kube-system/kube-proxy-mjdwx" Aug 13 01:46:01.977429 kubelet[2790]: I0813 01:46:01.977343 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/67d61509-353a-43af-907e-0ee2cfd68dc7-lib-modules\") pod \"kube-proxy-mjdwx\" (UID: \"67d61509-353a-43af-907e-0ee2cfd68dc7\") " pod="kube-system/kube-proxy-mjdwx" Aug 13 01:46:02.055941 systemd[1]: Created slice kubepods-besteffort-podd9ece072_77bc_4878_9cdf_811be2efec7d.slice - libcontainer container kubepods-besteffort-podd9ece072_77bc_4878_9cdf_811be2efec7d.slice. 
Aug 13 01:46:02.178392 kubelet[2790]: I0813 01:46:02.178317 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d9ece072-77bc-4878-9cdf-811be2efec7d-var-lib-calico\") pod \"tigera-operator-747864d56d-n7zrt\" (UID: \"d9ece072-77bc-4878-9cdf-811be2efec7d\") " pod="tigera-operator/tigera-operator-747864d56d-n7zrt" Aug 13 01:46:02.178392 kubelet[2790]: I0813 01:46:02.178390 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwpls\" (UniqueName: \"kubernetes.io/projected/d9ece072-77bc-4878-9cdf-811be2efec7d-kube-api-access-hwpls\") pod \"tigera-operator-747864d56d-n7zrt\" (UID: \"d9ece072-77bc-4878-9cdf-811be2efec7d\") " pod="tigera-operator/tigera-operator-747864d56d-n7zrt" Aug 13 01:46:02.252643 kubelet[2790]: E0813 01:46:02.252562 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:46:02.253407 containerd[1556]: time="2025-08-13T01:46:02.253323899Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mjdwx,Uid:67d61509-353a-43af-907e-0ee2cfd68dc7,Namespace:kube-system,Attempt:0,}" Aug 13 01:46:02.293018 containerd[1556]: time="2025-08-13T01:46:02.292396051Z" level=info msg="connecting to shim 29beb7298bc36dcae95fec8a27a3795971a53cdfe6abce9af7f444dc60415eac" address="unix:///run/containerd/s/67b6a7a3c83f7eaccb60762419b8ded71d30f56824fcacb092f9531ec57a5ee3" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:46:02.363143 containerd[1556]: time="2025-08-13T01:46:02.363098782Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-n7zrt,Uid:d9ece072-77bc-4878-9cdf-811be2efec7d,Namespace:tigera-operator,Attempt:0,}" Aug 13 01:46:02.367353 systemd[1]: Started cri-containerd-29beb7298bc36dcae95fec8a27a3795971a53cdfe6abce9af7f444dc60415eac.scope - libcontainer container 29beb7298bc36dcae95fec8a27a3795971a53cdfe6abce9af7f444dc60415eac. Aug 13 01:46:02.444444 containerd[1556]: time="2025-08-13T01:46:02.444275808Z" level=info msg="connecting to shim d93ba90d490463a283c631823074ee5ab3b226108e5851a0130f31c332b1132f" address="unix:///run/containerd/s/9a1e041fb25d47e7862e2334957eb56a57c0db7654f90685814673dfc2bd0561" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:46:02.466522 containerd[1556]: time="2025-08-13T01:46:02.466469302Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-mjdwx,Uid:67d61509-353a-43af-907e-0ee2cfd68dc7,Namespace:kube-system,Attempt:0,} returns sandbox id \"29beb7298bc36dcae95fec8a27a3795971a53cdfe6abce9af7f444dc60415eac\"" Aug 13 01:46:02.468564 kubelet[2790]: E0813 01:46:02.468528 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:46:02.477102 containerd[1556]: time="2025-08-13T01:46:02.477016895Z" level=info msg="CreateContainer within sandbox \"29beb7298bc36dcae95fec8a27a3795971a53cdfe6abce9af7f444dc60415eac\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 01:46:02.487919 systemd[1]: Started cri-containerd-d93ba90d490463a283c631823074ee5ab3b226108e5851a0130f31c332b1132f.scope - libcontainer container d93ba90d490463a283c631823074ee5ab3b226108e5851a0130f31c332b1132f. 
Aug 13 01:46:02.512520 containerd[1556]: time="2025-08-13T01:46:02.511434348Z" level=info msg="Container 81e52e943171741dd5182ebf33c08f4d53bb223cd9ae2fcb01fa836e3f6dc5f1: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:46:02.521175 containerd[1556]: time="2025-08-13T01:46:02.521051974Z" level=info msg="CreateContainer within sandbox \"29beb7298bc36dcae95fec8a27a3795971a53cdfe6abce9af7f444dc60415eac\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"81e52e943171741dd5182ebf33c08f4d53bb223cd9ae2fcb01fa836e3f6dc5f1\"" Aug 13 01:46:02.522711 containerd[1556]: time="2025-08-13T01:46:02.522662920Z" level=info msg="StartContainer for \"81e52e943171741dd5182ebf33c08f4d53bb223cd9ae2fcb01fa836e3f6dc5f1\"" Aug 13 01:46:02.529467 containerd[1556]: time="2025-08-13T01:46:02.529433003Z" level=info msg="connecting to shim 81e52e943171741dd5182ebf33c08f4d53bb223cd9ae2fcb01fa836e3f6dc5f1" address="unix:///run/containerd/s/67b6a7a3c83f7eaccb60762419b8ded71d30f56824fcacb092f9531ec57a5ee3" protocol=ttrpc version=3 Aug 13 01:46:02.607260 systemd[1]: Started cri-containerd-81e52e943171741dd5182ebf33c08f4d53bb223cd9ae2fcb01fa836e3f6dc5f1.scope - libcontainer container 81e52e943171741dd5182ebf33c08f4d53bb223cd9ae2fcb01fa836e3f6dc5f1. Aug 13 01:46:02.616675 containerd[1556]: time="2025-08-13T01:46:02.615636835Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-n7zrt,Uid:d9ece072-77bc-4878-9cdf-811be2efec7d,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"d93ba90d490463a283c631823074ee5ab3b226108e5851a0130f31c332b1132f\"" Aug 13 01:46:02.619204 containerd[1556]: time="2025-08-13T01:46:02.619007787Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 01:46:02.676810 containerd[1556]: time="2025-08-13T01:46:02.676772511Z" level=info msg="StartContainer for \"81e52e943171741dd5182ebf33c08f4d53bb223cd9ae2fcb01fa836e3f6dc5f1\" returns successfully" Aug 13 01:46:02.747232 kubelet[2790]: E0813 01:46:02.742857 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:46:03.497335 kubelet[2790]: E0813 01:46:03.497284 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:46:03.518972 kubelet[2790]: I0813 01:46:03.518864 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-mjdwx" podStartSLOduration=2.518821258 podStartE2EDuration="2.518821258s" podCreationTimestamp="2025-08-13 01:46:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:46:02.755947451 +0000 UTC m=+6.210715816" watchObservedRunningTime="2025-08-13 01:46:03.518821258 +0000 UTC m=+6.973589613" Aug 13 01:46:03.757927 kubelet[2790]: E0813 01:46:03.757535 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:46:03.788806 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2827852987.mount: Deactivated successfully. 
Aug 13 01:46:04.273507 kubelet[2790]: E0813 01:46:04.273471 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:46:04.757919 kubelet[2790]: E0813 01:46:04.757878 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:46:05.202200 containerd[1556]: time="2025-08-13T01:46:05.201295563Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:46:05.202200 containerd[1556]: time="2025-08-13T01:46:05.202075701Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Aug 13 01:46:05.203004 containerd[1556]: time="2025-08-13T01:46:05.202817279Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:46:05.204402 containerd[1556]: time="2025-08-13T01:46:05.204373236Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:46:05.205134 containerd[1556]: time="2025-08-13T01:46:05.205091005Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 2.585722109s" Aug 13 01:46:05.205214 containerd[1556]: time="2025-08-13T01:46:05.205136795Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 01:46:05.210087 containerd[1556]: time="2025-08-13T01:46:05.210015705Z" level=info msg="CreateContainer within sandbox \"d93ba90d490463a283c631823074ee5ab3b226108e5851a0130f31c332b1132f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 13 01:46:05.220827 containerd[1556]: time="2025-08-13T01:46:05.220764712Z" level=info msg="Container 773634732375f5beb6ed85668c63244e6d495ae9e3ea516f9fd9b8e924e33198: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:46:05.226918 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount66794432.mount: Deactivated successfully. 
Aug 13 01:46:05.230343 containerd[1556]: time="2025-08-13T01:46:05.230278782Z" level=info msg="CreateContainer within sandbox \"d93ba90d490463a283c631823074ee5ab3b226108e5851a0130f31c332b1132f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"773634732375f5beb6ed85668c63244e6d495ae9e3ea516f9fd9b8e924e33198\"" Aug 13 01:46:05.232011 containerd[1556]: time="2025-08-13T01:46:05.231914639Z" level=info msg="StartContainer for \"773634732375f5beb6ed85668c63244e6d495ae9e3ea516f9fd9b8e924e33198\"" Aug 13 01:46:05.233366 containerd[1556]: time="2025-08-13T01:46:05.233305656Z" level=info msg="connecting to shim 773634732375f5beb6ed85668c63244e6d495ae9e3ea516f9fd9b8e924e33198" address="unix:///run/containerd/s/9a1e041fb25d47e7862e2334957eb56a57c0db7654f90685814673dfc2bd0561" protocol=ttrpc version=3 Aug 13 01:46:05.304956 systemd[1]: Started cri-containerd-773634732375f5beb6ed85668c63244e6d495ae9e3ea516f9fd9b8e924e33198.scope - libcontainer container 773634732375f5beb6ed85668c63244e6d495ae9e3ea516f9fd9b8e924e33198. Aug 13 01:46:05.460264 containerd[1556]: time="2025-08-13T01:46:05.459605086Z" level=info msg="StartContainer for \"773634732375f5beb6ed85668c63244e6d495ae9e3ea516f9fd9b8e924e33198\" returns successfully" Aug 13 01:46:05.782634 kubelet[2790]: I0813 01:46:05.782494 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-n7zrt" podStartSLOduration=1.194026761 podStartE2EDuration="3.782417574s" podCreationTimestamp="2025-08-13 01:46:02 +0000 UTC" firstStartedPulling="2025-08-13 01:46:02.618117089 +0000 UTC m=+6.072885444" lastFinishedPulling="2025-08-13 01:46:05.206507902 +0000 UTC m=+8.661276257" observedRunningTime="2025-08-13 01:46:05.781489306 +0000 UTC m=+9.236257661" watchObservedRunningTime="2025-08-13 01:46:05.782417574 +0000 UTC m=+9.237185929" Aug 13 01:46:09.898915 kubelet[2790]: E0813 01:46:09.898143 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:46:10.781634 kubelet[2790]: E0813 01:46:10.781473 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:46:13.713436 sudo[1837]: pam_unix(sudo:session): session closed for user root Aug 13 01:46:13.766968 sshd[1836]: Connection closed by 147.75.109.163 port 60760 Aug 13 01:46:13.769798 sshd-session[1834]: pam_unix(sshd:session): session closed for user core Aug 13 01:46:13.782537 systemd-logind[1532]: Session 9 logged out. Waiting for processes to exit. Aug 13 01:46:13.783545 systemd[1]: sshd@8-172.232.7.67:22-147.75.109.163:60760.service: Deactivated successfully. Aug 13 01:46:13.793035 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 01:46:13.794352 systemd[1]: session-9.scope: Consumed 7.954s CPU time, 229.2M memory peak. Aug 13 01:46:13.802909 systemd-logind[1532]: Removed session 9. 
Aug 13 01:46:18.126592 kubelet[2790]: I0813 01:46:18.126470 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pwzbd\" (UniqueName: \"kubernetes.io/projected/150d4c66-1467-4050-82d1-dc5cd0347f95-kube-api-access-pwzbd\") pod \"calico-typha-64bcb76cdd-m4xlg\" (UID: \"150d4c66-1467-4050-82d1-dc5cd0347f95\") " pod="calico-system/calico-typha-64bcb76cdd-m4xlg" Aug 13 01:46:18.127454 kubelet[2790]: I0813 01:46:18.126604 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/150d4c66-1467-4050-82d1-dc5cd0347f95-tigera-ca-bundle\") pod \"calico-typha-64bcb76cdd-m4xlg\" (UID: \"150d4c66-1467-4050-82d1-dc5cd0347f95\") " pod="calico-system/calico-typha-64bcb76cdd-m4xlg" Aug 13 01:46:18.127454 kubelet[2790]: I0813 01:46:18.126653 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/150d4c66-1467-4050-82d1-dc5cd0347f95-typha-certs\") pod \"calico-typha-64bcb76cdd-m4xlg\" (UID: \"150d4c66-1467-4050-82d1-dc5cd0347f95\") " pod="calico-system/calico-typha-64bcb76cdd-m4xlg" Aug 13 01:46:18.131399 systemd[1]: Created slice kubepods-besteffort-pod150d4c66_1467_4050_82d1_dc5cd0347f95.slice - libcontainer container kubepods-besteffort-pod150d4c66_1467_4050_82d1_dc5cd0347f95.slice. Aug 13 01:46:18.440930 kubelet[2790]: E0813 01:46:18.440491 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:46:18.443053 containerd[1556]: time="2025-08-13T01:46:18.442962322Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64bcb76cdd-m4xlg,Uid:150d4c66-1467-4050-82d1-dc5cd0347f95,Namespace:calico-system,Attempt:0,}" Aug 13 01:46:18.518270 containerd[1556]: time="2025-08-13T01:46:18.518101134Z" level=info msg="connecting to shim a6df34dc1e5476403a75a033586a859ed0d1bbd1b6a5361e6314f40369ee5a54" address="unix:///run/containerd/s/3f15f93db1ea5bba63e763b12ea969aa8f8737d205498b0b55266dfe73853610" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:46:18.597900 systemd[1]: Created slice kubepods-besteffort-pod517ffc51_1a34_4ced_acf5_d8e5da6a1838.slice - libcontainer container kubepods-besteffort-pod517ffc51_1a34_4ced_acf5_d8e5da6a1838.slice. Aug 13 01:46:18.641194 systemd[1]: Started cri-containerd-a6df34dc1e5476403a75a033586a859ed0d1bbd1b6a5361e6314f40369ee5a54.scope - libcontainer container a6df34dc1e5476403a75a033586a859ed0d1bbd1b6a5361e6314f40369ee5a54. 
Aug 13 01:46:18.739708 containerd[1556]: time="2025-08-13T01:46:18.739649485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-64bcb76cdd-m4xlg,Uid:150d4c66-1467-4050-82d1-dc5cd0347f95,Namespace:calico-system,Attempt:0,} returns sandbox id \"a6df34dc1e5476403a75a033586a859ed0d1bbd1b6a5361e6314f40369ee5a54\"" Aug 13 01:46:18.740845 kubelet[2790]: E0813 01:46:18.740695 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:46:18.741346 kubelet[2790]: I0813 01:46:18.741319 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/517ffc51-1a34-4ced-acf5-d8e5da6a1838-lib-modules\") pod \"calico-node-tsmrf\" (UID: \"517ffc51-1a34-4ced-acf5-d8e5da6a1838\") " pod="calico-system/calico-node-tsmrf" Aug 13 01:46:18.741346 kubelet[2790]: I0813 01:46:18.741353 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/517ffc51-1a34-4ced-acf5-d8e5da6a1838-tigera-ca-bundle\") pod \"calico-node-tsmrf\" (UID: \"517ffc51-1a34-4ced-acf5-d8e5da6a1838\") " pod="calico-system/calico-node-tsmrf" Aug 13 01:46:18.741452 kubelet[2790]: I0813 01:46:18.741373 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/517ffc51-1a34-4ced-acf5-d8e5da6a1838-cni-bin-dir\") pod \"calico-node-tsmrf\" (UID: \"517ffc51-1a34-4ced-acf5-d8e5da6a1838\") " pod="calico-system/calico-node-tsmrf" Aug 13 01:46:18.741452 kubelet[2790]: I0813 01:46:18.741388 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/517ffc51-1a34-4ced-acf5-d8e5da6a1838-cni-net-dir\") pod \"calico-node-tsmrf\" (UID: \"517ffc51-1a34-4ced-acf5-d8e5da6a1838\") " pod="calico-system/calico-node-tsmrf" Aug 13 01:46:18.741452 kubelet[2790]: I0813 01:46:18.741404 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/517ffc51-1a34-4ced-acf5-d8e5da6a1838-policysync\") pod \"calico-node-tsmrf\" (UID: \"517ffc51-1a34-4ced-acf5-d8e5da6a1838\") " pod="calico-system/calico-node-tsmrf" Aug 13 01:46:18.741452 kubelet[2790]: I0813 01:46:18.741426 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/517ffc51-1a34-4ced-acf5-d8e5da6a1838-var-lib-calico\") pod \"calico-node-tsmrf\" (UID: \"517ffc51-1a34-4ced-acf5-d8e5da6a1838\") " pod="calico-system/calico-node-tsmrf" Aug 13 01:46:18.741639 kubelet[2790]: I0813 01:46:18.741447 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/517ffc51-1a34-4ced-acf5-d8e5da6a1838-var-run-calico\") pod \"calico-node-tsmrf\" (UID: \"517ffc51-1a34-4ced-acf5-d8e5da6a1838\") " pod="calico-system/calico-node-tsmrf" Aug 13 01:46:18.741639 kubelet[2790]: I0813 01:46:18.741476 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/517ffc51-1a34-4ced-acf5-d8e5da6a1838-node-certs\") pod 
\"calico-node-tsmrf\" (UID: \"517ffc51-1a34-4ced-acf5-d8e5da6a1838\") " pod="calico-system/calico-node-tsmrf" Aug 13 01:46:18.741639 kubelet[2790]: I0813 01:46:18.741521 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/517ffc51-1a34-4ced-acf5-d8e5da6a1838-xtables-lock\") pod \"calico-node-tsmrf\" (UID: \"517ffc51-1a34-4ced-acf5-d8e5da6a1838\") " pod="calico-system/calico-node-tsmrf" Aug 13 01:46:18.741639 kubelet[2790]: I0813 01:46:18.741541 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/517ffc51-1a34-4ced-acf5-d8e5da6a1838-cni-log-dir\") pod \"calico-node-tsmrf\" (UID: \"517ffc51-1a34-4ced-acf5-d8e5da6a1838\") " pod="calico-system/calico-node-tsmrf" Aug 13 01:46:18.741639 kubelet[2790]: I0813 01:46:18.741557 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/517ffc51-1a34-4ced-acf5-d8e5da6a1838-flexvol-driver-host\") pod \"calico-node-tsmrf\" (UID: \"517ffc51-1a34-4ced-acf5-d8e5da6a1838\") " pod="calico-system/calico-node-tsmrf" Aug 13 01:46:18.742499 kubelet[2790]: I0813 01:46:18.741579 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hzvb2\" (UniqueName: \"kubernetes.io/projected/517ffc51-1a34-4ced-acf5-d8e5da6a1838-kube-api-access-hzvb2\") pod \"calico-node-tsmrf\" (UID: \"517ffc51-1a34-4ced-acf5-d8e5da6a1838\") " pod="calico-system/calico-node-tsmrf" Aug 13 01:46:18.743659 containerd[1556]: time="2025-08-13T01:46:18.743528482Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 13 01:46:18.834276 kubelet[2790]: E0813 01:46:18.834202 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c7jrc" podUID="4296a7ed-e75a-4d74-935a-9017b9a86286" Aug 13 01:46:18.843946 kubelet[2790]: E0813 01:46:18.843830 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.843946 kubelet[2790]: W0813 01:46:18.843884 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.844178 kubelet[2790]: E0813 01:46:18.844156 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.845198 kubelet[2790]: E0813 01:46:18.845155 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.845198 kubelet[2790]: W0813 01:46:18.845169 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.845198 kubelet[2790]: E0813 01:46:18.845182 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:46:18.846104 kubelet[2790]: E0813 01:46:18.845944 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.846104 kubelet[2790]: W0813 01:46:18.845982 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.846803 kubelet[2790]: E0813 01:46:18.846780 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.848153 kubelet[2790]: E0813 01:46:18.848048 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.848574 kubelet[2790]: W0813 01:46:18.848439 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.848574 kubelet[2790]: E0813 01:46:18.848458 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.850329 kubelet[2790]: E0813 01:46:18.850114 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.850329 kubelet[2790]: W0813 01:46:18.850128 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.850329 kubelet[2790]: E0813 01:46:18.850140 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.850719 kubelet[2790]: E0813 01:46:18.850579 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.850719 kubelet[2790]: W0813 01:46:18.850632 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.850719 kubelet[2790]: E0813 01:46:18.850644 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.851681 kubelet[2790]: E0813 01:46:18.851553 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.852272 kubelet[2790]: W0813 01:46:18.852183 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.852272 kubelet[2790]: E0813 01:46:18.852206 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:46:18.852974 kubelet[2790]: E0813 01:46:18.852941 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.853114 kubelet[2790]: W0813 01:46:18.853058 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.853114 kubelet[2790]: E0813 01:46:18.853074 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.853713 kubelet[2790]: E0813 01:46:18.853694 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.853852 kubelet[2790]: W0813 01:46:18.853767 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.853852 kubelet[2790]: E0813 01:46:18.853779 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.855085 kubelet[2790]: E0813 01:46:18.854916 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.855085 kubelet[2790]: W0813 01:46:18.854930 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.855581 kubelet[2790]: E0813 01:46:18.854940 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.856389 kubelet[2790]: E0813 01:46:18.856373 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.856695 kubelet[2790]: W0813 01:46:18.856679 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.856927 kubelet[2790]: E0813 01:46:18.856912 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.857704 kubelet[2790]: E0813 01:46:18.857667 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.857871 kubelet[2790]: W0813 01:46:18.857841 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.858093 kubelet[2790]: E0813 01:46:18.858026 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:46:18.859021 kubelet[2790]: E0813 01:46:18.858716 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.859225 kubelet[2790]: W0813 01:46:18.859188 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.859225 kubelet[2790]: E0813 01:46:18.859211 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.859889 kubelet[2790]: E0813 01:46:18.859874 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.860034 kubelet[2790]: W0813 01:46:18.860020 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.860199 kubelet[2790]: E0813 01:46:18.860097 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.860492 kubelet[2790]: E0813 01:46:18.860444 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.860492 kubelet[2790]: W0813 01:46:18.860456 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.860492 kubelet[2790]: E0813 01:46:18.860465 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.861208 kubelet[2790]: E0813 01:46:18.861150 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.861208 kubelet[2790]: W0813 01:46:18.861163 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.861208 kubelet[2790]: E0813 01:46:18.861173 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.861630 kubelet[2790]: E0813 01:46:18.861595 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.861630 kubelet[2790]: W0813 01:46:18.861607 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.861630 kubelet[2790]: E0813 01:46:18.861617 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:46:18.862112 kubelet[2790]: E0813 01:46:18.862079 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.862112 kubelet[2790]: W0813 01:46:18.862091 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.862112 kubelet[2790]: E0813 01:46:18.862100 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.862561 kubelet[2790]: E0813 01:46:18.862506 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.862561 kubelet[2790]: W0813 01:46:18.862518 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.862561 kubelet[2790]: E0813 01:46:18.862539 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.863105 kubelet[2790]: E0813 01:46:18.863072 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.863105 kubelet[2790]: W0813 01:46:18.863083 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.863105 kubelet[2790]: E0813 01:46:18.863093 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.863536 kubelet[2790]: E0813 01:46:18.863490 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.863536 kubelet[2790]: W0813 01:46:18.863502 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.863536 kubelet[2790]: E0813 01:46:18.863511 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.863975 kubelet[2790]: E0813 01:46:18.863940 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.863975 kubelet[2790]: W0813 01:46:18.863952 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.863975 kubelet[2790]: E0813 01:46:18.863963 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:46:18.864391 kubelet[2790]: E0813 01:46:18.864347 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.864391 kubelet[2790]: W0813 01:46:18.864370 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.864391 kubelet[2790]: E0813 01:46:18.864379 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.864880 kubelet[2790]: E0813 01:46:18.864830 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.864880 kubelet[2790]: W0813 01:46:18.864843 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.864880 kubelet[2790]: E0813 01:46:18.864851 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.865386 kubelet[2790]: E0813 01:46:18.865373 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.865522 kubelet[2790]: W0813 01:46:18.865461 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.865522 kubelet[2790]: E0813 01:46:18.865489 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.865883 kubelet[2790]: E0813 01:46:18.865870 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.865937 kubelet[2790]: W0813 01:46:18.865927 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.866021 kubelet[2790]: E0813 01:46:18.865998 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.866505 kubelet[2790]: E0813 01:46:18.866448 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.866505 kubelet[2790]: W0813 01:46:18.866460 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.866505 kubelet[2790]: E0813 01:46:18.866469 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:46:18.867095 kubelet[2790]: E0813 01:46:18.867076 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.867194 kubelet[2790]: W0813 01:46:18.867164 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.867194 kubelet[2790]: E0813 01:46:18.867175 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.867632 kubelet[2790]: E0813 01:46:18.867609 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.867731 kubelet[2790]: W0813 01:46:18.867679 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.867731 kubelet[2790]: E0813 01:46:18.867705 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.868184 kubelet[2790]: E0813 01:46:18.868171 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.868280 kubelet[2790]: W0813 01:46:18.868244 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.868280 kubelet[2790]: E0813 01:46:18.868257 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.868617 kubelet[2790]: E0813 01:46:18.868606 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.868725 kubelet[2790]: W0813 01:46:18.868666 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.868725 kubelet[2790]: E0813 01:46:18.868679 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.869304 kubelet[2790]: E0813 01:46:18.869060 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.869304 kubelet[2790]: W0813 01:46:18.869072 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.869304 kubelet[2790]: E0813 01:46:18.869281 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:46:18.869827 kubelet[2790]: E0813 01:46:18.869776 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.869919 kubelet[2790]: W0813 01:46:18.869906 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.869970 kubelet[2790]: E0813 01:46:18.869959 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.870520 kubelet[2790]: E0813 01:46:18.870506 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.870617 kubelet[2790]: W0813 01:46:18.870604 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.870728 kubelet[2790]: E0813 01:46:18.870659 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.871067 kubelet[2790]: E0813 01:46:18.871044 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.871178 kubelet[2790]: W0813 01:46:18.871124 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.871178 kubelet[2790]: E0813 01:46:18.871155 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.871513 kubelet[2790]: E0813 01:46:18.871480 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.871513 kubelet[2790]: W0813 01:46:18.871492 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.871513 kubelet[2790]: E0813 01:46:18.871500 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.871980 kubelet[2790]: E0813 01:46:18.871955 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.872084 kubelet[2790]: W0813 01:46:18.872028 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.872084 kubelet[2790]: E0813 01:46:18.872061 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:46:18.872523 kubelet[2790]: E0813 01:46:18.872468 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.872523 kubelet[2790]: W0813 01:46:18.872480 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.872523 kubelet[2790]: E0813 01:46:18.872489 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.873026 kubelet[2790]: E0813 01:46:18.872984 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.873169 kubelet[2790]: W0813 01:46:18.873097 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.873169 kubelet[2790]: E0813 01:46:18.873113 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.873535 kubelet[2790]: E0813 01:46:18.873487 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.873535 kubelet[2790]: W0813 01:46:18.873512 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.873535 kubelet[2790]: E0813 01:46:18.873522 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.873977 kubelet[2790]: E0813 01:46:18.873945 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.873977 kubelet[2790]: W0813 01:46:18.873956 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.873977 kubelet[2790]: E0813 01:46:18.873965 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.874392 kubelet[2790]: E0813 01:46:18.874346 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.874392 kubelet[2790]: W0813 01:46:18.874370 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.874392 kubelet[2790]: E0813 01:46:18.874380 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:46:18.875125 kubelet[2790]: E0813 01:46:18.875111 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.875230 kubelet[2790]: W0813 01:46:18.875216 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.875286 kubelet[2790]: E0813 01:46:18.875274 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.875876 kubelet[2790]: E0813 01:46:18.875766 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.875992 kubelet[2790]: W0813 01:46:18.875978 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.876067 kubelet[2790]: E0813 01:46:18.876055 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.876554 kubelet[2790]: E0813 01:46:18.876439 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.876554 kubelet[2790]: W0813 01:46:18.876451 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.876554 kubelet[2790]: E0813 01:46:18.876461 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.877832 kubelet[2790]: E0813 01:46:18.877816 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.878004 kubelet[2790]: W0813 01:46:18.877883 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.878004 kubelet[2790]: E0813 01:46:18.877899 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.878132 kubelet[2790]: E0813 01:46:18.878119 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.878294 kubelet[2790]: W0813 01:46:18.878178 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.878294 kubelet[2790]: E0813 01:46:18.878192 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:46:18.878410 kubelet[2790]: E0813 01:46:18.878398 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.878470 kubelet[2790]: W0813 01:46:18.878458 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.878517 kubelet[2790]: E0813 01:46:18.878506 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.878885 kubelet[2790]: E0813 01:46:18.878781 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.878885 kubelet[2790]: W0813 01:46:18.878793 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.878885 kubelet[2790]: E0813 01:46:18.878802 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.880226 kubelet[2790]: E0813 01:46:18.879924 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.881935 kubelet[2790]: W0813 01:46:18.881819 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.881935 kubelet[2790]: E0813 01:46:18.881838 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.882348 kubelet[2790]: E0813 01:46:18.882226 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.882535 kubelet[2790]: W0813 01:46:18.882521 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.882702 kubelet[2790]: E0813 01:46:18.882688 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.884489 kubelet[2790]: E0813 01:46:18.884473 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.884552 kubelet[2790]: W0813 01:46:18.884540 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.884872 kubelet[2790]: E0813 01:46:18.884778 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:46:18.885842 kubelet[2790]: E0813 01:46:18.885826 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.886044 kubelet[2790]: W0813 01:46:18.885960 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.886044 kubelet[2790]: E0813 01:46:18.885977 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.886346 kubelet[2790]: E0813 01:46:18.886332 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.886509 kubelet[2790]: W0813 01:46:18.886394 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.886509 kubelet[2790]: E0813 01:46:18.886406 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.887005 kubelet[2790]: E0813 01:46:18.886921 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.887005 kubelet[2790]: W0813 01:46:18.886933 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.887005 kubelet[2790]: E0813 01:46:18.886942 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.887614 kubelet[2790]: E0813 01:46:18.887555 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.887614 kubelet[2790]: W0813 01:46:18.887568 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.887614 kubelet[2790]: E0813 01:46:18.887578 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.888952 kubelet[2790]: E0813 01:46:18.888912 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.888952 kubelet[2790]: W0813 01:46:18.888925 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.888952 kubelet[2790]: E0813 01:46:18.888937 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:46:18.889362 kubelet[2790]: E0813 01:46:18.889327 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.889362 kubelet[2790]: W0813 01:46:18.889340 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.889362 kubelet[2790]: E0813 01:46:18.889349 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.889811 kubelet[2790]: E0813 01:46:18.889766 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.889811 kubelet[2790]: W0813 01:46:18.889789 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.889811 kubelet[2790]: E0813 01:46:18.889798 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.890410 kubelet[2790]: E0813 01:46:18.890337 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.890410 kubelet[2790]: W0813 01:46:18.890350 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.890410 kubelet[2790]: E0813 01:46:18.890359 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.890884 kubelet[2790]: E0813 01:46:18.890823 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.891099 kubelet[2790]: W0813 01:46:18.891082 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.891252 kubelet[2790]: E0813 01:46:18.891231 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.892049 kubelet[2790]: E0813 01:46:18.891943 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.892049 kubelet[2790]: W0813 01:46:18.891958 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.892049 kubelet[2790]: E0813 01:46:18.891968 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:46:18.892306 kubelet[2790]: E0813 01:46:18.892294 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.892465 kubelet[2790]: W0813 01:46:18.892364 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.892465 kubelet[2790]: E0813 01:46:18.892377 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.893179 kubelet[2790]: E0813 01:46:18.893163 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.893346 kubelet[2790]: W0813 01:46:18.893221 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.893346 kubelet[2790]: E0813 01:46:18.893234 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.894024 kubelet[2790]: E0813 01:46:18.893991 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.894182 kubelet[2790]: W0813 01:46:18.894088 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.894182 kubelet[2790]: E0813 01:46:18.894105 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.894797 kubelet[2790]: E0813 01:46:18.894715 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.894901 kubelet[2790]: W0813 01:46:18.894850 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.895175 kubelet[2790]: E0813 01:46:18.894942 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.895865 kubelet[2790]: E0813 01:46:18.895716 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.895865 kubelet[2790]: W0813 01:46:18.895762 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.895865 kubelet[2790]: E0813 01:46:18.895775 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:46:18.899930 kubelet[2790]: E0813 01:46:18.899914 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.900167 kubelet[2790]: W0813 01:46:18.899998 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.900167 kubelet[2790]: E0813 01:46:18.900012 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.917773 containerd[1556]: time="2025-08-13T01:46:18.917690775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tsmrf,Uid:517ffc51-1a34-4ced-acf5-d8e5da6a1838,Namespace:calico-system,Attempt:0,}" Aug 13 01:46:18.936941 containerd[1556]: time="2025-08-13T01:46:18.936729298Z" level=info msg="connecting to shim 19f136ecf27677a48dbcfabc2d82ff742c8f56ba9769515ce641ecafbfefd0f8" address="unix:///run/containerd/s/c15a429e3e4a3dd148596df52b2faebcc1e6f8eb8c8eb6510ce6f33ab5356af0" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:46:18.943873 kubelet[2790]: E0813 01:46:18.943831 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.944372 kubelet[2790]: W0813 01:46:18.944327 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.944795 kubelet[2790]: E0813 01:46:18.944774 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.945026 kubelet[2790]: I0813 01:46:18.945001 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/4296a7ed-e75a-4d74-935a-9017b9a86286-registration-dir\") pod \"csi-node-driver-c7jrc\" (UID: \"4296a7ed-e75a-4d74-935a-9017b9a86286\") " pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:46:18.946660 kubelet[2790]: E0813 01:46:18.946623 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.947634 kubelet[2790]: W0813 01:46:18.947605 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.947634 kubelet[2790]: E0813 01:46:18.947632 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:46:18.947729 kubelet[2790]: I0813 01:46:18.947680 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/4296a7ed-e75a-4d74-935a-9017b9a86286-socket-dir\") pod \"csi-node-driver-c7jrc\" (UID: \"4296a7ed-e75a-4d74-935a-9017b9a86286\") " pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:46:18.948066 kubelet[2790]: E0813 01:46:18.947960 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.948600 kubelet[2790]: W0813 01:46:18.947974 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.948801 kubelet[2790]: E0813 01:46:18.948605 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.948801 kubelet[2790]: I0813 01:46:18.948679 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/4296a7ed-e75a-4d74-935a-9017b9a86286-varrun\") pod \"csi-node-driver-c7jrc\" (UID: \"4296a7ed-e75a-4d74-935a-9017b9a86286\") " pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:46:18.949345 kubelet[2790]: E0813 01:46:18.949279 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.949345 kubelet[2790]: W0813 01:46:18.949292 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.949345 kubelet[2790]: E0813 01:46:18.949303 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.950929 kubelet[2790]: E0813 01:46:18.950832 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.950929 kubelet[2790]: W0813 01:46:18.950848 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.950929 kubelet[2790]: E0813 01:46:18.950890 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.951150 kubelet[2790]: E0813 01:46:18.951120 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.951150 kubelet[2790]: W0813 01:46:18.951129 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.951150 kubelet[2790]: E0813 01:46:18.951138 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:46:18.953765 kubelet[2790]: I0813 01:46:18.952882 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v76c4\" (UniqueName: \"kubernetes.io/projected/4296a7ed-e75a-4d74-935a-9017b9a86286-kube-api-access-v76c4\") pod \"csi-node-driver-c7jrc\" (UID: \"4296a7ed-e75a-4d74-935a-9017b9a86286\") " pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:46:18.953765 kubelet[2790]: E0813 01:46:18.952971 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.953765 kubelet[2790]: W0813 01:46:18.953000 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.953765 kubelet[2790]: E0813 01:46:18.953011 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.953765 kubelet[2790]: E0813 01:46:18.953251 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.953765 kubelet[2790]: W0813 01:46:18.953260 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.953765 kubelet[2790]: E0813 01:46:18.953269 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.953944 kubelet[2790]: E0813 01:46:18.953794 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.953944 kubelet[2790]: W0813 01:46:18.953807 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.953944 kubelet[2790]: E0813 01:46:18.953817 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.954541 kubelet[2790]: E0813 01:46:18.954499 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.954802 kubelet[2790]: W0813 01:46:18.954780 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.954864 kubelet[2790]: E0813 01:46:18.954804 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:46:18.955367 kubelet[2790]: E0813 01:46:18.955348 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.955367 kubelet[2790]: W0813 01:46:18.955362 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.955426 kubelet[2790]: E0813 01:46:18.955372 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.956572 kubelet[2790]: E0813 01:46:18.956359 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.956572 kubelet[2790]: W0813 01:46:18.956377 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.956572 kubelet[2790]: E0813 01:46:18.956387 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.957273 kubelet[2790]: E0813 01:46:18.956919 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.957386 kubelet[2790]: W0813 01:46:18.957369 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.957492 kubelet[2790]: E0813 01:46:18.957456 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.958012 kubelet[2790]: I0813 01:46:18.957918 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4296a7ed-e75a-4d74-935a-9017b9a86286-kubelet-dir\") pod \"csi-node-driver-c7jrc\" (UID: \"4296a7ed-e75a-4d74-935a-9017b9a86286\") " pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:46:18.959074 kubelet[2790]: E0813 01:46:18.958935 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.959074 kubelet[2790]: W0813 01:46:18.958949 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.959531 kubelet[2790]: E0813 01:46:18.959396 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:46:18.960339 kubelet[2790]: E0813 01:46:18.960088 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:18.960605 kubelet[2790]: W0813 01:46:18.960493 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:18.960605 kubelet[2790]: E0813 01:46:18.960516 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:18.984905 systemd[1]: Started cri-containerd-19f136ecf27677a48dbcfabc2d82ff742c8f56ba9769515ce641ecafbfefd0f8.scope - libcontainer container 19f136ecf27677a48dbcfabc2d82ff742c8f56ba9769515ce641ecafbfefd0f8. Aug 13 01:46:19.032678 containerd[1556]: time="2025-08-13T01:46:19.030099996Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-tsmrf,Uid:517ffc51-1a34-4ced-acf5-d8e5da6a1838,Namespace:calico-system,Attempt:0,} returns sandbox id \"19f136ecf27677a48dbcfabc2d82ff742c8f56ba9769515ce641ecafbfefd0f8\"" Aug 13 01:46:19.059541 kubelet[2790]: E0813 01:46:19.059514 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:19.059696 kubelet[2790]: W0813 01:46:19.059551 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:19.059696 kubelet[2790]: E0813 01:46:19.059570 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:19.059948 kubelet[2790]: E0813 01:46:19.059929 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:19.059948 kubelet[2790]: W0813 01:46:19.059941 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:19.060085 kubelet[2790]: E0813 01:46:19.060013 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:19.060469 kubelet[2790]: E0813 01:46:19.060455 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:19.060469 kubelet[2790]: W0813 01:46:19.060466 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:19.060526 kubelet[2790]: E0813 01:46:19.060475 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:46:19.060874 kubelet[2790]: E0813 01:46:19.060855 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:19.060874 kubelet[2790]: W0813 01:46:19.060870 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:19.060947 kubelet[2790]: E0813 01:46:19.060883 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:19.061326 kubelet[2790]: E0813 01:46:19.061310 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:19.061428 kubelet[2790]: W0813 01:46:19.061404 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:19.061428 kubelet[2790]: E0813 01:46:19.061423 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:19.061688 kubelet[2790]: E0813 01:46:19.061657 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:19.061688 kubelet[2790]: W0813 01:46:19.061682 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:19.061791 kubelet[2790]: E0813 01:46:19.061690 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:19.061952 kubelet[2790]: E0813 01:46:19.061939 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:19.061952 kubelet[2790]: W0813 01:46:19.061950 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:19.062004 kubelet[2790]: E0813 01:46:19.061959 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:19.062221 kubelet[2790]: E0813 01:46:19.062206 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:19.062221 kubelet[2790]: W0813 01:46:19.062218 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:19.062296 kubelet[2790]: E0813 01:46:19.062238 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:46:19.062447 kubelet[2790]: E0813 01:46:19.062431 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:19.062447 kubelet[2790]: W0813 01:46:19.062443 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:19.062501 kubelet[2790]: E0813 01:46:19.062451 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:19.062714 kubelet[2790]: E0813 01:46:19.062701 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:19.062714 kubelet[2790]: W0813 01:46:19.062711 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:19.062819 kubelet[2790]: E0813 01:46:19.062719 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:19.062964 kubelet[2790]: E0813 01:46:19.062948 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:19.062964 kubelet[2790]: W0813 01:46:19.062961 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:19.063019 kubelet[2790]: E0813 01:46:19.062969 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:19.064476 kubelet[2790]: E0813 01:46:19.064456 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:19.064476 kubelet[2790]: W0813 01:46:19.064473 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:19.064659 kubelet[2790]: E0813 01:46:19.064490 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:19.064812 kubelet[2790]: E0813 01:46:19.064780 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:19.064812 kubelet[2790]: W0813 01:46:19.064793 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:19.064812 kubelet[2790]: E0813 01:46:19.064803 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:46:19.065445 kubelet[2790]: E0813 01:46:19.065430 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:19.065445 kubelet[2790]: W0813 01:46:19.065443 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:19.065540 kubelet[2790]: E0813 01:46:19.065454 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:19.066889 kubelet[2790]: E0813 01:46:19.066873 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:19.066889 kubelet[2790]: W0813 01:46:19.066886 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:19.067106 kubelet[2790]: E0813 01:46:19.066896 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:19.067140 kubelet[2790]: E0813 01:46:19.067110 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:19.067140 kubelet[2790]: W0813 01:46:19.067119 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:19.067140 kubelet[2790]: E0813 01:46:19.067127 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:19.067351 kubelet[2790]: E0813 01:46:19.067334 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:19.067351 kubelet[2790]: W0813 01:46:19.067348 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:19.067462 kubelet[2790]: E0813 01:46:19.067356 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:19.067604 kubelet[2790]: E0813 01:46:19.067589 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:19.067604 kubelet[2790]: W0813 01:46:19.067602 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:19.067737 kubelet[2790]: E0813 01:46:19.067611 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:46:19.067941 kubelet[2790]: E0813 01:46:19.067911 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:19.067941 kubelet[2790]: W0813 01:46:19.067922 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:19.067941 kubelet[2790]: E0813 01:46:19.067929 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:19.069246 kubelet[2790]: E0813 01:46:19.068903 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:19.069246 kubelet[2790]: W0813 01:46:19.068916 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:19.069246 kubelet[2790]: E0813 01:46:19.068928 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:19.069246 kubelet[2790]: E0813 01:46:19.069083 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:19.069246 kubelet[2790]: W0813 01:46:19.069091 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:19.069246 kubelet[2790]: E0813 01:46:19.069099 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:19.069246 kubelet[2790]: E0813 01:46:19.069243 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:19.069246 kubelet[2790]: W0813 01:46:19.069251 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:19.069453 kubelet[2790]: E0813 01:46:19.069259 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:19.070846 kubelet[2790]: E0813 01:46:19.070824 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:19.070846 kubelet[2790]: W0813 01:46:19.070841 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:19.070920 kubelet[2790]: E0813 01:46:19.070851 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:46:19.071174 kubelet[2790]: E0813 01:46:19.071057 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:19.071174 kubelet[2790]: W0813 01:46:19.071071 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:19.071174 kubelet[2790]: E0813 01:46:19.071082 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:19.072357 kubelet[2790]: E0813 01:46:19.072341 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:19.072567 kubelet[2790]: W0813 01:46:19.072488 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:19.072567 kubelet[2790]: E0813 01:46:19.072504 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:19.073381 kubelet[2790]: E0813 01:46:19.073366 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:19.073474 kubelet[2790]: W0813 01:46:19.073438 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:19.073474 kubelet[2790]: E0813 01:46:19.073452 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:19.356607 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3830281030.mount: Deactivated successfully. 
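The repeating kubelet errors above are FlexVolume dynamic plugin probing: the kubelet walks /opt/libexec/kubernetes/kubelet-plugins/volume/exec/, finds the nodeagent~uds directory, execs the driver binary uds with the argument init, and tries to unmarshal the driver's stdout as JSON. Because that binary is not installed yet, the exec fails ("executable file not found in $PATH"), stdout is empty, and unmarshalling an empty string yields "unexpected end of JSON input"; the probe is retried, which is why the same three lines recur. Calico's pod2daemon-flexvol image, pulled further down in this log, normally installs that driver, after which the probe stops failing. As a minimal sketch only (this is not the real nodeagent~uds driver), a driver that would satisfy the init handshake could look like the following; the JSON shape follows the documented FlexVolume driver status output:

// flexvol_stub.go - hypothetical stand-in for <plugin-dir>/nodeagent~uds/uds.
// For the "init" verb the kubelet expects a JSON status object on stdout.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

type driverStatus struct {
	Status       string          `json:"status"`
	Message      string          `json:"message,omitempty"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) < 2 {
		os.Exit(1)
	}
	switch os.Args[1] {
	case "init":
		// Report success and declare that attach/detach is not implemented.
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false},
		})
		fmt.Println(string(out))
	default:
		// Any other verb is reported as unsupported.
		out, _ := json.Marshal(driverStatus{Status: "Not supported"})
		fmt.Println(string(out))
	}
}

Installing such a binary as uds in the nodeagent~uds directory (or removing the empty directory) would likely quiet the probe; in this boot the condition resolves on its own once calico-node's init containers install the real driver.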
Aug 13 01:46:20.718949 kubelet[2790]: E0813 01:46:20.718673 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c7jrc" podUID="4296a7ed-e75a-4d74-935a-9017b9a86286" Aug 13 01:46:21.240723 containerd[1556]: time="2025-08-13T01:46:21.240131021Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:46:21.241558 containerd[1556]: time="2025-08-13T01:46:21.241518350Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Aug 13 01:46:21.241992 containerd[1556]: time="2025-08-13T01:46:21.241960110Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:46:21.249790 containerd[1556]: time="2025-08-13T01:46:21.249670854Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:46:21.250621 containerd[1556]: time="2025-08-13T01:46:21.250593063Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 2.507010801s" Aug 13 01:46:21.250717 containerd[1556]: time="2025-08-13T01:46:21.250692693Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Aug 13 01:46:21.253470 containerd[1556]: time="2025-08-13T01:46:21.252813091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 13 01:46:21.275203 containerd[1556]: time="2025-08-13T01:46:21.275157385Z" level=info msg="CreateContainer within sandbox \"a6df34dc1e5476403a75a033586a859ed0d1bbd1b6a5361e6314f40369ee5a54\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 13 01:46:21.283556 containerd[1556]: time="2025-08-13T01:46:21.283502559Z" level=info msg="Container 2b9b16d3a696259bfa160d4bdf11e63cf133237683db2f955c0725cf670b427d: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:46:21.291040 containerd[1556]: time="2025-08-13T01:46:21.291002433Z" level=info msg="CreateContainer within sandbox \"a6df34dc1e5476403a75a033586a859ed0d1bbd1b6a5361e6314f40369ee5a54\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"2b9b16d3a696259bfa160d4bdf11e63cf133237683db2f955c0725cf670b427d\"" Aug 13 01:46:21.292125 containerd[1556]: time="2025-08-13T01:46:21.292075052Z" level=info msg="StartContainer for \"2b9b16d3a696259bfa160d4bdf11e63cf133237683db2f955c0725cf670b427d\"" Aug 13 01:46:21.294110 containerd[1556]: time="2025-08-13T01:46:21.294063521Z" level=info msg="connecting to shim 2b9b16d3a696259bfa160d4bdf11e63cf133237683db2f955c0725cf670b427d" address="unix:///run/containerd/s/3f15f93db1ea5bba63e763b12ea969aa8f8737d205498b0b55266dfe73853610" protocol=ttrpc version=3 Aug 13 01:46:21.413119 systemd[1]: Started 
cri-containerd-2b9b16d3a696259bfa160d4bdf11e63cf133237683db2f955c0725cf670b427d.scope - libcontainer container 2b9b16d3a696259bfa160d4bdf11e63cf133237683db2f955c0725cf670b427d. Aug 13 01:46:21.616585 containerd[1556]: time="2025-08-13T01:46:21.616459272Z" level=info msg="StartContainer for \"2b9b16d3a696259bfa160d4bdf11e63cf133237683db2f955c0725cf670b427d\" returns successfully" Aug 13 01:46:21.849009 kubelet[2790]: E0813 01:46:21.848682 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:46:21.862830 kubelet[2790]: E0813 01:46:21.862789 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:21.862830 kubelet[2790]: W0813 01:46:21.862816 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:21.862982 kubelet[2790]: E0813 01:46:21.862860 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:21.863529 kubelet[2790]: E0813 01:46:21.863493 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:21.863654 kubelet[2790]: W0813 01:46:21.863512 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:21.863654 kubelet[2790]: E0813 01:46:21.863639 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:21.864284 kubelet[2790]: E0813 01:46:21.864254 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:21.864284 kubelet[2790]: W0813 01:46:21.864272 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:21.864445 kubelet[2790]: E0813 01:46:21.864415 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:46:21.867501 kubelet[2790]: I0813 01:46:21.867372 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-64bcb76cdd-m4xlg" podStartSLOduration=1.357824357 podStartE2EDuration="3.867336127s" podCreationTimestamp="2025-08-13 01:46:18 +0000 UTC" firstStartedPulling="2025-08-13 01:46:18.742992122 +0000 UTC m=+22.197760477" lastFinishedPulling="2025-08-13 01:46:21.252503892 +0000 UTC m=+24.707272247" observedRunningTime="2025-08-13 01:46:21.86265501 +0000 UTC m=+25.317423365" watchObservedRunningTime="2025-08-13 01:46:21.867336127 +0000 UTC m=+25.322104482" Aug 13 01:46:21.868838 kubelet[2790]: E0813 01:46:21.868806 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:21.868948 kubelet[2790]: W0813 01:46:21.868919 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:21.869042 kubelet[2790]: E0813 01:46:21.869024 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:21.869396 kubelet[2790]: E0813 01:46:21.869342 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:21.869550 kubelet[2790]: W0813 01:46:21.869462 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:21.869550 kubelet[2790]: E0813 01:46:21.869480 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:21.869895 kubelet[2790]: E0813 01:46:21.869829 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:21.869895 kubelet[2790]: W0813 01:46:21.869842 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:21.869895 kubelet[2790]: E0813 01:46:21.869852 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:21.870221 kubelet[2790]: E0813 01:46:21.870154 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:21.870221 kubelet[2790]: W0813 01:46:21.870165 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:21.870221 kubelet[2790]: E0813 01:46:21.870174 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:46:21.870544 kubelet[2790]: E0813 01:46:21.870471 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:21.870544 kubelet[2790]: W0813 01:46:21.870483 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:21.870544 kubelet[2790]: E0813 01:46:21.870493 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:21.870935 kubelet[2790]: E0813 01:46:21.870866 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:21.870935 kubelet[2790]: W0813 01:46:21.870877 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:21.870935 kubelet[2790]: E0813 01:46:21.870886 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:21.871491 kubelet[2790]: E0813 01:46:21.871430 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:21.871491 kubelet[2790]: W0813 01:46:21.871441 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:21.871491 kubelet[2790]: E0813 01:46:21.871450 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:21.872017 kubelet[2790]: E0813 01:46:21.871949 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:21.872017 kubelet[2790]: W0813 01:46:21.871964 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:21.872017 kubelet[2790]: E0813 01:46:21.871973 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:21.872594 kubelet[2790]: E0813 01:46:21.872519 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:21.872594 kubelet[2790]: W0813 01:46:21.872532 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:21.872594 kubelet[2790]: E0813 01:46:21.872541 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:46:21.874006 kubelet[2790]: E0813 01:46:21.873672 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:21.874006 kubelet[2790]: W0813 01:46:21.873686 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:21.874006 kubelet[2790]: E0813 01:46:21.873697 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:21.874006 kubelet[2790]: E0813 01:46:21.873920 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:21.874006 kubelet[2790]: W0813 01:46:21.873928 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:21.874006 kubelet[2790]: E0813 01:46:21.873936 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:21.874647 kubelet[2790]: E0813 01:46:21.874554 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:21.874647 kubelet[2790]: W0813 01:46:21.874568 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:21.874647 kubelet[2790]: E0813 01:46:21.874577 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:21.958822 kubelet[2790]: E0813 01:46:21.954063 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:21.958822 kubelet[2790]: W0813 01:46:21.954087 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:21.958822 kubelet[2790]: E0813 01:46:21.954110 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:21.958822 kubelet[2790]: E0813 01:46:21.954766 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:21.958822 kubelet[2790]: W0813 01:46:21.954781 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:21.958822 kubelet[2790]: E0813 01:46:21.954796 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:46:21.958822 kubelet[2790]: E0813 01:46:21.955196 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:21.958822 kubelet[2790]: W0813 01:46:21.955210 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:21.958822 kubelet[2790]: E0813 01:46:21.955224 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:21.958822 kubelet[2790]: E0813 01:46:21.956804 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:21.959148 kubelet[2790]: W0813 01:46:21.956819 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:21.959148 kubelet[2790]: E0813 01:46:21.956834 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:21.959148 kubelet[2790]: E0813 01:46:21.957165 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:21.959148 kubelet[2790]: W0813 01:46:21.957178 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:21.959148 kubelet[2790]: E0813 01:46:21.957191 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:21.959148 kubelet[2790]: E0813 01:46:21.957450 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:21.959148 kubelet[2790]: W0813 01:46:21.957462 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:21.959148 kubelet[2790]: E0813 01:46:21.957474 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:21.959148 kubelet[2790]: E0813 01:46:21.957790 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:21.959148 kubelet[2790]: W0813 01:46:21.957803 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:21.959561 kubelet[2790]: E0813 01:46:21.957812 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:46:21.959561 kubelet[2790]: E0813 01:46:21.958045 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:21.959561 kubelet[2790]: W0813 01:46:21.958056 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:21.959561 kubelet[2790]: E0813 01:46:21.958071 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:21.959561 kubelet[2790]: E0813 01:46:21.958543 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:21.959561 kubelet[2790]: W0813 01:46:21.958555 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:21.959561 kubelet[2790]: E0813 01:46:21.958565 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:21.963157 kubelet[2790]: E0813 01:46:21.960791 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:21.963157 kubelet[2790]: W0813 01:46:21.960806 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:21.963157 kubelet[2790]: E0813 01:46:21.960818 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:21.963157 kubelet[2790]: E0813 01:46:21.961043 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:21.963157 kubelet[2790]: W0813 01:46:21.961052 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:21.963157 kubelet[2790]: E0813 01:46:21.961060 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:21.963157 kubelet[2790]: E0813 01:46:21.961323 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:21.963157 kubelet[2790]: W0813 01:46:21.961332 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:21.963157 kubelet[2790]: E0813 01:46:21.961341 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:46:21.963157 kubelet[2790]: E0813 01:46:21.961760 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:21.964987 kubelet[2790]: W0813 01:46:21.961769 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:21.964987 kubelet[2790]: E0813 01:46:21.961778 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:21.964987 kubelet[2790]: E0813 01:46:21.962881 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:21.964987 kubelet[2790]: W0813 01:46:21.962891 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:21.964987 kubelet[2790]: E0813 01:46:21.962902 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:21.965638 kubelet[2790]: E0813 01:46:21.965437 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:21.965638 kubelet[2790]: W0813 01:46:21.965455 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:21.965638 kubelet[2790]: E0813 01:46:21.965469 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:21.966060 kubelet[2790]: E0813 01:46:21.965879 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:21.966060 kubelet[2790]: W0813 01:46:21.965892 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:21.966060 kubelet[2790]: E0813 01:46:21.965903 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:21.966652 kubelet[2790]: E0813 01:46:21.966210 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:21.966652 kubelet[2790]: W0813 01:46:21.966222 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:21.966652 kubelet[2790]: E0813 01:46:21.966421 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:46:21.968089 kubelet[2790]: E0813 01:46:21.968073 2790 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:46:21.968176 kubelet[2790]: W0813 01:46:21.968163 2790 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:46:21.968237 kubelet[2790]: E0813 01:46:21.968224 2790 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:46:22.069328 containerd[1556]: time="2025-08-13T01:46:22.069263690Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:46:22.070489 containerd[1556]: time="2025-08-13T01:46:22.070272830Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Aug 13 01:46:22.071358 containerd[1556]: time="2025-08-13T01:46:22.071312059Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:46:22.072829 containerd[1556]: time="2025-08-13T01:46:22.072774208Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:46:22.073724 containerd[1556]: time="2025-08-13T01:46:22.073322457Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 820.454826ms" Aug 13 01:46:22.073724 containerd[1556]: time="2025-08-13T01:46:22.073357327Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Aug 13 01:46:22.077576 containerd[1556]: time="2025-08-13T01:46:22.077527255Z" level=info msg="CreateContainer within sandbox \"19f136ecf27677a48dbcfabc2d82ff742c8f56ba9769515ce641ecafbfefd0f8\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 01:46:22.086859 containerd[1556]: time="2025-08-13T01:46:22.085257349Z" level=info msg="Container fa5a919e20223bac07f8f651375dcbb9305c368d27edd1d93dbdc2e9e7e64361: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:46:22.104334 containerd[1556]: time="2025-08-13T01:46:22.104283216Z" level=info msg="CreateContainer within sandbox \"19f136ecf27677a48dbcfabc2d82ff742c8f56ba9769515ce641ecafbfefd0f8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"fa5a919e20223bac07f8f651375dcbb9305c368d27edd1d93dbdc2e9e7e64361\"" Aug 13 01:46:22.105820 containerd[1556]: time="2025-08-13T01:46:22.105365875Z" level=info msg="StartContainer for \"fa5a919e20223bac07f8f651375dcbb9305c368d27edd1d93dbdc2e9e7e64361\"" Aug 13 01:46:22.107350 containerd[1556]: time="2025-08-13T01:46:22.107292644Z" level=info msg="connecting to 
shim fa5a919e20223bac07f8f651375dcbb9305c368d27edd1d93dbdc2e9e7e64361" address="unix:///run/containerd/s/c15a429e3e4a3dd148596df52b2faebcc1e6f8eb8c8eb6510ce6f33ab5356af0" protocol=ttrpc version=3 Aug 13 01:46:22.163306 systemd[1]: Started cri-containerd-fa5a919e20223bac07f8f651375dcbb9305c368d27edd1d93dbdc2e9e7e64361.scope - libcontainer container fa5a919e20223bac07f8f651375dcbb9305c368d27edd1d93dbdc2e9e7e64361. Aug 13 01:46:22.483425 containerd[1556]: time="2025-08-13T01:46:22.483320663Z" level=info msg="StartContainer for \"fa5a919e20223bac07f8f651375dcbb9305c368d27edd1d93dbdc2e9e7e64361\" returns successfully" Aug 13 01:46:22.526961 systemd[1]: cri-containerd-fa5a919e20223bac07f8f651375dcbb9305c368d27edd1d93dbdc2e9e7e64361.scope: Deactivated successfully. Aug 13 01:46:22.532265 containerd[1556]: time="2025-08-13T01:46:22.531720039Z" level=info msg="received exit event container_id:\"fa5a919e20223bac07f8f651375dcbb9305c368d27edd1d93dbdc2e9e7e64361\" id:\"fa5a919e20223bac07f8f651375dcbb9305c368d27edd1d93dbdc2e9e7e64361\" pid:3510 exited_at:{seconds:1755049582 nanos:530935950}" Aug 13 01:46:22.532606 containerd[1556]: time="2025-08-13T01:46:22.531730149Z" level=info msg="TaskExit event in podsandbox handler container_id:\"fa5a919e20223bac07f8f651375dcbb9305c368d27edd1d93dbdc2e9e7e64361\" id:\"fa5a919e20223bac07f8f651375dcbb9305c368d27edd1d93dbdc2e9e7e64361\" pid:3510 exited_at:{seconds:1755049582 nanos:530935950}" Aug 13 01:46:22.566174 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-fa5a919e20223bac07f8f651375dcbb9305c368d27edd1d93dbdc2e9e7e64361-rootfs.mount: Deactivated successfully. Aug 13 01:46:22.694585 kubelet[2790]: E0813 01:46:22.694240 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c7jrc" podUID="4296a7ed-e75a-4d74-935a-9017b9a86286" Aug 13 01:46:22.852654 kubelet[2790]: I0813 01:46:22.852527 2790 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 01:46:22.855968 containerd[1556]: time="2025-08-13T01:46:22.855907474Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 01:46:22.856196 kubelet[2790]: E0813 01:46:22.856076 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:46:24.703522 kubelet[2790]: E0813 01:46:24.703409 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c7jrc" podUID="4296a7ed-e75a-4d74-935a-9017b9a86286" Aug 13 01:46:25.443366 kubelet[2790]: I0813 01:46:25.442985 2790 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 01:46:25.444596 kubelet[2790]: E0813 01:46:25.444146 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:46:25.859501 kubelet[2790]: E0813 01:46:25.859470 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 
172.232.0.15 172.232.0.18" Aug 13 01:46:26.698872 kubelet[2790]: E0813 01:46:26.698822 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c7jrc" podUID="4296a7ed-e75a-4d74-935a-9017b9a86286" Aug 13 01:46:26.958034 containerd[1556]: time="2025-08-13T01:46:26.957361838Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:46:26.958889 containerd[1556]: time="2025-08-13T01:46:26.958858448Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Aug 13 01:46:26.959195 containerd[1556]: time="2025-08-13T01:46:26.959162318Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:46:26.962113 containerd[1556]: time="2025-08-13T01:46:26.962074566Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:46:26.963768 containerd[1556]: time="2025-08-13T01:46:26.962999195Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 4.106950921s" Aug 13 01:46:26.963768 containerd[1556]: time="2025-08-13T01:46:26.963043545Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Aug 13 01:46:26.973929 containerd[1556]: time="2025-08-13T01:46:26.973870380Z" level=info msg="CreateContainer within sandbox \"19f136ecf27677a48dbcfabc2d82ff742c8f56ba9769515ce641ecafbfefd0f8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 01:46:26.983873 containerd[1556]: time="2025-08-13T01:46:26.983829494Z" level=info msg="Container 3538f29f84e2362f6432f3bf84aea32ac2549d9e4a8698ef54009978bd9bcd7f: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:46:26.989853 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3332506356.mount: Deactivated successfully. 
Aug 13 01:46:27.002734 containerd[1556]: time="2025-08-13T01:46:27.002659024Z" level=info msg="CreateContainer within sandbox \"19f136ecf27677a48dbcfabc2d82ff742c8f56ba9769515ce641ecafbfefd0f8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3538f29f84e2362f6432f3bf84aea32ac2549d9e4a8698ef54009978bd9bcd7f\"" Aug 13 01:46:27.004550 containerd[1556]: time="2025-08-13T01:46:27.004492553Z" level=info msg="StartContainer for \"3538f29f84e2362f6432f3bf84aea32ac2549d9e4a8698ef54009978bd9bcd7f\"" Aug 13 01:46:27.007147 containerd[1556]: time="2025-08-13T01:46:27.006997032Z" level=info msg="connecting to shim 3538f29f84e2362f6432f3bf84aea32ac2549d9e4a8698ef54009978bd9bcd7f" address="unix:///run/containerd/s/c15a429e3e4a3dd148596df52b2faebcc1e6f8eb8c8eb6510ce6f33ab5356af0" protocol=ttrpc version=3 Aug 13 01:46:27.101153 systemd[1]: Started cri-containerd-3538f29f84e2362f6432f3bf84aea32ac2549d9e4a8698ef54009978bd9bcd7f.scope - libcontainer container 3538f29f84e2362f6432f3bf84aea32ac2549d9e4a8698ef54009978bd9bcd7f. Aug 13 01:46:27.224450 containerd[1556]: time="2025-08-13T01:46:27.224363583Z" level=info msg="StartContainer for \"3538f29f84e2362f6432f3bf84aea32ac2549d9e4a8698ef54009978bd9bcd7f\" returns successfully" Aug 13 01:46:28.695463 kubelet[2790]: E0813 01:46:28.695413 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-c7jrc" podUID="4296a7ed-e75a-4d74-935a-9017b9a86286" Aug 13 01:46:29.995423 systemd[1]: cri-containerd-3538f29f84e2362f6432f3bf84aea32ac2549d9e4a8698ef54009978bd9bcd7f.scope: Deactivated successfully. Aug 13 01:46:29.995872 containerd[1556]: time="2025-08-13T01:46:29.995479042Z" level=info msg="TaskExit event in podsandbox handler container_id:\"3538f29f84e2362f6432f3bf84aea32ac2549d9e4a8698ef54009978bd9bcd7f\" id:\"3538f29f84e2362f6432f3bf84aea32ac2549d9e4a8698ef54009978bd9bcd7f\" pid:3570 exited_at:{seconds:1755049589 nanos:995122382}" Aug 13 01:46:29.995872 containerd[1556]: time="2025-08-13T01:46:29.995600982Z" level=info msg="received exit event container_id:\"3538f29f84e2362f6432f3bf84aea32ac2549d9e4a8698ef54009978bd9bcd7f\" id:\"3538f29f84e2362f6432f3bf84aea32ac2549d9e4a8698ef54009978bd9bcd7f\" pid:3570 exited_at:{seconds:1755049589 nanos:995122382}" Aug 13 01:46:29.996734 systemd[1]: cri-containerd-3538f29f84e2362f6432f3bf84aea32ac2549d9e4a8698ef54009978bd9bcd7f.scope: Consumed 2.935s CPU time, 193.8M memory peak, 171.2M written to disk. Aug 13 01:46:30.022284 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3538f29f84e2362f6432f3bf84aea32ac2549d9e4a8698ef54009978bd9bcd7f-rootfs.mount: Deactivated successfully. 
Aug 13 01:46:30.079442 kubelet[2790]: I0813 01:46:30.078945 2790 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 13 01:46:30.125389 kubelet[2790]: I0813 01:46:30.125238 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqghm\" (UniqueName: \"kubernetes.io/projected/f88563f6-5704-426b-aecc-303b3869ce30-kube-api-access-nqghm\") pod \"calico-kube-controllers-76ff444f8d-4xcg9\" (UID: \"f88563f6-5704-426b-aecc-303b3869ce30\") " pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:46:30.127020 kubelet[2790]: I0813 01:46:30.125553 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f88563f6-5704-426b-aecc-303b3869ce30-tigera-ca-bundle\") pod \"calico-kube-controllers-76ff444f8d-4xcg9\" (UID: \"f88563f6-5704-426b-aecc-303b3869ce30\") " pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:46:30.134760 systemd[1]: Created slice kubepods-besteffort-podf88563f6_5704_426b_aecc_303b3869ce30.slice - libcontainer container kubepods-besteffort-podf88563f6_5704_426b_aecc_303b3869ce30.slice. Aug 13 01:46:30.156274 systemd[1]: Created slice kubepods-besteffort-pod77f01e2a_2024_40f5_867c_b4861d62171a.slice - libcontainer container kubepods-besteffort-pod77f01e2a_2024_40f5_867c_b4861d62171a.slice. Aug 13 01:46:30.173360 systemd[1]: Created slice kubepods-burstable-pod21a6ba02_58d5_43c1_a7de_9e24560a65f6.slice - libcontainer container kubepods-burstable-pod21a6ba02_58d5_43c1_a7de_9e24560a65f6.slice. Aug 13 01:46:30.226991 kubelet[2790]: I0813 01:46:30.226023 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk64b\" (UniqueName: \"kubernetes.io/projected/77f01e2a-2024-40f5-867c-b4861d62171a-kube-api-access-fk64b\") pod \"calico-apiserver-8bcfdc4c4-9shkp\" (UID: \"77f01e2a-2024-40f5-867c-b4861d62171a\") " pod="calico-apiserver/calico-apiserver-8bcfdc4c4-9shkp" Aug 13 01:46:30.226991 kubelet[2790]: I0813 01:46:30.226073 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b938dc0-adc7-4aa5-8d4d-dbf51ae2cfd2-whisker-ca-bundle\") pod \"whisker-7d7968965c-f6p9v\" (UID: \"6b938dc0-adc7-4aa5-8d4d-dbf51ae2cfd2\") " pod="calico-system/whisker-7d7968965c-f6p9v" Aug 13 01:46:30.226991 kubelet[2790]: I0813 01:46:30.226095 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tswnm\" (UniqueName: \"kubernetes.io/projected/6b938dc0-adc7-4aa5-8d4d-dbf51ae2cfd2-kube-api-access-tswnm\") pod \"whisker-7d7968965c-f6p9v\" (UID: \"6b938dc0-adc7-4aa5-8d4d-dbf51ae2cfd2\") " pod="calico-system/whisker-7d7968965c-f6p9v" Aug 13 01:46:30.226991 kubelet[2790]: I0813 01:46:30.226110 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/caa5836a-45f9-496b-86c1-95f6e1b6da17-config-volume\") pod \"coredns-674b8bbfcf-vtdcd\" (UID: \"caa5836a-45f9-496b-86c1-95f6e1b6da17\") " pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:46:30.226991 kubelet[2790]: I0813 01:46:30.226127 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46f2q\" (UniqueName: 
\"kubernetes.io/projected/caa5836a-45f9-496b-86c1-95f6e1b6da17-kube-api-access-46f2q\") pod \"coredns-674b8bbfcf-vtdcd\" (UID: \"caa5836a-45f9-496b-86c1-95f6e1b6da17\") " pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:46:30.227253 kubelet[2790]: I0813 01:46:30.226143 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rflj2\" (UniqueName: \"kubernetes.io/projected/d359a7e1-fa9f-4da1-9de3-6b7092de2ba9-kube-api-access-rflj2\") pod \"goldmane-768f4c5c69-rxx6b\" (UID: \"d359a7e1-fa9f-4da1-9de3-6b7092de2ba9\") " pod="calico-system/goldmane-768f4c5c69-rxx6b" Aug 13 01:46:30.227253 kubelet[2790]: I0813 01:46:30.226171 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d359a7e1-fa9f-4da1-9de3-6b7092de2ba9-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-rxx6b\" (UID: \"d359a7e1-fa9f-4da1-9de3-6b7092de2ba9\") " pod="calico-system/goldmane-768f4c5c69-rxx6b" Aug 13 01:46:30.227253 kubelet[2790]: I0813 01:46:30.226195 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/77f01e2a-2024-40f5-867c-b4861d62171a-calico-apiserver-certs\") pod \"calico-apiserver-8bcfdc4c4-9shkp\" (UID: \"77f01e2a-2024-40f5-867c-b4861d62171a\") " pod="calico-apiserver/calico-apiserver-8bcfdc4c4-9shkp" Aug 13 01:46:30.227253 kubelet[2790]: I0813 01:46:30.226242 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6b938dc0-adc7-4aa5-8d4d-dbf51ae2cfd2-whisker-backend-key-pair\") pod \"whisker-7d7968965c-f6p9v\" (UID: \"6b938dc0-adc7-4aa5-8d4d-dbf51ae2cfd2\") " pod="calico-system/whisker-7d7968965c-f6p9v" Aug 13 01:46:30.227253 kubelet[2790]: I0813 01:46:30.226260 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w7pqv\" (UniqueName: \"kubernetes.io/projected/95e39f46-24f7-4c8e-8fd8-41c618bd7cd7-kube-api-access-w7pqv\") pod \"calico-apiserver-6587675c7f-cbcmb\" (UID: \"95e39f46-24f7-4c8e-8fd8-41c618bd7cd7\") " pod="calico-apiserver/calico-apiserver-6587675c7f-cbcmb" Aug 13 01:46:30.227391 kubelet[2790]: I0813 01:46:30.226277 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/21a6ba02-58d5-43c1-a7de-9e24560a65f6-config-volume\") pod \"coredns-674b8bbfcf-6rlkc\" (UID: \"21a6ba02-58d5-43c1-a7de-9e24560a65f6\") " pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:46:30.227391 kubelet[2790]: I0813 01:46:30.226293 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d359a7e1-fa9f-4da1-9de3-6b7092de2ba9-goldmane-key-pair\") pod \"goldmane-768f4c5c69-rxx6b\" (UID: \"d359a7e1-fa9f-4da1-9de3-6b7092de2ba9\") " pod="calico-system/goldmane-768f4c5c69-rxx6b" Aug 13 01:46:30.227391 kubelet[2790]: I0813 01:46:30.226393 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/95e39f46-24f7-4c8e-8fd8-41c618bd7cd7-calico-apiserver-certs\") pod \"calico-apiserver-6587675c7f-cbcmb\" (UID: \"95e39f46-24f7-4c8e-8fd8-41c618bd7cd7\") " 
pod="calico-apiserver/calico-apiserver-6587675c7f-cbcmb" Aug 13 01:46:30.227391 kubelet[2790]: I0813 01:46:30.226430 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d359a7e1-fa9f-4da1-9de3-6b7092de2ba9-config\") pod \"goldmane-768f4c5c69-rxx6b\" (UID: \"d359a7e1-fa9f-4da1-9de3-6b7092de2ba9\") " pod="calico-system/goldmane-768f4c5c69-rxx6b" Aug 13 01:46:30.227391 kubelet[2790]: I0813 01:46:30.226465 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d3393251-40c0-4172-a2b2-9ca9c0a1f573-calico-apiserver-certs\") pod \"calico-apiserver-8bcfdc4c4-76dcl\" (UID: \"d3393251-40c0-4172-a2b2-9ca9c0a1f573\") " pod="calico-apiserver/calico-apiserver-8bcfdc4c4-76dcl" Aug 13 01:46:30.227509 kubelet[2790]: I0813 01:46:30.226513 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zb5xn\" (UniqueName: \"kubernetes.io/projected/d3393251-40c0-4172-a2b2-9ca9c0a1f573-kube-api-access-zb5xn\") pod \"calico-apiserver-8bcfdc4c4-76dcl\" (UID: \"d3393251-40c0-4172-a2b2-9ca9c0a1f573\") " pod="calico-apiserver/calico-apiserver-8bcfdc4c4-76dcl" Aug 13 01:46:30.227509 kubelet[2790]: I0813 01:46:30.226535 2790 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7xx5\" (UniqueName: \"kubernetes.io/projected/21a6ba02-58d5-43c1-a7de-9e24560a65f6-kube-api-access-x7xx5\") pod \"coredns-674b8bbfcf-6rlkc\" (UID: \"21a6ba02-58d5-43c1-a7de-9e24560a65f6\") " pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:46:30.228266 systemd[1]: Created slice kubepods-besteffort-podd3393251_40c0_4172_a2b2_9ca9c0a1f573.slice - libcontainer container kubepods-besteffort-podd3393251_40c0_4172_a2b2_9ca9c0a1f573.slice. Aug 13 01:46:30.238045 systemd[1]: Created slice kubepods-burstable-podcaa5836a_45f9_496b_86c1_95f6e1b6da17.slice - libcontainer container kubepods-burstable-podcaa5836a_45f9_496b_86c1_95f6e1b6da17.slice. Aug 13 01:46:30.262365 systemd[1]: Created slice kubepods-besteffort-pod6b938dc0_adc7_4aa5_8d4d_dbf51ae2cfd2.slice - libcontainer container kubepods-besteffort-pod6b938dc0_adc7_4aa5_8d4d_dbf51ae2cfd2.slice. Aug 13 01:46:30.273918 systemd[1]: Created slice kubepods-besteffort-podd359a7e1_fa9f_4da1_9de3_6b7092de2ba9.slice - libcontainer container kubepods-besteffort-podd359a7e1_fa9f_4da1_9de3_6b7092de2ba9.slice. Aug 13 01:46:30.281450 systemd[1]: Created slice kubepods-besteffort-pod95e39f46_24f7_4c8e_8fd8_41c618bd7cd7.slice - libcontainer container kubepods-besteffort-pod95e39f46_24f7_4c8e_8fd8_41c618bd7cd7.slice. 
Aug 13 01:46:30.447447 containerd[1556]: time="2025-08-13T01:46:30.447393374Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76ff444f8d-4xcg9,Uid:f88563f6-5704-426b-aecc-303b3869ce30,Namespace:calico-system,Attempt:0,}" Aug 13 01:46:30.466259 containerd[1556]: time="2025-08-13T01:46:30.466201227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8bcfdc4c4-9shkp,Uid:77f01e2a-2024-40f5-867c-b4861d62171a,Namespace:calico-apiserver,Attempt:0,}" Aug 13 01:46:30.481719 kubelet[2790]: E0813 01:46:30.481230 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:46:30.482575 containerd[1556]: time="2025-08-13T01:46:30.482534270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6rlkc,Uid:21a6ba02-58d5-43c1-a7de-9e24560a65f6,Namespace:kube-system,Attempt:0,}" Aug 13 01:46:30.536245 containerd[1556]: time="2025-08-13T01:46:30.536084788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8bcfdc4c4-76dcl,Uid:d3393251-40c0-4172-a2b2-9ca9c0a1f573,Namespace:calico-apiserver,Attempt:0,}" Aug 13 01:46:30.546260 kubelet[2790]: E0813 01:46:30.545642 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:46:30.548278 containerd[1556]: time="2025-08-13T01:46:30.547848273Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vtdcd,Uid:caa5836a-45f9-496b-86c1-95f6e1b6da17,Namespace:kube-system,Attempt:0,}" Aug 13 01:46:30.569951 containerd[1556]: time="2025-08-13T01:46:30.569912164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d7968965c-f6p9v,Uid:6b938dc0-adc7-4aa5-8d4d-dbf51ae2cfd2,Namespace:calico-system,Attempt:0,}" Aug 13 01:46:30.579136 containerd[1556]: time="2025-08-13T01:46:30.579061170Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-rxx6b,Uid:d359a7e1-fa9f-4da1-9de3-6b7092de2ba9,Namespace:calico-system,Attempt:0,}" Aug 13 01:46:30.587663 containerd[1556]: time="2025-08-13T01:46:30.587624556Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6587675c7f-cbcmb,Uid:95e39f46-24f7-4c8e-8fd8-41c618bd7cd7,Namespace:calico-apiserver,Attempt:0,}" Aug 13 01:46:30.709094 systemd[1]: Created slice kubepods-besteffort-pod4296a7ed_e75a_4d74_935a_9017b9a86286.slice - libcontainer container kubepods-besteffort-pod4296a7ed_e75a_4d74_935a_9017b9a86286.slice. 
Aug 13 01:46:30.714452 containerd[1556]: time="2025-08-13T01:46:30.714414424Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7jrc,Uid:4296a7ed-e75a-4d74-935a-9017b9a86286,Namespace:calico-system,Attempt:0,}" Aug 13 01:46:30.717085 containerd[1556]: time="2025-08-13T01:46:30.717031863Z" level=error msg="Failed to destroy network for sandbox \"6e413f93bb92ff3792bfa1aaee2e5bdb2816779507ee57d14d80bf8064d1bdfe\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:30.723693 containerd[1556]: time="2025-08-13T01:46:30.723649340Z" level=error msg="Failed to destroy network for sandbox \"a6d8664fdbf7918cf6e5ec97b10842e1fd5bc9fb8e527cfb2e34dcf383dc23eb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:30.724129 containerd[1556]: time="2025-08-13T01:46:30.724070270Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76ff444f8d-4xcg9,Uid:f88563f6-5704-426b-aecc-303b3869ce30,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e413f93bb92ff3792bfa1aaee2e5bdb2816779507ee57d14d80bf8064d1bdfe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:30.725114 kubelet[2790]: E0813 01:46:30.725046 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e413f93bb92ff3792bfa1aaee2e5bdb2816779507ee57d14d80bf8064d1bdfe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:30.725443 kubelet[2790]: E0813 01:46:30.725411 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e413f93bb92ff3792bfa1aaee2e5bdb2816779507ee57d14d80bf8064d1bdfe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:46:30.725557 kubelet[2790]: E0813 01:46:30.725535 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6e413f93bb92ff3792bfa1aaee2e5bdb2816779507ee57d14d80bf8064d1bdfe\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:46:30.725853 kubelet[2790]: E0813 01:46:30.725719 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76ff444f8d-4xcg9_calico-system(f88563f6-5704-426b-aecc-303b3869ce30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76ff444f8d-4xcg9_calico-system(f88563f6-5704-426b-aecc-303b3869ce30)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"6e413f93bb92ff3792bfa1aaee2e5bdb2816779507ee57d14d80bf8064d1bdfe\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" podUID="f88563f6-5704-426b-aecc-303b3869ce30" Aug 13 01:46:30.742866 containerd[1556]: time="2025-08-13T01:46:30.742819162Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8bcfdc4c4-9shkp,Uid:77f01e2a-2024-40f5-867c-b4861d62171a,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6d8664fdbf7918cf6e5ec97b10842e1fd5bc9fb8e527cfb2e34dcf383dc23eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:30.743732 kubelet[2790]: E0813 01:46:30.743254 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6d8664fdbf7918cf6e5ec97b10842e1fd5bc9fb8e527cfb2e34dcf383dc23eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:30.743732 kubelet[2790]: E0813 01:46:30.743305 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6d8664fdbf7918cf6e5ec97b10842e1fd5bc9fb8e527cfb2e34dcf383dc23eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8bcfdc4c4-9shkp" Aug 13 01:46:30.743732 kubelet[2790]: E0813 01:46:30.743352 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6d8664fdbf7918cf6e5ec97b10842e1fd5bc9fb8e527cfb2e34dcf383dc23eb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8bcfdc4c4-9shkp" Aug 13 01:46:30.743882 kubelet[2790]: E0813 01:46:30.743429 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8bcfdc4c4-9shkp_calico-apiserver(77f01e2a-2024-40f5-867c-b4861d62171a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8bcfdc4c4-9shkp_calico-apiserver(77f01e2a-2024-40f5-867c-b4861d62171a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6d8664fdbf7918cf6e5ec97b10842e1fd5bc9fb8e527cfb2e34dcf383dc23eb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8bcfdc4c4-9shkp" podUID="77f01e2a-2024-40f5-867c-b4861d62171a" Aug 13 01:46:30.766655 containerd[1556]: time="2025-08-13T01:46:30.766591572Z" level=error msg="Failed to destroy network for sandbox \"d1691736a3db197d5391f0cf135dd2accc247d343fbd87869635835f49661ca1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:30.771813 containerd[1556]: time="2025-08-13T01:46:30.771756890Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6rlkc,Uid:21a6ba02-58d5-43c1-a7de-9e24560a65f6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1691736a3db197d5391f0cf135dd2accc247d343fbd87869635835f49661ca1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:30.772064 kubelet[2790]: E0813 01:46:30.772018 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1691736a3db197d5391f0cf135dd2accc247d343fbd87869635835f49661ca1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:30.772181 kubelet[2790]: E0813 01:46:30.772086 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1691736a3db197d5391f0cf135dd2accc247d343fbd87869635835f49661ca1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:46:30.772181 kubelet[2790]: E0813 01:46:30.772109 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1691736a3db197d5391f0cf135dd2accc247d343fbd87869635835f49661ca1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:46:30.772294 kubelet[2790]: E0813 01:46:30.772166 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-6rlkc_kube-system(21a6ba02-58d5-43c1-a7de-9e24560a65f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-6rlkc_kube-system(21a6ba02-58d5-43c1-a7de-9e24560a65f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d1691736a3db197d5391f0cf135dd2accc247d343fbd87869635835f49661ca1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6rlkc" podUID="21a6ba02-58d5-43c1-a7de-9e24560a65f6" Aug 13 01:46:30.838613 containerd[1556]: time="2025-08-13T01:46:30.838446842Z" level=error msg="Failed to destroy network for sandbox \"c1f3a46dead33b4960643684a36c5edec2b0302b0d81e778197fef148b39e17f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:30.840372 containerd[1556]: time="2025-08-13T01:46:30.840331982Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-8bcfdc4c4-76dcl,Uid:d3393251-40c0-4172-a2b2-9ca9c0a1f573,Namespace:calico-apiserver,Attempt:0,} failed, error" 
error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1f3a46dead33b4960643684a36c5edec2b0302b0d81e778197fef148b39e17f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:30.841518 kubelet[2790]: E0813 01:46:30.841070 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1f3a46dead33b4960643684a36c5edec2b0302b0d81e778197fef148b39e17f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:30.841518 kubelet[2790]: E0813 01:46:30.841153 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1f3a46dead33b4960643684a36c5edec2b0302b0d81e778197fef148b39e17f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8bcfdc4c4-76dcl" Aug 13 01:46:30.841518 kubelet[2790]: E0813 01:46:30.841174 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c1f3a46dead33b4960643684a36c5edec2b0302b0d81e778197fef148b39e17f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-8bcfdc4c4-76dcl" Aug 13 01:46:30.841711 kubelet[2790]: E0813 01:46:30.841225 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-8bcfdc4c4-76dcl_calico-apiserver(d3393251-40c0-4172-a2b2-9ca9c0a1f573)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-8bcfdc4c4-76dcl_calico-apiserver(d3393251-40c0-4172-a2b2-9ca9c0a1f573)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c1f3a46dead33b4960643684a36c5edec2b0302b0d81e778197fef148b39e17f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-8bcfdc4c4-76dcl" podUID="d3393251-40c0-4172-a2b2-9ca9c0a1f573" Aug 13 01:46:30.888041 containerd[1556]: time="2025-08-13T01:46:30.887947712Z" level=error msg="Failed to destroy network for sandbox \"465089a140b161c8c683d37f43afa658c18a510b058f628e1311f1aaed007d0e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:30.891183 containerd[1556]: time="2025-08-13T01:46:30.891144061Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7d7968965c-f6p9v,Uid:6b938dc0-adc7-4aa5-8d4d-dbf51ae2cfd2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"465089a140b161c8c683d37f43afa658c18a510b058f628e1311f1aaed007d0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" Aug 13 01:46:30.891893 kubelet[2790]: E0813 01:46:30.891845 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"465089a140b161c8c683d37f43afa658c18a510b058f628e1311f1aaed007d0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:30.891976 kubelet[2790]: E0813 01:46:30.891914 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"465089a140b161c8c683d37f43afa658c18a510b058f628e1311f1aaed007d0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7d7968965c-f6p9v" Aug 13 01:46:30.891976 kubelet[2790]: E0813 01:46:30.891937 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"465089a140b161c8c683d37f43afa658c18a510b058f628e1311f1aaed007d0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7d7968965c-f6p9v" Aug 13 01:46:30.892038 kubelet[2790]: E0813 01:46:30.891990 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7d7968965c-f6p9v_calico-system(6b938dc0-adc7-4aa5-8d4d-dbf51ae2cfd2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7d7968965c-f6p9v_calico-system(6b938dc0-adc7-4aa5-8d4d-dbf51ae2cfd2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"465089a140b161c8c683d37f43afa658c18a510b058f628e1311f1aaed007d0e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7d7968965c-f6p9v" podUID="6b938dc0-adc7-4aa5-8d4d-dbf51ae2cfd2" Aug 13 01:46:30.927277 containerd[1556]: time="2025-08-13T01:46:30.924276807Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 01:46:30.928873 containerd[1556]: time="2025-08-13T01:46:30.927175856Z" level=error msg="Failed to destroy network for sandbox \"ceb8c8afe9eec83bf516216851ce6ef97b93f3a14642322fd746d2a5c54d593d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:30.932612 containerd[1556]: time="2025-08-13T01:46:30.932494503Z" level=error msg="Failed to destroy network for sandbox \"a4f109f782f6aaf90400a2b688fe0e3418cdc271781facbe04936508e6437f96\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:30.936997 containerd[1556]: time="2025-08-13T01:46:30.936906992Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7jrc,Uid:4296a7ed-e75a-4d74-935a-9017b9a86286,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"a4f109f782f6aaf90400a2b688fe0e3418cdc271781facbe04936508e6437f96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:30.938380 kubelet[2790]: E0813 01:46:30.937526 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4f109f782f6aaf90400a2b688fe0e3418cdc271781facbe04936508e6437f96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:30.938380 kubelet[2790]: E0813 01:46:30.937581 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4f109f782f6aaf90400a2b688fe0e3418cdc271781facbe04936508e6437f96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:46:30.938380 kubelet[2790]: E0813 01:46:30.938119 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a4f109f782f6aaf90400a2b688fe0e3418cdc271781facbe04936508e6437f96\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:46:30.939628 containerd[1556]: time="2025-08-13T01:46:30.939365171Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vtdcd,Uid:caa5836a-45f9-496b-86c1-95f6e1b6da17,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ceb8c8afe9eec83bf516216851ce6ef97b93f3a14642322fd746d2a5c54d593d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:30.941247 containerd[1556]: time="2025-08-13T01:46:30.940259170Z" level=error msg="Failed to destroy network for sandbox \"cbae9013f22586b106d99ae35282fda2bd9dc7064a33caaa346c2ff15c9fcdc4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:30.941299 kubelet[2790]: E0813 01:46:30.939046 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-c7jrc_calico-system(4296a7ed-e75a-4d74-935a-9017b9a86286)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-c7jrc_calico-system(4296a7ed-e75a-4d74-935a-9017b9a86286)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a4f109f782f6aaf90400a2b688fe0e3418cdc271781facbe04936508e6437f96\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-c7jrc" podUID="4296a7ed-e75a-4d74-935a-9017b9a86286" Aug 13 01:46:30.941299 kubelet[2790]: E0813 01:46:30.940891 2790 log.go:32] "RunPodSandbox from runtime service failed" 
err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ceb8c8afe9eec83bf516216851ce6ef97b93f3a14642322fd746d2a5c54d593d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:30.941299 kubelet[2790]: E0813 01:46:30.940929 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ceb8c8afe9eec83bf516216851ce6ef97b93f3a14642322fd746d2a5c54d593d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:46:30.942150 kubelet[2790]: E0813 01:46:30.940947 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ceb8c8afe9eec83bf516216851ce6ef97b93f3a14642322fd746d2a5c54d593d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:46:30.942150 kubelet[2790]: E0813 01:46:30.940987 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-vtdcd_kube-system(caa5836a-45f9-496b-86c1-95f6e1b6da17)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-vtdcd_kube-system(caa5836a-45f9-496b-86c1-95f6e1b6da17)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ceb8c8afe9eec83bf516216851ce6ef97b93f3a14642322fd746d2a5c54d593d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-vtdcd" podUID="caa5836a-45f9-496b-86c1-95f6e1b6da17" Aug 13 01:46:30.949434 containerd[1556]: time="2025-08-13T01:46:30.949280206Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-6587675c7f-cbcmb,Uid:95e39f46-24f7-4c8e-8fd8-41c618bd7cd7,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbae9013f22586b106d99ae35282fda2bd9dc7064a33caaa346c2ff15c9fcdc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:30.950152 kubelet[2790]: E0813 01:46:30.950122 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbae9013f22586b106d99ae35282fda2bd9dc7064a33caaa346c2ff15c9fcdc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:30.950332 kubelet[2790]: E0813 01:46:30.950218 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbae9013f22586b106d99ae35282fda2bd9dc7064a33caaa346c2ff15c9fcdc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6587675c7f-cbcmb" Aug 13 01:46:30.950332 kubelet[2790]: E0813 01:46:30.950241 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cbae9013f22586b106d99ae35282fda2bd9dc7064a33caaa346c2ff15c9fcdc4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-6587675c7f-cbcmb" Aug 13 01:46:30.950509 kubelet[2790]: E0813 01:46:30.950438 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-6587675c7f-cbcmb_calico-apiserver(95e39f46-24f7-4c8e-8fd8-41c618bd7cd7)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-6587675c7f-cbcmb_calico-apiserver(95e39f46-24f7-4c8e-8fd8-41c618bd7cd7)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cbae9013f22586b106d99ae35282fda2bd9dc7064a33caaa346c2ff15c9fcdc4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-6587675c7f-cbcmb" podUID="95e39f46-24f7-4c8e-8fd8-41c618bd7cd7" Aug 13 01:46:30.969185 containerd[1556]: time="2025-08-13T01:46:30.969072238Z" level=error msg="Failed to destroy network for sandbox \"f35ce1b86dafc0dc4b4093e50cf19dbd93de2d24331bb4bd933fdf97bb587a46\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:30.970826 containerd[1556]: time="2025-08-13T01:46:30.970738528Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-rxx6b,Uid:d359a7e1-fa9f-4da1-9de3-6b7092de2ba9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f35ce1b86dafc0dc4b4093e50cf19dbd93de2d24331bb4bd933fdf97bb587a46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:30.971146 kubelet[2790]: E0813 01:46:30.971087 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f35ce1b86dafc0dc4b4093e50cf19dbd93de2d24331bb4bd933fdf97bb587a46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:30.971146 kubelet[2790]: E0813 01:46:30.971152 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f35ce1b86dafc0dc4b4093e50cf19dbd93de2d24331bb4bd933fdf97bb587a46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-rxx6b" Aug 13 01:46:30.971487 kubelet[2790]: E0813 01:46:30.971176 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"f35ce1b86dafc0dc4b4093e50cf19dbd93de2d24331bb4bd933fdf97bb587a46\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-rxx6b" Aug 13 01:46:30.971487 kubelet[2790]: E0813 01:46:30.971237 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-rxx6b_calico-system(d359a7e1-fa9f-4da1-9de3-6b7092de2ba9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-rxx6b_calico-system(d359a7e1-fa9f-4da1-9de3-6b7092de2ba9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f35ce1b86dafc0dc4b4093e50cf19dbd93de2d24331bb4bd933fdf97bb587a46\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-rxx6b" podUID="d359a7e1-fa9f-4da1-9de3-6b7092de2ba9" Aug 13 01:46:35.396126 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount182436704.mount: Deactivated successfully. Aug 13 01:46:35.396825 containerd[1556]: time="2025-08-13T01:46:35.396736263Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount182436704: write /var/lib/containerd/tmpmounts/containerd-mount182436704/usr/bin/calico-node: no space left on device" Aug 13 01:46:35.397203 containerd[1556]: time="2025-08-13T01:46:35.396849503Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 01:46:35.397338 kubelet[2790]: E0813 01:46:35.397229 2790 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount182436704: write /var/lib/containerd/tmpmounts/containerd-mount182436704/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 01:46:35.398136 kubelet[2790]: E0813 01:46:35.397789 2790 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount182436704: write /var/lib/containerd/tmpmounts/containerd-mount182436704/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 01:46:35.398637 kubelet[2790]: E0813 01:46:35.398492 2790 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSGOLDMANESERVER,Value:goldmane.calico-system.svc:7443,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSFLUSHINTERVAL,Value:15,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:first-found,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.l
ock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hzvb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},StopSignal:nil,},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-tsmrf_calico-system(517ffc51-1a34-4ced-acf5-d8e5da6a1838): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount182436704: write /var/lib/containerd/tmpmounts/containerd-mount182436704/usr/bin/calico-node: no space left on device" logger="UnhandledError" Aug 13 01:46:35.400786 kubelet[2790]: E0813 01:46:35.399953 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount182436704: write /var/lib/containerd/tmpmounts/containerd-mount182436704/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-tsmrf" podUID="517ffc51-1a34-4ced-acf5-d8e5da6a1838" Aug 13 01:46:37.000702 kubelet[2790]: I0813 01:46:37.000650 2790 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:46:37.002457 kubelet[2790]: I0813 01:46:37.000734 2790 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:46:37.004429 kubelet[2790]: I0813 01:46:37.004395 2790 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:46:37.017508 kubelet[2790]: I0813 01:46:37.017461 2790 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:46:37.017632 kubelet[2790]: I0813 01:46:37.017594 2790 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" 
pods=["calico-apiserver/calico-apiserver-8bcfdc4c4-76dcl","calico-system/goldmane-768f4c5c69-rxx6b","calico-apiserver/calico-apiserver-6587675c7f-cbcmb","calico-apiserver/calico-apiserver-8bcfdc4c4-9shkp","calico-system/whisker-7d7968965c-f6p9v","kube-system/coredns-674b8bbfcf-vtdcd","kube-system/coredns-674b8bbfcf-6rlkc","calico-system/calico-kube-controllers-76ff444f8d-4xcg9","calico-system/calico-node-tsmrf","calico-system/csi-node-driver-c7jrc","tigera-operator/tigera-operator-747864d56d-n7zrt","calico-system/calico-typha-64bcb76cdd-m4xlg","kube-system/kube-controller-manager-172-232-7-67","kube-system/kube-proxy-mjdwx","kube-system/kube-apiserver-172-232-7-67","kube-system/kube-scheduler-172-232-7-67"] Aug 13 01:46:37.023582 kubelet[2790]: I0813 01:46:37.023531 2790 eviction_manager.go:629] "Eviction manager: pod is evicted successfully" pod="calico-apiserver/calico-apiserver-8bcfdc4c4-76dcl" Aug 13 01:46:37.023582 kubelet[2790]: I0813 01:46:37.023566 2790 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-apiserver/calico-apiserver-8bcfdc4c4-76dcl"] Aug 13 01:46:37.052782 kubelet[2790]: I0813 01:46:37.051829 2790 kubelet.go:2405] "Pod admission denied" podUID="a2517e00-ad33-4059-b0b9-bb6e08cf61ac" pod="calico-apiserver/calico-apiserver-8bcfdc4c4-5xqgl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:37.077384 kubelet[2790]: I0813 01:46:37.077213 2790 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zb5xn\" (UniqueName: \"kubernetes.io/projected/d3393251-40c0-4172-a2b2-9ca9c0a1f573-kube-api-access-zb5xn\") pod \"d3393251-40c0-4172-a2b2-9ca9c0a1f573\" (UID: \"d3393251-40c0-4172-a2b2-9ca9c0a1f573\") " Aug 13 01:46:37.077560 kubelet[2790]: I0813 01:46:37.077515 2790 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d3393251-40c0-4172-a2b2-9ca9c0a1f573-calico-apiserver-certs\") pod \"d3393251-40c0-4172-a2b2-9ca9c0a1f573\" (UID: \"d3393251-40c0-4172-a2b2-9ca9c0a1f573\") " Aug 13 01:46:37.086793 kubelet[2790]: I0813 01:46:37.086724 2790 kubelet.go:2405] "Pod admission denied" podUID="fbe93fc8-95bb-4cb6-8216-c2ecffef767a" pod="calico-apiserver/calico-apiserver-8bcfdc4c4-wrwlw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:37.096772 kubelet[2790]: I0813 01:46:37.094829 2790 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d3393251-40c0-4172-a2b2-9ca9c0a1f573-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "d3393251-40c0-4172-a2b2-9ca9c0a1f573" (UID: "d3393251-40c0-4172-a2b2-9ca9c0a1f573"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 01:46:37.132684 systemd[1]: var-lib-kubelet-pods-d3393251\x2d40c0\x2d4172\x2da2b2\x2d9ca9c0a1f573-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dzb5xn.mount: Deactivated successfully. Aug 13 01:46:37.137383 kubelet[2790]: I0813 01:46:37.094918 2790 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d3393251-40c0-4172-a2b2-9ca9c0a1f573-kube-api-access-zb5xn" (OuterVolumeSpecName: "kube-api-access-zb5xn") pod "d3393251-40c0-4172-a2b2-9ca9c0a1f573" (UID: "d3393251-40c0-4172-a2b2-9ca9c0a1f573"). InnerVolumeSpecName "kube-api-access-zb5xn". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:46:37.135665 systemd[1]: var-lib-kubelet-pods-d3393251\x2d40c0\x2d4172\x2da2b2\x2d9ca9c0a1f573-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Aug 13 01:46:37.179209 kubelet[2790]: I0813 01:46:37.179036 2790 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zb5xn\" (UniqueName: \"kubernetes.io/projected/d3393251-40c0-4172-a2b2-9ca9c0a1f573-kube-api-access-zb5xn\") on node \"172-232-7-67\" DevicePath \"\"" Aug 13 01:46:37.179209 kubelet[2790]: I0813 01:46:37.179079 2790 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/d3393251-40c0-4172-a2b2-9ca9c0a1f573-calico-apiserver-certs\") on node \"172-232-7-67\" DevicePath \"\"" Aug 13 01:46:37.181279 kubelet[2790]: I0813 01:46:37.181231 2790 kubelet.go:2405] "Pod admission denied" podUID="d3b5b4fd-182a-475e-ab5a-7136a2e5163e" pod="calico-apiserver/calico-apiserver-8bcfdc4c4-962gn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:37.208850 kubelet[2790]: I0813 01:46:37.208540 2790 kubelet.go:2405] "Pod admission denied" podUID="2150a439-47d1-4d0f-89ee-e0f8c5f2b840" pod="calico-apiserver/calico-apiserver-8bcfdc4c4-zb2x7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:37.239241 kubelet[2790]: I0813 01:46:37.239156 2790 kubelet.go:2405] "Pod admission denied" podUID="1242d324-c8a5-42cb-a4e4-a47a6accc6b6" pod="calico-apiserver/calico-apiserver-8bcfdc4c4-zw8bc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:37.277161 kubelet[2790]: I0813 01:46:37.276984 2790 kubelet.go:2405] "Pod admission denied" podUID="73198539-e766-4942-8bd0-bf1ea4548dcb" pod="calico-apiserver/calico-apiserver-8bcfdc4c4-gtqfg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:37.316116 kubelet[2790]: I0813 01:46:37.316053 2790 kubelet.go:2405] "Pod admission denied" podUID="7909d45d-2335-4203-bb9b-553ada90f226" pod="calico-apiserver/calico-apiserver-8bcfdc4c4-qlmlj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:37.374471 kubelet[2790]: I0813 01:46:37.374391 2790 kubelet.go:2405] "Pod admission denied" podUID="b1669044-36b7-400a-b9c1-f62554b6f29e" pod="calico-apiserver/calico-apiserver-8bcfdc4c4-k52g7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:37.421087 kubelet[2790]: I0813 01:46:37.421026 2790 kubelet.go:2405] "Pod admission denied" podUID="be62b4b2-c679-41bd-b43d-1eb0e641f277" pod="calico-apiserver/calico-apiserver-8bcfdc4c4-w8tcl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:37.545886 kubelet[2790]: I0813 01:46:37.545468 2790 kubelet.go:2405] "Pod admission denied" podUID="bac0a4fd-b9fa-489d-8f6b-530c89d87edb" pod="calico-apiserver/calico-apiserver-8bcfdc4c4-nstld" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:37.957145 systemd[1]: Removed slice kubepods-besteffort-podd3393251_40c0_4172_a2b2_9ca9c0a1f573.slice - libcontainer container kubepods-besteffort-podd3393251_40c0_4172_a2b2_9ca9c0a1f573.slice. 
Aug 13 01:46:38.024227 kubelet[2790]: I0813 01:46:38.024145 2790 eviction_manager.go:459] "Eviction manager: pods successfully cleaned up" pods=["calico-apiserver/calico-apiserver-8bcfdc4c4-76dcl"] Aug 13 01:46:38.045659 kubelet[2790]: I0813 01:46:38.045616 2790 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:46:38.045821 kubelet[2790]: I0813 01:46:38.045691 2790 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:46:38.050688 kubelet[2790]: I0813 01:46:38.050385 2790 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:46:38.074138 kubelet[2790]: I0813 01:46:38.074101 2790 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:46:38.074389 kubelet[2790]: I0813 01:46:38.074192 2790 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/goldmane-768f4c5c69-rxx6b","calico-apiserver/calico-apiserver-8bcfdc4c4-9shkp","calico-system/whisker-7d7968965c-f6p9v","calico-apiserver/calico-apiserver-6587675c7f-cbcmb","kube-system/coredns-674b8bbfcf-vtdcd","calico-system/calico-kube-controllers-76ff444f8d-4xcg9","kube-system/coredns-674b8bbfcf-6rlkc","calico-system/calico-node-tsmrf","calico-system/csi-node-driver-c7jrc","tigera-operator/tigera-operator-747864d56d-n7zrt","calico-system/calico-typha-64bcb76cdd-m4xlg","kube-system/kube-controller-manager-172-232-7-67","kube-system/kube-proxy-mjdwx","kube-system/kube-apiserver-172-232-7-67","kube-system/kube-scheduler-172-232-7-67"] Aug 13 01:46:38.081030 kubelet[2790]: I0813 01:46:38.080989 2790 eviction_manager.go:629] "Eviction manager: pod is evicted successfully" pod="calico-system/goldmane-768f4c5c69-rxx6b" Aug 13 01:46:38.081030 kubelet[2790]: I0813 01:46:38.081018 2790 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-system/goldmane-768f4c5c69-rxx6b"] Aug 13 01:46:38.084531 kubelet[2790]: I0813 01:46:38.084456 2790 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d359a7e1-fa9f-4da1-9de3-6b7092de2ba9-goldmane-key-pair\") pod \"d359a7e1-fa9f-4da1-9de3-6b7092de2ba9\" (UID: \"d359a7e1-fa9f-4da1-9de3-6b7092de2ba9\") " Aug 13 01:46:38.084778 kubelet[2790]: I0813 01:46:38.084629 2790 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d359a7e1-fa9f-4da1-9de3-6b7092de2ba9-config\") pod \"d359a7e1-fa9f-4da1-9de3-6b7092de2ba9\" (UID: \"d359a7e1-fa9f-4da1-9de3-6b7092de2ba9\") " Aug 13 01:46:38.084913 kubelet[2790]: I0813 01:46:38.084898 2790 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rflj2\" (UniqueName: \"kubernetes.io/projected/d359a7e1-fa9f-4da1-9de3-6b7092de2ba9-kube-api-access-rflj2\") pod \"d359a7e1-fa9f-4da1-9de3-6b7092de2ba9\" (UID: \"d359a7e1-fa9f-4da1-9de3-6b7092de2ba9\") " Aug 13 01:46:38.085176 kubelet[2790]: I0813 01:46:38.084979 2790 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d359a7e1-fa9f-4da1-9de3-6b7092de2ba9-goldmane-ca-bundle\") pod \"d359a7e1-fa9f-4da1-9de3-6b7092de2ba9\" (UID: \"d359a7e1-fa9f-4da1-9de3-6b7092de2ba9\") " Aug 13 01:46:38.086509 kubelet[2790]: I0813 01:46:38.086486 2790 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/d359a7e1-fa9f-4da1-9de3-6b7092de2ba9-goldmane-ca-bundle" (OuterVolumeSpecName: "goldmane-ca-bundle") pod "d359a7e1-fa9f-4da1-9de3-6b7092de2ba9" (UID: "d359a7e1-fa9f-4da1-9de3-6b7092de2ba9"). InnerVolumeSpecName "goldmane-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 01:46:38.086863 kubelet[2790]: I0813 01:46:38.086759 2790 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d359a7e1-fa9f-4da1-9de3-6b7092de2ba9-config" (OuterVolumeSpecName: "config") pod "d359a7e1-fa9f-4da1-9de3-6b7092de2ba9" (UID: "d359a7e1-fa9f-4da1-9de3-6b7092de2ba9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 01:46:38.097103 systemd[1]: var-lib-kubelet-pods-d359a7e1\x2dfa9f\x2d4da1\x2d9de3\x2d6b7092de2ba9-volumes-kubernetes.io\x7esecret-goldmane\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 01:46:38.100497 systemd[1]: var-lib-kubelet-pods-d359a7e1\x2dfa9f\x2d4da1\x2d9de3\x2d6b7092de2ba9-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2drflj2.mount: Deactivated successfully. Aug 13 01:46:38.104614 kubelet[2790]: I0813 01:46:38.104575 2790 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d359a7e1-fa9f-4da1-9de3-6b7092de2ba9-goldmane-key-pair" (OuterVolumeSpecName: "goldmane-key-pair") pod "d359a7e1-fa9f-4da1-9de3-6b7092de2ba9" (UID: "d359a7e1-fa9f-4da1-9de3-6b7092de2ba9"). InnerVolumeSpecName "goldmane-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 01:46:38.107766 kubelet[2790]: I0813 01:46:38.106938 2790 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d359a7e1-fa9f-4da1-9de3-6b7092de2ba9-kube-api-access-rflj2" (OuterVolumeSpecName: "kube-api-access-rflj2") pod "d359a7e1-fa9f-4da1-9de3-6b7092de2ba9" (UID: "d359a7e1-fa9f-4da1-9de3-6b7092de2ba9"). InnerVolumeSpecName "kube-api-access-rflj2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:46:38.185729 kubelet[2790]: I0813 01:46:38.185666 2790 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d359a7e1-fa9f-4da1-9de3-6b7092de2ba9-config\") on node \"172-232-7-67\" DevicePath \"\"" Aug 13 01:46:38.185729 kubelet[2790]: I0813 01:46:38.185707 2790 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rflj2\" (UniqueName: \"kubernetes.io/projected/d359a7e1-fa9f-4da1-9de3-6b7092de2ba9-kube-api-access-rflj2\") on node \"172-232-7-67\" DevicePath \"\"" Aug 13 01:46:38.185729 kubelet[2790]: I0813 01:46:38.185720 2790 reconciler_common.go:299] "Volume detached for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d359a7e1-fa9f-4da1-9de3-6b7092de2ba9-goldmane-ca-bundle\") on node \"172-232-7-67\" DevicePath \"\"" Aug 13 01:46:38.185729 kubelet[2790]: I0813 01:46:38.185730 2790 reconciler_common.go:299] "Volume detached for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/d359a7e1-fa9f-4da1-9de3-6b7092de2ba9-goldmane-key-pair\") on node \"172-232-7-67\" DevicePath \"\"" Aug 13 01:46:38.706696 systemd[1]: Removed slice kubepods-besteffort-podd359a7e1_fa9f_4da1_9de3_6b7092de2ba9.slice - libcontainer container kubepods-besteffort-podd359a7e1_fa9f_4da1_9de3_6b7092de2ba9.slice. 
Aug 13 01:46:39.082189 kubelet[2790]: I0813 01:46:39.082124 2790 eviction_manager.go:459] "Eviction manager: pods successfully cleaned up" pods=["calico-system/goldmane-768f4c5c69-rxx6b"] Aug 13 01:46:39.093179 kubelet[2790]: I0813 01:46:39.093154 2790 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:46:39.093179 kubelet[2790]: I0813 01:46:39.093194 2790 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:46:39.095712 kubelet[2790]: I0813 01:46:39.095641 2790 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:46:39.108583 kubelet[2790]: I0813 01:46:39.108558 2790 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:46:39.108677 kubelet[2790]: I0813 01:46:39.108652 2790 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-apiserver/calico-apiserver-8bcfdc4c4-9shkp","calico-apiserver/calico-apiserver-6587675c7f-cbcmb","calico-system/whisker-7d7968965c-f6p9v","calico-system/calico-kube-controllers-76ff444f8d-4xcg9","kube-system/coredns-674b8bbfcf-vtdcd","kube-system/coredns-674b8bbfcf-6rlkc","calico-system/calico-node-tsmrf","calico-system/csi-node-driver-c7jrc","tigera-operator/tigera-operator-747864d56d-n7zrt","calico-system/calico-typha-64bcb76cdd-m4xlg","kube-system/kube-controller-manager-172-232-7-67","kube-system/kube-proxy-mjdwx","kube-system/kube-apiserver-172-232-7-67","kube-system/kube-scheduler-172-232-7-67"] Aug 13 01:46:39.115266 kubelet[2790]: I0813 01:46:39.115242 2790 eviction_manager.go:629] "Eviction manager: pod is evicted successfully" pod="calico-apiserver/calico-apiserver-8bcfdc4c4-9shkp" Aug 13 01:46:39.115418 kubelet[2790]: I0813 01:46:39.115397 2790 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-apiserver/calico-apiserver-8bcfdc4c4-9shkp"] Aug 13 01:46:39.192093 kubelet[2790]: I0813 01:46:39.191931 2790 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fk64b\" (UniqueName: \"kubernetes.io/projected/77f01e2a-2024-40f5-867c-b4861d62171a-kube-api-access-fk64b\") pod \"77f01e2a-2024-40f5-867c-b4861d62171a\" (UID: \"77f01e2a-2024-40f5-867c-b4861d62171a\") " Aug 13 01:46:39.192093 kubelet[2790]: I0813 01:46:39.191987 2790 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/77f01e2a-2024-40f5-867c-b4861d62171a-calico-apiserver-certs\") pod \"77f01e2a-2024-40f5-867c-b4861d62171a\" (UID: \"77f01e2a-2024-40f5-867c-b4861d62171a\") " Aug 13 01:46:39.199150 kubelet[2790]: I0813 01:46:39.197606 2790 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/77f01e2a-2024-40f5-867c-b4861d62171a-kube-api-access-fk64b" (OuterVolumeSpecName: "kube-api-access-fk64b") pod "77f01e2a-2024-40f5-867c-b4861d62171a" (UID: "77f01e2a-2024-40f5-867c-b4861d62171a"). InnerVolumeSpecName "kube-api-access-fk64b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:46:39.197995 systemd[1]: var-lib-kubelet-pods-77f01e2a\x2d2024\x2d40f5\x2d867c\x2db4861d62171a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfk64b.mount: Deactivated successfully. Aug 13 01:46:39.201548 systemd[1]: var-lib-kubelet-pods-77f01e2a\x2d2024\x2d40f5\x2d867c\x2db4861d62171a-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. 
Aug 13 01:46:39.203209 kubelet[2790]: I0813 01:46:39.203163 2790 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/77f01e2a-2024-40f5-867c-b4861d62171a-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "77f01e2a-2024-40f5-867c-b4861d62171a" (UID: "77f01e2a-2024-40f5-867c-b4861d62171a"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 01:46:39.292855 kubelet[2790]: I0813 01:46:39.292796 2790 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fk64b\" (UniqueName: \"kubernetes.io/projected/77f01e2a-2024-40f5-867c-b4861d62171a-kube-api-access-fk64b\") on node \"172-232-7-67\" DevicePath \"\"" Aug 13 01:46:39.292855 kubelet[2790]: I0813 01:46:39.292836 2790 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/77f01e2a-2024-40f5-867c-b4861d62171a-calico-apiserver-certs\") on node \"172-232-7-67\" DevicePath \"\"" Aug 13 01:46:39.960361 systemd[1]: Removed slice kubepods-besteffort-pod77f01e2a_2024_40f5_867c_b4861d62171a.slice - libcontainer container kubepods-besteffort-pod77f01e2a_2024_40f5_867c_b4861d62171a.slice. Aug 13 01:46:40.115540 kubelet[2790]: I0813 01:46:40.115475 2790 eviction_manager.go:459] "Eviction manager: pods successfully cleaned up" pods=["calico-apiserver/calico-apiserver-8bcfdc4c4-9shkp"] Aug 13 01:46:40.128396 kubelet[2790]: I0813 01:46:40.128342 2790 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:46:40.128396 kubelet[2790]: I0813 01:46:40.128389 2790 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:46:40.130777 kubelet[2790]: I0813 01:46:40.130718 2790 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:46:40.148875 kubelet[2790]: I0813 01:46:40.148832 2790 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:46:40.148991 kubelet[2790]: I0813 01:46:40.148923 2790 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-apiserver/calico-apiserver-6587675c7f-cbcmb","calico-system/whisker-7d7968965c-f6p9v","kube-system/coredns-674b8bbfcf-vtdcd","kube-system/coredns-674b8bbfcf-6rlkc","calico-system/calico-kube-controllers-76ff444f8d-4xcg9","calico-system/csi-node-driver-c7jrc","calico-system/calico-node-tsmrf","tigera-operator/tigera-operator-747864d56d-n7zrt","calico-system/calico-typha-64bcb76cdd-m4xlg","kube-system/kube-controller-manager-172-232-7-67","kube-system/kube-proxy-mjdwx","kube-system/kube-apiserver-172-232-7-67","kube-system/kube-scheduler-172-232-7-67"] Aug 13 01:46:40.156959 kubelet[2790]: I0813 01:46:40.156922 2790 eviction_manager.go:629] "Eviction manager: pod is evicted successfully" pod="calico-apiserver/calico-apiserver-6587675c7f-cbcmb" Aug 13 01:46:40.157116 kubelet[2790]: I0813 01:46:40.157091 2790 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-apiserver/calico-apiserver-6587675c7f-cbcmb"] Aug 13 01:46:40.198319 kubelet[2790]: I0813 01:46:40.198180 2790 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7pqv\" (UniqueName: \"kubernetes.io/projected/95e39f46-24f7-4c8e-8fd8-41c618bd7cd7-kube-api-access-w7pqv\") pod \"95e39f46-24f7-4c8e-8fd8-41c618bd7cd7\" (UID: \"95e39f46-24f7-4c8e-8fd8-41c618bd7cd7\") " Aug 13 01:46:40.199618 kubelet[2790]: I0813 
01:46:40.199580 2790 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/95e39f46-24f7-4c8e-8fd8-41c618bd7cd7-calico-apiserver-certs\") pod \"95e39f46-24f7-4c8e-8fd8-41c618bd7cd7\" (UID: \"95e39f46-24f7-4c8e-8fd8-41c618bd7cd7\") " Aug 13 01:46:40.208148 kubelet[2790]: I0813 01:46:40.207954 2790 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95e39f46-24f7-4c8e-8fd8-41c618bd7cd7-kube-api-access-w7pqv" (OuterVolumeSpecName: "kube-api-access-w7pqv") pod "95e39f46-24f7-4c8e-8fd8-41c618bd7cd7" (UID: "95e39f46-24f7-4c8e-8fd8-41c618bd7cd7"). InnerVolumeSpecName "kube-api-access-w7pqv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:46:40.210864 kubelet[2790]: I0813 01:46:40.208559 2790 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95e39f46-24f7-4c8e-8fd8-41c618bd7cd7-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "95e39f46-24f7-4c8e-8fd8-41c618bd7cd7" (UID: "95e39f46-24f7-4c8e-8fd8-41c618bd7cd7"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 01:46:40.210933 systemd[1]: var-lib-kubelet-pods-95e39f46\x2d24f7\x2d4c8e\x2d8fd8\x2d41c618bd7cd7-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dw7pqv.mount: Deactivated successfully. Aug 13 01:46:40.215161 systemd[1]: var-lib-kubelet-pods-95e39f46\x2d24f7\x2d4c8e\x2d8fd8\x2d41c618bd7cd7-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Aug 13 01:46:40.300920 kubelet[2790]: I0813 01:46:40.300832 2790 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w7pqv\" (UniqueName: \"kubernetes.io/projected/95e39f46-24f7-4c8e-8fd8-41c618bd7cd7-kube-api-access-w7pqv\") on node \"172-232-7-67\" DevicePath \"\"" Aug 13 01:46:40.300920 kubelet[2790]: I0813 01:46:40.300884 2790 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/95e39f46-24f7-4c8e-8fd8-41c618bd7cd7-calico-apiserver-certs\") on node \"172-232-7-67\" DevicePath \"\"" Aug 13 01:46:40.706506 systemd[1]: Removed slice kubepods-besteffort-pod95e39f46_24f7_4c8e_8fd8_41c618bd7cd7.slice - libcontainer container kubepods-besteffort-pod95e39f46_24f7_4c8e_8fd8_41c618bd7cd7.slice. 
Aug 13 01:46:41.158147 kubelet[2790]: I0813 01:46:41.158048 2790 eviction_manager.go:459] "Eviction manager: pods successfully cleaned up" pods=["calico-apiserver/calico-apiserver-6587675c7f-cbcmb"] Aug 13 01:46:41.178201 kubelet[2790]: I0813 01:46:41.178155 2790 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:46:41.178361 kubelet[2790]: I0813 01:46:41.178230 2790 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:46:41.181566 kubelet[2790]: I0813 01:46:41.181471 2790 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:46:41.196119 kubelet[2790]: I0813 01:46:41.196081 2790 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:46:41.196384 kubelet[2790]: I0813 01:46:41.196173 2790 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/whisker-7d7968965c-f6p9v","calico-system/calico-kube-controllers-76ff444f8d-4xcg9","kube-system/coredns-674b8bbfcf-vtdcd","kube-system/coredns-674b8bbfcf-6rlkc","calico-system/calico-node-tsmrf","calico-system/csi-node-driver-c7jrc","tigera-operator/tigera-operator-747864d56d-n7zrt","calico-system/calico-typha-64bcb76cdd-m4xlg","kube-system/kube-controller-manager-172-232-7-67","kube-system/kube-proxy-mjdwx","kube-system/kube-apiserver-172-232-7-67","kube-system/kube-scheduler-172-232-7-67"] Aug 13 01:46:41.201863 kubelet[2790]: I0813 01:46:41.201836 2790 eviction_manager.go:629] "Eviction manager: pod is evicted successfully" pod="calico-system/whisker-7d7968965c-f6p9v" Aug 13 01:46:41.202177 kubelet[2790]: I0813 01:46:41.202133 2790 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-system/whisker-7d7968965c-f6p9v"] Aug 13 01:46:41.206474 kubelet[2790]: I0813 01:46:41.206389 2790 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6b938dc0-adc7-4aa5-8d4d-dbf51ae2cfd2-whisker-backend-key-pair\") pod \"6b938dc0-adc7-4aa5-8d4d-dbf51ae2cfd2\" (UID: \"6b938dc0-adc7-4aa5-8d4d-dbf51ae2cfd2\") " Aug 13 01:46:41.206474 kubelet[2790]: I0813 01:46:41.206432 2790 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tswnm\" (UniqueName: \"kubernetes.io/projected/6b938dc0-adc7-4aa5-8d4d-dbf51ae2cfd2-kube-api-access-tswnm\") pod \"6b938dc0-adc7-4aa5-8d4d-dbf51ae2cfd2\" (UID: \"6b938dc0-adc7-4aa5-8d4d-dbf51ae2cfd2\") " Aug 13 01:46:41.206968 kubelet[2790]: I0813 01:46:41.206575 2790 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b938dc0-adc7-4aa5-8d4d-dbf51ae2cfd2-whisker-ca-bundle\") pod \"6b938dc0-adc7-4aa5-8d4d-dbf51ae2cfd2\" (UID: \"6b938dc0-adc7-4aa5-8d4d-dbf51ae2cfd2\") " Aug 13 01:46:41.207857 kubelet[2790]: I0813 01:46:41.207735 2790 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b938dc0-adc7-4aa5-8d4d-dbf51ae2cfd2-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "6b938dc0-adc7-4aa5-8d4d-dbf51ae2cfd2" (UID: "6b938dc0-adc7-4aa5-8d4d-dbf51ae2cfd2"). InnerVolumeSpecName "whisker-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 01:46:41.213583 kubelet[2790]: I0813 01:46:41.212576 2790 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b938dc0-adc7-4aa5-8d4d-dbf51ae2cfd2-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "6b938dc0-adc7-4aa5-8d4d-dbf51ae2cfd2" (UID: "6b938dc0-adc7-4aa5-8d4d-dbf51ae2cfd2"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 01:46:41.215715 systemd[1]: var-lib-kubelet-pods-6b938dc0\x2dadc7\x2d4aa5\x2d8d4d\x2ddbf51ae2cfd2-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 01:46:41.221671 systemd[1]: var-lib-kubelet-pods-6b938dc0\x2dadc7\x2d4aa5\x2d8d4d\x2ddbf51ae2cfd2-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dtswnm.mount: Deactivated successfully. Aug 13 01:46:41.223482 kubelet[2790]: I0813 01:46:41.223441 2790 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b938dc0-adc7-4aa5-8d4d-dbf51ae2cfd2-kube-api-access-tswnm" (OuterVolumeSpecName: "kube-api-access-tswnm") pod "6b938dc0-adc7-4aa5-8d4d-dbf51ae2cfd2" (UID: "6b938dc0-adc7-4aa5-8d4d-dbf51ae2cfd2"). InnerVolumeSpecName "kube-api-access-tswnm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:46:41.307306 kubelet[2790]: I0813 01:46:41.307241 2790 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6b938dc0-adc7-4aa5-8d4d-dbf51ae2cfd2-whisker-backend-key-pair\") on node \"172-232-7-67\" DevicePath \"\"" Aug 13 01:46:41.307306 kubelet[2790]: I0813 01:46:41.307284 2790 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tswnm\" (UniqueName: \"kubernetes.io/projected/6b938dc0-adc7-4aa5-8d4d-dbf51ae2cfd2-kube-api-access-tswnm\") on node \"172-232-7-67\" DevicePath \"\"" Aug 13 01:46:41.307306 kubelet[2790]: I0813 01:46:41.307303 2790 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6b938dc0-adc7-4aa5-8d4d-dbf51ae2cfd2-whisker-ca-bundle\") on node \"172-232-7-67\" DevicePath \"\"" Aug 13 01:46:41.964494 systemd[1]: Removed slice kubepods-besteffort-pod6b938dc0_adc7_4aa5_8d4d_dbf51ae2cfd2.slice - libcontainer container kubepods-besteffort-pod6b938dc0_adc7_4aa5_8d4d_dbf51ae2cfd2.slice. 
Aug 13 01:46:42.202467 kubelet[2790]: I0813 01:46:42.202392 2790 eviction_manager.go:459] "Eviction manager: pods successfully cleaned up" pods=["calico-system/whisker-7d7968965c-f6p9v"] Aug 13 01:46:42.221851 kubelet[2790]: I0813 01:46:42.221516 2790 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:46:42.221851 kubelet[2790]: I0813 01:46:42.221569 2790 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:46:42.224703 kubelet[2790]: I0813 01:46:42.224637 2790 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:46:42.238593 kubelet[2790]: I0813 01:46:42.238542 2790 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:46:42.238771 kubelet[2790]: I0813 01:46:42.238691 2790 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-674b8bbfcf-6rlkc","calico-system/calico-kube-controllers-76ff444f8d-4xcg9","kube-system/coredns-674b8bbfcf-vtdcd","calico-system/csi-node-driver-c7jrc","calico-system/calico-node-tsmrf","tigera-operator/tigera-operator-747864d56d-n7zrt","calico-system/calico-typha-64bcb76cdd-m4xlg","kube-system/kube-controller-manager-172-232-7-67","kube-system/kube-proxy-mjdwx","kube-system/kube-apiserver-172-232-7-67","kube-system/kube-scheduler-172-232-7-67"] Aug 13 01:46:42.238771 kubelet[2790]: E0813 01:46:42.238722 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:46:42.238771 kubelet[2790]: E0813 01:46:42.238732 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:46:42.238771 kubelet[2790]: E0813 01:46:42.238759 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:46:42.238771 kubelet[2790]: E0813 01:46:42.238766 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:46:42.238771 kubelet[2790]: E0813 01:46:42.238772 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-tsmrf" Aug 13 01:46:42.240217 containerd[1556]: time="2025-08-13T01:46:42.240173851Z" level=info msg="StopContainer for \"773634732375f5beb6ed85668c63244e6d495ae9e3ea516f9fd9b8e924e33198\" with timeout 2 (s)" Aug 13 01:46:42.241194 containerd[1556]: time="2025-08-13T01:46:42.241153180Z" level=info msg="Stop container \"773634732375f5beb6ed85668c63244e6d495ae9e3ea516f9fd9b8e924e33198\" with signal terminated" Aug 13 01:46:42.326040 systemd[1]: cri-containerd-773634732375f5beb6ed85668c63244e6d495ae9e3ea516f9fd9b8e924e33198.scope: Deactivated successfully. Aug 13 01:46:42.326441 systemd[1]: cri-containerd-773634732375f5beb6ed85668c63244e6d495ae9e3ea516f9fd9b8e924e33198.scope: Consumed 5.231s CPU time, 83.2M memory peak. 
Aug 13 01:46:42.332121 containerd[1556]: time="2025-08-13T01:46:42.331951098Z" level=info msg="received exit event container_id:\"773634732375f5beb6ed85668c63244e6d495ae9e3ea516f9fd9b8e924e33198\" id:\"773634732375f5beb6ed85668c63244e6d495ae9e3ea516f9fd9b8e924e33198\" pid:3116 exited_at:{seconds:1755049602 nanos:331450539}" Aug 13 01:46:42.332254 containerd[1556]: time="2025-08-13T01:46:42.332217768Z" level=info msg="TaskExit event in podsandbox handler container_id:\"773634732375f5beb6ed85668c63244e6d495ae9e3ea516f9fd9b8e924e33198\" id:\"773634732375f5beb6ed85668c63244e6d495ae9e3ea516f9fd9b8e924e33198\" pid:3116 exited_at:{seconds:1755049602 nanos:331450539}" Aug 13 01:46:42.358242 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-773634732375f5beb6ed85668c63244e6d495ae9e3ea516f9fd9b8e924e33198-rootfs.mount: Deactivated successfully. Aug 13 01:46:42.364511 containerd[1556]: time="2025-08-13T01:46:42.364478482Z" level=info msg="StopContainer for \"773634732375f5beb6ed85668c63244e6d495ae9e3ea516f9fd9b8e924e33198\" returns successfully" Aug 13 01:46:42.365208 containerd[1556]: time="2025-08-13T01:46:42.365180822Z" level=info msg="StopPodSandbox for \"d93ba90d490463a283c631823074ee5ab3b226108e5851a0130f31c332b1132f\"" Aug 13 01:46:42.365301 containerd[1556]: time="2025-08-13T01:46:42.365277532Z" level=info msg="Container to stop \"773634732375f5beb6ed85668c63244e6d495ae9e3ea516f9fd9b8e924e33198\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 01:46:42.376171 systemd[1]: cri-containerd-d93ba90d490463a283c631823074ee5ab3b226108e5851a0130f31c332b1132f.scope: Deactivated successfully. Aug 13 01:46:42.379369 containerd[1556]: time="2025-08-13T01:46:42.379154151Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d93ba90d490463a283c631823074ee5ab3b226108e5851a0130f31c332b1132f\" id:\"d93ba90d490463a283c631823074ee5ab3b226108e5851a0130f31c332b1132f\" pid:2915 exit_status:137 exited_at:{seconds:1755049602 nanos:378454821}" Aug 13 01:46:42.409512 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d93ba90d490463a283c631823074ee5ab3b226108e5851a0130f31c332b1132f-rootfs.mount: Deactivated successfully. 
Aug 13 01:46:42.413285 containerd[1556]: time="2025-08-13T01:46:42.413240644Z" level=info msg="shim disconnected" id=d93ba90d490463a283c631823074ee5ab3b226108e5851a0130f31c332b1132f namespace=k8s.io Aug 13 01:46:42.413285 containerd[1556]: time="2025-08-13T01:46:42.413276953Z" level=warning msg="cleaning up after shim disconnected" id=d93ba90d490463a283c631823074ee5ab3b226108e5851a0130f31c332b1132f namespace=k8s.io Aug 13 01:46:42.413442 containerd[1556]: time="2025-08-13T01:46:42.413289483Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 01:46:42.428792 containerd[1556]: time="2025-08-13T01:46:42.428631151Z" level=info msg="received exit event sandbox_id:\"d93ba90d490463a283c631823074ee5ab3b226108e5851a0130f31c332b1132f\" exit_status:137 exited_at:{seconds:1755049602 nanos:378454821}" Aug 13 01:46:42.430096 containerd[1556]: time="2025-08-13T01:46:42.429972290Z" level=info msg="TearDown network for sandbox \"d93ba90d490463a283c631823074ee5ab3b226108e5851a0130f31c332b1132f\" successfully" Aug 13 01:46:42.430096 containerd[1556]: time="2025-08-13T01:46:42.429996030Z" level=info msg="StopPodSandbox for \"d93ba90d490463a283c631823074ee5ab3b226108e5851a0130f31c332b1132f\" returns successfully" Aug 13 01:46:42.431670 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-d93ba90d490463a283c631823074ee5ab3b226108e5851a0130f31c332b1132f-shm.mount: Deactivated successfully. Aug 13 01:46:42.438111 kubelet[2790]: I0813 01:46:42.438027 2790 eviction_manager.go:629] "Eviction manager: pod is evicted successfully" pod="tigera-operator/tigera-operator-747864d56d-n7zrt" Aug 13 01:46:42.438111 kubelet[2790]: I0813 01:46:42.438047 2790 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["tigera-operator/tigera-operator-747864d56d-n7zrt"] Aug 13 01:46:42.468779 kubelet[2790]: I0813 01:46:42.468640 2790 kubelet.go:2405] "Pod admission denied" podUID="ae2f37d7-e59d-49b2-9716-c0416865bb42" pod="tigera-operator/tigera-operator-747864d56d-qrf7c" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:42.488601 kubelet[2790]: I0813 01:46:42.488236 2790 kubelet.go:2405] "Pod admission denied" podUID="5c2fd679-6d5c-44de-a60e-dfc54ebc7198" pod="tigera-operator/tigera-operator-747864d56d-9jrzz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:42.513818 kubelet[2790]: I0813 01:46:42.513675 2790 kubelet.go:2405] "Pod admission denied" podUID="a05e3997-3e58-4fc4-97ac-01db0c71f3ad" pod="tigera-operator/tigera-operator-747864d56d-kjs4c" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:42.515267 kubelet[2790]: I0813 01:46:42.515242 2790 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d9ece072-77bc-4878-9cdf-811be2efec7d-var-lib-calico\") pod \"d9ece072-77bc-4878-9cdf-811be2efec7d\" (UID: \"d9ece072-77bc-4878-9cdf-811be2efec7d\") " Aug 13 01:46:42.515412 kubelet[2790]: I0813 01:46:42.515323 2790 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hwpls\" (UniqueName: \"kubernetes.io/projected/d9ece072-77bc-4878-9cdf-811be2efec7d-kube-api-access-hwpls\") pod \"d9ece072-77bc-4878-9cdf-811be2efec7d\" (UID: \"d9ece072-77bc-4878-9cdf-811be2efec7d\") " Aug 13 01:46:42.515834 kubelet[2790]: I0813 01:46:42.515812 2790 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d9ece072-77bc-4878-9cdf-811be2efec7d-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "d9ece072-77bc-4878-9cdf-811be2efec7d" (UID: "d9ece072-77bc-4878-9cdf-811be2efec7d"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:46:42.524999 kubelet[2790]: I0813 01:46:42.524960 2790 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9ece072-77bc-4878-9cdf-811be2efec7d-kube-api-access-hwpls" (OuterVolumeSpecName: "kube-api-access-hwpls") pod "d9ece072-77bc-4878-9cdf-811be2efec7d" (UID: "d9ece072-77bc-4878-9cdf-811be2efec7d"). InnerVolumeSpecName "kube-api-access-hwpls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:46:42.525854 systemd[1]: var-lib-kubelet-pods-d9ece072\x2d77bc\x2d4878\x2d9cdf\x2d811be2efec7d-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dhwpls.mount: Deactivated successfully. Aug 13 01:46:42.544514 kubelet[2790]: I0813 01:46:42.544462 2790 kubelet.go:2405] "Pod admission denied" podUID="e85d009d-f3b6-4667-aea9-88ce50ac20cb" pod="tigera-operator/tigera-operator-747864d56d-qg6pb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:42.569304 kubelet[2790]: I0813 01:46:42.569176 2790 kubelet.go:2405] "Pod admission denied" podUID="57f69ec5-b502-46ed-a45e-b4914b540a43" pod="tigera-operator/tigera-operator-747864d56d-ptpp2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:42.591474 kubelet[2790]: I0813 01:46:42.591404 2790 kubelet.go:2405] "Pod admission denied" podUID="90e14221-819f-49d8-9aee-67cf1e5f15eb" pod="tigera-operator/tigera-operator-747864d56d-6gzzf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:42.609897 kubelet[2790]: I0813 01:46:42.609852 2790 kubelet.go:2405] "Pod admission denied" podUID="a8187c7c-7bf2-4ac4-b523-fa55f723491c" pod="tigera-operator/tigera-operator-747864d56d-97cjr" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:42.615892 kubelet[2790]: I0813 01:46:42.615852 2790 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hwpls\" (UniqueName: \"kubernetes.io/projected/d9ece072-77bc-4878-9cdf-811be2efec7d-kube-api-access-hwpls\") on node \"172-232-7-67\" DevicePath \"\"" Aug 13 01:46:42.615892 kubelet[2790]: I0813 01:46:42.615883 2790 reconciler_common.go:299] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d9ece072-77bc-4878-9cdf-811be2efec7d-var-lib-calico\") on node \"172-232-7-67\" DevicePath \"\"" Aug 13 01:46:42.629097 kubelet[2790]: I0813 01:46:42.629049 2790 kubelet.go:2405] "Pod admission denied" podUID="11b3f9d7-7194-4d61-b5cd-fb3095bdf57c" pod="tigera-operator/tigera-operator-747864d56d-8kbdl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:42.656235 kubelet[2790]: I0813 01:46:42.656176 2790 kubelet.go:2405] "Pod admission denied" podUID="6fe60052-4e26-487a-9111-bc31ed2ed2d7" pod="tigera-operator/tigera-operator-747864d56d-2l69q" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:42.694967 containerd[1556]: time="2025-08-13T01:46:42.694911870Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76ff444f8d-4xcg9,Uid:f88563f6-5704-426b-aecc-303b3869ce30,Namespace:calico-system,Attempt:0,}" Aug 13 01:46:42.712086 systemd[1]: Removed slice kubepods-besteffort-podd9ece072_77bc_4878_9cdf_811be2efec7d.slice - libcontainer container kubepods-besteffort-podd9ece072_77bc_4878_9cdf_811be2efec7d.slice. Aug 13 01:46:42.712404 systemd[1]: kubepods-besteffort-podd9ece072_77bc_4878_9cdf_811be2efec7d.slice: Consumed 5.272s CPU time, 83.5M memory peak. Aug 13 01:46:42.755925 containerd[1556]: time="2025-08-13T01:46:42.755500662Z" level=error msg="Failed to destroy network for sandbox \"f715a08533b96085c44f577e4647ed18fd83fc28e390d3d04fb5ef17eced3da1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:42.757808 containerd[1556]: time="2025-08-13T01:46:42.757734700Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76ff444f8d-4xcg9,Uid:f88563f6-5704-426b-aecc-303b3869ce30,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f715a08533b96085c44f577e4647ed18fd83fc28e390d3d04fb5ef17eced3da1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:42.758085 kubelet[2790]: E0813 01:46:42.758019 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f715a08533b96085c44f577e4647ed18fd83fc28e390d3d04fb5ef17eced3da1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:42.758159 kubelet[2790]: E0813 01:46:42.758085 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f715a08533b96085c44f577e4647ed18fd83fc28e390d3d04fb5ef17eced3da1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:46:42.758159 kubelet[2790]: E0813 01:46:42.758115 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f715a08533b96085c44f577e4647ed18fd83fc28e390d3d04fb5ef17eced3da1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:46:42.758578 kubelet[2790]: E0813 01:46:42.758311 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76ff444f8d-4xcg9_calico-system(f88563f6-5704-426b-aecc-303b3869ce30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76ff444f8d-4xcg9_calico-system(f88563f6-5704-426b-aecc-303b3869ce30)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f715a08533b96085c44f577e4647ed18fd83fc28e390d3d04fb5ef17eced3da1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" podUID="f88563f6-5704-426b-aecc-303b3869ce30" Aug 13 01:46:42.812071 kubelet[2790]: I0813 01:46:42.811975 2790 kubelet.go:2405] "Pod admission denied" podUID="7524270a-81bf-4b95-9a9a-149eaff9d8e8" pod="tigera-operator/tigera-operator-747864d56d-mbxps" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:42.964110 kubelet[2790]: I0813 01:46:42.963956 2790 scope.go:117] "RemoveContainer" containerID="773634732375f5beb6ed85668c63244e6d495ae9e3ea516f9fd9b8e924e33198" Aug 13 01:46:42.969804 containerd[1556]: time="2025-08-13T01:46:42.969419312Z" level=info msg="RemoveContainer for \"773634732375f5beb6ed85668c63244e6d495ae9e3ea516f9fd9b8e924e33198\"" Aug 13 01:46:42.973561 kubelet[2790]: I0813 01:46:42.973291 2790 kubelet.go:2405] "Pod admission denied" podUID="93b24dc1-eb2c-42b6-b562-e92ffb45e7b4" pod="tigera-operator/tigera-operator-747864d56d-8twft" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:42.975735 containerd[1556]: time="2025-08-13T01:46:42.975663017Z" level=info msg="RemoveContainer for \"773634732375f5beb6ed85668c63244e6d495ae9e3ea516f9fd9b8e924e33198\" returns successfully" Aug 13 01:46:42.976812 kubelet[2790]: I0813 01:46:42.976675 2790 scope.go:117] "RemoveContainer" containerID="773634732375f5beb6ed85668c63244e6d495ae9e3ea516f9fd9b8e924e33198" Aug 13 01:46:42.977520 containerd[1556]: time="2025-08-13T01:46:42.977471796Z" level=error msg="ContainerStatus for \"773634732375f5beb6ed85668c63244e6d495ae9e3ea516f9fd9b8e924e33198\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"773634732375f5beb6ed85668c63244e6d495ae9e3ea516f9fd9b8e924e33198\": not found" Aug 13 01:46:42.977680 kubelet[2790]: E0813 01:46:42.977657 2790 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"773634732375f5beb6ed85668c63244e6d495ae9e3ea516f9fd9b8e924e33198\": not found" containerID="773634732375f5beb6ed85668c63244e6d495ae9e3ea516f9fd9b8e924e33198" Aug 13 01:46:42.977897 kubelet[2790]: I0813 01:46:42.977687 2790 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"773634732375f5beb6ed85668c63244e6d495ae9e3ea516f9fd9b8e924e33198"} err="failed to get container status \"773634732375f5beb6ed85668c63244e6d495ae9e3ea516f9fd9b8e924e33198\": rpc error: code = NotFound desc = an error occurred when try to find container \"773634732375f5beb6ed85668c63244e6d495ae9e3ea516f9fd9b8e924e33198\": not found" Aug 13 01:46:43.115721 kubelet[2790]: I0813 01:46:43.115284 2790 kubelet.go:2405] "Pod admission denied" podUID="4b5d8a83-1c15-4347-882b-926b736420ec" pod="tigera-operator/tigera-operator-747864d56d-5zhp7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:43.265956 kubelet[2790]: I0813 01:46:43.265910 2790 kubelet.go:2405] "Pod admission denied" podUID="63c4d6c4-6fd1-4eb6-832a-af66b10b14da" pod="tigera-operator/tigera-operator-747864d56d-wdx6t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:43.357226 systemd[1]: run-netns-cni\x2d29e8b107\x2dafd0\x2dd94f\x2d9abc\x2da1613cc0dfc9.mount: Deactivated successfully. Aug 13 01:46:43.418699 kubelet[2790]: I0813 01:46:43.418366 2790 kubelet.go:2405] "Pod admission denied" podUID="ac614344-9b8e-47b9-9da0-32208dd60896" pod="tigera-operator/tigera-operator-747864d56d-kf6lv" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:43.438764 kubelet[2790]: I0813 01:46:43.438712 2790 eviction_manager.go:459] "Eviction manager: pods successfully cleaned up" pods=["tigera-operator/tigera-operator-747864d56d-n7zrt"] Aug 13 01:46:43.460570 kubelet[2790]: I0813 01:46:43.460507 2790 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:46:43.461384 kubelet[2790]: I0813 01:46:43.460668 2790 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:46:43.466360 containerd[1556]: time="2025-08-13T01:46:43.466218659Z" level=info msg="StopPodSandbox for \"d93ba90d490463a283c631823074ee5ab3b226108e5851a0130f31c332b1132f\"" Aug 13 01:46:43.467492 containerd[1556]: time="2025-08-13T01:46:43.467085914Z" level=info msg="TearDown network for sandbox \"d93ba90d490463a283c631823074ee5ab3b226108e5851a0130f31c332b1132f\" successfully" Aug 13 01:46:43.467492 containerd[1556]: time="2025-08-13T01:46:43.467150764Z" level=info msg="StopPodSandbox for \"d93ba90d490463a283c631823074ee5ab3b226108e5851a0130f31c332b1132f\" returns successfully" Aug 13 01:46:43.470001 containerd[1556]: time="2025-08-13T01:46:43.469928099Z" level=info msg="RemovePodSandbox for \"d93ba90d490463a283c631823074ee5ab3b226108e5851a0130f31c332b1132f\"" Aug 13 01:46:43.470001 containerd[1556]: time="2025-08-13T01:46:43.469973978Z" level=info msg="Forcibly stopping sandbox \"d93ba90d490463a283c631823074ee5ab3b226108e5851a0130f31c332b1132f\"" Aug 13 01:46:43.470295 containerd[1556]: time="2025-08-13T01:46:43.470251727Z" level=info msg="TearDown network for sandbox \"d93ba90d490463a283c631823074ee5ab3b226108e5851a0130f31c332b1132f\" successfully" Aug 13 01:46:43.471958 containerd[1556]: time="2025-08-13T01:46:43.471900108Z" level=info msg="Ensure that sandbox d93ba90d490463a283c631823074ee5ab3b226108e5851a0130f31c332b1132f in task-service has been cleanup successfully" Aug 13 01:46:43.474989 containerd[1556]: time="2025-08-13T01:46:43.474926811Z" level=info msg="RemovePodSandbox \"d93ba90d490463a283c631823074ee5ab3b226108e5851a0130f31c332b1132f\" returns successfully" Aug 13 01:46:43.475825 kubelet[2790]: I0813 01:46:43.475786 2790 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:46:43.503286 kubelet[2790]: I0813 01:46:43.503218 2790 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:46:43.503706 kubelet[2790]: I0813 01:46:43.503307 2790 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-674b8bbfcf-6rlkc","calico-system/calico-kube-controllers-76ff444f8d-4xcg9","kube-system/coredns-674b8bbfcf-vtdcd","calico-system/calico-node-tsmrf","calico-system/csi-node-driver-c7jrc","calico-system/calico-typha-64bcb76cdd-m4xlg","kube-system/kube-controller-manager-172-232-7-67","kube-system/kube-proxy-mjdwx","kube-system/kube-apiserver-172-232-7-67","kube-system/kube-scheduler-172-232-7-67"] Aug 13 01:46:43.503706 kubelet[2790]: E0813 01:46:43.503337 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:46:43.503706 kubelet[2790]: E0813 01:46:43.503348 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:46:43.503706 kubelet[2790]: E0813 01:46:43.503355 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:46:43.503706 
kubelet[2790]: E0813 01:46:43.503363 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-tsmrf" Aug 13 01:46:43.503706 kubelet[2790]: E0813 01:46:43.503521 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:46:43.503706 kubelet[2790]: E0813 01:46:43.503530 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-64bcb76cdd-m4xlg" Aug 13 01:46:43.503706 kubelet[2790]: E0813 01:46:43.503537 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-67" Aug 13 01:46:43.503706 kubelet[2790]: E0813 01:46:43.503544 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mjdwx" Aug 13 01:46:43.503706 kubelet[2790]: E0813 01:46:43.503551 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-67" Aug 13 01:46:43.503706 kubelet[2790]: E0813 01:46:43.503559 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-67" Aug 13 01:46:43.503706 kubelet[2790]: I0813 01:46:43.503569 2790 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:46:43.565054 kubelet[2790]: I0813 01:46:43.564961 2790 kubelet.go:2405] "Pod admission denied" podUID="7d118a42-79f7-42dc-9e94-9ded34d60396" pod="tigera-operator/tigera-operator-747864d56d-5wjg8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:43.695085 containerd[1556]: time="2025-08-13T01:46:43.694899977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7jrc,Uid:4296a7ed-e75a-4d74-935a-9017b9a86286,Namespace:calico-system,Attempt:0,}" Aug 13 01:46:43.764781 containerd[1556]: time="2025-08-13T01:46:43.764025421Z" level=error msg="Failed to destroy network for sandbox \"f5fe5034cf94228697cda17cc41e6096e1859e4e1d856371c2fc11e635073aa0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:43.770405 systemd[1]: run-netns-cni\x2d4e096778\x2d013a\x2d2178\x2d305c\x2dd2492e353f7a.mount: Deactivated successfully. 
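The eviction manager entries above rank every remaining pod for ephemeral-storage reclaim and then skip all of them as critical. A rough sketch of that gate, under the assumption that the kubelet refuses to evict static/mirror pods and pods at system-critical priority (around 2000000000); the priority values and pod flags below are illustrative, not read from the log:

```go
package main

import "fmt"

// Assumed threshold for the system-cluster-critical / system-node-critical priority classes.
const systemCriticalPriority = 2000000000

type pod struct {
	name     string
	isStatic bool  // static/mirror pods (e.g. kube-apiserver) are never evicted
	priority int32 // value from the pod's priority class (assumed here)
}

func isCriticalPod(p pod) bool {
	return p.isStatic || p.priority >= systemCriticalPriority
}

func main() {
	ranked := []pod{
		{"kube-system/coredns-674b8bbfcf-6rlkc", false, systemCriticalPriority},
		{"calico-system/calico-node-tsmrf", false, systemCriticalPriority},
		{"kube-system/kube-apiserver-172-232-7-67", true, systemCriticalPriority},
	}
	for _, p := range ranked {
		if isCriticalPod(p) {
			fmt.Printf("Eviction manager: cannot evict a critical pod pod=%q\n", p.name)
		}
	}
	fmt.Println("Eviction manager: unable to evict any pods from the node")
}
```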
Aug 13 01:46:43.771984 containerd[1556]: time="2025-08-13T01:46:43.771778529Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7jrc,Uid:4296a7ed-e75a-4d74-935a-9017b9a86286,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5fe5034cf94228697cda17cc41e6096e1859e4e1d856371c2fc11e635073aa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:43.775285 kubelet[2790]: E0813 01:46:43.775139 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5fe5034cf94228697cda17cc41e6096e1859e4e1d856371c2fc11e635073aa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:43.775385 kubelet[2790]: E0813 01:46:43.775312 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5fe5034cf94228697cda17cc41e6096e1859e4e1d856371c2fc11e635073aa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:46:43.775385 kubelet[2790]: E0813 01:46:43.775361 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5fe5034cf94228697cda17cc41e6096e1859e4e1d856371c2fc11e635073aa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:46:43.775494 kubelet[2790]: E0813 01:46:43.775448 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-c7jrc_calico-system(4296a7ed-e75a-4d74-935a-9017b9a86286)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-c7jrc_calico-system(4296a7ed-e75a-4d74-935a-9017b9a86286)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5fe5034cf94228697cda17cc41e6096e1859e4e1d856371c2fc11e635073aa0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-c7jrc" podUID="4296a7ed-e75a-4d74-935a-9017b9a86286" Aug 13 01:46:43.818774 kubelet[2790]: I0813 01:46:43.818412 2790 kubelet.go:2405] "Pod admission denied" podUID="8e41dcc6-4218-4017-9648-9e49c4d34211" pod="tigera-operator/tigera-operator-747864d56d-rm75j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:43.962682 kubelet[2790]: I0813 01:46:43.962422 2790 kubelet.go:2405] "Pod admission denied" podUID="decad915-e1bd-4874-80f2-fb4613c8e09f" pod="tigera-operator/tigera-operator-747864d56d-j577c" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:44.114920 kubelet[2790]: I0813 01:46:44.114836 2790 kubelet.go:2405] "Pod admission denied" podUID="915f8dcb-577c-4670-b6b6-e7a8f001eaac" pod="tigera-operator/tigera-operator-747864d56d-qj56x" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:44.364688 kubelet[2790]: I0813 01:46:44.364511 2790 kubelet.go:2405] "Pod admission denied" podUID="e5ca4f5f-1857-4d94-bec0-c1034fd34d0c" pod="tigera-operator/tigera-operator-747864d56d-f7tmq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:44.516562 kubelet[2790]: I0813 01:46:44.516479 2790 kubelet.go:2405] "Pod admission denied" podUID="91b39af0-9625-428a-8bff-e42c013dc2b2" pod="tigera-operator/tigera-operator-747864d56d-b5jn7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:44.614110 kubelet[2790]: I0813 01:46:44.613826 2790 kubelet.go:2405] "Pod admission denied" podUID="8f478e4e-1b8f-44c9-9102-1090e76faf6e" pod="tigera-operator/tigera-operator-747864d56d-cgnnb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:44.695001 kubelet[2790]: E0813 01:46:44.694697 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:46:44.696162 containerd[1556]: time="2025-08-13T01:46:44.696081804Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vtdcd,Uid:caa5836a-45f9-496b-86c1-95f6e1b6da17,Namespace:kube-system,Attempt:0,}" Aug 13 01:46:44.764233 kubelet[2790]: I0813 01:46:44.764116 2790 kubelet.go:2405] "Pod admission denied" podUID="398b2114-2e7a-4b41-984c-47b19f109890" pod="tigera-operator/tigera-operator-747864d56d-kjmrg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:44.804719 containerd[1556]: time="2025-08-13T01:46:44.804660840Z" level=error msg="Failed to destroy network for sandbox \"6d091df9d8674c1ec3f18a53e6e02010130776c86b932446f09f906b89789c85\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:44.807658 systemd[1]: run-netns-cni\x2d1a858e82\x2dc307\x2db99b\x2d8a88\x2d01ced145b1f2.mount: Deactivated successfully. 
Aug 13 01:46:44.808919 containerd[1556]: time="2025-08-13T01:46:44.808729829Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vtdcd,Uid:caa5836a-45f9-496b-86c1-95f6e1b6da17,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d091df9d8674c1ec3f18a53e6e02010130776c86b932446f09f906b89789c85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:44.809586 kubelet[2790]: E0813 01:46:44.809537 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d091df9d8674c1ec3f18a53e6e02010130776c86b932446f09f906b89789c85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:44.809669 kubelet[2790]: E0813 01:46:44.809611 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d091df9d8674c1ec3f18a53e6e02010130776c86b932446f09f906b89789c85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:46:44.809669 kubelet[2790]: E0813 01:46:44.809654 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6d091df9d8674c1ec3f18a53e6e02010130776c86b932446f09f906b89789c85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:46:44.809959 kubelet[2790]: E0813 01:46:44.809733 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-vtdcd_kube-system(caa5836a-45f9-496b-86c1-95f6e1b6da17)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-vtdcd_kube-system(caa5836a-45f9-496b-86c1-95f6e1b6da17)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6d091df9d8674c1ec3f18a53e6e02010130776c86b932446f09f906b89789c85\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-vtdcd" podUID="caa5836a-45f9-496b-86c1-95f6e1b6da17" Aug 13 01:46:44.925569 kubelet[2790]: I0813 01:46:44.925512 2790 kubelet.go:2405] "Pod admission denied" podUID="1e494f34-40f2-47e7-9582-4f669e0ae253" pod="tigera-operator/tigera-operator-747864d56d-drsgd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:45.016185 kubelet[2790]: I0813 01:46:45.016120 2790 kubelet.go:2405] "Pod admission denied" podUID="15154f53-ed3f-4a61-92a0-9d7b2480954e" pod="tigera-operator/tigera-operator-747864d56d-8sg4m" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:45.164403 kubelet[2790]: I0813 01:46:45.163925 2790 kubelet.go:2405] "Pod admission denied" podUID="25d8219f-6a30-421e-9cbf-bb60b5b8e001" pod="tigera-operator/tigera-operator-747864d56d-7h4hh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:45.363790 kubelet[2790]: I0813 01:46:45.363206 2790 kubelet.go:2405] "Pod admission denied" podUID="46c637c1-acaa-41cd-be20-45e81a8db576" pod="tigera-operator/tigera-operator-747864d56d-fqdzt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:45.464080 kubelet[2790]: I0813 01:46:45.464032 2790 kubelet.go:2405] "Pod admission denied" podUID="b9a50bf4-848a-45d3-8f53-3b42c6b95419" pod="tigera-operator/tigera-operator-747864d56d-9zgsw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:45.565781 kubelet[2790]: I0813 01:46:45.565624 2790 kubelet.go:2405] "Pod admission denied" podUID="683e9042-5af3-44e0-a7f6-0a8f94202edd" pod="tigera-operator/tigera-operator-747864d56d-8xjjb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:45.664431 kubelet[2790]: I0813 01:46:45.664058 2790 kubelet.go:2405] "Pod admission denied" podUID="b5ba434a-5ee2-4258-ae58-f6993b0d3942" pod="tigera-operator/tigera-operator-747864d56d-sxmmc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:45.694319 kubelet[2790]: E0813 01:46:45.694280 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:46:45.694920 containerd[1556]: time="2025-08-13T01:46:45.694848021Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6rlkc,Uid:21a6ba02-58d5-43c1-a7de-9e24560a65f6,Namespace:kube-system,Attempt:0,}" Aug 13 01:46:45.761830 containerd[1556]: time="2025-08-13T01:46:45.761737428Z" level=error msg="Failed to destroy network for sandbox \"7c4b044611a5120316e9b427776c9269ca224b8daf7491ebb5d55fb1df16849f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:45.764762 systemd[1]: run-netns-cni\x2dd709b6ab\x2da9a1\x2d7169\x2dcf2b\x2ddb1a1640db98.mount: Deactivated successfully. 
Aug 13 01:46:45.765895 containerd[1556]: time="2025-08-13T01:46:45.765505329Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6rlkc,Uid:21a6ba02-58d5-43c1-a7de-9e24560a65f6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c4b044611a5120316e9b427776c9269ca224b8daf7491ebb5d55fb1df16849f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:45.767025 kubelet[2790]: E0813 01:46:45.766707 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c4b044611a5120316e9b427776c9269ca224b8daf7491ebb5d55fb1df16849f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:45.769056 kubelet[2790]: E0813 01:46:45.769013 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c4b044611a5120316e9b427776c9269ca224b8daf7491ebb5d55fb1df16849f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:46:45.769056 kubelet[2790]: E0813 01:46:45.769052 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7c4b044611a5120316e9b427776c9269ca224b8daf7491ebb5d55fb1df16849f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:46:45.769286 kubelet[2790]: E0813 01:46:45.769099 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-6rlkc_kube-system(21a6ba02-58d5-43c1-a7de-9e24560a65f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-6rlkc_kube-system(21a6ba02-58d5-43c1-a7de-9e24560a65f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7c4b044611a5120316e9b427776c9269ca224b8daf7491ebb5d55fb1df16849f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6rlkc" podUID="21a6ba02-58d5-43c1-a7de-9e24560a65f6" Aug 13 01:46:45.770689 kubelet[2790]: I0813 01:46:45.770662 2790 kubelet.go:2405] "Pod admission denied" podUID="9f17940a-0ffa-4e44-aae8-78c361031207" pod="tigera-operator/tigera-operator-747864d56d-zxnv4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:45.864145 kubelet[2790]: I0813 01:46:45.864095 2790 kubelet.go:2405] "Pod admission denied" podUID="fef5eb71-d84a-40c2-9045-e904fd1e6e00" pod="tigera-operator/tigera-operator-747864d56d-rnrw6" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:45.962629 kubelet[2790]: I0813 01:46:45.962592 2790 kubelet.go:2405] "Pod admission denied" podUID="143518b5-8758-4dfe-ba7b-47343aa7e41c" pod="tigera-operator/tigera-operator-747864d56d-f9mds" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:46.166376 kubelet[2790]: I0813 01:46:46.166317 2790 kubelet.go:2405] "Pod admission denied" podUID="bebef7f2-3de2-4e74-967f-7183d4fd48f8" pod="tigera-operator/tigera-operator-747864d56d-nghts" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:46.267583 kubelet[2790]: I0813 01:46:46.267168 2790 kubelet.go:2405] "Pod admission denied" podUID="f8157dc0-fdde-4af4-8e2f-b044846d2de5" pod="tigera-operator/tigera-operator-747864d56d-lmgq9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:46.366027 kubelet[2790]: I0813 01:46:46.365984 2790 kubelet.go:2405] "Pod admission denied" podUID="1f79bea1-6f53-45f8-9aab-4aaf4f01be4a" pod="tigera-operator/tigera-operator-747864d56d-qdslw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:46.565360 kubelet[2790]: I0813 01:46:46.563487 2790 kubelet.go:2405] "Pod admission denied" podUID="3606252f-ed2d-45d8-9ec0-5c7c4e45f7ee" pod="tigera-operator/tigera-operator-747864d56d-7hlnk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:46.664972 kubelet[2790]: I0813 01:46:46.664926 2790 kubelet.go:2405] "Pod admission denied" podUID="07e4b776-b4fd-47c9-97ba-27ccf39480cc" pod="tigera-operator/tigera-operator-747864d56d-wdvtl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:46.714959 kubelet[2790]: I0813 01:46:46.714912 2790 kubelet.go:2405] "Pod admission denied" podUID="ea76ac9e-805a-4208-a4f1-5319291026fa" pod="tigera-operator/tigera-operator-747864d56d-thxz2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:46.813257 kubelet[2790]: I0813 01:46:46.813184 2790 kubelet.go:2405] "Pod admission denied" podUID="2dd3f712-84c3-41cd-b03c-a1feb7fc830c" pod="tigera-operator/tigera-operator-747864d56d-zdnjs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:46.919023 kubelet[2790]: I0813 01:46:46.918873 2790 kubelet.go:2405] "Pod admission denied" podUID="ca481fdd-4d8a-499f-8bc9-183bfdf79f42" pod="tigera-operator/tigera-operator-747864d56d-55kqj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:47.013978 kubelet[2790]: I0813 01:46:47.013882 2790 kubelet.go:2405] "Pod admission denied" podUID="c44b5655-8794-42db-9ef0-f1f266a9c45e" pod="tigera-operator/tigera-operator-747864d56d-svj42" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:47.114613 kubelet[2790]: I0813 01:46:47.114546 2790 kubelet.go:2405] "Pod admission denied" podUID="3a1d48a5-088c-408a-906d-fa24f0606f64" pod="tigera-operator/tigera-operator-747864d56d-69jn4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:47.219116 kubelet[2790]: I0813 01:46:47.217942 2790 kubelet.go:2405] "Pod admission denied" podUID="fd5c3e59-53ef-440c-88bf-bfaa6a0ce6b4" pod="tigera-operator/tigera-operator-747864d56d-sd7hr" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:47.314915 kubelet[2790]: I0813 01:46:47.314840 2790 kubelet.go:2405] "Pod admission denied" podUID="99093fb7-e6ee-462e-bfa8-0390292ee540" pod="tigera-operator/tigera-operator-747864d56d-s62hl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:47.419897 kubelet[2790]: I0813 01:46:47.419838 2790 kubelet.go:2405] "Pod admission denied" podUID="f0dc0a96-a7ed-477a-9cbb-a92c492a1c0e" pod="tigera-operator/tigera-operator-747864d56d-tlvjr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:47.517368 kubelet[2790]: I0813 01:46:47.516966 2790 kubelet.go:2405] "Pod admission denied" podUID="730554d1-602e-4045-af59-121f5ebf4d67" pod="tigera-operator/tigera-operator-747864d56d-xm6vx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:47.565895 kubelet[2790]: I0813 01:46:47.565828 2790 kubelet.go:2405] "Pod admission denied" podUID="2efb5830-e297-421d-81b3-9f5e019bacb8" pod="tigera-operator/tigera-operator-747864d56d-ddmgl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:47.666090 kubelet[2790]: I0813 01:46:47.666034 2790 kubelet.go:2405] "Pod admission denied" podUID="275c5c79-7aae-452a-b6fb-cabe1783282f" pod="tigera-operator/tigera-operator-747864d56d-hqgkt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:47.764817 kubelet[2790]: I0813 01:46:47.764770 2790 kubelet.go:2405] "Pod admission denied" podUID="ebfe709e-700b-4a20-b13c-1d2bb44ddfab" pod="tigera-operator/tigera-operator-747864d56d-645nh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:47.864793 kubelet[2790]: I0813 01:46:47.864376 2790 kubelet.go:2405] "Pod admission denied" podUID="a1c45c07-599e-4d96-b693-bacdefc26762" pod="tigera-operator/tigera-operator-747864d56d-zx4ks" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:47.970488 kubelet[2790]: I0813 01:46:47.970418 2790 kubelet.go:2405] "Pod admission denied" podUID="4440fab4-e78e-4c08-aeca-30a7235d70f6" pod="tigera-operator/tigera-operator-747864d56d-bgch9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:48.067996 kubelet[2790]: I0813 01:46:48.067910 2790 kubelet.go:2405] "Pod admission denied" podUID="e8ec2c5e-b17f-43fe-915d-d7b44984a361" pod="tigera-operator/tigera-operator-747864d56d-5nrcq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:48.165301 kubelet[2790]: I0813 01:46:48.164926 2790 kubelet.go:2405] "Pod admission denied" podUID="43cf7408-704d-4017-a24e-c9eeb9e357d7" pod="tigera-operator/tigera-operator-747864d56d-pxnn2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:48.265111 kubelet[2790]: I0813 01:46:48.264977 2790 kubelet.go:2405] "Pod admission denied" podUID="fd39a366-e402-42ac-a0f2-0bc188b9d166" pod="tigera-operator/tigera-operator-747864d56d-npwn7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:48.368251 kubelet[2790]: I0813 01:46:48.368164 2790 kubelet.go:2405] "Pod admission denied" podUID="158e620c-52fe-43a7-9d93-fd4b3641a7bb" pod="tigera-operator/tigera-operator-747864d56d-xkhh2" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:48.463106 kubelet[2790]: I0813 01:46:48.463059 2790 kubelet.go:2405] "Pod admission denied" podUID="f63a2766-8138-413a-8f06-1ed63344c681" pod="tigera-operator/tigera-operator-747864d56d-rqmkj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:48.665223 kubelet[2790]: I0813 01:46:48.665159 2790 kubelet.go:2405] "Pod admission denied" podUID="199158a0-b0b3-4da9-b5f4-1091711f2f67" pod="tigera-operator/tigera-operator-747864d56d-6qkw4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:48.696449 containerd[1556]: time="2025-08-13T01:46:48.696388927Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 01:46:48.766990 kubelet[2790]: I0813 01:46:48.766837 2790 kubelet.go:2405] "Pod admission denied" podUID="453efa8c-e0c0-415c-a40a-bd1e1077e1cc" pod="tigera-operator/tigera-operator-747864d56d-cqfd9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:48.862473 kubelet[2790]: I0813 01:46:48.862424 2790 kubelet.go:2405] "Pod admission denied" podUID="f0055b2f-3fbe-4c1a-bd30-217d733d4e06" pod="tigera-operator/tigera-operator-747864d56d-cw54t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:49.064659 kubelet[2790]: I0813 01:46:49.064463 2790 kubelet.go:2405] "Pod admission denied" podUID="5d43dc82-3f75-4175-9239-69d9f272fe85" pod="tigera-operator/tigera-operator-747864d56d-jbqxg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:49.163668 kubelet[2790]: I0813 01:46:49.163613 2790 kubelet.go:2405] "Pod admission denied" podUID="3522c0f5-c8db-4d31-b291-d9dbc81ac9d5" pod="tigera-operator/tigera-operator-747864d56d-8xnq9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:49.220490 kubelet[2790]: I0813 01:46:49.220118 2790 kubelet.go:2405] "Pod admission denied" podUID="ba6a59c6-829b-40dd-b571-53ebae69bb72" pod="tigera-operator/tigera-operator-747864d56d-qxzc7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:49.319045 kubelet[2790]: I0813 01:46:49.318670 2790 kubelet.go:2405] "Pod admission denied" podUID="8c240cbd-1ed7-4531-beb8-d229d0be383a" pod="tigera-operator/tigera-operator-747864d56d-cdpf2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:49.531175 kubelet[2790]: I0813 01:46:49.531104 2790 kubelet.go:2405] "Pod admission denied" podUID="51baa2db-4fd6-4d6e-ab06-e492e84b4089" pod="tigera-operator/tigera-operator-747864d56d-fjcvg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:49.620711 kubelet[2790]: I0813 01:46:49.620460 2790 kubelet.go:2405] "Pod admission denied" podUID="a23c8909-249b-455d-bb10-39ebb47702c6" pod="tigera-operator/tigera-operator-747864d56d-ksfkh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:49.722306 kubelet[2790]: I0813 01:46:49.722105 2790 kubelet.go:2405] "Pod admission denied" podUID="f2ac4959-f206-4c04-ade0-51c4dc03d24a" pod="tigera-operator/tigera-operator-747864d56d-2twf6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:49.819455 kubelet[2790]: I0813 01:46:49.819395 2790 kubelet.go:2405] "Pod admission denied" podUID="f585812e-de9e-4389-8d83-eaa3ad74fa8c" pod="tigera-operator/tigera-operator-747864d56d-frx67" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:49.925583 kubelet[2790]: I0813 01:46:49.925434 2790 kubelet.go:2405] "Pod admission denied" podUID="cab21257-752b-4669-b6c0-ad98712555ad" pod="tigera-operator/tigera-operator-747864d56d-z4vlk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:50.024283 kubelet[2790]: I0813 01:46:50.024233 2790 kubelet.go:2405] "Pod admission denied" podUID="3a105190-2273-41a0-8795-5dec6d9b1c97" pod="tigera-operator/tigera-operator-747864d56d-bhvrx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:50.124273 kubelet[2790]: I0813 01:46:50.124190 2790 kubelet.go:2405] "Pod admission denied" podUID="49a264e5-f159-4e63-8324-6df7037c65cd" pod="tigera-operator/tigera-operator-747864d56d-5vlxk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:50.222766 kubelet[2790]: I0813 01:46:50.222701 2790 kubelet.go:2405] "Pod admission denied" podUID="071acbd1-dfa5-4f86-8f38-bb5bee4f0b28" pod="tigera-operator/tigera-operator-747864d56d-7zx6f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:50.324703 kubelet[2790]: I0813 01:46:50.324561 2790 kubelet.go:2405] "Pod admission denied" podUID="1902476f-1010-44ae-a2a0-c8972da11cae" pod="tigera-operator/tigera-operator-747864d56d-v54hk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:50.527782 kubelet[2790]: I0813 01:46:50.527468 2790 kubelet.go:2405] "Pod admission denied" podUID="2c7516fa-afd8-4272-946c-9a76b4da160e" pod="tigera-operator/tigera-operator-747864d56d-dw8jc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:50.627890 kubelet[2790]: I0813 01:46:50.627619 2790 kubelet.go:2405] "Pod admission denied" podUID="13844a9d-4da5-48b6-85a3-c0139697c983" pod="tigera-operator/tigera-operator-747864d56d-dz6xq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:50.723567 kubelet[2790]: I0813 01:46:50.723449 2790 kubelet.go:2405] "Pod admission denied" podUID="e2a5b777-6d4a-46ca-bea4-1934c6f7ef4e" pod="tigera-operator/tigera-operator-747864d56d-vvzz6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:50.821491 kubelet[2790]: I0813 01:46:50.820978 2790 kubelet.go:2405] "Pod admission denied" podUID="29008f6d-a324-41b6-ad3d-f5bfcb82fed5" pod="tigera-operator/tigera-operator-747864d56d-bjtjf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:50.926352 kubelet[2790]: I0813 01:46:50.926286 2790 kubelet.go:2405] "Pod admission denied" podUID="9379c448-114b-459d-8255-34873ad01b38" pod="tigera-operator/tigera-operator-747864d56d-hm49w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:51.027151 kubelet[2790]: I0813 01:46:51.026434 2790 kubelet.go:2405] "Pod admission denied" podUID="0b19c675-1e65-4649-973a-d8d29c5c7fd5" pod="tigera-operator/tigera-operator-747864d56d-2zfq7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:51.086989 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2386263665.mount: Deactivated successfully. 
Aug 13 01:46:51.089155 containerd[1556]: time="2025-08-13T01:46:51.088971642Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2386263665: write /var/lib/containerd/tmpmounts/containerd-mount2386263665/usr/bin/calico-node: no space left on device" Aug 13 01:46:51.089155 containerd[1556]: time="2025-08-13T01:46:51.089028212Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 01:46:51.090708 kubelet[2790]: E0813 01:46:51.089981 2790 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2386263665: write /var/lib/containerd/tmpmounts/containerd-mount2386263665/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 01:46:51.090708 kubelet[2790]: E0813 01:46:51.090062 2790 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2386263665: write /var/lib/containerd/tmpmounts/containerd-mount2386263665/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 01:46:51.091285 kubelet[2790]: E0813 01:46:51.091104 2790 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSGOLDMANESERVER,Value:goldmane.calico-system.svc:7443,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSFLUSHINTERVAL,Value:15,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:first-found,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.l
ock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hzvb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},StopSignal:nil,},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-tsmrf_calico-system(517ffc51-1a34-4ced-acf5-d8e5da6a1838): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2386263665: write /var/lib/containerd/tmpmounts/containerd-mount2386263665/usr/bin/calico-node: no space left on device" logger="UnhandledError" Aug 13 01:46:51.092518 kubelet[2790]: E0813 01:46:51.092466 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2386263665: write /var/lib/containerd/tmpmounts/containerd-mount2386263665/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-tsmrf" podUID="517ffc51-1a34-4ced-acf5-d8e5da6a1838" Aug 13 01:46:51.114725 kubelet[2790]: I0813 01:46:51.114340 2790 kubelet.go:2405] "Pod admission denied" podUID="354be8f3-e379-423b-b8f5-24a6279db5dd" pod="tigera-operator/tigera-operator-747864d56d-6nmtd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:51.232463 kubelet[2790]: I0813 01:46:51.231119 2790 kubelet.go:2405] "Pod admission denied" podUID="6a4c3f76-9c85-4a04-bee3-4d65017c5fa0" pod="tigera-operator/tigera-operator-747864d56d-ls72d" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:51.315929 kubelet[2790]: I0813 01:46:51.315862 2790 kubelet.go:2405] "Pod admission denied" podUID="2cbb8f1e-1061-4c79-8431-c9ed483e99cd" pod="tigera-operator/tigera-operator-747864d56d-48x29" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:51.518137 kubelet[2790]: I0813 01:46:51.518092 2790 kubelet.go:2405] "Pod admission denied" podUID="7d0d4331-a82e-4bd7-bcef-ebed9af93c9a" pod="tigera-operator/tigera-operator-747864d56d-jns6b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:51.614757 kubelet[2790]: I0813 01:46:51.614685 2790 kubelet.go:2405] "Pod admission denied" podUID="fc3beae1-a0e1-4e6f-8a7e-a8bd6b2af3b4" pod="tigera-operator/tigera-operator-747864d56d-9vk59" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:51.665878 kubelet[2790]: I0813 01:46:51.665767 2790 kubelet.go:2405] "Pod admission denied" podUID="5c73248d-eb2f-4ff7-beaf-edc8721b9192" pod="tigera-operator/tigera-operator-747864d56d-r6xdk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:51.770866 kubelet[2790]: I0813 01:46:51.769408 2790 kubelet.go:2405] "Pod admission denied" podUID="977f5148-3311-494a-bf97-2ff121fa03b4" pod="tigera-operator/tigera-operator-747864d56d-bhmhd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:51.865208 kubelet[2790]: I0813 01:46:51.865049 2790 kubelet.go:2405] "Pod admission denied" podUID="21e670d4-2df8-487f-a4f5-a14db53d38df" pod="tigera-operator/tigera-operator-747864d56d-z8hbq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:51.965269 kubelet[2790]: I0813 01:46:51.965220 2790 kubelet.go:2405] "Pod admission denied" podUID="36fb273c-3207-49d6-a56b-cc706e5cf779" pod="tigera-operator/tigera-operator-747864d56d-c5bgs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:52.168277 kubelet[2790]: I0813 01:46:52.167946 2790 kubelet.go:2405] "Pod admission denied" podUID="5516c230-d3c3-4949-919b-fc81608e5729" pod="tigera-operator/tigera-operator-747864d56d-jjfq7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:52.265612 kubelet[2790]: I0813 01:46:52.265564 2790 kubelet.go:2405] "Pod admission denied" podUID="169ef545-c02c-437d-844f-d18fbcd19029" pod="tigera-operator/tigera-operator-747864d56d-jtbh8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:52.366913 kubelet[2790]: I0813 01:46:52.366860 2790 kubelet.go:2405] "Pod admission denied" podUID="d86d3cbe-7e6b-413e-953e-43d0f5572d82" pod="tigera-operator/tigera-operator-747864d56d-mtdw2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:52.464729 kubelet[2790]: I0813 01:46:52.464679 2790 kubelet.go:2405] "Pod admission denied" podUID="947736cd-e598-4015-b565-7ddbed33f123" pod="tigera-operator/tigera-operator-747864d56d-sm68h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:52.565935 kubelet[2790]: I0813 01:46:52.565871 2790 kubelet.go:2405] "Pod admission denied" podUID="482822d7-efb8-4c59-9613-8aea3a2cf7d9" pod="tigera-operator/tigera-operator-747864d56d-gftbt" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:52.666383 kubelet[2790]: I0813 01:46:52.666327 2790 kubelet.go:2405] "Pod admission denied" podUID="fcc213d5-2eaf-4508-af97-1a39169500d3" pod="tigera-operator/tigera-operator-747864d56d-clhfm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:52.769913 kubelet[2790]: I0813 01:46:52.769203 2790 kubelet.go:2405] "Pod admission denied" podUID="d7012f09-8367-4911-9275-1c3d75fa8e6b" pod="tigera-operator/tigera-operator-747864d56d-f52jb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:52.866564 kubelet[2790]: I0813 01:46:52.866503 2790 kubelet.go:2405] "Pod admission denied" podUID="8d8dabac-6d84-45a3-8d72-3328b0ab45d5" pod="tigera-operator/tigera-operator-747864d56d-7m4n7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:52.918033 kubelet[2790]: I0813 01:46:52.917980 2790 kubelet.go:2405] "Pod admission denied" podUID="e4852665-c2f1-410d-88ff-0dd8c7fe0e94" pod="tigera-operator/tigera-operator-747864d56d-jhsbg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:53.015292 kubelet[2790]: I0813 01:46:53.015206 2790 kubelet.go:2405] "Pod admission denied" podUID="f6434559-e87c-4deb-8deb-9f5fccf4b36a" pod="tigera-operator/tigera-operator-747864d56d-t2clz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:53.119143 kubelet[2790]: I0813 01:46:53.118224 2790 kubelet.go:2405] "Pod admission denied" podUID="5f7d21d0-d3c8-4011-86bb-7f9682967587" pod="tigera-operator/tigera-operator-747864d56d-m27wn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:53.168875 kubelet[2790]: I0813 01:46:53.168788 2790 kubelet.go:2405] "Pod admission denied" podUID="68193d00-7e35-47fc-a959-4d3742fa9c4b" pod="tigera-operator/tigera-operator-747864d56d-dqqcf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:53.266137 kubelet[2790]: I0813 01:46:53.266067 2790 kubelet.go:2405] "Pod admission denied" podUID="cbf44601-e506-475e-a214-6e6cb8f85002" pod="tigera-operator/tigera-operator-747864d56d-hwpqs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:53.365998 kubelet[2790]: I0813 01:46:53.365938 2790 kubelet.go:2405] "Pod admission denied" podUID="c482f3b9-b492-49e7-b033-97b59f2d0c50" pod="tigera-operator/tigera-operator-747864d56d-psf4n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:53.467051 kubelet[2790]: I0813 01:46:53.466996 2790 kubelet.go:2405] "Pod admission denied" podUID="ac3bc5df-a661-4056-83b8-af61d98e5c4d" pod="tigera-operator/tigera-operator-747864d56d-mdfp9" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:53.526254 kubelet[2790]: I0813 01:46:53.526198 2790 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:46:53.526254 kubelet[2790]: I0813 01:46:53.526240 2790 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:46:53.528676 kubelet[2790]: I0813 01:46:53.528638 2790 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:46:53.540705 kubelet[2790]: I0813 01:46:53.540671 2790 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:46:53.540872 kubelet[2790]: I0813 01:46:53.540761 2790 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-76ff444f8d-4xcg9","kube-system/coredns-674b8bbfcf-vtdcd","kube-system/coredns-674b8bbfcf-6rlkc","calico-system/calico-node-tsmrf","calico-system/csi-node-driver-c7jrc","calico-system/calico-typha-64bcb76cdd-m4xlg","kube-system/kube-controller-manager-172-232-7-67","kube-system/kube-proxy-mjdwx","kube-system/kube-apiserver-172-232-7-67","kube-system/kube-scheduler-172-232-7-67"] Aug 13 01:46:53.540872 kubelet[2790]: E0813 01:46:53.540791 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:46:53.540872 kubelet[2790]: E0813 01:46:53.540801 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:46:53.540872 kubelet[2790]: E0813 01:46:53.540808 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:46:53.540872 kubelet[2790]: E0813 01:46:53.540814 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-tsmrf" Aug 13 01:46:53.540872 kubelet[2790]: E0813 01:46:53.540820 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:46:53.540872 kubelet[2790]: E0813 01:46:53.540830 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-64bcb76cdd-m4xlg" Aug 13 01:46:53.540872 kubelet[2790]: E0813 01:46:53.540838 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-67" Aug 13 01:46:53.540872 kubelet[2790]: E0813 01:46:53.540845 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mjdwx" Aug 13 01:46:53.540872 kubelet[2790]: E0813 01:46:53.540853 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-67" Aug 13 01:46:53.540872 kubelet[2790]: E0813 01:46:53.540860 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-67" Aug 13 01:46:53.540872 kubelet[2790]: I0813 01:46:53.540870 2790 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:46:53.665616 kubelet[2790]: I0813 01:46:53.665562 2790 kubelet.go:2405] "Pod admission denied" podUID="42c021d7-5aa7-4991-9aeb-85cf890afb8b" pod="tigera-operator/tigera-operator-747864d56d-67hrr" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:53.767211 kubelet[2790]: I0813 01:46:53.766219 2790 kubelet.go:2405] "Pod admission denied" podUID="1c60c981-417b-4fda-b686-1a79d2e4d8f0" pod="tigera-operator/tigera-operator-747864d56d-sbdd2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:53.864157 kubelet[2790]: I0813 01:46:53.864094 2790 kubelet.go:2405] "Pod admission denied" podUID="24131215-10f0-428b-859f-cf615c2b6fc6" pod="tigera-operator/tigera-operator-747864d56d-ctm5n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:54.067418 kubelet[2790]: I0813 01:46:54.066272 2790 kubelet.go:2405] "Pod admission denied" podUID="1d6bfdf1-50cf-4ade-84ca-dcd95de9e56b" pod="tigera-operator/tigera-operator-747864d56d-z965b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:54.164879 kubelet[2790]: I0813 01:46:54.164823 2790 kubelet.go:2405] "Pod admission denied" podUID="ec3d590f-4713-489d-94f5-fbbb9ed2ce1d" pod="tigera-operator/tigera-operator-747864d56d-bcft5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:54.217783 kubelet[2790]: I0813 01:46:54.217648 2790 kubelet.go:2405] "Pod admission denied" podUID="b992a67c-018b-4a6d-ac12-a3f3bf104ea2" pod="tigera-operator/tigera-operator-747864d56d-z4kf9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:54.318871 kubelet[2790]: I0813 01:46:54.318478 2790 kubelet.go:2405] "Pod admission denied" podUID="b5fabe09-cf55-4ad5-a8ae-eacef0417ce6" pod="tigera-operator/tigera-operator-747864d56d-zrjjh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:54.419777 kubelet[2790]: I0813 01:46:54.418678 2790 kubelet.go:2405] "Pod admission denied" podUID="4966fed4-fad1-424b-88a3-91831b704018" pod="tigera-operator/tigera-operator-747864d56d-wxfsp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:54.517685 kubelet[2790]: I0813 01:46:54.517629 2790 kubelet.go:2405] "Pod admission denied" podUID="0ac3ff7d-ec07-4773-a5d2-9b9712354734" pod="tigera-operator/tigera-operator-747864d56d-cthb4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:54.614609 kubelet[2790]: I0813 01:46:54.614248 2790 kubelet.go:2405] "Pod admission denied" podUID="21b56c7e-53b1-4f76-ae45-5ea148022d8b" pod="tigera-operator/tigera-operator-747864d56d-krhbl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:54.698298 containerd[1556]: time="2025-08-13T01:46:54.698249852Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76ff444f8d-4xcg9,Uid:f88563f6-5704-426b-aecc-303b3869ce30,Namespace:calico-system,Attempt:0,}" Aug 13 01:46:54.722786 kubelet[2790]: I0813 01:46:54.722340 2790 kubelet.go:2405] "Pod admission denied" podUID="81f311e8-fa22-42ef-ab02-55e5f5c63a84" pod="tigera-operator/tigera-operator-747864d56d-cp8gw" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:54.777990 containerd[1556]: time="2025-08-13T01:46:54.777934583Z" level=error msg="Failed to destroy network for sandbox \"ba5d6172032cdf4c854227ccff91b8f1712a8b22c6c865d65c742737acb1826d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:54.780705 containerd[1556]: time="2025-08-13T01:46:54.780671682Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76ff444f8d-4xcg9,Uid:f88563f6-5704-426b-aecc-303b3869ce30,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba5d6172032cdf4c854227ccff91b8f1712a8b22c6c865d65c742737acb1826d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:54.781196 kubelet[2790]: E0813 01:46:54.781152 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba5d6172032cdf4c854227ccff91b8f1712a8b22c6c865d65c742737acb1826d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:54.781269 kubelet[2790]: E0813 01:46:54.781240 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba5d6172032cdf4c854227ccff91b8f1712a8b22c6c865d65c742737acb1826d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:46:54.781297 kubelet[2790]: E0813 01:46:54.781268 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ba5d6172032cdf4c854227ccff91b8f1712a8b22c6c865d65c742737acb1826d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:46:54.781735 kubelet[2790]: E0813 01:46:54.781350 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76ff444f8d-4xcg9_calico-system(f88563f6-5704-426b-aecc-303b3869ce30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76ff444f8d-4xcg9_calico-system(f88563f6-5704-426b-aecc-303b3869ce30)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ba5d6172032cdf4c854227ccff91b8f1712a8b22c6c865d65c742737acb1826d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" podUID="f88563f6-5704-426b-aecc-303b3869ce30" Aug 13 01:46:54.782438 systemd[1]: run-netns-cni\x2d5a5103eb\x2d5018\x2dfc4a\x2d85e6\x2d07d5944c880e.mount: Deactivated successfully. 
Aug 13 01:46:54.815689 kubelet[2790]: I0813 01:46:54.815627 2790 kubelet.go:2405] "Pod admission denied" podUID="78b72ddd-c11e-4314-a5e6-b7d4a600b899" pod="tigera-operator/tigera-operator-747864d56d-xbtp5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:54.916412 kubelet[2790]: I0813 01:46:54.915860 2790 kubelet.go:2405] "Pod admission denied" podUID="18ef99c4-1966-4bc6-8254-60a0c6320110" pod="tigera-operator/tigera-operator-747864d56d-wc2kh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:55.018771 kubelet[2790]: I0813 01:46:55.018003 2790 kubelet.go:2405] "Pod admission denied" podUID="edc24067-c180-4d51-9667-88b3ebfe6e66" pod="tigera-operator/tigera-operator-747864d56d-slvw2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:55.117713 kubelet[2790]: I0813 01:46:55.117669 2790 kubelet.go:2405] "Pod admission denied" podUID="4cbb9c7f-c6d5-491a-82f8-ca9177b18f6d" pod="tigera-operator/tigera-operator-747864d56d-xz9db" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:55.242675 kubelet[2790]: I0813 01:46:55.242601 2790 kubelet.go:2405] "Pod admission denied" podUID="ca46b372-ef6d-4e64-a7f6-19b50473ee7d" pod="tigera-operator/tigera-operator-747864d56d-x7md6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:55.264434 kubelet[2790]: I0813 01:46:55.264390 2790 kubelet.go:2405] "Pod admission denied" podUID="6a698e7b-894a-4a17-9af4-b88e25fa7673" pod="tigera-operator/tigera-operator-747864d56d-bppgv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:55.367023 kubelet[2790]: I0813 01:46:55.366960 2790 kubelet.go:2405] "Pod admission denied" podUID="49c06b24-9e1d-4aa1-b0c5-37a37cb53451" pod="tigera-operator/tigera-operator-747864d56d-7zprv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:55.566799 kubelet[2790]: I0813 01:46:55.566369 2790 kubelet.go:2405] "Pod admission denied" podUID="e3d68ecb-2fda-4c06-b51d-cbb10b8267fe" pod="tigera-operator/tigera-operator-747864d56d-zvzz4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:55.665889 kubelet[2790]: I0813 01:46:55.665825 2790 kubelet.go:2405] "Pod admission denied" podUID="bf788fc7-4e4a-4f7e-872c-cc52cbeb6035" pod="tigera-operator/tigera-operator-747864d56d-wqgdh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:55.764611 kubelet[2790]: I0813 01:46:55.764561 2790 kubelet.go:2405] "Pod admission denied" podUID="728417a3-18ae-4a65-b3c9-31f21282e5d7" pod="tigera-operator/tigera-operator-747864d56d-mkj25" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:55.965985 kubelet[2790]: I0813 01:46:55.965948 2790 kubelet.go:2405] "Pod admission denied" podUID="6bffaef1-c68e-435b-bd9d-ab64945e8f73" pod="tigera-operator/tigera-operator-747864d56d-gbdvn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:56.069114 kubelet[2790]: I0813 01:46:56.068891 2790 kubelet.go:2405] "Pod admission denied" podUID="7854a3f7-3dc0-4db5-a2c8-ca7c6fc3d0b3" pod="tigera-operator/tigera-operator-747864d56d-pw4tk" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:56.166211 kubelet[2790]: I0813 01:46:56.166007 2790 kubelet.go:2405] "Pod admission denied" podUID="45b2d245-698c-4abc-9016-d3692f4a397f" pod="tigera-operator/tigera-operator-747864d56d-kbwf6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:56.267414 kubelet[2790]: I0813 01:46:56.267180 2790 kubelet.go:2405] "Pod admission denied" podUID="c54c8078-778f-4681-ac56-df0d624394e5" pod="tigera-operator/tigera-operator-747864d56d-nhgv5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:56.366530 kubelet[2790]: I0813 01:46:56.366472 2790 kubelet.go:2405] "Pod admission denied" podUID="ff992ae7-98dd-465d-a84d-cdd2050dc329" pod="tigera-operator/tigera-operator-747864d56d-j9lmw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:56.569876 kubelet[2790]: I0813 01:46:56.569484 2790 kubelet.go:2405] "Pod admission denied" podUID="13b03d81-d5cd-43e7-bc2c-43e8fec82781" pod="tigera-operator/tigera-operator-747864d56d-jg8m9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:56.665910 kubelet[2790]: I0813 01:46:56.665837 2790 kubelet.go:2405] "Pod admission denied" podUID="6ae37917-b83f-4a37-b42e-1c744975f469" pod="tigera-operator/tigera-operator-747864d56d-zjhzk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:56.766205 kubelet[2790]: I0813 01:46:56.766123 2790 kubelet.go:2405] "Pod admission denied" podUID="ec216371-085e-496b-9c89-fe34ab7e8a1f" pod="tigera-operator/tigera-operator-747864d56d-4bpv9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:56.869985 kubelet[2790]: I0813 01:46:56.869175 2790 kubelet.go:2405] "Pod admission denied" podUID="b6a96814-f39b-4a4b-9c57-ecf37e196a0f" pod="tigera-operator/tigera-operator-747864d56d-fdwlz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:56.969114 kubelet[2790]: I0813 01:46:56.969050 2790 kubelet.go:2405] "Pod admission denied" podUID="ac4de25c-9b0b-4a14-b6ca-fc32e9bbeaf6" pod="tigera-operator/tigera-operator-747864d56d-8cfcb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:57.167504 kubelet[2790]: I0813 01:46:57.167111 2790 kubelet.go:2405] "Pod admission denied" podUID="e716f2d3-c732-46c6-99a9-eb4cdcede12c" pod="tigera-operator/tigera-operator-747864d56d-zhl5g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:57.265953 kubelet[2790]: I0813 01:46:57.265906 2790 kubelet.go:2405] "Pod admission denied" podUID="4bbaccb6-8b34-4821-b3b3-92c70fd770e9" pod="tigera-operator/tigera-operator-747864d56d-7nts6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:57.366264 kubelet[2790]: I0813 01:46:57.366207 2790 kubelet.go:2405] "Pod admission denied" podUID="e2da5927-3edd-497a-9f8f-dac6dd2a2efd" pod="tigera-operator/tigera-operator-747864d56d-ljcpx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:57.567405 kubelet[2790]: I0813 01:46:57.567334 2790 kubelet.go:2405] "Pod admission denied" podUID="52ff830c-5642-433e-bfff-ae63950ca34e" pod="tigera-operator/tigera-operator-747864d56d-2mrxt" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:57.669206 kubelet[2790]: I0813 01:46:57.669145 2790 kubelet.go:2405] "Pod admission denied" podUID="84e2ccd4-514a-4d95-9f33-54b60ea8ac62" pod="tigera-operator/tigera-operator-747864d56d-nm6gz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:57.717999 kubelet[2790]: I0813 01:46:57.717937 2790 kubelet.go:2405] "Pod admission denied" podUID="86bcea3a-55c1-44f4-85af-ca61b11f49f4" pod="tigera-operator/tigera-operator-747864d56d-kd5q7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:57.814974 kubelet[2790]: I0813 01:46:57.814916 2790 kubelet.go:2405] "Pod admission denied" podUID="7fb2cbdc-2c5d-482f-8805-f90802d9ca34" pod="tigera-operator/tigera-operator-747864d56d-rsqw8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:58.018224 kubelet[2790]: I0813 01:46:58.018165 2790 kubelet.go:2405] "Pod admission denied" podUID="a16520e8-17b1-4d64-bca7-ba25a910b500" pod="tigera-operator/tigera-operator-747864d56d-fjpbd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:58.117356 kubelet[2790]: I0813 01:46:58.117022 2790 kubelet.go:2405] "Pod admission denied" podUID="60cd2ad3-1684-4dc9-bf0a-62af522b3356" pod="tigera-operator/tigera-operator-747864d56d-rkkw7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:58.167795 kubelet[2790]: I0813 01:46:58.167717 2790 kubelet.go:2405] "Pod admission denied" podUID="35c3ab81-e850-4c2b-bf46-c465f27d17c2" pod="tigera-operator/tigera-operator-747864d56d-sbtbk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:58.264360 kubelet[2790]: I0813 01:46:58.264305 2790 kubelet.go:2405] "Pod admission denied" podUID="8b4cda45-6900-4a5a-a714-75052fe1b11f" pod="tigera-operator/tigera-operator-747864d56d-4dnzk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:58.466109 kubelet[2790]: I0813 01:46:58.466063 2790 kubelet.go:2405] "Pod admission denied" podUID="c1ddc2fd-7381-48b2-b32b-11170a0c35f8" pod="tigera-operator/tigera-operator-747864d56d-fwjcz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:58.565835 kubelet[2790]: I0813 01:46:58.565769 2790 kubelet.go:2405] "Pod admission denied" podUID="b6f4262f-9d6f-49aa-9c34-a304b13182db" pod="tigera-operator/tigera-operator-747864d56d-9bnqw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:58.618521 kubelet[2790]: I0813 01:46:58.618461 2790 kubelet.go:2405] "Pod admission denied" podUID="6d7395d9-5d2c-43c3-9277-9667d28d7310" pod="tigera-operator/tigera-operator-747864d56d-cvfxw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:58.695924 containerd[1556]: time="2025-08-13T01:46:58.695349388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7jrc,Uid:4296a7ed-e75a-4d74-935a-9017b9a86286,Namespace:calico-system,Attempt:0,}" Aug 13 01:46:58.726110 kubelet[2790]: I0813 01:46:58.725493 2790 kubelet.go:2405] "Pod admission denied" podUID="fca30b3a-3e2a-4673-af3f-1c5c86943f91" pod="tigera-operator/tigera-operator-747864d56d-pcgfz" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:58.785353 containerd[1556]: time="2025-08-13T01:46:58.785303144Z" level=error msg="Failed to destroy network for sandbox \"2752c3f83939120ac55a9b0f7ba515dbf9b098f939d6590428f49990d148bed5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:58.787454 systemd[1]: run-netns-cni\x2d7bbbd4d2\x2dd798\x2dec2b\x2d2ca5\x2d66df222f39d6.mount: Deactivated successfully. Aug 13 01:46:58.789132 containerd[1556]: time="2025-08-13T01:46:58.789085370Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7jrc,Uid:4296a7ed-e75a-4d74-935a-9017b9a86286,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2752c3f83939120ac55a9b0f7ba515dbf9b098f939d6590428f49990d148bed5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:58.789704 kubelet[2790]: E0813 01:46:58.789670 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2752c3f83939120ac55a9b0f7ba515dbf9b098f939d6590428f49990d148bed5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:58.789813 kubelet[2790]: E0813 01:46:58.789732 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2752c3f83939120ac55a9b0f7ba515dbf9b098f939d6590428f49990d148bed5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:46:58.789846 kubelet[2790]: E0813 01:46:58.789816 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2752c3f83939120ac55a9b0f7ba515dbf9b098f939d6590428f49990d148bed5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:46:58.790164 kubelet[2790]: E0813 01:46:58.789892 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-c7jrc_calico-system(4296a7ed-e75a-4d74-935a-9017b9a86286)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-c7jrc_calico-system(4296a7ed-e75a-4d74-935a-9017b9a86286)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2752c3f83939120ac55a9b0f7ba515dbf9b098f939d6590428f49990d148bed5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-c7jrc" podUID="4296a7ed-e75a-4d74-935a-9017b9a86286" Aug 13 01:46:58.816336 kubelet[2790]: I0813 01:46:58.816265 2790 kubelet.go:2405] "Pod admission denied" podUID="7c168f61-ea3c-42b7-9f72-006d6612f181" pod="tigera-operator/tigera-operator-747864d56d-kgwd5" reason="Evicted" message="The 
node had condition: [DiskPressure]. " Aug 13 01:46:58.920177 kubelet[2790]: I0813 01:46:58.920113 2790 kubelet.go:2405] "Pod admission denied" podUID="894c03c2-c0a0-480c-ab93-7f408085f46c" pod="tigera-operator/tigera-operator-747864d56d-4q8d8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:59.019140 kubelet[2790]: I0813 01:46:59.018990 2790 kubelet.go:2405] "Pod admission denied" podUID="99e1513b-40ed-4247-8746-17c10276e2f5" pod="tigera-operator/tigera-operator-747864d56d-2qwkh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:59.117102 kubelet[2790]: I0813 01:46:59.117044 2790 kubelet.go:2405] "Pod admission denied" podUID="16125425-6ce5-4da2-a84b-376c4a489c8f" pod="tigera-operator/tigera-operator-747864d56d-q422x" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:59.217674 kubelet[2790]: I0813 01:46:59.217582 2790 kubelet.go:2405] "Pod admission denied" podUID="aba22dde-3152-48b5-b53a-99bcdb41015b" pod="tigera-operator/tigera-operator-747864d56d-njzt4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:59.316026 kubelet[2790]: I0813 01:46:59.315899 2790 kubelet.go:2405] "Pod admission denied" podUID="0b31f06f-5e46-492a-bd42-93e76c4078c3" pod="tigera-operator/tigera-operator-747864d56d-9qsxh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:59.419152 kubelet[2790]: I0813 01:46:59.419095 2790 kubelet.go:2405] "Pod admission denied" podUID="a3303522-9782-43fc-a3a4-ab630fd644aa" pod="tigera-operator/tigera-operator-747864d56d-hk5pr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:59.465974 kubelet[2790]: I0813 01:46:59.465927 2790 kubelet.go:2405] "Pod admission denied" podUID="938e12aa-ff91-4d07-a669-8452c9dbfe4f" pod="tigera-operator/tigera-operator-747864d56d-zzzkw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:59.568086 kubelet[2790]: I0813 01:46:59.567963 2790 kubelet.go:2405] "Pod admission denied" podUID="69dbba42-452b-4894-bba2-34cd72b14d14" pod="tigera-operator/tigera-operator-747864d56d-9hc9d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:59.695347 kubelet[2790]: E0813 01:46:59.695143 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:46:59.695347 kubelet[2790]: E0813 01:46:59.695182 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:46:59.696541 containerd[1556]: time="2025-08-13T01:46:59.696251708Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vtdcd,Uid:caa5836a-45f9-496b-86c1-95f6e1b6da17,Namespace:kube-system,Attempt:0,}" Aug 13 01:46:59.697021 containerd[1556]: time="2025-08-13T01:46:59.696835186Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6rlkc,Uid:21a6ba02-58d5-43c1-a7de-9e24560a65f6,Namespace:kube-system,Attempt:0,}" Aug 13 01:46:59.780359 kubelet[2790]: I0813 01:46:59.780301 2790 kubelet.go:2405] "Pod admission denied" podUID="a6041022-74c2-47e2-9928-2b9c68161937" pod="tigera-operator/tigera-operator-747864d56d-6bjc7" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:59.785516 containerd[1556]: time="2025-08-13T01:46:59.785366715Z" level=error msg="Failed to destroy network for sandbox \"8367cf5020314c8a3c6ea1443e624adde58745bf743cf2f40dd09ac56c4ba8b8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:59.786543 containerd[1556]: time="2025-08-13T01:46:59.786487661Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vtdcd,Uid:caa5836a-45f9-496b-86c1-95f6e1b6da17,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8367cf5020314c8a3c6ea1443e624adde58745bf743cf2f40dd09ac56c4ba8b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:59.788903 kubelet[2790]: E0813 01:46:59.788868 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8367cf5020314c8a3c6ea1443e624adde58745bf743cf2f40dd09ac56c4ba8b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:59.788992 kubelet[2790]: E0813 01:46:59.788919 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8367cf5020314c8a3c6ea1443e624adde58745bf743cf2f40dd09ac56c4ba8b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:46:59.788992 kubelet[2790]: E0813 01:46:59.788959 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8367cf5020314c8a3c6ea1443e624adde58745bf743cf2f40dd09ac56c4ba8b8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:46:59.789085 kubelet[2790]: E0813 01:46:59.789003 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-vtdcd_kube-system(caa5836a-45f9-496b-86c1-95f6e1b6da17)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-vtdcd_kube-system(caa5836a-45f9-496b-86c1-95f6e1b6da17)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8367cf5020314c8a3c6ea1443e624adde58745bf743cf2f40dd09ac56c4ba8b8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-vtdcd" podUID="caa5836a-45f9-496b-86c1-95f6e1b6da17" Aug 13 01:46:59.790532 systemd[1]: run-netns-cni\x2d7a4c7b4a\x2d0243\x2d39bd\x2db8d8\x2d934a3ae424c0.mount: Deactivated successfully. 
Aug 13 01:46:59.806785 containerd[1556]: time="2025-08-13T01:46:59.805159336Z" level=error msg="Failed to destroy network for sandbox \"7d97f22285d7e2c142bf789fc84c186194cb50f1807eb4ae3775c5824f1aa0a5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:59.807918 containerd[1556]: time="2025-08-13T01:46:59.807879956Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6rlkc,Uid:21a6ba02-58d5-43c1-a7de-9e24560a65f6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d97f22285d7e2c142bf789fc84c186194cb50f1807eb4ae3775c5824f1aa0a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:59.808441 kubelet[2790]: E0813 01:46:59.808365 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d97f22285d7e2c142bf789fc84c186194cb50f1807eb4ae3775c5824f1aa0a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:59.809375 systemd[1]: run-netns-cni\x2d8edcebb4\x2d4f6d\x2d492b\x2d4b80\x2d6db707e89263.mount: Deactivated successfully. Aug 13 01:46:59.812850 kubelet[2790]: E0813 01:46:59.812802 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d97f22285d7e2c142bf789fc84c186194cb50f1807eb4ae3775c5824f1aa0a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:46:59.813029 kubelet[2790]: E0813 01:46:59.812968 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d97f22285d7e2c142bf789fc84c186194cb50f1807eb4ae3775c5824f1aa0a5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:46:59.813208 kubelet[2790]: E0813 01:46:59.813125 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-6rlkc_kube-system(21a6ba02-58d5-43c1-a7de-9e24560a65f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-6rlkc_kube-system(21a6ba02-58d5-43c1-a7de-9e24560a65f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d97f22285d7e2c142bf789fc84c186194cb50f1807eb4ae3775c5824f1aa0a5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6rlkc" podUID="21a6ba02-58d5-43c1-a7de-9e24560a65f6" Aug 13 01:47:00.021361 kubelet[2790]: I0813 01:47:00.021286 2790 kubelet.go:2405] "Pod admission denied" podUID="abaac50e-e851-48b3-8290-14519a74d4c4" pod="tigera-operator/tigera-operator-747864d56d-dmd69" reason="Evicted" 
message="The node had condition: [DiskPressure]. " Aug 13 01:47:00.116566 kubelet[2790]: I0813 01:47:00.116052 2790 kubelet.go:2405] "Pod admission denied" podUID="062cfc43-1509-4dd4-994b-c65e0f3ae5ab" pod="tigera-operator/tigera-operator-747864d56d-m6wn9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:00.215563 kubelet[2790]: I0813 01:47:00.215499 2790 kubelet.go:2405] "Pod admission denied" podUID="2cff2a96-2c91-4a93-ba67-18269d9bc5d2" pod="tigera-operator/tigera-operator-747864d56d-zqr9c" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:00.271868 kubelet[2790]: I0813 01:47:00.270770 2790 kubelet.go:2405] "Pod admission denied" podUID="31317667-90b6-4ca3-a417-1b85b38a2a34" pod="tigera-operator/tigera-operator-747864d56d-zhmsp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:00.365466 kubelet[2790]: I0813 01:47:00.365406 2790 kubelet.go:2405] "Pod admission denied" podUID="49728d1a-ca3e-4ac5-802b-6b1b49754582" pod="tigera-operator/tigera-operator-747864d56d-rh59s" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:00.466363 kubelet[2790]: I0813 01:47:00.466305 2790 kubelet.go:2405] "Pod admission denied" podUID="8a3c2873-6628-41c5-8a06-f34daef6be50" pod="tigera-operator/tigera-operator-747864d56d-sn2jq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:00.567659 kubelet[2790]: I0813 01:47:00.566534 2790 kubelet.go:2405] "Pod admission denied" podUID="93068c93-2eb4-49de-a2a2-9f1fe713b630" pod="tigera-operator/tigera-operator-747864d56d-vscjc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:00.767452 kubelet[2790]: I0813 01:47:00.767398 2790 kubelet.go:2405] "Pod admission denied" podUID="bfd27716-ee6e-4be7-a617-0e08928ba37c" pod="tigera-operator/tigera-operator-747864d56d-phlfg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:00.868885 kubelet[2790]: I0813 01:47:00.868111 2790 kubelet.go:2405] "Pod admission denied" podUID="115dd428-d67a-4294-b0d4-2a472e32d512" pod="tigera-operator/tigera-operator-747864d56d-mftmt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:00.914525 kubelet[2790]: I0813 01:47:00.914467 2790 kubelet.go:2405] "Pod admission denied" podUID="f52ed146-8248-4a1c-92fe-0c319bcaf845" pod="tigera-operator/tigera-operator-747864d56d-cxxqh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:01.017523 kubelet[2790]: I0813 01:47:01.017473 2790 kubelet.go:2405] "Pod admission denied" podUID="0db82aa6-bcdf-422c-a3f7-de7330de8671" pod="tigera-operator/tigera-operator-747864d56d-sjjbr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:01.117299 kubelet[2790]: I0813 01:47:01.117246 2790 kubelet.go:2405] "Pod admission denied" podUID="2507dcfb-5fc1-4c54-a49c-9ad82f63ddcb" pod="tigera-operator/tigera-operator-747864d56d-r4rhr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:01.217768 kubelet[2790]: I0813 01:47:01.217213 2790 kubelet.go:2405] "Pod admission denied" podUID="f4cbb80a-b9bb-49bf-9820-fae4168da3f5" pod="tigera-operator/tigera-operator-747864d56d-wpgl7" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:01.316536 kubelet[2790]: I0813 01:47:01.316475 2790 kubelet.go:2405] "Pod admission denied" podUID="1176578c-30c6-4d51-bd42-2740f2f9a121" pod="tigera-operator/tigera-operator-747864d56d-qttdh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:01.366346 kubelet[2790]: I0813 01:47:01.366284 2790 kubelet.go:2405] "Pod admission denied" podUID="535201ab-f191-4b17-b56d-b7420fb9c551" pod="tigera-operator/tigera-operator-747864d56d-r69sl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:01.465466 kubelet[2790]: I0813 01:47:01.465402 2790 kubelet.go:2405] "Pod admission denied" podUID="2541b9a8-47ee-4985-8f22-84ca650119af" pod="tigera-operator/tigera-operator-747864d56d-q5gqb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:01.566650 kubelet[2790]: I0813 01:47:01.566335 2790 kubelet.go:2405] "Pod admission denied" podUID="8eb43a0d-4d4d-41e7-9ebb-10cbf6b48f5f" pod="tigera-operator/tigera-operator-747864d56d-tp8cl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:01.665808 kubelet[2790]: I0813 01:47:01.665733 2790 kubelet.go:2405] "Pod admission denied" podUID="72c6c59b-8910-422b-8672-d94d395e7065" pod="tigera-operator/tigera-operator-747864d56d-p5z58" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:01.865360 kubelet[2790]: I0813 01:47:01.865225 2790 kubelet.go:2405] "Pod admission denied" podUID="3b6bdf8f-a851-4158-b065-63a080fb6e37" pod="tigera-operator/tigera-operator-747864d56d-6mxsv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:01.966448 kubelet[2790]: I0813 01:47:01.966394 2790 kubelet.go:2405] "Pod admission denied" podUID="a3aebb8a-b711-4afd-b8b0-818efa53befe" pod="tigera-operator/tigera-operator-747864d56d-8vppl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:02.064930 kubelet[2790]: I0813 01:47:02.064879 2790 kubelet.go:2405] "Pod admission denied" podUID="604b1344-f40e-44b7-be3d-1bae56813e86" pod="tigera-operator/tigera-operator-747864d56d-jwp9r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:02.268920 kubelet[2790]: I0813 01:47:02.268866 2790 kubelet.go:2405] "Pod admission denied" podUID="7b63eba5-e148-4837-b2c6-78895c10c1a3" pod="tigera-operator/tigera-operator-747864d56d-mm7qh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:02.368178 kubelet[2790]: I0813 01:47:02.368107 2790 kubelet.go:2405] "Pod admission denied" podUID="d2a2fd84-9846-47e0-966b-952762716885" pod="tigera-operator/tigera-operator-747864d56d-hskxw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:02.468273 kubelet[2790]: I0813 01:47:02.468218 2790 kubelet.go:2405] "Pod admission denied" podUID="c1203e97-0f15-4b18-be6d-5aa1900a221a" pod="tigera-operator/tigera-operator-747864d56d-md6fv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:02.671132 kubelet[2790]: I0813 01:47:02.670975 2790 kubelet.go:2405] "Pod admission denied" podUID="311d9b27-e11e-4bb3-aa0d-f422c05aa168" pod="tigera-operator/tigera-operator-747864d56d-2dwth" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:02.768400 kubelet[2790]: I0813 01:47:02.768337 2790 kubelet.go:2405] "Pod admission denied" podUID="25950c58-309e-4e46-83f3-ae41dc53b61e" pod="tigera-operator/tigera-operator-747864d56d-2rtxn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:02.867767 kubelet[2790]: I0813 01:47:02.867675 2790 kubelet.go:2405] "Pod admission denied" podUID="f77884f0-ba79-4023-9f6e-c2d12145ab4f" pod="tigera-operator/tigera-operator-747864d56d-5hls2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:02.970105 kubelet[2790]: I0813 01:47:02.970020 2790 kubelet.go:2405] "Pod admission denied" podUID="335d6858-a18b-405f-8ca2-cfc712c7ee23" pod="tigera-operator/tigera-operator-747864d56d-zfgwx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:03.068011 kubelet[2790]: I0813 01:47:03.067932 2790 kubelet.go:2405] "Pod admission denied" podUID="3ae422ad-e476-44aa-8683-75fe27580a2b" pod="tigera-operator/tigera-operator-747864d56d-xrm2v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:03.168109 kubelet[2790]: I0813 01:47:03.168057 2790 kubelet.go:2405] "Pod admission denied" podUID="d316660d-5fb4-47c6-a33c-f51edebc8700" pod="tigera-operator/tigera-operator-747864d56d-l8wbv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:03.266321 kubelet[2790]: I0813 01:47:03.266142 2790 kubelet.go:2405] "Pod admission denied" podUID="9382ce0c-7c5e-4c3e-85f3-5f17ffc526b0" pod="tigera-operator/tigera-operator-747864d56d-822ll" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:03.465577 kubelet[2790]: I0813 01:47:03.465491 2790 kubelet.go:2405] "Pod admission denied" podUID="2152f09b-65c9-4d46-aa27-1c7b02e45aa4" pod="tigera-operator/tigera-operator-747864d56d-n6hg5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:03.561780 kubelet[2790]: I0813 01:47:03.561486 2790 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:47:03.561780 kubelet[2790]: I0813 01:47:03.561541 2790 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:47:03.564937 kubelet[2790]: I0813 01:47:03.564880 2790 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:47:03.578257 kubelet[2790]: I0813 01:47:03.578201 2790 kubelet.go:2405] "Pod admission denied" podUID="a03a8c03-c3c5-434f-b86d-eff049fa5199" pod="tigera-operator/tigera-operator-747864d56d-7m765" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:03.587333 kubelet[2790]: I0813 01:47:03.587301 2790 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:47:03.587660 kubelet[2790]: I0813 01:47:03.587407 2790 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-674b8bbfcf-vtdcd","kube-system/coredns-674b8bbfcf-6rlkc","calico-system/calico-kube-controllers-76ff444f8d-4xcg9","calico-system/csi-node-driver-c7jrc","calico-system/calico-node-tsmrf","calico-system/calico-typha-64bcb76cdd-m4xlg","kube-system/kube-controller-manager-172-232-7-67","kube-system/kube-proxy-mjdwx","kube-system/kube-apiserver-172-232-7-67","kube-system/kube-scheduler-172-232-7-67"] Aug 13 01:47:03.587660 kubelet[2790]: E0813 01:47:03.587441 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:47:03.587660 kubelet[2790]: E0813 01:47:03.587451 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:47:03.587660 kubelet[2790]: E0813 01:47:03.587647 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:47:03.587660 kubelet[2790]: E0813 01:47:03.587653 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:47:03.587660 kubelet[2790]: E0813 01:47:03.587660 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-tsmrf" Aug 13 01:47:03.587984 kubelet[2790]: E0813 01:47:03.587801 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-64bcb76cdd-m4xlg" Aug 13 01:47:03.587984 kubelet[2790]: E0813 01:47:03.587819 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-67" Aug 13 01:47:03.587984 kubelet[2790]: E0813 01:47:03.587829 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mjdwx" Aug 13 01:47:03.587984 kubelet[2790]: E0813 01:47:03.587838 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-67" Aug 13 01:47:03.587984 kubelet[2790]: E0813 01:47:03.587846 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-67" Aug 13 01:47:03.587984 kubelet[2790]: I0813 01:47:03.587858 2790 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:47:03.618259 kubelet[2790]: I0813 01:47:03.618203 2790 kubelet.go:2405] "Pod admission denied" podUID="2c070ead-5429-4d06-9105-881910998fb5" pod="tigera-operator/tigera-operator-747864d56d-l5nlh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:03.717672 kubelet[2790]: I0813 01:47:03.717608 2790 kubelet.go:2405] "Pod admission denied" podUID="d1d8161c-bef0-4741-89be-198810d0ea4d" pod="tigera-operator/tigera-operator-747864d56d-nsjrm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:03.921698 kubelet[2790]: I0813 01:47:03.921106 2790 kubelet.go:2405] "Pod admission denied" podUID="ee68f673-7245-4317-8803-2f79a48e0bc2" pod="tigera-operator/tigera-operator-747864d56d-lpjnz" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:04.016995 kubelet[2790]: I0813 01:47:04.016936 2790 kubelet.go:2405] "Pod admission denied" podUID="594dbeca-591c-4c1d-af03-513170a65f20" pod="tigera-operator/tigera-operator-747864d56d-ppznp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:04.119355 kubelet[2790]: I0813 01:47:04.119288 2790 kubelet.go:2405] "Pod admission denied" podUID="3245e3be-af79-4eaf-b0fd-e995a79e6e48" pod="tigera-operator/tigera-operator-747864d56d-gcz4d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:04.218547 kubelet[2790]: I0813 01:47:04.218487 2790 kubelet.go:2405] "Pod admission denied" podUID="6e13dc38-00ac-44ca-97b8-653deb8e8f82" pod="tigera-operator/tigera-operator-747864d56d-25ngs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:04.329658 kubelet[2790]: I0813 01:47:04.329601 2790 kubelet.go:2405] "Pod admission denied" podUID="0830d538-1807-4167-b95d-1bbbb7af1b1b" pod="tigera-operator/tigera-operator-747864d56d-m984t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:04.417432 kubelet[2790]: I0813 01:47:04.417368 2790 kubelet.go:2405] "Pod admission denied" podUID="e52014d8-cc9c-4985-b648-c56147f66f57" pod="tigera-operator/tigera-operator-747864d56d-8p2fq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:04.524051 kubelet[2790]: I0813 01:47:04.523880 2790 kubelet.go:2405] "Pod admission denied" podUID="b1a2cf2a-e370-48b5-a446-150c12693590" pod="tigera-operator/tigera-operator-747864d56d-d4dmm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:04.620389 kubelet[2790]: I0813 01:47:04.620326 2790 kubelet.go:2405] "Pod admission denied" podUID="03a6a1e3-330f-4236-8059-66ad6dd56b9e" pod="tigera-operator/tigera-operator-747864d56d-k9md9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:04.717465 kubelet[2790]: I0813 01:47:04.717393 2790 kubelet.go:2405] "Pod admission denied" podUID="dcf4c26c-1a54-4c53-814c-952d65c79910" pod="tigera-operator/tigera-operator-747864d56d-kkgjh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:04.816335 kubelet[2790]: I0813 01:47:04.816196 2790 kubelet.go:2405] "Pod admission denied" podUID="1dd4aa06-9be1-42ff-97fa-78f639808062" pod="tigera-operator/tigera-operator-747864d56d-5cd6h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:04.916599 kubelet[2790]: I0813 01:47:04.916543 2790 kubelet.go:2405] "Pod admission denied" podUID="e430a81d-ddef-45a2-ae24-a81e40444be0" pod="tigera-operator/tigera-operator-747864d56d-gbgv4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:05.019403 kubelet[2790]: I0813 01:47:05.019335 2790 kubelet.go:2405] "Pod admission denied" podUID="e1278def-3c5d-45ba-b477-6845cd695db6" pod="tigera-operator/tigera-operator-747864d56d-cwnm6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:05.117138 kubelet[2790]: I0813 01:47:05.116871 2790 kubelet.go:2405] "Pod admission denied" podUID="38c5f72b-2cc4-466e-9960-dbdc7eb48e8c" pod="tigera-operator/tigera-operator-747864d56d-krp8r" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:05.217298 kubelet[2790]: I0813 01:47:05.217255 2790 kubelet.go:2405] "Pod admission denied" podUID="44126959-9556-49a3-b214-f1e8c0bb69bf" pod="tigera-operator/tigera-operator-747864d56d-hq276" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:05.316885 kubelet[2790]: I0813 01:47:05.316818 2790 kubelet.go:2405] "Pod admission denied" podUID="4262d98e-33ce-4cd5-b3b5-595669963136" pod="tigera-operator/tigera-operator-747864d56d-npngf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:05.419958 kubelet[2790]: I0813 01:47:05.419672 2790 kubelet.go:2405] "Pod admission denied" podUID="62b5c21e-41ee-48f7-bd95-8f982583557c" pod="tigera-operator/tigera-operator-747864d56d-rbtdz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:05.519435 kubelet[2790]: I0813 01:47:05.519378 2790 kubelet.go:2405] "Pod admission denied" podUID="2bd2731f-f4a0-4a3c-bb23-816803e41f58" pod="tigera-operator/tigera-operator-747864d56d-hvshb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:05.628981 kubelet[2790]: I0813 01:47:05.628538 2790 kubelet.go:2405] "Pod admission denied" podUID="5badee37-2dbe-4304-8519-bf46b77d6fb8" pod="tigera-operator/tigera-operator-747864d56d-llzrr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:05.695446 kubelet[2790]: E0813 01:47:05.695318 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2386263665: write /var/lib/containerd/tmpmounts/containerd-mount2386263665/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-tsmrf" podUID="517ffc51-1a34-4ced-acf5-d8e5da6a1838" Aug 13 01:47:05.720184 kubelet[2790]: I0813 01:47:05.720134 2790 kubelet.go:2405] "Pod admission denied" podUID="09121411-3fea-4da6-859d-b0b8194f1604" pod="tigera-operator/tigera-operator-747864d56d-whghv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:05.918793 kubelet[2790]: I0813 01:47:05.918717 2790 kubelet.go:2405] "Pod admission denied" podUID="0a1f5788-cb75-49d4-a3b6-3d4b65d87475" pod="tigera-operator/tigera-operator-747864d56d-nc88r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:06.016690 kubelet[2790]: I0813 01:47:06.016565 2790 kubelet.go:2405] "Pod admission denied" podUID="494be14e-22d8-4ce0-8269-27cd3b7b60f8" pod="tigera-operator/tigera-operator-747864d56d-nr64v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:06.117184 kubelet[2790]: I0813 01:47:06.117131 2790 kubelet.go:2405] "Pod admission denied" podUID="83c3fd88-2650-4baf-abb3-6114a3e049f1" pod="tigera-operator/tigera-operator-747864d56d-dcz6z" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:06.218213 kubelet[2790]: I0813 01:47:06.218160 2790 kubelet.go:2405] "Pod admission denied" podUID="53105144-38eb-40fb-8578-71a37cceb28b" pod="tigera-operator/tigera-operator-747864d56d-cq9dm" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:06.318938 kubelet[2790]: I0813 01:47:06.318186 2790 kubelet.go:2405] "Pod admission denied" podUID="e906bb2d-d3e5-4243-b8b5-ac7e10bc0f5d" pod="tigera-operator/tigera-operator-747864d56d-h9mwg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:06.520301 kubelet[2790]: I0813 01:47:06.520221 2790 kubelet.go:2405] "Pod admission denied" podUID="f688c5d8-0438-466f-a7e8-00599bfbb15a" pod="tigera-operator/tigera-operator-747864d56d-x9r9f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:06.620030 kubelet[2790]: I0813 01:47:06.619871 2790 kubelet.go:2405] "Pod admission denied" podUID="938dc39b-b284-45e0-be95-acc21583405a" pod="tigera-operator/tigera-operator-747864d56d-86nfd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:06.668920 kubelet[2790]: I0813 01:47:06.668867 2790 kubelet.go:2405] "Pod admission denied" podUID="6ad025de-c44c-44c5-8379-c919ccad5ca5" pod="tigera-operator/tigera-operator-747864d56d-g2rxw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:06.770924 kubelet[2790]: I0813 01:47:06.770849 2790 kubelet.go:2405] "Pod admission denied" podUID="39a38ca2-3b02-49e8-8842-9717745277a9" pod="tigera-operator/tigera-operator-747864d56d-vkx2k" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:06.868289 kubelet[2790]: I0813 01:47:06.868232 2790 kubelet.go:2405] "Pod admission denied" podUID="b95f7baf-8b47-4208-83e2-9308f67bb184" pod="tigera-operator/tigera-operator-747864d56d-xdx8j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:06.920697 kubelet[2790]: I0813 01:47:06.920558 2790 kubelet.go:2405] "Pod admission denied" podUID="7d1624a3-0b26-4ce1-8c3f-3d07a3515c42" pod="tigera-operator/tigera-operator-747864d56d-wczn2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:07.017554 kubelet[2790]: I0813 01:47:07.017494 2790 kubelet.go:2405] "Pod admission denied" podUID="102f3ccd-d02b-445d-9cb6-354f99f7ed95" pod="tigera-operator/tigera-operator-747864d56d-dv2hq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:07.218373 kubelet[2790]: I0813 01:47:07.218321 2790 kubelet.go:2405] "Pod admission denied" podUID="fed8676d-4334-4e9c-9321-6e4324fe29b7" pod="tigera-operator/tigera-operator-747864d56d-rx4ff" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:07.326852 kubelet[2790]: I0813 01:47:07.325796 2790 kubelet.go:2405] "Pod admission denied" podUID="457b6712-979f-4b6d-87be-959f69fa5c25" pod="tigera-operator/tigera-operator-747864d56d-fxblz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:07.367286 kubelet[2790]: I0813 01:47:07.367234 2790 kubelet.go:2405] "Pod admission denied" podUID="e83bf239-1955-4070-8825-c5e9c7b2d00e" pod="tigera-operator/tigera-operator-747864d56d-ppttn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:07.468247 kubelet[2790]: I0813 01:47:07.468185 2790 kubelet.go:2405] "Pod admission denied" podUID="1539e8cc-72a4-4d9e-af10-a8167ec5e21a" pod="tigera-operator/tigera-operator-747864d56d-6479r" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:07.569067 kubelet[2790]: I0813 01:47:07.568347 2790 kubelet.go:2405] "Pod admission denied" podUID="5181aabb-9366-42d9-b4da-987a672a9da9" pod="tigera-operator/tigera-operator-747864d56d-vhbl9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:07.671350 kubelet[2790]: I0813 01:47:07.671298 2790 kubelet.go:2405] "Pod admission denied" podUID="80b3a015-13e7-480a-8c3c-a1298c7587d3" pod="tigera-operator/tigera-operator-747864d56d-5s22g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:07.695521 containerd[1556]: time="2025-08-13T01:47:07.695453634Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76ff444f8d-4xcg9,Uid:f88563f6-5704-426b-aecc-303b3869ce30,Namespace:calico-system,Attempt:0,}" Aug 13 01:47:07.797613 kubelet[2790]: I0813 01:47:07.797002 2790 kubelet.go:2405] "Pod admission denied" podUID="515070bc-48d5-4903-b662-56c8922c2b31" pod="tigera-operator/tigera-operator-747864d56d-jkv9h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:07.804158 containerd[1556]: time="2025-08-13T01:47:07.804096064Z" level=error msg="Failed to destroy network for sandbox \"f7911accfb35109410a18f1a46e39294f64b9ae53db372de7d624dd8b94867dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:07.810020 systemd[1]: run-netns-cni\x2d4d4522b4\x2dd203\x2de6a7\x2d09db\x2d18ecde2da757.mount: Deactivated successfully. Aug 13 01:47:07.810901 containerd[1556]: time="2025-08-13T01:47:07.810411166Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76ff444f8d-4xcg9,Uid:f88563f6-5704-426b-aecc-303b3869ce30,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7911accfb35109410a18f1a46e39294f64b9ae53db372de7d624dd8b94867dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:07.813896 kubelet[2790]: E0813 01:47:07.813842 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7911accfb35109410a18f1a46e39294f64b9ae53db372de7d624dd8b94867dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:07.813970 kubelet[2790]: E0813 01:47:07.813907 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7911accfb35109410a18f1a46e39294f64b9ae53db372de7d624dd8b94867dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:47:07.813970 kubelet[2790]: E0813 01:47:07.813931 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7911accfb35109410a18f1a46e39294f64b9ae53db372de7d624dd8b94867dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:47:07.814059 kubelet[2790]: E0813 01:47:07.813980 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76ff444f8d-4xcg9_calico-system(f88563f6-5704-426b-aecc-303b3869ce30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76ff444f8d-4xcg9_calico-system(f88563f6-5704-426b-aecc-303b3869ce30)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f7911accfb35109410a18f1a46e39294f64b9ae53db372de7d624dd8b94867dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" podUID="f88563f6-5704-426b-aecc-303b3869ce30" Aug 13 01:47:07.883178 kubelet[2790]: I0813 01:47:07.883030 2790 kubelet.go:2405] "Pod admission denied" podUID="612f7f36-ee5d-42dd-8836-2f2d8c944b39" pod="tigera-operator/tigera-operator-747864d56d-4sqqv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:07.969700 kubelet[2790]: I0813 01:47:07.969574 2790 kubelet.go:2405] "Pod admission denied" podUID="1ead479a-9bde-4838-ad26-baf58ab613b8" pod="tigera-operator/tigera-operator-747864d56d-fwnkf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:08.078826 kubelet[2790]: I0813 01:47:08.078391 2790 kubelet.go:2405] "Pod admission denied" podUID="b94fe7c2-b5f4-4f01-9a8d-76af58ce6572" pod="tigera-operator/tigera-operator-747864d56d-mbwmf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:08.269736 kubelet[2790]: I0813 01:47:08.269674 2790 kubelet.go:2405] "Pod admission denied" podUID="7abbfa27-3bd1-4b01-9fb8-983a9b638bea" pod="tigera-operator/tigera-operator-747864d56d-sfd2z" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:08.368083 kubelet[2790]: I0813 01:47:08.368010 2790 kubelet.go:2405] "Pod admission denied" podUID="e8a4970e-8221-4ae8-8684-eb803bab2b12" pod="tigera-operator/tigera-operator-747864d56d-f9qz7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:08.473020 kubelet[2790]: I0813 01:47:08.472945 2790 kubelet.go:2405] "Pod admission denied" podUID="bcd7281e-c274-4ce1-a5be-14a6a1f20cfa" pod="tigera-operator/tigera-operator-747864d56d-knzwp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:08.575921 kubelet[2790]: I0813 01:47:08.574367 2790 kubelet.go:2405] "Pod admission denied" podUID="166149bf-fe53-4989-be95-4b3449db462a" pod="tigera-operator/tigera-operator-747864d56d-ncvng" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:08.668823 kubelet[2790]: I0813 01:47:08.668762 2790 kubelet.go:2405] "Pod admission denied" podUID="ddd85c79-9a62-4191-98c7-c8b19dbf0de9" pod="tigera-operator/tigera-operator-747864d56d-b742h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:08.769192 kubelet[2790]: I0813 01:47:08.769133 2790 kubelet.go:2405] "Pod admission denied" podUID="e2e66272-cd6b-44f9-8790-02871b083b3b" pod="tigera-operator/tigera-operator-747864d56d-2776s" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:08.868417 kubelet[2790]: I0813 01:47:08.867943 2790 kubelet.go:2405] "Pod admission denied" podUID="a8abdb72-e335-4e66-8b9a-1128d570be93" pod="tigera-operator/tigera-operator-747864d56d-h8fhr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:08.980892 kubelet[2790]: I0813 01:47:08.980813 2790 kubelet.go:2405] "Pod admission denied" podUID="20288f55-93b2-4deb-8b20-a92a5b261ed9" pod="tigera-operator/tigera-operator-747864d56d-wbfgl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:09.068004 kubelet[2790]: I0813 01:47:09.067939 2790 kubelet.go:2405] "Pod admission denied" podUID="2c221c20-1f7e-45ca-be14-66c3c1e3df53" pod="tigera-operator/tigera-operator-747864d56d-fmpc6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:09.172552 kubelet[2790]: I0813 01:47:09.172351 2790 kubelet.go:2405] "Pod admission denied" podUID="020609bc-cea4-47d7-9611-50d0c883bd15" pod="tigera-operator/tigera-operator-747864d56d-zjrhv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:09.219439 kubelet[2790]: I0813 01:47:09.219378 2790 kubelet.go:2405] "Pod admission denied" podUID="d4bafeb7-25ec-4f88-929b-bb46c3e7ea4e" pod="tigera-operator/tigera-operator-747864d56d-6nh87" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:09.326573 kubelet[2790]: I0813 01:47:09.326505 2790 kubelet.go:2405] "Pod admission denied" podUID="04535564-5283-4d15-b7b0-e040a9aa2634" pod="tigera-operator/tigera-operator-747864d56d-xz5ww" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:09.520844 kubelet[2790]: I0813 01:47:09.520791 2790 kubelet.go:2405] "Pod admission denied" podUID="febae024-f59f-4224-8e77-3a9266047347" pod="tigera-operator/tigera-operator-747864d56d-rv28x" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:09.617913 kubelet[2790]: I0813 01:47:09.617860 2790 kubelet.go:2405] "Pod admission denied" podUID="33168b00-ace9-44d2-ad69-ad14a9659558" pod="tigera-operator/tigera-operator-747864d56d-5dnh6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:09.671253 kubelet[2790]: I0813 01:47:09.671180 2790 kubelet.go:2405] "Pod admission denied" podUID="f469e313-4544-4d5a-9ce9-009e2e5fcc73" pod="tigera-operator/tigera-operator-747864d56d-8jp9g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:09.695363 containerd[1556]: time="2025-08-13T01:47:09.695213557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7jrc,Uid:4296a7ed-e75a-4d74-935a-9017b9a86286,Namespace:calico-system,Attempt:0,}" Aug 13 01:47:09.782482 containerd[1556]: time="2025-08-13T01:47:09.781972061Z" level=error msg="Failed to destroy network for sandbox \"dd4815e43e036d3c10a242405358c8109e1d22699289fdb9838d27f33667343a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:09.785453 kubelet[2790]: I0813 01:47:09.785400 2790 kubelet.go:2405] "Pod admission denied" podUID="5ed0c006-a333-4f03-bdc4-f1f38cb9a3eb" pod="tigera-operator/tigera-operator-747864d56d-2hkcw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:09.786888 systemd[1]: run-netns-cni\x2df55de4c5\x2dc496\x2dd91d\x2dc43c\x2de943caa7b2b9.mount: Deactivated successfully. 
Aug 13 01:47:09.788255 containerd[1556]: time="2025-08-13T01:47:09.788171164Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7jrc,Uid:4296a7ed-e75a-4d74-935a-9017b9a86286,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd4815e43e036d3c10a242405358c8109e1d22699289fdb9838d27f33667343a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:09.791843 kubelet[2790]: E0813 01:47:09.789624 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd4815e43e036d3c10a242405358c8109e1d22699289fdb9838d27f33667343a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:09.791843 kubelet[2790]: E0813 01:47:09.789688 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd4815e43e036d3c10a242405358c8109e1d22699289fdb9838d27f33667343a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:47:09.791967 kubelet[2790]: E0813 01:47:09.789729 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd4815e43e036d3c10a242405358c8109e1d22699289fdb9838d27f33667343a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:47:09.792113 kubelet[2790]: E0813 01:47:09.792077 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-c7jrc_calico-system(4296a7ed-e75a-4d74-935a-9017b9a86286)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-c7jrc_calico-system(4296a7ed-e75a-4d74-935a-9017b9a86286)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd4815e43e036d3c10a242405358c8109e1d22699289fdb9838d27f33667343a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-c7jrc" podUID="4296a7ed-e75a-4d74-935a-9017b9a86286" Aug 13 01:47:09.867214 kubelet[2790]: I0813 01:47:09.867148 2790 kubelet.go:2405] "Pod admission denied" podUID="6600e05a-7efb-45a3-8626-e2f629ae8edf" pod="tigera-operator/tigera-operator-747864d56d-lmqq2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:09.918487 kubelet[2790]: I0813 01:47:09.918414 2790 kubelet.go:2405] "Pod admission denied" podUID="82f38d1f-7395-4c10-97b1-0ce59e6bf6de" pod="tigera-operator/tigera-operator-747864d56d-6gvql" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:10.019149 kubelet[2790]: I0813 01:47:10.019068 2790 kubelet.go:2405] "Pod admission denied" podUID="f87d8406-8757-42ad-95bb-b8a023245c06" pod="tigera-operator/tigera-operator-747864d56d-f8xqs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:10.126621 kubelet[2790]: I0813 01:47:10.125064 2790 kubelet.go:2405] "Pod admission denied" podUID="9605d83b-aaf5-441e-b06f-ef2f83e39fef" pod="tigera-operator/tigera-operator-747864d56d-jbgnl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:10.167669 kubelet[2790]: I0813 01:47:10.167609 2790 kubelet.go:2405] "Pod admission denied" podUID="c1fc72e5-a41f-4a5d-85c2-42c452adb488" pod="tigera-operator/tigera-operator-747864d56d-w84z2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:10.268865 kubelet[2790]: I0813 01:47:10.268793 2790 kubelet.go:2405] "Pod admission denied" podUID="cfbd98cc-4b1f-4ac2-ab1d-7cbf6d62cd13" pod="tigera-operator/tigera-operator-747864d56d-jrn78" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:10.470098 kubelet[2790]: I0813 01:47:10.470042 2790 kubelet.go:2405] "Pod admission denied" podUID="c8ab0f62-377a-404e-a5a6-423ff77cc8b2" pod="tigera-operator/tigera-operator-747864d56d-dbm7g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:10.578453 kubelet[2790]: I0813 01:47:10.577939 2790 kubelet.go:2405] "Pod admission denied" podUID="e60f7811-f932-4188-88bf-1aaf67cb2cee" pod="tigera-operator/tigera-operator-747864d56d-vzxqx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:10.618022 kubelet[2790]: I0813 01:47:10.617954 2790 kubelet.go:2405] "Pod admission denied" podUID="8c98fc36-f94f-4c39-8656-f3e462b649c7" pod="tigera-operator/tigera-operator-747864d56d-bnkdx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:10.718295 kubelet[2790]: I0813 01:47:10.718234 2790 kubelet.go:2405] "Pod admission denied" podUID="a6bf4a66-9e92-4b64-85a0-c67cdcee5cf0" pod="tigera-operator/tigera-operator-747864d56d-kqlhs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:10.819979 kubelet[2790]: I0813 01:47:10.819840 2790 kubelet.go:2405] "Pod admission denied" podUID="b6c4ead6-c520-424e-a948-52249852e8bc" pod="tigera-operator/tigera-operator-747864d56d-m4mhx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:10.925599 kubelet[2790]: I0813 01:47:10.925099 2790 kubelet.go:2405] "Pod admission denied" podUID="12053a7b-e35f-4313-94ad-3dc2f8627dcb" pod="tigera-operator/tigera-operator-747864d56d-mt2cd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:11.019374 kubelet[2790]: I0813 01:47:11.019310 2790 kubelet.go:2405] "Pod admission denied" podUID="081c8d36-3a2d-4314-ae52-cb9f3f05bbfe" pod="tigera-operator/tigera-operator-747864d56d-xnzwf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:11.118480 kubelet[2790]: I0813 01:47:11.118340 2790 kubelet.go:2405] "Pod admission denied" podUID="0a3e3aba-16a0-4f90-b36c-c0202ab44cfe" pod="tigera-operator/tigera-operator-747864d56d-xnzbp" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:11.219631 kubelet[2790]: I0813 01:47:11.219579 2790 kubelet.go:2405] "Pod admission denied" podUID="1951d47e-9620-458d-a188-60095823a871" pod="tigera-operator/tigera-operator-747864d56d-t6wh5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:11.276724 kubelet[2790]: I0813 01:47:11.276672 2790 kubelet.go:2405] "Pod admission denied" podUID="b8d5a21a-e4bb-4758-b4f2-e4c4a157b1ec" pod="tigera-operator/tigera-operator-747864d56d-4p55s" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:11.370510 kubelet[2790]: I0813 01:47:11.369901 2790 kubelet.go:2405] "Pod admission denied" podUID="70c89562-7e33-4897-a704-d757a2904b79" pod="tigera-operator/tigera-operator-747864d56d-f5czf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:11.468469 kubelet[2790]: I0813 01:47:11.468403 2790 kubelet.go:2405] "Pod admission denied" podUID="4658ccb5-9b4d-4021-bdc0-21a20161b6cc" pod="tigera-operator/tigera-operator-747864d56d-6qrks" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:11.522461 kubelet[2790]: I0813 01:47:11.522401 2790 kubelet.go:2405] "Pod admission denied" podUID="368db1e9-770a-489e-a3a2-4e440d5a350b" pod="tigera-operator/tigera-operator-747864d56d-6c8rs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:11.618934 kubelet[2790]: I0813 01:47:11.618854 2790 kubelet.go:2405] "Pod admission denied" podUID="4107d1c4-33ce-46a5-a154-2a139b37a8a4" pod="tigera-operator/tigera-operator-747864d56d-54m89" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:11.719436 kubelet[2790]: I0813 01:47:11.719369 2790 kubelet.go:2405] "Pod admission denied" podUID="594201f8-45e3-4946-9f6e-8c4ec0bb4dba" pod="tigera-operator/tigera-operator-747864d56d-sx2cc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:11.819344 kubelet[2790]: I0813 01:47:11.819283 2790 kubelet.go:2405] "Pod admission denied" podUID="3d91eb4b-cf3a-474f-bf02-66000e775fc7" pod="tigera-operator/tigera-operator-747864d56d-2dh5d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:11.928901 kubelet[2790]: I0813 01:47:11.928834 2790 kubelet.go:2405] "Pod admission denied" podUID="68314414-aa98-46fd-8bcd-cb1040627a3e" pod="tigera-operator/tigera-operator-747864d56d-mvf4l" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:12.019414 kubelet[2790]: I0813 01:47:12.019095 2790 kubelet.go:2405] "Pod admission denied" podUID="6f8253d7-a270-40b5-b438-dd53728dfe68" pod="tigera-operator/tigera-operator-747864d56d-r4xt7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:12.220330 kubelet[2790]: I0813 01:47:12.220262 2790 kubelet.go:2405] "Pod admission denied" podUID="362ba2b7-e85b-4cde-bd1e-a2db9c62b3f3" pod="tigera-operator/tigera-operator-747864d56d-bxxj8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:12.319310 kubelet[2790]: I0813 01:47:12.319152 2790 kubelet.go:2405] "Pod admission denied" podUID="bb6532db-6d5f-4c73-a714-87862474b633" pod="tigera-operator/tigera-operator-747864d56d-lrmqp" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:12.366460 kubelet[2790]: I0813 01:47:12.366406 2790 kubelet.go:2405] "Pod admission denied" podUID="f4978de0-5f97-4bdf-97c1-b9815713b7b7" pod="tigera-operator/tigera-operator-747864d56d-bbm2h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:12.470009 kubelet[2790]: I0813 01:47:12.469937 2790 kubelet.go:2405] "Pod admission denied" podUID="01245077-c9b3-44fc-9cc9-dc38c031ded2" pod="tigera-operator/tigera-operator-747864d56d-k8mpk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:12.671809 kubelet[2790]: I0813 01:47:12.671173 2790 kubelet.go:2405] "Pod admission denied" podUID="07d5789f-c10c-4e05-9683-a8f8ee942403" pod="tigera-operator/tigera-operator-747864d56d-z4ztp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:12.694416 kubelet[2790]: E0813 01:47:12.693971 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:47:12.696793 containerd[1556]: time="2025-08-13T01:47:12.696009110Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6rlkc,Uid:21a6ba02-58d5-43c1-a7de-9e24560a65f6,Namespace:kube-system,Attempt:0,}" Aug 13 01:47:12.697282 kubelet[2790]: E0813 01:47:12.696179 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:47:12.697282 kubelet[2790]: E0813 01:47:12.696678 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:47:12.697520 containerd[1556]: time="2025-08-13T01:47:12.697498506Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vtdcd,Uid:caa5836a-45f9-496b-86c1-95f6e1b6da17,Namespace:kube-system,Attempt:0,}" Aug 13 01:47:12.796331 kubelet[2790]: I0813 01:47:12.796247 2790 kubelet.go:2405] "Pod admission denied" podUID="2b6a2cf8-0f39-42d6-a2f5-90ff54d2f05e" pod="tigera-operator/tigera-operator-747864d56d-9czg5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:12.798578 containerd[1556]: time="2025-08-13T01:47:12.798223621Z" level=error msg="Failed to destroy network for sandbox \"377b5cc42bba7524581b36fde6050eaa968e776c74b6bcf0d7d55b9fe109f8f8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:12.800684 systemd[1]: run-netns-cni\x2d733367bc\x2d8a41\x2d94c4\x2dbb0b\x2dc95b4dd3efb8.mount: Deactivated successfully. 
Aug 13 01:47:12.806780 containerd[1556]: time="2025-08-13T01:47:12.806695710Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vtdcd,Uid:caa5836a-45f9-496b-86c1-95f6e1b6da17,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"377b5cc42bba7524581b36fde6050eaa968e776c74b6bcf0d7d55b9fe109f8f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:12.812782 containerd[1556]: time="2025-08-13T01:47:12.811009189Z" level=error msg="Failed to destroy network for sandbox \"c3274ab5e2564726103cffc10bebbcec933347e77a1dde2dae32e76a4921e0f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:12.813833 kubelet[2790]: E0813 01:47:12.813686 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"377b5cc42bba7524581b36fde6050eaa968e776c74b6bcf0d7d55b9fe109f8f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:12.815071 containerd[1556]: time="2025-08-13T01:47:12.814370721Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6rlkc,Uid:21a6ba02-58d5-43c1-a7de-9e24560a65f6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3274ab5e2564726103cffc10bebbcec933347e77a1dde2dae32e76a4921e0f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:12.818109 kubelet[2790]: E0813 01:47:12.814882 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"377b5cc42bba7524581b36fde6050eaa968e776c74b6bcf0d7d55b9fe109f8f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:47:12.818109 kubelet[2790]: E0813 01:47:12.814912 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"377b5cc42bba7524581b36fde6050eaa968e776c74b6bcf0d7d55b9fe109f8f8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:47:12.818109 kubelet[2790]: E0813 01:47:12.815090 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-vtdcd_kube-system(caa5836a-45f9-496b-86c1-95f6e1b6da17)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-vtdcd_kube-system(caa5836a-45f9-496b-86c1-95f6e1b6da17)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"377b5cc42bba7524581b36fde6050eaa968e776c74b6bcf0d7d55b9fe109f8f8\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-vtdcd" podUID="caa5836a-45f9-496b-86c1-95f6e1b6da17" Aug 13 01:47:12.816465 systemd[1]: run-netns-cni\x2d7dfa6413\x2df9b0\x2d89fb\x2d0268\x2d757eefbcb7d1.mount: Deactivated successfully. Aug 13 01:47:12.819012 kubelet[2790]: E0813 01:47:12.818845 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3274ab5e2564726103cffc10bebbcec933347e77a1dde2dae32e76a4921e0f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:12.819012 kubelet[2790]: E0813 01:47:12.818906 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3274ab5e2564726103cffc10bebbcec933347e77a1dde2dae32e76a4921e0f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:47:12.819012 kubelet[2790]: E0813 01:47:12.818931 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c3274ab5e2564726103cffc10bebbcec933347e77a1dde2dae32e76a4921e0f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:47:12.819012 kubelet[2790]: E0813 01:47:12.818974 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-6rlkc_kube-system(21a6ba02-58d5-43c1-a7de-9e24560a65f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-6rlkc_kube-system(21a6ba02-58d5-43c1-a7de-9e24560a65f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c3274ab5e2564726103cffc10bebbcec933347e77a1dde2dae32e76a4921e0f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6rlkc" podUID="21a6ba02-58d5-43c1-a7de-9e24560a65f6" Aug 13 01:47:12.920701 kubelet[2790]: I0813 01:47:12.920602 2790 kubelet.go:2405] "Pod admission denied" podUID="4949f16c-ebed-4160-888a-fdf14b21e388" pod="tigera-operator/tigera-operator-747864d56d-vchnq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:13.019119 kubelet[2790]: I0813 01:47:13.019051 2790 kubelet.go:2405] "Pod admission denied" podUID="ca8404f1-dbbc-4cbb-bd17-a64b38636943" pod="tigera-operator/tigera-operator-747864d56d-5c5kc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:13.217112 kubelet[2790]: I0813 01:47:13.217058 2790 kubelet.go:2405] "Pod admission denied" podUID="2c0c99a1-e8ab-492c-b88b-6aedcc71df7f" pod="tigera-operator/tigera-operator-747864d56d-jrdmt" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:13.318856 kubelet[2790]: I0813 01:47:13.318704 2790 kubelet.go:2405] "Pod admission denied" podUID="7050bb75-7b1f-4aca-b875-1bd170636a38" pod="tigera-operator/tigera-operator-747864d56d-m7scr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:13.417934 kubelet[2790]: I0813 01:47:13.417873 2790 kubelet.go:2405] "Pod admission denied" podUID="8c683829-de49-4a58-8740-1885c70d9ef1" pod="tigera-operator/tigera-operator-747864d56d-9fddc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:13.519016 kubelet[2790]: I0813 01:47:13.518960 2790 kubelet.go:2405] "Pod admission denied" podUID="e6214db5-2ebd-4017-b213-2e550857bed0" pod="tigera-operator/tigera-operator-747864d56d-7sc8c" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:13.618781 kubelet[2790]: I0813 01:47:13.617599 2790 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:47:13.618781 kubelet[2790]: I0813 01:47:13.617650 2790 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:47:13.623273 kubelet[2790]: I0813 01:47:13.623239 2790 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:47:13.628050 kubelet[2790]: I0813 01:47:13.628018 2790 kubelet.go:2405] "Pod admission denied" podUID="05bbfa02-4151-4b29-97ab-19a5b074aed2" pod="tigera-operator/tigera-operator-747864d56d-9blsg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:13.652441 kubelet[2790]: I0813 01:47:13.652168 2790 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:47:13.652441 kubelet[2790]: I0813 01:47:13.652252 2790 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-674b8bbfcf-vtdcd","kube-system/coredns-674b8bbfcf-6rlkc","calico-system/calico-kube-controllers-76ff444f8d-4xcg9","calico-system/calico-node-tsmrf","calico-system/csi-node-driver-c7jrc","calico-system/calico-typha-64bcb76cdd-m4xlg","kube-system/kube-controller-manager-172-232-7-67","kube-system/kube-proxy-mjdwx","kube-system/kube-apiserver-172-232-7-67","kube-system/kube-scheduler-172-232-7-67"] Aug 13 01:47:13.652441 kubelet[2790]: E0813 01:47:13.652281 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:47:13.652441 kubelet[2790]: E0813 01:47:13.652291 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:47:13.652441 kubelet[2790]: E0813 01:47:13.652297 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:47:13.652441 kubelet[2790]: E0813 01:47:13.652302 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-tsmrf" Aug 13 01:47:13.652441 kubelet[2790]: E0813 01:47:13.652309 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:47:13.652441 kubelet[2790]: E0813 01:47:13.652317 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-64bcb76cdd-m4xlg" Aug 13 01:47:13.652441 kubelet[2790]: E0813 01:47:13.652326 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-67" Aug 13 
01:47:13.652441 kubelet[2790]: E0813 01:47:13.652334 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mjdwx" Aug 13 01:47:13.652441 kubelet[2790]: E0813 01:47:13.652341 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-67" Aug 13 01:47:13.652441 kubelet[2790]: E0813 01:47:13.652350 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-67" Aug 13 01:47:13.652441 kubelet[2790]: I0813 01:47:13.652359 2790 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:47:13.819098 kubelet[2790]: I0813 01:47:13.819006 2790 kubelet.go:2405] "Pod admission denied" podUID="929a5f37-1d18-4974-9b4e-e21262e28e7a" pod="tigera-operator/tigera-operator-747864d56d-xtbsc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:13.918998 kubelet[2790]: I0813 01:47:13.918437 2790 kubelet.go:2405] "Pod admission denied" podUID="2cdcc2af-3433-4c8c-91c5-9efde179ab55" pod="tigera-operator/tigera-operator-747864d56d-mqp4f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:14.019604 kubelet[2790]: I0813 01:47:14.019538 2790 kubelet.go:2405] "Pod admission denied" podUID="f03e7b24-090c-4dbe-ae20-bc43ddd3d78b" pod="tigera-operator/tigera-operator-747864d56d-8mbzc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:14.118582 kubelet[2790]: I0813 01:47:14.118493 2790 kubelet.go:2405] "Pod admission denied" podUID="fa8269cc-65e5-4b83-b5f4-036fd738b2c8" pod="tigera-operator/tigera-operator-747864d56d-phqxr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:14.219930 kubelet[2790]: I0813 01:47:14.219866 2790 kubelet.go:2405] "Pod admission denied" podUID="84553e22-a35f-413d-9c34-6f389b9fe745" pod="tigera-operator/tigera-operator-747864d56d-m9vws" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:14.327776 kubelet[2790]: I0813 01:47:14.326166 2790 kubelet.go:2405] "Pod admission denied" podUID="390ec400-0b56-46a7-a447-3bd7facf0157" pod="tigera-operator/tigera-operator-747864d56d-m7b24" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:14.419473 kubelet[2790]: I0813 01:47:14.419417 2790 kubelet.go:2405] "Pod admission denied" podUID="081abbdb-0356-40a4-a94d-ddcc092d38ba" pod="tigera-operator/tigera-operator-747864d56d-98hns" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:14.520791 kubelet[2790]: I0813 01:47:14.520613 2790 kubelet.go:2405] "Pod admission denied" podUID="19f5362b-51df-43bc-bd39-7d1466c7f7c8" pod="tigera-operator/tigera-operator-747864d56d-kp7gh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:14.619002 kubelet[2790]: I0813 01:47:14.618940 2790 kubelet.go:2405] "Pod admission denied" podUID="9d96fbef-4861-44dd-8735-118b0731d679" pod="tigera-operator/tigera-operator-747864d56d-qhd98" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:14.719592 kubelet[2790]: I0813 01:47:14.719519 2790 kubelet.go:2405] "Pod admission denied" podUID="f4764c3d-d5ff-4eb0-924b-6002e71e48a8" pod="tigera-operator/tigera-operator-747864d56d-ghntl" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:14.822149 kubelet[2790]: I0813 01:47:14.821003 2790 kubelet.go:2405] "Pod admission denied" podUID="08fe9f8b-712c-49a6-b83e-5144e2a6f2c7" pod="tigera-operator/tigera-operator-747864d56d-csgd2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:14.920409 kubelet[2790]: I0813 01:47:14.920352 2790 kubelet.go:2405] "Pod admission denied" podUID="e76f0567-c113-4dec-85ca-34c37eb8087f" pod="tigera-operator/tigera-operator-747864d56d-b8mvw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:15.028422 kubelet[2790]: I0813 01:47:15.027877 2790 kubelet.go:2405] "Pod admission denied" podUID="88d5744c-6f77-441a-9290-2980b7a3834d" pod="tigera-operator/tigera-operator-747864d56d-mjlqw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:15.120105 kubelet[2790]: I0813 01:47:15.119690 2790 kubelet.go:2405] "Pod admission denied" podUID="9858d570-32fd-4db9-9d67-fb1ac9fb03bb" pod="tigera-operator/tigera-operator-747864d56d-k89pb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:15.218870 kubelet[2790]: I0813 01:47:15.218808 2790 kubelet.go:2405] "Pod admission denied" podUID="8452de07-45af-4d4c-a02d-d9fbc704d667" pod="tigera-operator/tigera-operator-747864d56d-fm4lb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:15.422133 kubelet[2790]: I0813 01:47:15.421433 2790 kubelet.go:2405] "Pod admission denied" podUID="82cc91db-df9c-471a-8729-65cac9bbc2c5" pod="tigera-operator/tigera-operator-747864d56d-l8bjj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:15.524143 kubelet[2790]: I0813 01:47:15.524067 2790 kubelet.go:2405] "Pod admission denied" podUID="659782c9-e11d-419f-afc3-e45ee5b8930f" pod="tigera-operator/tigera-operator-747864d56d-h999b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:15.620821 kubelet[2790]: I0813 01:47:15.620723 2790 kubelet.go:2405] "Pod admission denied" podUID="5c2fafaf-f0e4-48a6-b1cb-6e5561ad77b4" pod="tigera-operator/tigera-operator-747864d56d-2n58f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:15.820303 kubelet[2790]: I0813 01:47:15.820243 2790 kubelet.go:2405] "Pod admission denied" podUID="444cb285-bff8-490f-bf12-b040cd1b7bd2" pod="tigera-operator/tigera-operator-747864d56d-7844v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:15.933340 kubelet[2790]: I0813 01:47:15.933074 2790 kubelet.go:2405] "Pod admission denied" podUID="12d216de-6ce7-4161-9b59-aac8f3bf9237" pod="tigera-operator/tigera-operator-747864d56d-tqxwm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:16.026997 kubelet[2790]: I0813 01:47:16.026922 2790 kubelet.go:2405] "Pod admission denied" podUID="6c4451b1-a0fd-4fcd-bf82-cd5cb2ea6457" pod="tigera-operator/tigera-operator-747864d56d-jn5vp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:16.224113 kubelet[2790]: I0813 01:47:16.224044 2790 kubelet.go:2405] "Pod admission denied" podUID="fa7dda70-68ea-4ba5-a1d3-5a2b6b1e6160" pod="tigera-operator/tigera-operator-747864d56d-mkcvs" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:16.322178 kubelet[2790]: I0813 01:47:16.322114 2790 kubelet.go:2405] "Pod admission denied" podUID="2dedc374-5858-462f-8e4a-a82e5f54092f" pod="tigera-operator/tigera-operator-747864d56d-x6hrx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:16.420778 kubelet[2790]: I0813 01:47:16.420711 2790 kubelet.go:2405] "Pod admission denied" podUID="f99d378f-e13d-4aa8-9784-ee457bc59fd7" pod="tigera-operator/tigera-operator-747864d56d-lrdpb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:16.620710 kubelet[2790]: I0813 01:47:16.619967 2790 kubelet.go:2405] "Pod admission denied" podUID="69a5c460-b5d6-4fab-857e-d9aaaa4fda2a" pod="tigera-operator/tigera-operator-747864d56d-c5jlv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:16.701354 containerd[1556]: time="2025-08-13T01:47:16.701244536Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 01:47:16.744511 kubelet[2790]: I0813 01:47:16.743631 2790 kubelet.go:2405] "Pod admission denied" podUID="778dcf38-3c20-4281-b11f-52dfbf7fb95d" pod="tigera-operator/tigera-operator-747864d56d-qk2tg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:16.820053 kubelet[2790]: I0813 01:47:16.819982 2790 kubelet.go:2405] "Pod admission denied" podUID="504a5381-1a66-4262-bf73-04fc7fcdae2d" pod="tigera-operator/tigera-operator-747864d56d-zsc8l" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:17.020874 kubelet[2790]: I0813 01:47:17.020816 2790 kubelet.go:2405] "Pod admission denied" podUID="8fd6efc2-d67f-47f6-8e5f-60dee76abdc5" pod="tigera-operator/tigera-operator-747864d56d-chc5d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:17.123040 kubelet[2790]: I0813 01:47:17.122979 2790 kubelet.go:2405] "Pod admission denied" podUID="609feb03-13b6-47e2-9852-56e912e3587d" pod="tigera-operator/tigera-operator-747864d56d-smblq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:17.218963 kubelet[2790]: I0813 01:47:17.218910 2790 kubelet.go:2405] "Pod admission denied" podUID="84930ab4-46b5-498f-ad78-8938ebcc3540" pod="tigera-operator/tigera-operator-747864d56d-2ctr9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:17.429173 kubelet[2790]: I0813 01:47:17.429023 2790 kubelet.go:2405] "Pod admission denied" podUID="b758b698-f4a0-4782-9f18-659b5c518b4b" pod="tigera-operator/tigera-operator-747864d56d-zcgqt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:17.529176 kubelet[2790]: I0813 01:47:17.529118 2790 kubelet.go:2405] "Pod admission denied" podUID="3101cb2a-ec8b-4231-82bb-bff84e026663" pod="tigera-operator/tigera-operator-747864d56d-gdqq4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:17.626846 kubelet[2790]: I0813 01:47:17.626780 2790 kubelet.go:2405] "Pod admission denied" podUID="44311e34-d12d-4eeb-bf6c-2d42838e604d" pod="tigera-operator/tigera-operator-747864d56d-xnwdt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:17.726406 kubelet[2790]: I0813 01:47:17.726358 2790 kubelet.go:2405] "Pod admission denied" podUID="de83f9a2-087a-4385-89ee-a955b876ec24" pod="tigera-operator/tigera-operator-747864d56d-mh6zr" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:17.826192 kubelet[2790]: I0813 01:47:17.826132 2790 kubelet.go:2405] "Pod admission denied" podUID="80c63ae1-e0ec-48ad-ba8c-5c91bc9df971" pod="tigera-operator/tigera-operator-747864d56d-9gb65" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:18.034988 kubelet[2790]: I0813 01:47:18.034850 2790 kubelet.go:2405] "Pod admission denied" podUID="5d6b8cd0-605b-4415-bd4a-c7756616ee11" pod="tigera-operator/tigera-operator-747864d56d-pdlx5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:18.131676 kubelet[2790]: I0813 01:47:18.131622 2790 kubelet.go:2405] "Pod admission denied" podUID="8455062a-bcec-4828-8b55-cef5506c1f50" pod="tigera-operator/tigera-operator-747864d56d-7pqkp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:18.228313 kubelet[2790]: I0813 01:47:18.228270 2790 kubelet.go:2405] "Pod admission denied" podUID="8fa96eee-53fc-4fcd-9042-fd2df1a9ce16" pod="tigera-operator/tigera-operator-747864d56d-v8qc8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:18.328019 kubelet[2790]: I0813 01:47:18.327845 2790 kubelet.go:2405] "Pod admission denied" podUID="2c040522-f210-4595-981d-32c361cafcef" pod="tigera-operator/tigera-operator-747864d56d-9t98b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:18.428076 kubelet[2790]: I0813 01:47:18.428008 2790 kubelet.go:2405] "Pod admission denied" podUID="3c2e1b2e-f934-4f95-ac05-836d3c77dba9" pod="tigera-operator/tigera-operator-747864d56d-chsdb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:18.529603 kubelet[2790]: I0813 01:47:18.529563 2790 kubelet.go:2405] "Pod admission denied" podUID="561706aa-9d30-4fd9-a449-a9bb54d85896" pod="tigera-operator/tigera-operator-747864d56d-74rt2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:18.630394 kubelet[2790]: I0813 01:47:18.630245 2790 kubelet.go:2405] "Pod admission denied" podUID="698022fa-7398-44f2-81ae-ce13982a861e" pod="tigera-operator/tigera-operator-747864d56d-m5kfx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:18.695767 containerd[1556]: time="2025-08-13T01:47:18.695515241Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76ff444f8d-4xcg9,Uid:f88563f6-5704-426b-aecc-303b3869ce30,Namespace:calico-system,Attempt:0,}" Aug 13 01:47:18.737065 kubelet[2790]: I0813 01:47:18.736081 2790 kubelet.go:2405] "Pod admission denied" podUID="191ba9dd-6d72-4607-b613-e065f574fa78" pod="tigera-operator/tigera-operator-747864d56d-5r7nx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:18.833380 kubelet[2790]: I0813 01:47:18.833336 2790 kubelet.go:2405] "Pod admission denied" podUID="050ff1f9-2fbc-4e3d-a45f-75bbf6559feb" pod="tigera-operator/tigera-operator-747864d56d-pnnhf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:18.847496 containerd[1556]: time="2025-08-13T01:47:18.847428838Z" level=error msg="Failed to destroy network for sandbox \"0a4a4abc598a9d0afe8b80320b630cfec6af59f31d90c7c3d769e17a931715cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:18.851586 systemd[1]: run-netns-cni\x2da2676882\x2d1028\x2d69de\x2d22b5\x2d6f83b6315f60.mount: Deactivated successfully. 
Aug 13 01:47:18.852222 containerd[1556]: time="2025-08-13T01:47:18.851775558Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76ff444f8d-4xcg9,Uid:f88563f6-5704-426b-aecc-303b3869ce30,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a4a4abc598a9d0afe8b80320b630cfec6af59f31d90c7c3d769e17a931715cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:18.855768 kubelet[2790]: E0813 01:47:18.854967 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a4a4abc598a9d0afe8b80320b630cfec6af59f31d90c7c3d769e17a931715cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:18.856288 kubelet[2790]: E0813 01:47:18.856259 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a4a4abc598a9d0afe8b80320b630cfec6af59f31d90c7c3d769e17a931715cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:47:18.856454 kubelet[2790]: E0813 01:47:18.856383 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a4a4abc598a9d0afe8b80320b630cfec6af59f31d90c7c3d769e17a931715cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:47:18.856944 kubelet[2790]: E0813 01:47:18.856829 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76ff444f8d-4xcg9_calico-system(f88563f6-5704-426b-aecc-303b3869ce30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76ff444f8d-4xcg9_calico-system(f88563f6-5704-426b-aecc-303b3869ce30)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a4a4abc598a9d0afe8b80320b630cfec6af59f31d90c7c3d769e17a931715cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" podUID="f88563f6-5704-426b-aecc-303b3869ce30" Aug 13 01:47:18.928173 kubelet[2790]: I0813 01:47:18.927136 2790 kubelet.go:2405] "Pod admission denied" podUID="7edd4c04-5d74-43b7-822b-714525a2518e" pod="tigera-operator/tigera-operator-747864d56d-hxck6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:19.029272 kubelet[2790]: I0813 01:47:19.028972 2790 kubelet.go:2405] "Pod admission denied" podUID="4372d830-1c93-475b-8f21-fe76f2831fdf" pod="tigera-operator/tigera-operator-747864d56d-xtnvp" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:19.131775 kubelet[2790]: I0813 01:47:19.131557 2790 kubelet.go:2405] "Pod admission denied" podUID="bac5b7cd-9fce-48e2-b613-a638bb36263a" pod="tigera-operator/tigera-operator-747864d56d-7ftwv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:19.277772 kubelet[2790]: I0813 01:47:19.272326 2790 kubelet.go:2405] "Pod admission denied" podUID="8ad76210-69cd-43f0-a108-fc1c60c7c30e" pod="tigera-operator/tigera-operator-747864d56d-g4pff" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:19.454849 kubelet[2790]: I0813 01:47:19.454771 2790 kubelet.go:2405] "Pod admission denied" podUID="aff3f62d-8019-40f5-ad6b-cc113dcb87b8" pod="tigera-operator/tigera-operator-747864d56d-7pdg5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:19.492559 kubelet[2790]: I0813 01:47:19.492496 2790 kubelet.go:2405] "Pod admission denied" podUID="e9d39e41-019c-4034-a403-e0e5af20ba05" pod="tigera-operator/tigera-operator-747864d56d-2dcq2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:19.547738 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2566183075.mount: Deactivated successfully. Aug 13 01:47:19.549320 containerd[1556]: time="2025-08-13T01:47:19.548403028Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2566183075: write /var/lib/containerd/tmpmounts/containerd-mount2566183075/usr/bin/calico-node: no space left on device" Aug 13 01:47:19.549320 containerd[1556]: time="2025-08-13T01:47:19.548542318Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 01:47:19.550479 kubelet[2790]: E0813 01:47:19.548994 2790 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2566183075: write /var/lib/containerd/tmpmounts/containerd-mount2566183075/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 01:47:19.550479 kubelet[2790]: E0813 01:47:19.549108 2790 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2566183075: write /var/lib/containerd/tmpmounts/containerd-mount2566183075/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 01:47:19.551830 kubelet[2790]: E0813 01:47:19.551183 2790 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSGOLDMANESERVER,Value:goldmane.calico-system.svc:7443,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSFLUSHINTERVAL,Value:15,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:first-found,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.l
ock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hzvb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},StopSignal:nil,},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-tsmrf_calico-system(517ffc51-1a34-4ced-acf5-d8e5da6a1838): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2566183075: write /var/lib/containerd/tmpmounts/containerd-mount2566183075/usr/bin/calico-node: no space left on device" logger="UnhandledError" Aug 13 01:47:19.552468 kubelet[2790]: E0813 01:47:19.552423 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2566183075: write /var/lib/containerd/tmpmounts/containerd-mount2566183075/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-tsmrf" podUID="517ffc51-1a34-4ced-acf5-d8e5da6a1838" Aug 13 01:47:19.574927 kubelet[2790]: I0813 01:47:19.574875 2790 kubelet.go:2405] "Pod admission denied" podUID="ac91265f-6876-4f8d-b814-f7662f35897e" pod="tigera-operator/tigera-operator-747864d56d-pbd66" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:19.772783 kubelet[2790]: I0813 01:47:19.772706 2790 kubelet.go:2405] "Pod admission denied" podUID="94782d2c-c846-4a28-b4dc-38916da9143a" pod="tigera-operator/tigera-operator-747864d56d-h5sr5" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:19.869785 kubelet[2790]: I0813 01:47:19.869586 2790 kubelet.go:2405] "Pod admission denied" podUID="137761ad-8213-45dc-a82e-3b4e9cf72d5b" pod="tigera-operator/tigera-operator-747864d56d-6g27f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:19.973532 kubelet[2790]: I0813 01:47:19.973448 2790 kubelet.go:2405] "Pod admission denied" podUID="a04c0e5e-0f7e-40fe-9215-d19b1636b81b" pod="tigera-operator/tigera-operator-747864d56d-hsf6k" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:20.177290 kubelet[2790]: I0813 01:47:20.176305 2790 kubelet.go:2405] "Pod admission denied" podUID="23baaa2f-3b5e-4be0-b817-3132566ac5d0" pod="tigera-operator/tigera-operator-747864d56d-7th4q" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:20.272543 kubelet[2790]: I0813 01:47:20.272460 2790 kubelet.go:2405] "Pod admission denied" podUID="4bfa0fdb-ad68-4dd4-ac36-cfa83a9d217d" pod="tigera-operator/tigera-operator-747864d56d-78h9r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:20.371947 kubelet[2790]: I0813 01:47:20.371887 2790 kubelet.go:2405] "Pod admission denied" podUID="78b41b03-6175-4dbb-b772-9e3194cea268" pod="tigera-operator/tigera-operator-747864d56d-wbv27" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:20.470769 kubelet[2790]: I0813 01:47:20.470188 2790 kubelet.go:2405] "Pod admission denied" podUID="2f6322c4-898e-4118-b0a4-f64876dec911" pod="tigera-operator/tigera-operator-747864d56d-gkp7c" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:20.568422 kubelet[2790]: I0813 01:47:20.568369 2790 kubelet.go:2405] "Pod admission denied" podUID="daad82b4-8d77-48c6-8751-b13c7934b9f0" pod="tigera-operator/tigera-operator-747864d56d-gjc2r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:20.668019 kubelet[2790]: I0813 01:47:20.667968 2790 kubelet.go:2405] "Pod admission denied" podUID="686f54c7-915f-4557-a923-b13bd3603d04" pod="tigera-operator/tigera-operator-747864d56d-7zkg2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:20.777856 kubelet[2790]: I0813 01:47:20.777223 2790 kubelet.go:2405] "Pod admission denied" podUID="39265984-c4bb-4a1e-aa29-aa99e530a750" pod="tigera-operator/tigera-operator-747864d56d-gnfqz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:20.870253 kubelet[2790]: I0813 01:47:20.870177 2790 kubelet.go:2405] "Pod admission denied" podUID="3b25ffc8-896a-4931-b9bb-394d9289f980" pod="tigera-operator/tigera-operator-747864d56d-dplpp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:20.970605 kubelet[2790]: I0813 01:47:20.970529 2790 kubelet.go:2405] "Pod admission denied" podUID="14d6d951-ff14-4679-a5a5-7cb9ae8a61ec" pod="tigera-operator/tigera-operator-747864d56d-6vtq4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:21.080109 kubelet[2790]: I0813 01:47:21.079523 2790 kubelet.go:2405] "Pod admission denied" podUID="f8e837ce-a6f6-4c99-9979-08deea651650" pod="tigera-operator/tigera-operator-747864d56d-j2qsp" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:21.172023 kubelet[2790]: I0813 01:47:21.171949 2790 kubelet.go:2405] "Pod admission denied" podUID="2852cdeb-72c4-4086-bee2-4719d893a3e6" pod="tigera-operator/tigera-operator-747864d56d-kz4z5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:21.274898 kubelet[2790]: I0813 01:47:21.274833 2790 kubelet.go:2405] "Pod admission denied" podUID="1cb5b41a-d2a9-41a1-8eb8-940cfcb5aa3c" pod="tigera-operator/tigera-operator-747864d56d-x62s9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:21.379943 kubelet[2790]: I0813 01:47:21.377456 2790 kubelet.go:2405] "Pod admission denied" podUID="df8d4515-faee-443a-8696-eab66526e051" pod="tigera-operator/tigera-operator-747864d56d-4g542" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:21.471479 kubelet[2790]: I0813 01:47:21.471430 2790 kubelet.go:2405] "Pod admission denied" podUID="2b8f0684-610c-4ac9-a8bf-2e225cb99a3e" pod="tigera-operator/tigera-operator-747864d56d-929bq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:21.588255 kubelet[2790]: I0813 01:47:21.588196 2790 kubelet.go:2405] "Pod admission denied" podUID="6bc97da0-5bc7-431e-8d3a-776a0f45fd16" pod="tigera-operator/tigera-operator-747864d56d-l5gb8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:21.676859 kubelet[2790]: I0813 01:47:21.676099 2790 kubelet.go:2405] "Pod admission denied" podUID="6ef4bf0d-ae2e-4895-b809-4256cd9501a6" pod="tigera-operator/tigera-operator-747864d56d-85htt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:21.695715 containerd[1556]: time="2025-08-13T01:47:21.695666030Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7jrc,Uid:4296a7ed-e75a-4d74-935a-9017b9a86286,Namespace:calico-system,Attempt:0,}" Aug 13 01:47:21.775271 containerd[1556]: time="2025-08-13T01:47:21.772870762Z" level=error msg="Failed to destroy network for sandbox \"61b6b6b707c9c954ab448c3e0f9d612e07260afaa4871d83b6598537f41b04c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:21.775843 containerd[1556]: time="2025-08-13T01:47:21.775786576Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7jrc,Uid:4296a7ed-e75a-4d74-935a-9017b9a86286,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"61b6b6b707c9c954ab448c3e0f9d612e07260afaa4871d83b6598537f41b04c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:21.776416 systemd[1]: run-netns-cni\x2dd8de13e6\x2da2f2\x2d9c09\x2df705\x2d60b18c17dda8.mount: Deactivated successfully. 
Aug 13 01:47:21.779239 kubelet[2790]: E0813 01:47:21.778781 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61b6b6b707c9c954ab448c3e0f9d612e07260afaa4871d83b6598537f41b04c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:21.779239 kubelet[2790]: E0813 01:47:21.778880 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61b6b6b707c9c954ab448c3e0f9d612e07260afaa4871d83b6598537f41b04c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:47:21.779239 kubelet[2790]: E0813 01:47:21.778913 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61b6b6b707c9c954ab448c3e0f9d612e07260afaa4871d83b6598537f41b04c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:47:21.779239 kubelet[2790]: E0813 01:47:21.778976 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-c7jrc_calico-system(4296a7ed-e75a-4d74-935a-9017b9a86286)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-c7jrc_calico-system(4296a7ed-e75a-4d74-935a-9017b9a86286)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"61b6b6b707c9c954ab448c3e0f9d612e07260afaa4871d83b6598537f41b04c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-c7jrc" podUID="4296a7ed-e75a-4d74-935a-9017b9a86286" Aug 13 01:47:21.781916 kubelet[2790]: I0813 01:47:21.781640 2790 kubelet.go:2405] "Pod admission denied" podUID="b50dc6f0-9eef-49f1-aa71-cc21a2dbe2b3" pod="tigera-operator/tigera-operator-747864d56d-2vdgh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:21.870595 kubelet[2790]: I0813 01:47:21.870532 2790 kubelet.go:2405] "Pod admission denied" podUID="850a2554-5a8e-4b7e-a5b6-5817ee85dba6" pod="tigera-operator/tigera-operator-747864d56d-2zp7d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:21.978639 kubelet[2790]: I0813 01:47:21.978587 2790 kubelet.go:2405] "Pod admission denied" podUID="f9b65323-7b7d-4103-bb4a-4e7cafb138a3" pod="tigera-operator/tigera-operator-747864d56d-9xph6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:22.173457 kubelet[2790]: I0813 01:47:22.173386 2790 kubelet.go:2405] "Pod admission denied" podUID="fc085c44-8e8f-4ca6-8402-78b92f5acdfe" pod="tigera-operator/tigera-operator-747864d56d-p7twp" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:22.271983 kubelet[2790]: I0813 01:47:22.271841 2790 kubelet.go:2405] "Pod admission denied" podUID="b385e7cd-6410-4a7a-bb04-efe99c0e4fea" pod="tigera-operator/tigera-operator-747864d56d-d7l7c" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:22.378844 kubelet[2790]: I0813 01:47:22.378788 2790 kubelet.go:2405] "Pod admission denied" podUID="ccb06ef5-df4d-4e76-81cc-9ddf0df460be" pod="tigera-operator/tigera-operator-747864d56d-4h62w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:22.470657 kubelet[2790]: I0813 01:47:22.470600 2790 kubelet.go:2405] "Pod admission denied" podUID="733d5e20-24d7-4f10-b374-17f41f26cac5" pod="tigera-operator/tigera-operator-747864d56d-44jtk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:22.525805 kubelet[2790]: I0813 01:47:22.524450 2790 kubelet.go:2405] "Pod admission denied" podUID="13917493-beef-4e4f-a291-100086475ce4" pod="tigera-operator/tigera-operator-747864d56d-r5jf6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:22.623773 kubelet[2790]: I0813 01:47:22.623412 2790 kubelet.go:2405] "Pod admission denied" podUID="34f05a9b-c1f1-4ea0-a355-5c272cd1a6f7" pod="tigera-operator/tigera-operator-747864d56d-cwdsm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:22.719138 kubelet[2790]: I0813 01:47:22.719084 2790 kubelet.go:2405] "Pod admission denied" podUID="06d64f6e-f4d4-4f43-b0d7-8be373cdd9db" pod="tigera-operator/tigera-operator-747864d56d-m9c85" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:22.770758 kubelet[2790]: I0813 01:47:22.770682 2790 kubelet.go:2405] "Pod admission denied" podUID="e6dbe7c9-95e4-47f3-8bbc-7d3648eadd02" pod="tigera-operator/tigera-operator-747864d56d-f7xj5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:22.875224 kubelet[2790]: I0813 01:47:22.874496 2790 kubelet.go:2405] "Pod admission denied" podUID="83456061-66b6-44b7-863d-2b7ed71de91a" pod="tigera-operator/tigera-operator-747864d56d-7wp2s" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:23.071889 kubelet[2790]: I0813 01:47:23.071820 2790 kubelet.go:2405] "Pod admission denied" podUID="e50d62b9-8b03-4f11-bb40-30cc68a1e585" pod="tigera-operator/tigera-operator-747864d56d-54lpb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:23.172633 kubelet[2790]: I0813 01:47:23.172344 2790 kubelet.go:2405] "Pod admission denied" podUID="2722dd94-26dd-4468-89c3-f8c7afdbe6b1" pod="tigera-operator/tigera-operator-747864d56d-w9hqb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:23.275725 kubelet[2790]: I0813 01:47:23.275635 2790 kubelet.go:2405] "Pod admission denied" podUID="a67521f3-a39d-4374-ab8f-b79321449ba9" pod="tigera-operator/tigera-operator-747864d56d-tpg22" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:23.474235 kubelet[2790]: I0813 01:47:23.474151 2790 kubelet.go:2405] "Pod admission denied" podUID="c6da51d2-4291-4787-a7ec-7ff9cea1703b" pod="tigera-operator/tigera-operator-747864d56d-bjg6w" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:23.570106 kubelet[2790]: I0813 01:47:23.570046 2790 kubelet.go:2405] "Pod admission denied" podUID="159928e6-022c-45f6-b63e-cb3ad855d8c8" pod="tigera-operator/tigera-operator-747864d56d-kmr7p" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:23.626772 kubelet[2790]: I0813 01:47:23.626501 2790 kubelet.go:2405] "Pod admission denied" podUID="0d63270d-ba1f-49f9-ac41-0c039ae68f7a" pod="tigera-operator/tigera-operator-747864d56d-djbnz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:23.675265 kubelet[2790]: I0813 01:47:23.675220 2790 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:47:23.675439 kubelet[2790]: I0813 01:47:23.675309 2790 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:47:23.677329 kubelet[2790]: I0813 01:47:23.677283 2790 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:47:23.691372 kubelet[2790]: I0813 01:47:23.691324 2790 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:47:23.691523 kubelet[2790]: I0813 01:47:23.691483 2790 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-76ff444f8d-4xcg9","kube-system/coredns-674b8bbfcf-vtdcd","kube-system/coredns-674b8bbfcf-6rlkc","calico-system/calico-node-tsmrf","calico-system/csi-node-driver-c7jrc","calico-system/calico-typha-64bcb76cdd-m4xlg","kube-system/kube-controller-manager-172-232-7-67","kube-system/kube-proxy-mjdwx","kube-system/kube-apiserver-172-232-7-67","kube-system/kube-scheduler-172-232-7-67"] Aug 13 01:47:23.691596 kubelet[2790]: E0813 01:47:23.691583 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:47:23.691630 kubelet[2790]: E0813 01:47:23.691598 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:47:23.691630 kubelet[2790]: E0813 01:47:23.691609 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:47:23.691630 kubelet[2790]: E0813 01:47:23.691618 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-tsmrf" Aug 13 01:47:23.691630 kubelet[2790]: E0813 01:47:23.691627 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:47:23.691769 kubelet[2790]: E0813 01:47:23.691651 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-64bcb76cdd-m4xlg" Aug 13 01:47:23.691769 kubelet[2790]: E0813 01:47:23.691664 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-67" Aug 13 01:47:23.691769 kubelet[2790]: E0813 01:47:23.691675 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mjdwx" Aug 13 01:47:23.691769 kubelet[2790]: E0813 01:47:23.691686 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-67" Aug 13 01:47:23.691769 kubelet[2790]: E0813 01:47:23.691697 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-67" Aug 13 
01:47:23.691957 kubelet[2790]: I0813 01:47:23.691711 2790 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:47:23.720881 kubelet[2790]: I0813 01:47:23.720811 2790 kubelet.go:2405] "Pod admission denied" podUID="984b9277-b915-4a9d-b578-106854ac6140" pod="tigera-operator/tigera-operator-747864d56d-2xhwv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:23.927358 kubelet[2790]: I0813 01:47:23.926442 2790 kubelet.go:2405] "Pod admission denied" podUID="4ec785be-4857-4fb9-acca-06640686c16c" pod="tigera-operator/tigera-operator-747864d56d-kpncr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:24.033853 kubelet[2790]: I0813 01:47:24.033455 2790 kubelet.go:2405] "Pod admission denied" podUID="126b964a-734d-4d84-a17a-16e5e51d6108" pod="tigera-operator/tigera-operator-747864d56d-6qwmm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:24.077091 kubelet[2790]: I0813 01:47:24.077023 2790 kubelet.go:2405] "Pod admission denied" podUID="ae7542c6-7f95-4109-8cc8-8c4e8d1fa2f2" pod="tigera-operator/tigera-operator-747864d56d-6ncl2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:24.170899 kubelet[2790]: I0813 01:47:24.170834 2790 kubelet.go:2405] "Pod admission denied" podUID="1661b128-342d-4f74-8f9e-a79739ee86a6" pod="tigera-operator/tigera-operator-747864d56d-scgr9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:24.275773 kubelet[2790]: I0813 01:47:24.274969 2790 kubelet.go:2405] "Pod admission denied" podUID="42d98f17-0d99-4843-a5b5-032c320ab050" pod="tigera-operator/tigera-operator-747864d56d-6j2wf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:24.371962 kubelet[2790]: I0813 01:47:24.371899 2790 kubelet.go:2405] "Pod admission denied" podUID="7e31412c-7093-417a-b307-ba67dbd51c2f" pod="tigera-operator/tigera-operator-747864d56d-wp77g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:24.479771 kubelet[2790]: I0813 01:47:24.477420 2790 kubelet.go:2405] "Pod admission denied" podUID="3ed0be26-d085-4210-a106-4adafa317b53" pod="tigera-operator/tigera-operator-747864d56d-9r46d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:24.536837 kubelet[2790]: I0813 01:47:24.536507 2790 kubelet.go:2405] "Pod admission denied" podUID="20446f11-865d-4b98-b602-ccb875067772" pod="tigera-operator/tigera-operator-747864d56d-tjtnk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:24.622339 kubelet[2790]: I0813 01:47:24.622262 2790 kubelet.go:2405] "Pod admission denied" podUID="d9e5eb9a-c707-4448-8b7a-a9837a03f834" pod="tigera-operator/tigera-operator-747864d56d-z5r2f" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:24.696178 kubelet[2790]: E0813 01:47:24.695171 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:47:24.696633 kubelet[2790]: E0813 01:47:24.696403 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:47:24.698251 containerd[1556]: time="2025-08-13T01:47:24.698088247Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vtdcd,Uid:caa5836a-45f9-496b-86c1-95f6e1b6da17,Namespace:kube-system,Attempt:0,}" Aug 13 01:47:24.736301 kubelet[2790]: I0813 01:47:24.736239 2790 kubelet.go:2405] "Pod admission denied" podUID="a9bc280b-fa75-49a6-9a55-a1380e0efc30" pod="tigera-operator/tigera-operator-747864d56d-rddcm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:24.776771 containerd[1556]: time="2025-08-13T01:47:24.774489190Z" level=error msg="Failed to destroy network for sandbox \"dc118a31e058059bf3903022fb3e448ee461d628ce6462eef078b9fab93dfd1c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:24.777254 systemd[1]: run-netns-cni\x2d2a70e507\x2df13b\x2d0346\x2d7518\x2d6397ae9e3ab0.mount: Deactivated successfully. Aug 13 01:47:24.779112 containerd[1556]: time="2025-08-13T01:47:24.779000072Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vtdcd,Uid:caa5836a-45f9-496b-86c1-95f6e1b6da17,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc118a31e058059bf3903022fb3e448ee461d628ce6462eef078b9fab93dfd1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:24.779499 kubelet[2790]: E0813 01:47:24.779460 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc118a31e058059bf3903022fb3e448ee461d628ce6462eef078b9fab93dfd1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:24.779577 kubelet[2790]: E0813 01:47:24.779530 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc118a31e058059bf3903022fb3e448ee461d628ce6462eef078b9fab93dfd1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:47:24.779577 kubelet[2790]: E0813 01:47:24.779559 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dc118a31e058059bf3903022fb3e448ee461d628ce6462eef078b9fab93dfd1c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 
01:47:24.779646 kubelet[2790]: E0813 01:47:24.779612 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-vtdcd_kube-system(caa5836a-45f9-496b-86c1-95f6e1b6da17)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-vtdcd_kube-system(caa5836a-45f9-496b-86c1-95f6e1b6da17)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dc118a31e058059bf3903022fb3e448ee461d628ce6462eef078b9fab93dfd1c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-vtdcd" podUID="caa5836a-45f9-496b-86c1-95f6e1b6da17" Aug 13 01:47:24.827883 kubelet[2790]: I0813 01:47:24.827534 2790 kubelet.go:2405] "Pod admission denied" podUID="4d3bdda1-9170-4577-bdba-3e6b1aaf8721" pod="tigera-operator/tigera-operator-747864d56d-n7pcn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:24.941296 kubelet[2790]: I0813 01:47:24.941238 2790 kubelet.go:2405] "Pod admission denied" podUID="b48f91f9-c0b7-4b70-8fb2-9a5996618671" pod="tigera-operator/tigera-operator-747864d56d-qvcgj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:24.973040 kubelet[2790]: I0813 01:47:24.972973 2790 kubelet.go:2405] "Pod admission denied" podUID="ca4e5fb6-0378-4c99-9e06-2bd882522e1d" pod="tigera-operator/tigera-operator-747864d56d-xcm7n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:25.076961 kubelet[2790]: I0813 01:47:25.076111 2790 kubelet.go:2405] "Pod admission denied" podUID="c228f8da-4313-45c8-989d-2f682ececa86" pod="tigera-operator/tigera-operator-747864d56d-gvtcp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:25.172627 kubelet[2790]: I0813 01:47:25.172472 2790 kubelet.go:2405] "Pod admission denied" podUID="5926b208-ab87-445c-a860-25e5d7ce07f9" pod="tigera-operator/tigera-operator-747864d56d-jpfgl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:25.271768 kubelet[2790]: I0813 01:47:25.271685 2790 kubelet.go:2405] "Pod admission denied" podUID="711d3501-6583-4642-b4ee-f89c7b3ff434" pod="tigera-operator/tigera-operator-747864d56d-dz4rt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:25.376829 kubelet[2790]: I0813 01:47:25.376756 2790 kubelet.go:2405] "Pod admission denied" podUID="41822247-a5f7-478f-aef3-c9236ee94ed7" pod="tigera-operator/tigera-operator-747864d56d-nqqgc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:25.432509 kubelet[2790]: I0813 01:47:25.431909 2790 kubelet.go:2405] "Pod admission denied" podUID="d40c2824-9379-4d0a-8a14-1acb6ddaf5e4" pod="tigera-operator/tigera-operator-747864d56d-7fpq2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:25.527342 kubelet[2790]: I0813 01:47:25.527281 2790 kubelet.go:2405] "Pod admission denied" podUID="cba00e9a-610b-4046-804c-e59be44e01f4" pod="tigera-operator/tigera-operator-747864d56d-stbxz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:25.626357 kubelet[2790]: I0813 01:47:25.626297 2790 kubelet.go:2405] "Pod admission denied" podUID="ceb62eda-854c-4c37-af74-08c1e8f4f6f4" pod="tigera-operator/tigera-operator-747864d56d-gbrqk" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:25.679307 kubelet[2790]: I0813 01:47:25.679256 2790 kubelet.go:2405] "Pod admission denied" podUID="cd50815b-8eea-4c63-9afc-2423fe33d48a" pod="tigera-operator/tigera-operator-747864d56d-9qwft" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:25.775241 kubelet[2790]: I0813 01:47:25.775180 2790 kubelet.go:2405] "Pod admission denied" podUID="4bd7d59f-cbdf-4204-bb28-067775d1b9c5" pod="tigera-operator/tigera-operator-747864d56d-sdcnp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:25.870797 kubelet[2790]: I0813 01:47:25.870728 2790 kubelet.go:2405] "Pod admission denied" podUID="30e17383-9c5e-4265-a645-48a733328a03" pod="tigera-operator/tigera-operator-747864d56d-swdp6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:25.970532 kubelet[2790]: I0813 01:47:25.970470 2790 kubelet.go:2405] "Pod admission denied" podUID="9dd490e4-0e3c-4b8c-a74a-225b405aa5ed" pod="tigera-operator/tigera-operator-747864d56d-6znpz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:26.172527 kubelet[2790]: I0813 01:47:26.172352 2790 kubelet.go:2405] "Pod admission denied" podUID="bb50e63b-83ce-4a8b-a969-9239c4fdbf38" pod="tigera-operator/tigera-operator-747864d56d-hf2gs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:26.270368 kubelet[2790]: I0813 01:47:26.270314 2790 kubelet.go:2405] "Pod admission denied" podUID="5a6f97ef-5d00-412e-a59f-50b4287c758d" pod="tigera-operator/tigera-operator-747864d56d-cnc4p" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:26.372224 kubelet[2790]: I0813 01:47:26.372158 2790 kubelet.go:2405] "Pod admission denied" podUID="664d19e1-402f-42c1-ad17-af4be45fc97b" pod="tigera-operator/tigera-operator-747864d56d-jz2nf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:26.475201 kubelet[2790]: I0813 01:47:26.475142 2790 kubelet.go:2405] "Pod admission denied" podUID="990ceafa-556d-48bb-98d4-5474e5d7a22f" pod="tigera-operator/tigera-operator-747864d56d-m4768" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:26.522511 kubelet[2790]: I0813 01:47:26.522438 2790 kubelet.go:2405] "Pod admission denied" podUID="402a2beb-1028-44eb-8d30-db4b4f08a7b7" pod="tigera-operator/tigera-operator-747864d56d-pgpbk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:26.622359 kubelet[2790]: I0813 01:47:26.622304 2790 kubelet.go:2405] "Pod admission denied" podUID="d5e570a3-61a8-4250-8cbd-06fc10886536" pod="tigera-operator/tigera-operator-747864d56d-whr2b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:26.721565 kubelet[2790]: I0813 01:47:26.721517 2790 kubelet.go:2405] "Pod admission denied" podUID="3d696540-5494-4452-8a34-205f87e1a240" pod="tigera-operator/tigera-operator-747864d56d-j8w9g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:26.771270 kubelet[2790]: I0813 01:47:26.771127 2790 kubelet.go:2405] "Pod admission denied" podUID="1a0df228-294a-4b49-8bbc-6f37c5e05cd9" pod="tigera-operator/tigera-operator-747864d56d-b9dp5" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:26.870207 kubelet[2790]: I0813 01:47:26.870147 2790 kubelet.go:2405] "Pod admission denied" podUID="5b52faeb-a5aa-418c-ab80-d6f9fc2f31ad" pod="tigera-operator/tigera-operator-747864d56d-nkks6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:27.071964 kubelet[2790]: I0813 01:47:27.071593 2790 kubelet.go:2405] "Pod admission denied" podUID="49fe02ee-875b-4457-8984-b2cc3d388401" pod="tigera-operator/tigera-operator-747864d56d-78v5w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:27.171210 kubelet[2790]: I0813 01:47:27.171149 2790 kubelet.go:2405] "Pod admission denied" podUID="ec3cbea0-404d-4081-acf2-cbf90f2d4c95" pod="tigera-operator/tigera-operator-747864d56d-rqtrv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:27.273431 kubelet[2790]: I0813 01:47:27.273358 2790 kubelet.go:2405] "Pod admission denied" podUID="14269af4-2ecc-45f3-8abe-e38b3b68fb61" pod="tigera-operator/tigera-operator-747864d56d-btnw8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:27.374614 kubelet[2790]: I0813 01:47:27.373932 2790 kubelet.go:2405] "Pod admission denied" podUID="42a13953-7af6-42ba-b712-3d790143eaae" pod="tigera-operator/tigera-operator-747864d56d-lk6fp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:27.471162 kubelet[2790]: I0813 01:47:27.471092 2790 kubelet.go:2405] "Pod admission denied" podUID="d1c4715b-4b36-4106-bd48-8c2442f8e73b" pod="tigera-operator/tigera-operator-747864d56d-5fwxp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:27.573363 kubelet[2790]: I0813 01:47:27.573298 2790 kubelet.go:2405] "Pod admission denied" podUID="14832843-ce1a-41e2-9cca-673f003221f7" pod="tigera-operator/tigera-operator-747864d56d-9qcwv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:27.672831 kubelet[2790]: I0813 01:47:27.672652 2790 kubelet.go:2405] "Pod admission denied" podUID="b3251a90-1f4d-4621-bbac-13e6e3065e4f" pod="tigera-operator/tigera-operator-747864d56d-qsfmq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:27.694116 kubelet[2790]: E0813 01:47:27.694072 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:47:27.694678 containerd[1556]: time="2025-08-13T01:47:27.694636707Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6rlkc,Uid:21a6ba02-58d5-43c1-a7de-9e24560a65f6,Namespace:kube-system,Attempt:0,}" Aug 13 01:47:27.761436 containerd[1556]: time="2025-08-13T01:47:27.761379647Z" level=error msg="Failed to destroy network for sandbox \"b891d0db60b932b8dde112f7b13963e10fde5d55b8014a2fb371f81df2a7bc44\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:27.763910 systemd[1]: run-netns-cni\x2dd8a4942b\x2de431\x2dda5c\x2d0111\x2d496d0d9a206c.mount: Deactivated successfully. 
Aug 13 01:47:27.766201 containerd[1556]: time="2025-08-13T01:47:27.766105518Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6rlkc,Uid:21a6ba02-58d5-43c1-a7de-9e24560a65f6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b891d0db60b932b8dde112f7b13963e10fde5d55b8014a2fb371f81df2a7bc44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:27.767243 kubelet[2790]: E0813 01:47:27.766420 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b891d0db60b932b8dde112f7b13963e10fde5d55b8014a2fb371f81df2a7bc44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:27.767243 kubelet[2790]: E0813 01:47:27.766582 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b891d0db60b932b8dde112f7b13963e10fde5d55b8014a2fb371f81df2a7bc44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:47:27.767243 kubelet[2790]: E0813 01:47:27.766612 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b891d0db60b932b8dde112f7b13963e10fde5d55b8014a2fb371f81df2a7bc44\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:47:27.767243 kubelet[2790]: E0813 01:47:27.766673 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-6rlkc_kube-system(21a6ba02-58d5-43c1-a7de-9e24560a65f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-6rlkc_kube-system(21a6ba02-58d5-43c1-a7de-9e24560a65f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b891d0db60b932b8dde112f7b13963e10fde5d55b8014a2fb371f81df2a7bc44\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6rlkc" podUID="21a6ba02-58d5-43c1-a7de-9e24560a65f6" Aug 13 01:47:27.781978 kubelet[2790]: I0813 01:47:27.781910 2790 kubelet.go:2405] "Pod admission denied" podUID="f8e5c980-e6fb-476d-b7c5-f697541a8ff2" pod="tigera-operator/tigera-operator-747864d56d-z6w5f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:27.872113 kubelet[2790]: I0813 01:47:27.872046 2790 kubelet.go:2405] "Pod admission denied" podUID="98e8d714-231f-4292-af2e-bc0e9b4d25e5" pod="tigera-operator/tigera-operator-747864d56d-6zjkv" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:28.074154 kubelet[2790]: I0813 01:47:28.074089 2790 kubelet.go:2405] "Pod admission denied" podUID="3d36c4e8-840d-4f8e-84e2-10e7a3893923" pod="tigera-operator/tigera-operator-747864d56d-8kgq8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:28.173833 kubelet[2790]: I0813 01:47:28.173692 2790 kubelet.go:2405] "Pod admission denied" podUID="8462ae8b-3ac0-4d3f-9775-345f9ca1ad9b" pod="tigera-operator/tigera-operator-747864d56d-vjm87" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:28.270518 kubelet[2790]: I0813 01:47:28.270450 2790 kubelet.go:2405] "Pod admission denied" podUID="d0643810-2013-4090-9f41-14b74b42eed4" pod="tigera-operator/tigera-operator-747864d56d-wx9lp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:28.373344 kubelet[2790]: I0813 01:47:28.373006 2790 kubelet.go:2405] "Pod admission denied" podUID="56bed33e-a7d1-4907-bc6c-c92e18fd61b6" pod="tigera-operator/tigera-operator-747864d56d-d6fhq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:28.474677 kubelet[2790]: I0813 01:47:28.474610 2790 kubelet.go:2405] "Pod admission denied" podUID="9594ed39-d454-47b4-9f92-7e79be090e44" pod="tigera-operator/tigera-operator-747864d56d-44fvt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:28.575423 kubelet[2790]: I0813 01:47:28.575321 2790 kubelet.go:2405] "Pod admission denied" podUID="5858b5b5-577f-4217-a05e-0af96294aa0a" pod="tigera-operator/tigera-operator-747864d56d-vm5vr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:28.674410 kubelet[2790]: I0813 01:47:28.673709 2790 kubelet.go:2405] "Pod admission denied" podUID="4362e467-14b7-46da-b3db-aa5e6fc1c12f" pod="tigera-operator/tigera-operator-747864d56d-bl6qr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:28.872069 kubelet[2790]: I0813 01:47:28.872020 2790 kubelet.go:2405] "Pod admission denied" podUID="3605e08d-75d2-46ff-9f3b-46134956c287" pod="tigera-operator/tigera-operator-747864d56d-zg6xk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:28.974014 kubelet[2790]: I0813 01:47:28.973951 2790 kubelet.go:2405] "Pod admission denied" podUID="17a15b2b-2b9e-4daf-a137-e089d8f3feb0" pod="tigera-operator/tigera-operator-747864d56d-dxfvg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:29.073888 kubelet[2790]: I0813 01:47:29.073823 2790 kubelet.go:2405] "Pod admission denied" podUID="f7793066-58d9-4c01-897a-cec5ea964727" pod="tigera-operator/tigera-operator-747864d56d-dr955" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:29.171337 kubelet[2790]: I0813 01:47:29.171276 2790 kubelet.go:2405] "Pod admission denied" podUID="77ce9321-2c1a-424c-bbd0-70d877bcda90" pod="tigera-operator/tigera-operator-747864d56d-bp5hk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:29.273961 kubelet[2790]: I0813 01:47:29.273213 2790 kubelet.go:2405] "Pod admission denied" podUID="c01387e7-29d1-47ff-8625-78014ec7b410" pod="tigera-operator/tigera-operator-747864d56d-zrr9t" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:29.472681 kubelet[2790]: I0813 01:47:29.472602 2790 kubelet.go:2405] "Pod admission denied" podUID="36db3515-7b8e-4acf-bffd-45781d2a7af8" pod="tigera-operator/tigera-operator-747864d56d-57z69" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:29.572061 kubelet[2790]: I0813 01:47:29.571893 2790 kubelet.go:2405] "Pod admission denied" podUID="36aa0114-3902-480b-ba11-b921d75f783b" pod="tigera-operator/tigera-operator-747864d56d-8ws9m" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:29.623029 kubelet[2790]: I0813 01:47:29.622953 2790 kubelet.go:2405] "Pod admission denied" podUID="f8ce778a-9922-4fdd-bdee-d7fca72b8b46" pod="tigera-operator/tigera-operator-747864d56d-687rf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:29.694539 kubelet[2790]: E0813 01:47:29.694495 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:47:29.720992 kubelet[2790]: I0813 01:47:29.720920 2790 kubelet.go:2405] "Pod admission denied" podUID="c9dc198d-b585-49b2-9943-b30119e23a35" pod="tigera-operator/tigera-operator-747864d56d-jljnm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:29.819451 kubelet[2790]: I0813 01:47:29.819389 2790 kubelet.go:2405] "Pod admission denied" podUID="72ca5942-75e6-4ba9-8ea5-acb98fd791d0" pod="tigera-operator/tigera-operator-747864d56d-k7k8b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:29.921838 kubelet[2790]: I0813 01:47:29.921653 2790 kubelet.go:2405] "Pod admission denied" podUID="50f6e02a-4e4c-45a4-971b-03603eb7cc40" pod="tigera-operator/tigera-operator-747864d56d-d2p72" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:30.122960 kubelet[2790]: I0813 01:47:30.122899 2790 kubelet.go:2405] "Pod admission denied" podUID="80016966-f493-4e2a-8e86-6ae06dd6bc07" pod="tigera-operator/tigera-operator-747864d56d-dn2ll" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:30.223957 kubelet[2790]: I0813 01:47:30.223907 2790 kubelet.go:2405] "Pod admission denied" podUID="a1786fdd-33db-4fd0-a3d8-bb620a778a07" pod="tigera-operator/tigera-operator-747864d56d-bzkwq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:30.322281 kubelet[2790]: I0813 01:47:30.322214 2790 kubelet.go:2405] "Pod admission denied" podUID="1ee6ef09-c419-4733-8e93-04681d5425bc" pod="tigera-operator/tigera-operator-747864d56d-4hd7s" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:30.527720 kubelet[2790]: I0813 01:47:30.527547 2790 kubelet.go:2405] "Pod admission denied" podUID="b6ca2cb0-356f-4426-989f-9d95e4d6a729" pod="tigera-operator/tigera-operator-747864d56d-9wx54" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:30.622131 kubelet[2790]: I0813 01:47:30.622071 2790 kubelet.go:2405] "Pod admission denied" podUID="50d3c265-412c-4e0f-8b42-a4557eea712f" pod="tigera-operator/tigera-operator-747864d56d-hscn6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:30.672135 kubelet[2790]: I0813 01:47:30.672067 2790 kubelet.go:2405] "Pod admission denied" podUID="352348e3-ced0-4375-a3fd-b514617966ed" pod="tigera-operator/tigera-operator-747864d56d-c8smt" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:30.776767 kubelet[2790]: I0813 01:47:30.776382 2790 kubelet.go:2405] "Pod admission denied" podUID="b5ad36e9-f9a0-407e-b84a-3376ebc4e7ca" pod="tigera-operator/tigera-operator-747864d56d-nflgt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:30.872578 kubelet[2790]: I0813 01:47:30.872262 2790 kubelet.go:2405] "Pod admission denied" podUID="2429a9ed-7476-4089-90c1-d92c14fcef88" pod="tigera-operator/tigera-operator-747864d56d-dzvgl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:30.974443 kubelet[2790]: I0813 01:47:30.974370 2790 kubelet.go:2405] "Pod admission denied" podUID="8e65b41e-0648-4d66-b7d1-6899866f3204" pod="tigera-operator/tigera-operator-747864d56d-r8hll" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:31.072049 kubelet[2790]: I0813 01:47:31.071998 2790 kubelet.go:2405] "Pod admission denied" podUID="c94e1c2f-933e-46c4-be04-768d684641da" pod="tigera-operator/tigera-operator-747864d56d-ql9dh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:31.173088 kubelet[2790]: I0813 01:47:31.172954 2790 kubelet.go:2405] "Pod admission denied" podUID="a2226bd1-3b8c-48bf-9eac-b0ad5f2a2625" pod="tigera-operator/tigera-operator-747864d56d-gnkxt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:31.376046 kubelet[2790]: I0813 01:47:31.374624 2790 kubelet.go:2405] "Pod admission denied" podUID="38a3a21f-bb29-4c81-bf4b-c0e27677badb" pod="tigera-operator/tigera-operator-747864d56d-pwcbl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:31.473611 kubelet[2790]: I0813 01:47:31.473554 2790 kubelet.go:2405] "Pod admission denied" podUID="cd41a5a8-0451-4cc2-ae6e-defbed2f1bf4" pod="tigera-operator/tigera-operator-747864d56d-4tfs4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:31.524502 kubelet[2790]: I0813 01:47:31.524436 2790 kubelet.go:2405] "Pod admission denied" podUID="bd7ff5bb-2d86-4cc7-8afe-c3f48f2a8f3c" pod="tigera-operator/tigera-operator-747864d56d-68gf8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:31.630906 kubelet[2790]: I0813 01:47:31.630379 2790 kubelet.go:2405] "Pod admission denied" podUID="331727f6-c1aa-4450-b1ee-724d635f27ba" pod="tigera-operator/tigera-operator-747864d56d-vkq5s" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:31.720491 kubelet[2790]: I0813 01:47:31.720429 2790 kubelet.go:2405] "Pod admission denied" podUID="4e6beee5-199c-4269-bef7-3c0bc4047141" pod="tigera-operator/tigera-operator-747864d56d-kpllw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:31.822525 kubelet[2790]: I0813 01:47:31.822365 2790 kubelet.go:2405] "Pod admission denied" podUID="8ae62d37-a1bb-41bc-800c-4305a9dce38d" pod="tigera-operator/tigera-operator-747864d56d-dd2xw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:31.928490 kubelet[2790]: I0813 01:47:31.928421 2790 kubelet.go:2405] "Pod admission denied" podUID="2cac1443-e10f-434b-b9f3-696ac6e8d42a" pod="tigera-operator/tigera-operator-747864d56d-hdglg" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:32.021739 kubelet[2790]: I0813 01:47:32.021684 2790 kubelet.go:2405] "Pod admission denied" podUID="be7c29f6-ac80-4e77-b106-eb5f113cda7e" pod="tigera-operator/tigera-operator-747864d56d-498w6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:32.129275 kubelet[2790]: I0813 01:47:32.127821 2790 kubelet.go:2405] "Pod admission denied" podUID="c0a25cc6-221c-4a6b-9107-4b46fdfddf36" pod="tigera-operator/tigera-operator-747864d56d-7g9dm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:32.219794 kubelet[2790]: I0813 01:47:32.219719 2790 kubelet.go:2405] "Pod admission denied" podUID="85e41d2c-eee2-49ca-b77a-a74383ec4510" pod="tigera-operator/tigera-operator-747864d56d-n7t5v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:32.428180 kubelet[2790]: I0813 01:47:32.427783 2790 kubelet.go:2405] "Pod admission denied" podUID="2f6b1b5f-cbe6-4565-9f68-e73afa0c5fe4" pod="tigera-operator/tigera-operator-747864d56d-2fzfm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:32.526071 kubelet[2790]: I0813 01:47:32.525736 2790 kubelet.go:2405] "Pod admission denied" podUID="10f2a17c-c5f1-401f-a00b-2a4f8b048a47" pod="tigera-operator/tigera-operator-747864d56d-72sl5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:32.583026 kubelet[2790]: I0813 01:47:32.582959 2790 kubelet.go:2405] "Pod admission denied" podUID="2263567d-6bd1-4f86-86d3-990e1e83cf1d" pod="tigera-operator/tigera-operator-747864d56d-6gl5s" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:32.681633 kubelet[2790]: I0813 01:47:32.681433 2790 kubelet.go:2405] "Pod admission denied" podUID="431a61a1-5708-48a2-afbb-cf225b03c119" pod="tigera-operator/tigera-operator-747864d56d-8s687" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:32.779082 kubelet[2790]: I0813 01:47:32.779007 2790 kubelet.go:2405] "Pod admission denied" podUID="0a663afc-1487-4d4c-b329-fba3cdaad523" pod="tigera-operator/tigera-operator-747864d56d-5ccxl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:32.897383 kubelet[2790]: I0813 01:47:32.897318 2790 kubelet.go:2405] "Pod admission denied" podUID="2fa5426f-9f7b-44d9-a0f1-4bd512391787" pod="tigera-operator/tigera-operator-747864d56d-h8jhq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:32.984249 kubelet[2790]: I0813 01:47:32.984185 2790 kubelet.go:2405] "Pod admission denied" podUID="7f7c2c23-04c3-43c7-8b8b-8b1ce827e1d9" pod="tigera-operator/tigera-operator-747864d56d-nprqz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:33.075445 kubelet[2790]: I0813 01:47:33.075371 2790 kubelet.go:2405] "Pod admission denied" podUID="355ce0c0-9a2e-4900-bee4-d00ad1fa4d84" pod="tigera-operator/tigera-operator-747864d56d-2xrdc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:33.273495 kubelet[2790]: I0813 01:47:33.273124 2790 kubelet.go:2405] "Pod admission denied" podUID="82745b58-3fc6-471e-aa5d-5687564a2e2e" pod="tigera-operator/tigera-operator-747864d56d-pkmmb" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:33.376135 kubelet[2790]: I0813 01:47:33.376068 2790 kubelet.go:2405] "Pod admission denied" podUID="07ab3e3e-0924-4ba8-b15b-bb47d7e5e63a" pod="tigera-operator/tigera-operator-747864d56d-z7cxq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:33.475556 kubelet[2790]: I0813 01:47:33.475490 2790 kubelet.go:2405] "Pod admission denied" podUID="79851d7a-fd0d-4eba-ae76-6a8437bda8bb" pod="tigera-operator/tigera-operator-747864d56d-mbqmd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:33.573481 kubelet[2790]: I0813 01:47:33.573329 2790 kubelet.go:2405] "Pod admission denied" podUID="38f79d0a-6e9e-4ad5-855e-c6ef0ed592d3" pod="tigera-operator/tigera-operator-747864d56d-pnhdk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:33.673414 kubelet[2790]: I0813 01:47:33.673346 2790 kubelet.go:2405] "Pod admission denied" podUID="6fef90df-2ef5-4908-8af5-e90535bb2fb4" pod="tigera-operator/tigera-operator-747864d56d-krj6m" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:33.695770 containerd[1556]: time="2025-08-13T01:47:33.694639761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76ff444f8d-4xcg9,Uid:f88563f6-5704-426b-aecc-303b3869ce30,Namespace:calico-system,Attempt:0,}" Aug 13 01:47:33.695770 containerd[1556]: time="2025-08-13T01:47:33.695286130Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7jrc,Uid:4296a7ed-e75a-4d74-935a-9017b9a86286,Namespace:calico-system,Attempt:0,}" Aug 13 01:47:33.742066 kubelet[2790]: I0813 01:47:33.742029 2790 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:47:33.742066 kubelet[2790]: I0813 01:47:33.742084 2790 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:47:33.746998 kubelet[2790]: I0813 01:47:33.746947 2790 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:47:33.784501 kubelet[2790]: I0813 01:47:33.784455 2790 kubelet.go:2405] "Pod admission denied" podUID="912c035d-b8b0-4bca-bd6e-937f37af4b8f" pod="tigera-operator/tigera-operator-747864d56d-t9m6g" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:33.785412 kubelet[2790]: I0813 01:47:33.785387 2790 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:47:33.785499 kubelet[2790]: I0813 01:47:33.785464 2790 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-674b8bbfcf-6rlkc","calico-system/calico-kube-controllers-76ff444f8d-4xcg9","kube-system/coredns-674b8bbfcf-vtdcd","calico-system/calico-node-tsmrf","calico-system/csi-node-driver-c7jrc","calico-system/calico-typha-64bcb76cdd-m4xlg","kube-system/kube-controller-manager-172-232-7-67","kube-system/kube-proxy-mjdwx","kube-system/kube-apiserver-172-232-7-67","kube-system/kube-scheduler-172-232-7-67"] Aug 13 01:47:33.785499 kubelet[2790]: E0813 01:47:33.785491 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:47:33.785871 kubelet[2790]: E0813 01:47:33.785501 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:47:33.785871 kubelet[2790]: E0813 01:47:33.785509 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:47:33.785871 kubelet[2790]: E0813 01:47:33.785515 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-tsmrf" Aug 13 01:47:33.785871 kubelet[2790]: E0813 01:47:33.785521 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:47:33.785871 kubelet[2790]: E0813 01:47:33.785529 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-64bcb76cdd-m4xlg" Aug 13 01:47:33.785871 kubelet[2790]: E0813 01:47:33.785537 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-67" Aug 13 01:47:33.785871 kubelet[2790]: E0813 01:47:33.785544 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mjdwx" Aug 13 01:47:33.785871 kubelet[2790]: E0813 01:47:33.785554 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-67" Aug 13 01:47:33.785871 kubelet[2790]: E0813 01:47:33.785562 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-67" Aug 13 01:47:33.785871 kubelet[2790]: I0813 01:47:33.785572 2790 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:47:33.820162 containerd[1556]: time="2025-08-13T01:47:33.819999532Z" level=error msg="Failed to destroy network for sandbox \"c98f471572224a3736c0ec867f3ed925f5a5d9f6b6a6ceeb2f3a68d2e64125cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:33.827579 containerd[1556]: time="2025-08-13T01:47:33.824229085Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7jrc,Uid:4296a7ed-e75a-4d74-935a-9017b9a86286,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c98f471572224a3736c0ec867f3ed925f5a5d9f6b6a6ceeb2f3a68d2e64125cc\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:33.828944 kubelet[2790]: E0813 01:47:33.826492 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c98f471572224a3736c0ec867f3ed925f5a5d9f6b6a6ceeb2f3a68d2e64125cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:33.828944 kubelet[2790]: E0813 01:47:33.826802 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c98f471572224a3736c0ec867f3ed925f5a5d9f6b6a6ceeb2f3a68d2e64125cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:47:33.828944 kubelet[2790]: E0813 01:47:33.826895 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c98f471572224a3736c0ec867f3ed925f5a5d9f6b6a6ceeb2f3a68d2e64125cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:47:33.828944 kubelet[2790]: E0813 01:47:33.826980 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-c7jrc_calico-system(4296a7ed-e75a-4d74-935a-9017b9a86286)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-c7jrc_calico-system(4296a7ed-e75a-4d74-935a-9017b9a86286)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c98f471572224a3736c0ec867f3ed925f5a5d9f6b6a6ceeb2f3a68d2e64125cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-c7jrc" podUID="4296a7ed-e75a-4d74-935a-9017b9a86286" Aug 13 01:47:33.825242 systemd[1]: run-netns-cni\x2d9c7ceef9\x2ddace\x2d5c5a\x2d1e08\x2d17cc604865f7.mount: Deactivated successfully. 
Aug 13 01:47:33.843362 containerd[1556]: time="2025-08-13T01:47:33.843307935Z" level=error msg="Failed to destroy network for sandbox \"a7692c90cc3da2140d39c9ec88e599be29cdb89a65ff15a39783eadc366e9a91\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:33.846815 containerd[1556]: time="2025-08-13T01:47:33.845794061Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76ff444f8d-4xcg9,Uid:f88563f6-5704-426b-aecc-303b3869ce30,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7692c90cc3da2140d39c9ec88e599be29cdb89a65ff15a39783eadc366e9a91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:33.846686 systemd[1]: run-netns-cni\x2dd3a9d8ce\x2d2d49\x2da229\x2de6f7\x2d07e7dfc9f981.mount: Deactivated successfully. Aug 13 01:47:33.847578 kubelet[2790]: E0813 01:47:33.847356 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7692c90cc3da2140d39c9ec88e599be29cdb89a65ff15a39783eadc366e9a91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:33.847578 kubelet[2790]: E0813 01:47:33.847435 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7692c90cc3da2140d39c9ec88e599be29cdb89a65ff15a39783eadc366e9a91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:47:33.847578 kubelet[2790]: E0813 01:47:33.847466 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a7692c90cc3da2140d39c9ec88e599be29cdb89a65ff15a39783eadc366e9a91\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:47:33.849448 kubelet[2790]: E0813 01:47:33.848980 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76ff444f8d-4xcg9_calico-system(f88563f6-5704-426b-aecc-303b3869ce30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76ff444f8d-4xcg9_calico-system(f88563f6-5704-426b-aecc-303b3869ce30)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a7692c90cc3da2140d39c9ec88e599be29cdb89a65ff15a39783eadc366e9a91\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" podUID="f88563f6-5704-426b-aecc-303b3869ce30" Aug 13 01:47:33.876882 kubelet[2790]: I0813 01:47:33.876812 2790 kubelet.go:2405] "Pod admission denied" 
podUID="aa8b922b-7011-4eb0-8d16-a12c76e39c6d" pod="tigera-operator/tigera-operator-747864d56d-bk44m" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:34.074312 kubelet[2790]: I0813 01:47:34.074232 2790 kubelet.go:2405] "Pod admission denied" podUID="37de8a01-25b6-4b9e-8180-83a96e11d3a0" pod="tigera-operator/tigera-operator-747864d56d-9hk4t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:34.173677 kubelet[2790]: I0813 01:47:34.173519 2790 kubelet.go:2405] "Pod admission denied" podUID="13405e82-05eb-4504-bf58-7b7c965d4364" pod="tigera-operator/tigera-operator-747864d56d-jgbgx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:34.293778 kubelet[2790]: I0813 01:47:34.292123 2790 kubelet.go:2405] "Pod admission denied" podUID="28f85721-793f-4bf8-99ea-f730d06431cf" pod="tigera-operator/tigera-operator-747864d56d-vskb6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:34.376992 kubelet[2790]: I0813 01:47:34.376920 2790 kubelet.go:2405] "Pod admission denied" podUID="1eaf54a6-d1d5-48b8-acd9-dae0fdab7cee" pod="tigera-operator/tigera-operator-747864d56d-hwmnl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:34.471403 kubelet[2790]: I0813 01:47:34.471327 2790 kubelet.go:2405] "Pod admission denied" podUID="b2eaa350-a91d-4c35-9b37-270a2b9f9dcf" pod="tigera-operator/tigera-operator-747864d56d-7v2dr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:34.684431 kubelet[2790]: I0813 01:47:34.683796 2790 kubelet.go:2405] "Pod admission denied" podUID="cc19fc40-1092-416f-8d4e-cfc24ad37031" pod="tigera-operator/tigera-operator-747864d56d-4zv5m" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:34.696561 kubelet[2790]: E0813 01:47:34.696519 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:47:34.698257 kubelet[2790]: E0813 01:47:34.697964 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2566183075: write /var/lib/containerd/tmpmounts/containerd-mount2566183075/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-tsmrf" podUID="517ffc51-1a34-4ced-acf5-d8e5da6a1838" Aug 13 01:47:34.775259 kubelet[2790]: I0813 01:47:34.774341 2790 kubelet.go:2405] "Pod admission denied" podUID="fa420bc6-9446-472f-9359-334f0335ef32" pod="tigera-operator/tigera-operator-747864d56d-ljv7g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:34.884639 kubelet[2790]: I0813 01:47:34.883956 2790 kubelet.go:2405] "Pod admission denied" podUID="09ce7d14-32be-4219-a222-9d4420b14d76" pod="tigera-operator/tigera-operator-747864d56d-fnjkz" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:35.078597 kubelet[2790]: I0813 01:47:35.077925 2790 kubelet.go:2405] "Pod admission denied" podUID="34781fe4-44a6-4abd-b91f-52bbebcdb9f2" pod="tigera-operator/tigera-operator-747864d56d-j86zn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:35.173725 kubelet[2790]: I0813 01:47:35.173652 2790 kubelet.go:2405] "Pod admission denied" podUID="4f12612a-8ca2-4d5a-8481-c67c2cc78f45" pod="tigera-operator/tigera-operator-747864d56d-tm9f4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:35.278944 kubelet[2790]: I0813 01:47:35.278874 2790 kubelet.go:2405] "Pod admission denied" podUID="cab0309f-41e7-47d9-96ec-ffa0b3a3621c" pod="tigera-operator/tigera-operator-747864d56d-4cs6d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:35.482352 kubelet[2790]: I0813 01:47:35.482257 2790 kubelet.go:2405] "Pod admission denied" podUID="69e45a5b-074c-4e2f-95fc-18972c798978" pod="tigera-operator/tigera-operator-747864d56d-kfwjl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:35.592771 kubelet[2790]: I0813 01:47:35.592659 2790 kubelet.go:2405] "Pod admission denied" podUID="2f4be6a6-1790-490f-8743-2720511fbc42" pod="tigera-operator/tigera-operator-747864d56d-rn4vx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:35.673812 kubelet[2790]: I0813 01:47:35.673727 2790 kubelet.go:2405] "Pod admission denied" podUID="35e6ff81-3fb7-4e97-9025-1639710bdd79" pod="tigera-operator/tigera-operator-747864d56d-cw5kp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:35.775586 kubelet[2790]: I0813 01:47:35.775390 2790 kubelet.go:2405] "Pod admission denied" podUID="2c6d1abd-0fe5-43b4-a3a2-71e5914606ec" pod="tigera-operator/tigera-operator-747864d56d-kng4r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:35.874657 kubelet[2790]: I0813 01:47:35.874600 2790 kubelet.go:2405] "Pod admission denied" podUID="6a7aee1f-1b13-4e69-bf41-1e7081398add" pod="tigera-operator/tigera-operator-747864d56d-8mfc6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:35.973770 kubelet[2790]: I0813 01:47:35.973703 2790 kubelet.go:2405] "Pod admission denied" podUID="be66ef49-23db-40e2-82c4-7e7034b3c872" pod="tigera-operator/tigera-operator-747864d56d-pvbhs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:36.087305 kubelet[2790]: I0813 01:47:36.085496 2790 kubelet.go:2405] "Pod admission denied" podUID="d2f884f5-332d-4bc4-bc26-7a7567decf92" pod="tigera-operator/tigera-operator-747864d56d-4ds28" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:36.173982 kubelet[2790]: I0813 01:47:36.173920 2790 kubelet.go:2405] "Pod admission denied" podUID="88d7ceb6-86fc-4ad2-95a2-91a70e7d3575" pod="tigera-operator/tigera-operator-747864d56d-5cw8s" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:36.388468 kubelet[2790]: I0813 01:47:36.388320 2790 kubelet.go:2405] "Pod admission denied" podUID="1a4e2001-aec6-405a-8613-1e11f8183480" pod="tigera-operator/tigera-operator-747864d56d-hvftc" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:36.481605 kubelet[2790]: I0813 01:47:36.481236 2790 kubelet.go:2405] "Pod admission denied" podUID="0e7e3ffb-ad82-4a26-bf8a-9def13273262" pod="tigera-operator/tigera-operator-747864d56d-7ptzj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:36.575886 kubelet[2790]: I0813 01:47:36.575820 2790 kubelet.go:2405] "Pod admission denied" podUID="17284bdb-8cf9-49ee-a24d-6eabf788b6f4" pod="tigera-operator/tigera-operator-747864d56d-n8z6w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:36.677448 kubelet[2790]: I0813 01:47:36.677310 2790 kubelet.go:2405] "Pod admission denied" podUID="33a96d4e-c2f9-4ea4-a14c-4a2daaf9167e" pod="tigera-operator/tigera-operator-747864d56d-ktzjt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:36.775203 kubelet[2790]: I0813 01:47:36.775139 2790 kubelet.go:2405] "Pod admission denied" podUID="2ad3d1b3-5ac0-442a-a743-e3c5a757283b" pod="tigera-operator/tigera-operator-747864d56d-w2n25" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:36.872997 kubelet[2790]: I0813 01:47:36.872940 2790 kubelet.go:2405] "Pod admission denied" podUID="1bf72947-6052-445d-b3aa-973cc4e61160" pod="tigera-operator/tigera-operator-747864d56d-zw4ff" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:36.972141 kubelet[2790]: I0813 01:47:36.972087 2790 kubelet.go:2405] "Pod admission denied" podUID="f67862e3-4bc2-4763-8a78-af8bbfad6fa9" pod="tigera-operator/tigera-operator-747864d56d-gc99n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:37.072958 kubelet[2790]: I0813 01:47:37.072872 2790 kubelet.go:2405] "Pod admission denied" podUID="9ad7eaaa-ef00-425d-831a-8ea7a9cf41c7" pod="tigera-operator/tigera-operator-747864d56d-nkd82" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:37.277707 kubelet[2790]: I0813 01:47:37.277551 2790 kubelet.go:2405] "Pod admission denied" podUID="d5b10dd7-a958-41ba-85d9-5c3543930d9a" pod="tigera-operator/tigera-operator-747864d56d-cclvs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:37.376785 kubelet[2790]: I0813 01:47:37.375936 2790 kubelet.go:2405] "Pod admission denied" podUID="4e1c58a8-53b0-45f9-a43b-f973eca95e61" pod="tigera-operator/tigera-operator-747864d56d-n8p4v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:37.480274 kubelet[2790]: I0813 01:47:37.480204 2790 kubelet.go:2405] "Pod admission denied" podUID="a8fa966c-7079-4066-85c2-d8234c9f89c0" pod="tigera-operator/tigera-operator-747864d56d-m2v97" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:37.675733 kubelet[2790]: I0813 01:47:37.675581 2790 kubelet.go:2405] "Pod admission denied" podUID="8d3f6c93-66a2-4dce-ab41-a454767024ce" pod="tigera-operator/tigera-operator-747864d56d-z2g4t" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:37.708798 kubelet[2790]: E0813 01:47:37.706566 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:47:37.709031 containerd[1556]: time="2025-08-13T01:47:37.707692522Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vtdcd,Uid:caa5836a-45f9-496b-86c1-95f6e1b6da17,Namespace:kube-system,Attempt:0,}" Aug 13 01:47:37.777501 kubelet[2790]: I0813 01:47:37.777455 2790 kubelet.go:2405] "Pod admission denied" podUID="8db5b4ac-d962-4773-b2e6-a2d3a200fa2c" pod="tigera-operator/tigera-operator-747864d56d-65sfq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:37.780406 containerd[1556]: time="2025-08-13T01:47:37.778924607Z" level=error msg="Failed to destroy network for sandbox \"cd74a9af9c3f0141d9cb885ea59b9aa67f7760c73a543dfe80139eb5dc6a28ba\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:37.782549 containerd[1556]: time="2025-08-13T01:47:37.782496571Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vtdcd,Uid:caa5836a-45f9-496b-86c1-95f6e1b6da17,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd74a9af9c3f0141d9cb885ea59b9aa67f7760c73a543dfe80139eb5dc6a28ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:37.783043 kubelet[2790]: E0813 01:47:37.782983 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd74a9af9c3f0141d9cb885ea59b9aa67f7760c73a543dfe80139eb5dc6a28ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:37.783106 kubelet[2790]: E0813 01:47:37.783076 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd74a9af9c3f0141d9cb885ea59b9aa67f7760c73a543dfe80139eb5dc6a28ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:47:37.783191 kubelet[2790]: E0813 01:47:37.783111 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cd74a9af9c3f0141d9cb885ea59b9aa67f7760c73a543dfe80139eb5dc6a28ba\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:47:37.783191 kubelet[2790]: E0813 01:47:37.783172 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-vtdcd_kube-system(caa5836a-45f9-496b-86c1-95f6e1b6da17)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-vtdcd_kube-system(caa5836a-45f9-496b-86c1-95f6e1b6da17)\\\": rpc error: 
code = Unknown desc = failed to setup network for sandbox \\\"cd74a9af9c3f0141d9cb885ea59b9aa67f7760c73a543dfe80139eb5dc6a28ba\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-vtdcd" podUID="caa5836a-45f9-496b-86c1-95f6e1b6da17" Aug 13 01:47:37.783968 systemd[1]: run-netns-cni\x2d393bcb50\x2de5d9\x2d2a6c\x2dc65d\x2dd6932143ce79.mount: Deactivated successfully. Aug 13 01:47:37.877089 kubelet[2790]: I0813 01:47:37.877018 2790 kubelet.go:2405] "Pod admission denied" podUID="f2073955-161a-4512-b91f-e846886ce9b0" pod="tigera-operator/tigera-operator-747864d56d-b9swk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:37.979061 kubelet[2790]: I0813 01:47:37.977728 2790 kubelet.go:2405] "Pod admission denied" podUID="132347a4-70e7-4430-b555-deba3be62d9a" pod="tigera-operator/tigera-operator-747864d56d-sbpk6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:38.074679 kubelet[2790]: I0813 01:47:38.074603 2790 kubelet.go:2405] "Pod admission denied" podUID="062c6b68-f787-49bd-ba8d-cd8cdf5e56ab" pod="tigera-operator/tigera-operator-747864d56d-h5gg4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:38.175548 kubelet[2790]: I0813 01:47:38.175488 2790 kubelet.go:2405] "Pod admission denied" podUID="0d80d4d8-c9ff-4646-8e22-c3bd23c94269" pod="tigera-operator/tigera-operator-747864d56d-vhkhp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:38.275147 kubelet[2790]: I0813 01:47:38.274265 2790 kubelet.go:2405] "Pod admission denied" podUID="9d89287b-1b74-455a-b701-4119c49ef387" pod="tigera-operator/tigera-operator-747864d56d-hswr8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:38.474233 kubelet[2790]: I0813 01:47:38.474151 2790 kubelet.go:2405] "Pod admission denied" podUID="ab47fac5-9347-4b6b-bc3b-9b03f9621021" pod="tigera-operator/tigera-operator-747864d56d-vj99f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:38.581678 kubelet[2790]: I0813 01:47:38.579884 2790 kubelet.go:2405] "Pod admission denied" podUID="dd0c26b9-43e6-4065-8db6-6f0d16fc8471" pod="tigera-operator/tigera-operator-747864d56d-pvwms" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:38.675453 kubelet[2790]: I0813 01:47:38.675384 2790 kubelet.go:2405] "Pod admission denied" podUID="d5c021ee-f26e-404d-80dd-4351aa9dee54" pod="tigera-operator/tigera-operator-747864d56d-697ch" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:38.779481 kubelet[2790]: I0813 01:47:38.779423 2790 kubelet.go:2405] "Pod admission denied" podUID="4492bca5-f796-482a-a39e-5ffcf37f57f7" pod="tigera-operator/tigera-operator-747864d56d-qgsbg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:38.829783 kubelet[2790]: I0813 01:47:38.829680 2790 kubelet.go:2405] "Pod admission denied" podUID="d43befc3-fd25-4b6c-9a4c-c2cd6cd55cfc" pod="tigera-operator/tigera-operator-747864d56d-xj75b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:38.926520 kubelet[2790]: I0813 01:47:38.926361 2790 kubelet.go:2405] "Pod admission denied" podUID="8bfc20cb-c928-4f15-8b55-85c1faefc789" pod="tigera-operator/tigera-operator-747864d56d-h76h6" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:39.030775 kubelet[2790]: I0813 01:47:39.030631 2790 kubelet.go:2405] "Pod admission denied" podUID="995a923d-801f-4b51-81db-6816d370a007" pod="tigera-operator/tigera-operator-747864d56d-9hlwh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:39.075456 kubelet[2790]: I0813 01:47:39.075387 2790 kubelet.go:2405] "Pod admission denied" podUID="bf7ca1c5-992a-4ec0-b6eb-9dc1f2d6f8dc" pod="tigera-operator/tigera-operator-747864d56d-wdbp4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:39.174361 kubelet[2790]: I0813 01:47:39.174301 2790 kubelet.go:2405] "Pod admission denied" podUID="c9b34594-c67d-4a9b-9273-c229681d2c46" pod="tigera-operator/tigera-operator-747864d56d-pmwks" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:39.375334 kubelet[2790]: I0813 01:47:39.375251 2790 kubelet.go:2405] "Pod admission denied" podUID="8487e822-04d3-4c99-9e12-7cab83b094da" pod="tigera-operator/tigera-operator-747864d56d-mq674" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:39.472617 kubelet[2790]: I0813 01:47:39.472554 2790 kubelet.go:2405] "Pod admission denied" podUID="1c8f5477-272c-475f-b60f-2e213bdc4a30" pod="tigera-operator/tigera-operator-747864d56d-45dnr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:39.582773 kubelet[2790]: I0813 01:47:39.581834 2790 kubelet.go:2405] "Pod admission denied" podUID="322cd847-d9d7-49ba-ae9f-353567d9360b" pod="tigera-operator/tigera-operator-747864d56d-wfhzp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:39.674506 kubelet[2790]: I0813 01:47:39.674362 2790 kubelet.go:2405] "Pod admission denied" podUID="b872b9c3-6b68-492e-b262-5b6f2798bc49" pod="tigera-operator/tigera-operator-747864d56d-hnggn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:39.772964 kubelet[2790]: I0813 01:47:39.772895 2790 kubelet.go:2405] "Pod admission denied" podUID="fdf93606-67aa-472c-833f-68073ce33a62" pod="tigera-operator/tigera-operator-747864d56d-4brdm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:39.985769 kubelet[2790]: I0813 01:47:39.985704 2790 kubelet.go:2405] "Pod admission denied" podUID="6ffaae2b-4f78-4c03-9e21-bb7f30225d09" pod="tigera-operator/tigera-operator-747864d56d-72kbb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:40.075269 kubelet[2790]: I0813 01:47:40.075208 2790 kubelet.go:2405] "Pod admission denied" podUID="3389f462-3294-45d5-a438-c6075e8e9605" pod="tigera-operator/tigera-operator-747864d56d-7zn46" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:40.176427 kubelet[2790]: I0813 01:47:40.176329 2790 kubelet.go:2405] "Pod admission denied" podUID="88ae006a-6798-4e8c-8243-d98a37e1d15b" pod="tigera-operator/tigera-operator-747864d56d-bf2g4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:40.390438 kubelet[2790]: I0813 01:47:40.390283 2790 kubelet.go:2405] "Pod admission denied" podUID="64bd40d9-b7ee-4172-a0c1-c38cefb492b4" pod="tigera-operator/tigera-operator-747864d56d-bwtjq" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:40.476001 kubelet[2790]: I0813 01:47:40.475934 2790 kubelet.go:2405] "Pod admission denied" podUID="5f469077-1079-4ab3-b061-c114bbf81afd" pod="tigera-operator/tigera-operator-747864d56d-84d4r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:40.574308 kubelet[2790]: I0813 01:47:40.574240 2790 kubelet.go:2405] "Pod admission denied" podUID="1d91e8d6-b7ed-4404-9500-c315a5bfb518" pod="tigera-operator/tigera-operator-747864d56d-s62jw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:40.675085 kubelet[2790]: I0813 01:47:40.674484 2790 kubelet.go:2405] "Pod admission denied" podUID="338fa75f-25f4-4dfa-920d-99d9343f3f42" pod="tigera-operator/tigera-operator-747864d56d-4n2qt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:40.696267 kubelet[2790]: E0813 01:47:40.696216 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:47:40.697762 containerd[1556]: time="2025-08-13T01:47:40.697569533Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6rlkc,Uid:21a6ba02-58d5-43c1-a7de-9e24560a65f6,Namespace:kube-system,Attempt:0,}" Aug 13 01:47:40.737136 kubelet[2790]: I0813 01:47:40.737074 2790 kubelet.go:2405] "Pod admission denied" podUID="1585d153-2f9c-4a37-8710-eca717d2f49b" pod="tigera-operator/tigera-operator-747864d56d-5mlzr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:40.774719 containerd[1556]: time="2025-08-13T01:47:40.774656276Z" level=error msg="Failed to destroy network for sandbox \"bd9fce324e0d806e5ba5d5a5b87deef32513899335bd7942218f1da0e7203d8f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:40.777687 containerd[1556]: time="2025-08-13T01:47:40.776362993Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6rlkc,Uid:21a6ba02-58d5-43c1-a7de-9e24560a65f6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd9fce324e0d806e5ba5d5a5b87deef32513899335bd7942218f1da0e7203d8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:40.777959 kubelet[2790]: E0813 01:47:40.777905 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd9fce324e0d806e5ba5d5a5b87deef32513899335bd7942218f1da0e7203d8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:40.778072 kubelet[2790]: E0813 01:47:40.777968 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd9fce324e0d806e5ba5d5a5b87deef32513899335bd7942218f1da0e7203d8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:47:40.778072 kubelet[2790]: E0813 01:47:40.777993 2790 
kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bd9fce324e0d806e5ba5d5a5b87deef32513899335bd7942218f1da0e7203d8f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:47:40.778072 kubelet[2790]: E0813 01:47:40.778039 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-6rlkc_kube-system(21a6ba02-58d5-43c1-a7de-9e24560a65f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-6rlkc_kube-system(21a6ba02-58d5-43c1-a7de-9e24560a65f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bd9fce324e0d806e5ba5d5a5b87deef32513899335bd7942218f1da0e7203d8f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6rlkc" podUID="21a6ba02-58d5-43c1-a7de-9e24560a65f6" Aug 13 01:47:40.779449 systemd[1]: run-netns-cni\x2df0cd377e\x2d27f1\x2da140\x2d5985\x2d3e75703f1336.mount: Deactivated successfully. Aug 13 01:47:40.831779 kubelet[2790]: I0813 01:47:40.831007 2790 kubelet.go:2405] "Pod admission denied" podUID="44ac23e5-a380-4bc3-acf8-7b269b9ae80a" pod="tigera-operator/tigera-operator-747864d56d-fx7pr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:41.025550 kubelet[2790]: I0813 01:47:41.025477 2790 kubelet.go:2405] "Pod admission denied" podUID="a9b5550c-7527-4abb-8613-a209a992207c" pod="tigera-operator/tigera-operator-747864d56d-ts5kq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:41.127821 kubelet[2790]: I0813 01:47:41.127737 2790 kubelet.go:2405] "Pod admission denied" podUID="a45f17e9-7e12-4664-a366-f9ada5ca8705" pod="tigera-operator/tigera-operator-747864d56d-7jlxp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:41.239214 kubelet[2790]: I0813 01:47:41.239139 2790 kubelet.go:2405] "Pod admission denied" podUID="3e2d3b7c-4534-4e76-8021-4e82ccb95dfd" pod="tigera-operator/tigera-operator-747864d56d-mszzr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:41.324132 kubelet[2790]: I0813 01:47:41.323958 2790 kubelet.go:2405] "Pod admission denied" podUID="11a62968-f081-4d32-962c-cc30b52bb460" pod="tigera-operator/tigera-operator-747864d56d-25cfh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:41.428672 kubelet[2790]: I0813 01:47:41.428590 2790 kubelet.go:2405] "Pod admission denied" podUID="54b6ca80-3386-4dcb-a0d4-2d2173223c5f" pod="tigera-operator/tigera-operator-747864d56d-rx4mg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:41.628898 kubelet[2790]: I0813 01:47:41.628588 2790 kubelet.go:2405] "Pod admission denied" podUID="6aea4caa-8930-4c9c-99af-0bfbcc780957" pod="tigera-operator/tigera-operator-747864d56d-r8tzs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:41.723669 kubelet[2790]: I0813 01:47:41.723593 2790 kubelet.go:2405] "Pod admission denied" podUID="31db17a2-78e1-4ad4-812f-1be691a4303f" pod="tigera-operator/tigera-operator-747864d56d-4z7dp" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:41.785351 kubelet[2790]: I0813 01:47:41.785271 2790 kubelet.go:2405] "Pod admission denied" podUID="06d8278d-b1c6-43b6-bce8-611f9b8cb04e" pod="tigera-operator/tigera-operator-747864d56d-x2rcg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:41.873966 kubelet[2790]: I0813 01:47:41.873901 2790 kubelet.go:2405] "Pod admission denied" podUID="5dcee604-662e-4bf1-af5a-c56c0e0fae91" pod="tigera-operator/tigera-operator-747864d56d-wf8rf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:42.076455 kubelet[2790]: I0813 01:47:42.076393 2790 kubelet.go:2405] "Pod admission denied" podUID="27bb9910-96d8-4a05-a97b-315b7879a9a0" pod="tigera-operator/tigera-operator-747864d56d-2qj7t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:42.175466 kubelet[2790]: I0813 01:47:42.175394 2790 kubelet.go:2405] "Pod admission denied" podUID="91694a79-89c1-44e5-9f88-33d8c609719c" pod="tigera-operator/tigera-operator-747864d56d-gsct4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:42.274301 kubelet[2790]: I0813 01:47:42.274206 2790 kubelet.go:2405] "Pod admission denied" podUID="bd5230cf-30d6-48f3-a125-5caa7c34e431" pod="tigera-operator/tigera-operator-747864d56d-ttdp9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:42.385498 kubelet[2790]: I0813 01:47:42.385351 2790 kubelet.go:2405] "Pod admission denied" podUID="a9676c10-1de3-4aab-9463-a44e4fd64257" pod="tigera-operator/tigera-operator-747864d56d-mlg6c" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:42.474468 kubelet[2790]: I0813 01:47:42.474401 2790 kubelet.go:2405] "Pod admission denied" podUID="972e9490-a3b1-4567-b9ad-e0b019e137a7" pod="tigera-operator/tigera-operator-747864d56d-jgh9q" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:42.676599 kubelet[2790]: I0813 01:47:42.675878 2790 kubelet.go:2405] "Pod admission denied" podUID="243ac59b-81ea-4009-8cb5-e87f5af144c8" pod="tigera-operator/tigera-operator-747864d56d-s979q" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:42.777454 kubelet[2790]: I0813 01:47:42.777391 2790 kubelet.go:2405] "Pod admission denied" podUID="f5ec0f75-1dca-4220-93ae-0638b13768c8" pod="tigera-operator/tigera-operator-747864d56d-knlhl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:42.822147 kubelet[2790]: I0813 01:47:42.822080 2790 kubelet.go:2405] "Pod admission denied" podUID="2d75ad66-0eb7-48ca-89f4-a21fabe6ab88" pod="tigera-operator/tigera-operator-747864d56d-qg8xv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:42.936079 kubelet[2790]: I0813 01:47:42.935268 2790 kubelet.go:2405] "Pod admission denied" podUID="285cfc84-58b3-4e5f-9767-8a3dd18b1c26" pod="tigera-operator/tigera-operator-747864d56d-qls8b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:43.023376 kubelet[2790]: I0813 01:47:43.023297 2790 kubelet.go:2405] "Pod admission denied" podUID="d7aae95a-9590-426d-8a4a-65db29c7e743" pod="tigera-operator/tigera-operator-747864d56d-lthdl" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:43.124699 kubelet[2790]: I0813 01:47:43.124636 2790 kubelet.go:2405] "Pod admission denied" podUID="c27f0492-fe18-48b7-b90a-f1e2d41e3706" pod="tigera-operator/tigera-operator-747864d56d-88jr5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:43.223732 kubelet[2790]: I0813 01:47:43.223673 2790 kubelet.go:2405] "Pod admission denied" podUID="0c52c695-92ff-4e15-be4e-119f8ecd7b61" pod="tigera-operator/tigera-operator-747864d56d-bbhb8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:43.324957 kubelet[2790]: I0813 01:47:43.324895 2790 kubelet.go:2405] "Pod admission denied" podUID="55c51d10-96ee-4cef-b5c8-a24621f621a7" pod="tigera-operator/tigera-operator-747864d56d-v4rjd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:43.529983 kubelet[2790]: I0813 01:47:43.529266 2790 kubelet.go:2405] "Pod admission denied" podUID="ff9aa763-3e6f-4a6b-9479-ceab7053c83d" pod="tigera-operator/tigera-operator-747864d56d-s6fmn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:43.623281 kubelet[2790]: I0813 01:47:43.623223 2790 kubelet.go:2405] "Pod admission denied" podUID="509314fb-f213-4feb-9a68-628e32319117" pod="tigera-operator/tigera-operator-747864d56d-px8mg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:43.746910 kubelet[2790]: I0813 01:47:43.746845 2790 kubelet.go:2405] "Pod admission denied" podUID="98c2b211-9f52-49c8-9631-9895c85e66af" pod="tigera-operator/tigera-operator-747864d56d-zsh8p" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:43.804025 kubelet[2790]: I0813 01:47:43.803074 2790 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:47:43.804025 kubelet[2790]: I0813 01:47:43.803116 2790 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:47:43.806634 kubelet[2790]: I0813 01:47:43.806607 2790 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:47:43.817417 kubelet[2790]: I0813 01:47:43.817360 2790 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:47:43.817585 kubelet[2790]: I0813 01:47:43.817549 2790 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-76ff444f8d-4xcg9","kube-system/coredns-674b8bbfcf-vtdcd","kube-system/coredns-674b8bbfcf-6rlkc","calico-system/calico-node-tsmrf","calico-system/csi-node-driver-c7jrc","calico-system/calico-typha-64bcb76cdd-m4xlg","kube-system/kube-controller-manager-172-232-7-67","kube-system/kube-proxy-mjdwx","kube-system/kube-apiserver-172-232-7-67","kube-system/kube-scheduler-172-232-7-67"] Aug 13 01:47:43.817679 kubelet[2790]: E0813 01:47:43.817599 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:47:43.817679 kubelet[2790]: E0813 01:47:43.817616 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:47:43.817679 kubelet[2790]: E0813 01:47:43.817624 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:47:43.817679 kubelet[2790]: E0813 01:47:43.817630 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-tsmrf" Aug 13 
01:47:43.817679 kubelet[2790]: E0813 01:47:43.817645 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:47:43.817679 kubelet[2790]: E0813 01:47:43.817655 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-64bcb76cdd-m4xlg" Aug 13 01:47:43.817679 kubelet[2790]: E0813 01:47:43.817667 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-67" Aug 13 01:47:43.817679 kubelet[2790]: E0813 01:47:43.817674 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mjdwx" Aug 13 01:47:43.817679 kubelet[2790]: E0813 01:47:43.817683 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-67" Aug 13 01:47:43.817962 kubelet[2790]: E0813 01:47:43.817691 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-67" Aug 13 01:47:43.817962 kubelet[2790]: I0813 01:47:43.817701 2790 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:47:43.923349 kubelet[2790]: I0813 01:47:43.923287 2790 kubelet.go:2405] "Pod admission denied" podUID="7677bd96-c249-4367-be72-d0200b2811fd" pod="tigera-operator/tigera-operator-747864d56d-zqlz2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:44.022646 kubelet[2790]: I0813 01:47:44.022585 2790 kubelet.go:2405] "Pod admission denied" podUID="47414e3e-1620-446c-b0d9-57a6465f4ab2" pod="tigera-operator/tigera-operator-747864d56d-qxrl9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:44.136855 kubelet[2790]: I0813 01:47:44.134569 2790 kubelet.go:2405] "Pod admission denied" podUID="976fa61e-1671-4c9b-a6ea-4714ba980681" pod="tigera-operator/tigera-operator-747864d56d-d8c7t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:44.223197 kubelet[2790]: I0813 01:47:44.223135 2790 kubelet.go:2405] "Pod admission denied" podUID="75f28f97-bd49-4aa8-85bb-ffe19ab9276f" pod="tigera-operator/tigera-operator-747864d56d-nwm5s" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:44.324905 kubelet[2790]: I0813 01:47:44.324844 2790 kubelet.go:2405] "Pod admission denied" podUID="79b7d84f-c50e-4e90-a0a1-143b3a8cd62e" pod="tigera-operator/tigera-operator-747864d56d-wqztq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:44.428915 kubelet[2790]: I0813 01:47:44.428637 2790 kubelet.go:2405] "Pod admission denied" podUID="8233aa3b-b562-4ca3-afa5-4ad0c18cd265" pod="tigera-operator/tigera-operator-747864d56d-kmw7x" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:44.529174 kubelet[2790]: I0813 01:47:44.529110 2790 kubelet.go:2405] "Pod admission denied" podUID="4fe8755f-2440-47e9-84f4-4cea547cb600" pod="tigera-operator/tigera-operator-747864d56d-8rnt8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:44.631279 kubelet[2790]: I0813 01:47:44.631217 2790 kubelet.go:2405] "Pod admission denied" podUID="c97ab34b-bfbe-444c-932f-2cc9cec25d7e" pod="tigera-operator/tigera-operator-747864d56d-4qgs6" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:44.724039 kubelet[2790]: I0813 01:47:44.723976 2790 kubelet.go:2405] "Pod admission denied" podUID="ea8c8739-6f4b-4a76-aeb1-32cf3e16f93d" pod="tigera-operator/tigera-operator-747864d56d-dtkxd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:44.822609 kubelet[2790]: I0813 01:47:44.822517 2790 kubelet.go:2405] "Pod admission denied" podUID="e07cdd53-f4b1-4e61-850c-8b0c30981282" pod="tigera-operator/tigera-operator-747864d56d-5mwxk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:44.928055 kubelet[2790]: I0813 01:47:44.927983 2790 kubelet.go:2405] "Pod admission denied" podUID="941231f1-0c3d-4a12-9e1f-e407acf09d5f" pod="tigera-operator/tigera-operator-747864d56d-2sxzn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:45.126342 kubelet[2790]: I0813 01:47:45.125654 2790 kubelet.go:2405] "Pod admission denied" podUID="c658c37e-d18c-411c-973b-a6e4dbd203ef" pod="tigera-operator/tigera-operator-747864d56d-g9959" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:45.229772 kubelet[2790]: I0813 01:47:45.229316 2790 kubelet.go:2405] "Pod admission denied" podUID="367bf566-eaf8-4a55-929f-cf3634ac2881" pod="tigera-operator/tigera-operator-747864d56d-rdz9h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:45.326343 kubelet[2790]: I0813 01:47:45.326284 2790 kubelet.go:2405] "Pod admission denied" podUID="39f208cd-3945-4837-a43a-9ca2a9aec902" pod="tigera-operator/tigera-operator-747864d56d-cm6dh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:45.526087 kubelet[2790]: I0813 01:47:45.526028 2790 kubelet.go:2405] "Pod admission denied" podUID="debafabb-04c4-473c-bf2a-26be6e831356" pod="tigera-operator/tigera-operator-747864d56d-k8gnk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:45.626853 kubelet[2790]: I0813 01:47:45.626774 2790 kubelet.go:2405] "Pod admission denied" podUID="65c4d1c5-08b8-4361-bb3b-a6bc39cbb175" pod="tigera-operator/tigera-operator-747864d56d-8w8cq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:45.695728 containerd[1556]: time="2025-08-13T01:47:45.695660288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7jrc,Uid:4296a7ed-e75a-4d74-935a-9017b9a86286,Namespace:calico-system,Attempt:0,}" Aug 13 01:47:45.734351 kubelet[2790]: I0813 01:47:45.734266 2790 kubelet.go:2405] "Pod admission denied" podUID="9dea9a8f-1c6a-4070-86f0-e685cec85c04" pod="tigera-operator/tigera-operator-747864d56d-wwt2m" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:45.782963 containerd[1556]: time="2025-08-13T01:47:45.780960919Z" level=error msg="Failed to destroy network for sandbox \"4283e66eef98e2784f3103d287a3ed5b7b7cb7c99446c3dbf8a9ae4f10d82381\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:45.783709 systemd[1]: run-netns-cni\x2d46a30502\x2dc5c9\x2d67ad\x2d332a\x2d59dc8a72b8e3.mount: Deactivated successfully. 
Aug 13 01:47:45.784419 containerd[1556]: time="2025-08-13T01:47:45.784303984Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7jrc,Uid:4296a7ed-e75a-4d74-935a-9017b9a86286,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4283e66eef98e2784f3103d287a3ed5b7b7cb7c99446c3dbf8a9ae4f10d82381\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:45.784932 kubelet[2790]: E0813 01:47:45.784892 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4283e66eef98e2784f3103d287a3ed5b7b7cb7c99446c3dbf8a9ae4f10d82381\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:45.785078 kubelet[2790]: E0813 01:47:45.784951 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4283e66eef98e2784f3103d287a3ed5b7b7cb7c99446c3dbf8a9ae4f10d82381\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:47:45.785078 kubelet[2790]: E0813 01:47:45.784992 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4283e66eef98e2784f3103d287a3ed5b7b7cb7c99446c3dbf8a9ae4f10d82381\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:47:45.785271 kubelet[2790]: E0813 01:47:45.785115 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-c7jrc_calico-system(4296a7ed-e75a-4d74-935a-9017b9a86286)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-c7jrc_calico-system(4296a7ed-e75a-4d74-935a-9017b9a86286)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4283e66eef98e2784f3103d287a3ed5b7b7cb7c99446c3dbf8a9ae4f10d82381\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-c7jrc" podUID="4296a7ed-e75a-4d74-935a-9017b9a86286" Aug 13 01:47:45.936465 kubelet[2790]: I0813 01:47:45.936083 2790 kubelet.go:2405] "Pod admission denied" podUID="3577eaf3-9e54-40ee-aacd-699c7c1cb2d2" pod="tigera-operator/tigera-operator-747864d56d-xtpl9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:46.032021 kubelet[2790]: I0813 01:47:46.031954 2790 kubelet.go:2405] "Pod admission denied" podUID="630247a1-a3bb-46e5-9be8-cca20616a0a7" pod="tigera-operator/tigera-operator-747864d56d-pt7ck" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:46.128933 kubelet[2790]: I0813 01:47:46.128770 2790 kubelet.go:2405] "Pod admission denied" podUID="0ccc040f-2425-4884-9fc8-330cf8c01791" pod="tigera-operator/tigera-operator-747864d56d-54746" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:46.225236 kubelet[2790]: I0813 01:47:46.225153 2790 kubelet.go:2405] "Pod admission denied" podUID="66311b44-5c47-4bbf-8219-a10f022722b8" pod="tigera-operator/tigera-operator-747864d56d-hsgmg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:46.326707 kubelet[2790]: I0813 01:47:46.326627 2790 kubelet.go:2405] "Pod admission denied" podUID="61d4658f-d96c-42d6-a39b-655ce6c621b4" pod="tigera-operator/tigera-operator-747864d56d-v54vs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:46.538772 kubelet[2790]: I0813 01:47:46.537505 2790 kubelet.go:2405] "Pod admission denied" podUID="d6237012-c428-4abd-87a7-ffc5fe801c9d" pod="tigera-operator/tigera-operator-747864d56d-gc88q" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:46.624689 kubelet[2790]: I0813 01:47:46.624628 2790 kubelet.go:2405] "Pod admission denied" podUID="026cbe87-f976-47f0-b38e-5a17675b551a" pod="tigera-operator/tigera-operator-747864d56d-g7j58" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:46.673519 kubelet[2790]: I0813 01:47:46.673455 2790 kubelet.go:2405] "Pod admission denied" podUID="10d3c454-d822-4fb0-9a5d-1df610f38cff" pod="tigera-operator/tigera-operator-747864d56d-jjbf8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:46.713509 containerd[1556]: time="2025-08-13T01:47:46.713459982Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76ff444f8d-4xcg9,Uid:f88563f6-5704-426b-aecc-303b3869ce30,Namespace:calico-system,Attempt:0,}" Aug 13 01:47:46.789797 containerd[1556]: time="2025-08-13T01:47:46.789635816Z" level=error msg="Failed to destroy network for sandbox \"f7121215fad6310538b19eb9f505c484695d4c6cceb5072152d971f64ac3f93f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:46.793361 systemd[1]: run-netns-cni\x2d7b5ff114\x2db5d7\x2d2b40\x2dce77\x2de5b578019ae8.mount: Deactivated successfully. 
Aug 13 01:47:46.795415 containerd[1556]: time="2025-08-13T01:47:46.795351749Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76ff444f8d-4xcg9,Uid:f88563f6-5704-426b-aecc-303b3869ce30,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7121215fad6310538b19eb9f505c484695d4c6cceb5072152d971f64ac3f93f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:46.796986 kubelet[2790]: E0813 01:47:46.796398 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7121215fad6310538b19eb9f505c484695d4c6cceb5072152d971f64ac3f93f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:46.796986 kubelet[2790]: E0813 01:47:46.796512 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7121215fad6310538b19eb9f505c484695d4c6cceb5072152d971f64ac3f93f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:47:46.796986 kubelet[2790]: E0813 01:47:46.796549 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f7121215fad6310538b19eb9f505c484695d4c6cceb5072152d971f64ac3f93f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:47:46.796986 kubelet[2790]: E0813 01:47:46.796622 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76ff444f8d-4xcg9_calico-system(f88563f6-5704-426b-aecc-303b3869ce30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76ff444f8d-4xcg9_calico-system(f88563f6-5704-426b-aecc-303b3869ce30)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f7121215fad6310538b19eb9f505c484695d4c6cceb5072152d971f64ac3f93f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" podUID="f88563f6-5704-426b-aecc-303b3869ce30" Aug 13 01:47:46.809347 kubelet[2790]: I0813 01:47:46.809295 2790 kubelet.go:2405] "Pod admission denied" podUID="acaf942b-50cd-4eca-9e9b-5708548748d9" pod="tigera-operator/tigera-operator-747864d56d-qmzkk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:46.993475 kubelet[2790]: I0813 01:47:46.993419 2790 kubelet.go:2405] "Pod admission denied" podUID="e932cb8e-197c-42ea-8b69-2a525c4dc497" pod="tigera-operator/tigera-operator-747864d56d-wtg7r" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:47.088332 kubelet[2790]: I0813 01:47:47.088162 2790 kubelet.go:2405] "Pod admission denied" podUID="f8e14422-07e3-445b-a5b8-0ffe7ef97c0b" pod="tigera-operator/tigera-operator-747864d56d-2jkd6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:47.174188 kubelet[2790]: I0813 01:47:47.174121 2790 kubelet.go:2405] "Pod admission denied" podUID="316c401a-809c-418f-a28e-58db656ff890" pod="tigera-operator/tigera-operator-747864d56d-bcnhh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:47.276008 kubelet[2790]: I0813 01:47:47.275945 2790 kubelet.go:2405] "Pod admission denied" podUID="efcc3d2e-4cdf-46c5-9dc8-6c5ac52960eb" pod="tigera-operator/tigera-operator-747864d56d-9rvhs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:47.377518 kubelet[2790]: I0813 01:47:47.375985 2790 kubelet.go:2405] "Pod admission denied" podUID="c8fc88cf-dcca-4bb0-829c-bfd3b49790fe" pod="tigera-operator/tigera-operator-747864d56d-bg77t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:47.475802 kubelet[2790]: I0813 01:47:47.475730 2790 kubelet.go:2405] "Pod admission denied" podUID="5f7c3bcd-3941-405a-8efe-fb6759ed1683" pod="tigera-operator/tigera-operator-747864d56d-mks7p" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:47.532767 kubelet[2790]: I0813 01:47:47.531757 2790 kubelet.go:2405] "Pod admission denied" podUID="5af627da-2891-4ea1-afc1-75858c8d5561" pod="tigera-operator/tigera-operator-747864d56d-888tn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:47.634044 kubelet[2790]: I0813 01:47:47.632477 2790 kubelet.go:2405] "Pod admission denied" podUID="6374ead3-508b-4dac-8fbd-26badd939ea4" pod="tigera-operator/tigera-operator-747864d56d-v7rn6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:47.694351 kubelet[2790]: E0813 01:47:47.694315 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:47:47.825813 kubelet[2790]: I0813 01:47:47.825720 2790 kubelet.go:2405] "Pod admission denied" podUID="c4acf66a-5bb9-4f54-9de9-ae405f7da5ed" pod="tigera-operator/tigera-operator-747864d56d-6tddb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:47.926523 kubelet[2790]: I0813 01:47:47.925711 2790 kubelet.go:2405] "Pod admission denied" podUID="3903c1ea-5600-4bf0-acb4-0bfaaf6acc91" pod="tigera-operator/tigera-operator-747864d56d-znsch" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:48.038454 kubelet[2790]: I0813 01:47:48.038392 2790 kubelet.go:2405] "Pod admission denied" podUID="a2020adf-a4ce-4517-9c0f-2e48233bdf05" pod="tigera-operator/tigera-operator-747864d56d-gwzgv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:48.140370 kubelet[2790]: I0813 01:47:48.140283 2790 kubelet.go:2405] "Pod admission denied" podUID="635b4178-117a-463c-baa0-f060c8e9fac4" pod="tigera-operator/tigera-operator-747864d56d-xgl7p" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:48.225095 kubelet[2790]: I0813 01:47:48.224981 2790 kubelet.go:2405] "Pod admission denied" podUID="fd1a9015-d704-4e9d-97f7-da7e0a5f03f5" pod="tigera-operator/tigera-operator-747864d56d-7zm8z" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:48.427597 kubelet[2790]: I0813 01:47:48.427532 2790 kubelet.go:2405] "Pod admission denied" podUID="db58af38-6466-4d09-b509-fc8965fd61a5" pod="tigera-operator/tigera-operator-747864d56d-ppnrs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:48.526266 kubelet[2790]: I0813 01:47:48.525526 2790 kubelet.go:2405] "Pod admission denied" podUID="756e9dc0-4da8-4f06-8ee0-e979b0de76ab" pod="tigera-operator/tigera-operator-747864d56d-wcxx4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:48.625135 kubelet[2790]: I0813 01:47:48.625044 2790 kubelet.go:2405] "Pod admission denied" podUID="9334b727-14fb-4c87-a161-63c1bc692640" pod="tigera-operator/tigera-operator-747864d56d-dnpks" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:48.734854 kubelet[2790]: I0813 01:47:48.734796 2790 kubelet.go:2405] "Pod admission denied" podUID="c6e5dc94-d317-462c-879f-4fb3f015340c" pod="tigera-operator/tigera-operator-747864d56d-9wl4v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:48.828498 kubelet[2790]: I0813 01:47:48.827404 2790 kubelet.go:2405] "Pod admission denied" podUID="e9a8d099-78cd-4b5d-a2d5-3a17d7633fb3" pod="tigera-operator/tigera-operator-747864d56d-k65zj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:49.030213 kubelet[2790]: I0813 01:47:49.030146 2790 kubelet.go:2405] "Pod admission denied" podUID="e5caeab1-0eaa-459e-8fb2-e37f9cb9836e" pod="tigera-operator/tigera-operator-747864d56d-8t5tw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:49.127820 kubelet[2790]: I0813 01:47:49.127647 2790 kubelet.go:2405] "Pod admission denied" podUID="f9b53172-d231-4824-a284-9f74a77e0d99" pod="tigera-operator/tigera-operator-747864d56d-k4lm4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:49.173841 kubelet[2790]: I0813 01:47:49.173779 2790 kubelet.go:2405] "Pod admission denied" podUID="53d68cf6-7e47-4776-a025-24b204f2d545" pod="tigera-operator/tigera-operator-747864d56d-ncxl8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:49.290586 kubelet[2790]: I0813 01:47:49.289065 2790 kubelet.go:2405] "Pod admission denied" podUID="783e26c9-76bf-412a-9544-9f77b6fe5dea" pod="tigera-operator/tigera-operator-747864d56d-bfpsn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:49.382402 kubelet[2790]: I0813 01:47:49.381756 2790 kubelet.go:2405] "Pod admission denied" podUID="7931bbc6-584c-4d61-a40f-cc842b0c6358" pod="tigera-operator/tigera-operator-747864d56d-cq8vl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:49.479412 kubelet[2790]: I0813 01:47:49.479339 2790 kubelet.go:2405] "Pod admission denied" podUID="712651a4-f9db-4328-ba06-bc52e58f638a" pod="tigera-operator/tigera-operator-747864d56d-wzwvq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:49.576173 kubelet[2790]: I0813 01:47:49.576108 2790 kubelet.go:2405] "Pod admission denied" podUID="442bfa4d-ff8c-448e-ab00-db154d43703e" pod="tigera-operator/tigera-operator-747864d56d-x6qnw" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:49.678562 kubelet[2790]: I0813 01:47:49.677796 2790 kubelet.go:2405] "Pod admission denied" podUID="00e9ec9e-38a1-4eae-9f54-6254f328bd10" pod="tigera-operator/tigera-operator-747864d56d-svh5w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:49.697011 kubelet[2790]: E0813 01:47:49.696653 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2566183075: write /var/lib/containerd/tmpmounts/containerd-mount2566183075/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-tsmrf" podUID="517ffc51-1a34-4ced-acf5-d8e5da6a1838" Aug 13 01:47:49.874822 kubelet[2790]: I0813 01:47:49.874711 2790 kubelet.go:2405] "Pod admission denied" podUID="620287d0-a08e-45d0-9d00-687e3c3202fa" pod="tigera-operator/tigera-operator-747864d56d-4m524" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:49.976722 kubelet[2790]: I0813 01:47:49.976639 2790 kubelet.go:2405] "Pod admission denied" podUID="a7a5eccc-f23b-4024-b572-25ac3df10781" pod="tigera-operator/tigera-operator-747864d56d-87bgm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:50.090771 kubelet[2790]: I0813 01:47:50.090190 2790 kubelet.go:2405] "Pod admission denied" podUID="2ab2e9e6-5d72-4a0c-956a-8ab274603872" pod="tigera-operator/tigera-operator-747864d56d-tvgx2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:50.275529 kubelet[2790]: I0813 01:47:50.275352 2790 kubelet.go:2405] "Pod admission denied" podUID="3e69d164-8f38-4745-af56-a47658fba06d" pod="tigera-operator/tigera-operator-747864d56d-l7l9g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:50.380120 kubelet[2790]: I0813 01:47:50.380049 2790 kubelet.go:2405] "Pod admission denied" podUID="bd6d8a22-ac8f-4c88-904d-b8ecce2a2069" pod="tigera-operator/tigera-operator-747864d56d-6zkdh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:50.477338 kubelet[2790]: I0813 01:47:50.477263 2790 kubelet.go:2405] "Pod admission denied" podUID="6ac6ce50-ed80-4159-b489-46f0dfecb76d" pod="tigera-operator/tigera-operator-747864d56d-ts74r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:50.577177 kubelet[2790]: I0813 01:47:50.576872 2790 kubelet.go:2405] "Pod admission denied" podUID="0977e089-9362-4c14-a959-75202b82952f" pod="tigera-operator/tigera-operator-747864d56d-dlnnl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:50.694171 kubelet[2790]: I0813 01:47:50.694103 2790 kubelet.go:2405] "Pod admission denied" podUID="28b06c2f-ee90-41fa-908b-b815065b3aa7" pod="tigera-operator/tigera-operator-747864d56d-l2jkq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:50.876798 kubelet[2790]: I0813 01:47:50.875915 2790 kubelet.go:2405] "Pod admission denied" podUID="6dcae171-cd3e-484d-94f6-244dcb8f4f69" pod="tigera-operator/tigera-operator-747864d56d-t7zqd" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:50.982777 kubelet[2790]: I0813 01:47:50.981795 2790 kubelet.go:2405] "Pod admission denied" podUID="12364c45-f9ca-4493-b236-d3f910c0e4e8" pod="tigera-operator/tigera-operator-747864d56d-qjsfg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:51.086261 kubelet[2790]: I0813 01:47:51.086193 2790 kubelet.go:2405] "Pod admission denied" podUID="2caf1a44-e08f-44aa-a9f8-4ba92a464e13" pod="tigera-operator/tigera-operator-747864d56d-lnhqf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:51.183026 kubelet[2790]: I0813 01:47:51.182625 2790 kubelet.go:2405] "Pod admission denied" podUID="153e48cb-ba55-406d-96d9-a51221baa5c7" pod="tigera-operator/tigera-operator-747864d56d-4mm8w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:51.279096 kubelet[2790]: I0813 01:47:51.279012 2790 kubelet.go:2405] "Pod admission denied" podUID="a571e8a5-59d0-4031-8261-e0f42ab7b360" pod="tigera-operator/tigera-operator-747864d56d-nrpjq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:51.478788 kubelet[2790]: I0813 01:47:51.478703 2790 kubelet.go:2405] "Pod admission denied" podUID="e9961e36-9194-41bb-96c8-220b74818d9a" pod="tigera-operator/tigera-operator-747864d56d-qmd4n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:51.587365 kubelet[2790]: I0813 01:47:51.587286 2790 kubelet.go:2405] "Pod admission denied" podUID="542d3fda-d48d-4a5c-a0f2-92ed6aad6f8f" pod="tigera-operator/tigera-operator-747864d56d-xmlp2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:51.676780 kubelet[2790]: I0813 01:47:51.676618 2790 kubelet.go:2405] "Pod admission denied" podUID="a789803d-357d-4a17-8dea-0612ef5928e4" pod="tigera-operator/tigera-operator-747864d56d-gkxtn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:51.694771 kubelet[2790]: E0813 01:47:51.694124 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:47:51.695582 containerd[1556]: time="2025-08-13T01:47:51.695467705Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vtdcd,Uid:caa5836a-45f9-496b-86c1-95f6e1b6da17,Namespace:kube-system,Attempt:0,}" Aug 13 01:47:51.786781 kubelet[2790]: I0813 01:47:51.786063 2790 kubelet.go:2405] "Pod admission denied" podUID="ec8ad1be-f70d-4a57-ad30-732e66ef34ef" pod="tigera-operator/tigera-operator-747864d56d-lqbpk" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:51.792973 containerd[1556]: time="2025-08-13T01:47:51.792874561Z" level=error msg="Failed to destroy network for sandbox \"0c4dc18c719b12fdc0795559c0d2cef02f86adc5088d5ecce5a046b252223bec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:51.799289 containerd[1556]: time="2025-08-13T01:47:51.799007614Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vtdcd,Uid:caa5836a-45f9-496b-86c1-95f6e1b6da17,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c4dc18c719b12fdc0795559c0d2cef02f86adc5088d5ecce5a046b252223bec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:51.799451 systemd[1]: run-netns-cni\x2d8fad6f9a\x2d41d8\x2dd0cb\x2db82a\x2dac845f509006.mount: Deactivated successfully. Aug 13 01:47:51.800994 kubelet[2790]: E0813 01:47:51.799993 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c4dc18c719b12fdc0795559c0d2cef02f86adc5088d5ecce5a046b252223bec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:51.800994 kubelet[2790]: E0813 01:47:51.800082 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c4dc18c719b12fdc0795559c0d2cef02f86adc5088d5ecce5a046b252223bec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:47:51.800994 kubelet[2790]: E0813 01:47:51.800124 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0c4dc18c719b12fdc0795559c0d2cef02f86adc5088d5ecce5a046b252223bec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:47:51.802878 kubelet[2790]: E0813 01:47:51.802804 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-vtdcd_kube-system(caa5836a-45f9-496b-86c1-95f6e1b6da17)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-vtdcd_kube-system(caa5836a-45f9-496b-86c1-95f6e1b6da17)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0c4dc18c719b12fdc0795559c0d2cef02f86adc5088d5ecce5a046b252223bec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-vtdcd" podUID="caa5836a-45f9-496b-86c1-95f6e1b6da17" Aug 13 01:47:51.877561 kubelet[2790]: I0813 01:47:51.877498 2790 kubelet.go:2405] "Pod admission denied" podUID="040a4402-9db5-4cae-a01c-560b12a86694" pod="tigera-operator/tigera-operator-747864d56d-pb7jl" reason="Evicted" 
message="The node had condition: [DiskPressure]. " Aug 13 01:47:51.977192 kubelet[2790]: I0813 01:47:51.977136 2790 kubelet.go:2405] "Pod admission denied" podUID="933b680c-7cea-4fdf-902d-e70b301a4a69" pod="tigera-operator/tigera-operator-747864d56d-pplqz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:52.092468 kubelet[2790]: I0813 01:47:52.091880 2790 kubelet.go:2405] "Pod admission denied" podUID="c6b77fe4-9809-4a9b-a635-7b202ef9ccdf" pod="tigera-operator/tigera-operator-747864d56d-ctt4k" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:52.176965 kubelet[2790]: I0813 01:47:52.176905 2790 kubelet.go:2405] "Pod admission denied" podUID="32923574-a8ea-4356-925e-352e8df4d72b" pod="tigera-operator/tigera-operator-747864d56d-zt9mt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:52.278259 kubelet[2790]: I0813 01:47:52.278168 2790 kubelet.go:2405] "Pod admission denied" podUID="cfc42549-44cb-4c90-8b4b-44411a7a3a81" pod="tigera-operator/tigera-operator-747864d56d-phkqw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:52.380274 kubelet[2790]: I0813 01:47:52.380077 2790 kubelet.go:2405] "Pod admission denied" podUID="cd70c61e-a9c7-4539-b48b-9b7a20f746bf" pod="tigera-operator/tigera-operator-747864d56d-hmk4x" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:52.476929 kubelet[2790]: I0813 01:47:52.476860 2790 kubelet.go:2405] "Pod admission denied" podUID="aff6387d-db29-41a3-9bc6-3c4d44dc1174" pod="tigera-operator/tigera-operator-747864d56d-w9xjz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:52.587536 kubelet[2790]: I0813 01:47:52.587470 2790 kubelet.go:2405] "Pod admission denied" podUID="3c2e2a48-9e92-4145-b78b-a31b82fa489f" pod="tigera-operator/tigera-operator-747864d56d-j92nh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:52.636797 kubelet[2790]: I0813 01:47:52.635428 2790 kubelet.go:2405] "Pod admission denied" podUID="ed85f14d-c645-4e2f-8bc5-34b7523ebaa3" pod="tigera-operator/tigera-operator-747864d56d-wcr44" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:52.694912 kubelet[2790]: E0813 01:47:52.694715 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:47:52.696071 containerd[1556]: time="2025-08-13T01:47:52.695876026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6rlkc,Uid:21a6ba02-58d5-43c1-a7de-9e24560a65f6,Namespace:kube-system,Attempt:0,}" Aug 13 01:47:52.739420 kubelet[2790]: I0813 01:47:52.739358 2790 kubelet.go:2405] "Pod admission denied" podUID="0649f79f-b5ca-46e0-9d12-e684113018b9" pod="tigera-operator/tigera-operator-747864d56d-pfxbp" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:52.785354 containerd[1556]: time="2025-08-13T01:47:52.785268293Z" level=error msg="Failed to destroy network for sandbox \"a365cb19496c76adbe7bf3adf9532902ca3050c31366c74f7195bf1d692c7cb0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:52.787958 containerd[1556]: time="2025-08-13T01:47:52.786880901Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6rlkc,Uid:21a6ba02-58d5-43c1-a7de-9e24560a65f6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a365cb19496c76adbe7bf3adf9532902ca3050c31366c74f7195bf1d692c7cb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:52.789307 systemd[1]: run-netns-cni\x2d710702f3\x2dfb69\x2dc84f\x2d1042\x2d3f946ccfdae4.mount: Deactivated successfully. Aug 13 01:47:52.792465 kubelet[2790]: E0813 01:47:52.792415 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a365cb19496c76adbe7bf3adf9532902ca3050c31366c74f7195bf1d692c7cb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:52.792566 kubelet[2790]: E0813 01:47:52.792485 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a365cb19496c76adbe7bf3adf9532902ca3050c31366c74f7195bf1d692c7cb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:47:52.792566 kubelet[2790]: E0813 01:47:52.792510 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a365cb19496c76adbe7bf3adf9532902ca3050c31366c74f7195bf1d692c7cb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:47:52.792657 kubelet[2790]: E0813 01:47:52.792567 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-6rlkc_kube-system(21a6ba02-58d5-43c1-a7de-9e24560a65f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-6rlkc_kube-system(21a6ba02-58d5-43c1-a7de-9e24560a65f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a365cb19496c76adbe7bf3adf9532902ca3050c31366c74f7195bf1d692c7cb0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6rlkc" podUID="21a6ba02-58d5-43c1-a7de-9e24560a65f6" Aug 13 01:47:52.834031 kubelet[2790]: I0813 01:47:52.833964 2790 kubelet.go:2405] "Pod admission denied" podUID="2ed37014-7a5c-466b-bc05-cccbc4dc932c" pod="tigera-operator/tigera-operator-747864d56d-85sd2" reason="Evicted" 
message="The node had condition: [DiskPressure]. " Aug 13 01:47:52.931220 kubelet[2790]: I0813 01:47:52.931031 2790 kubelet.go:2405] "Pod admission denied" podUID="c714b049-56a3-4d29-98d5-0241daa2de6a" pod="tigera-operator/tigera-operator-747864d56d-qmhk7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:53.137771 kubelet[2790]: I0813 01:47:53.137047 2790 kubelet.go:2405] "Pod admission denied" podUID="a75213f5-b68b-4a43-a1c7-9e1c8818c442" pod="tigera-operator/tigera-operator-747864d56d-tzzlx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:53.226718 kubelet[2790]: I0813 01:47:53.226660 2790 kubelet.go:2405] "Pod admission denied" podUID="7d65efd9-b825-40c9-aaa7-b2f9c0865c58" pod="tigera-operator/tigera-operator-747864d56d-q7gf9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:53.327615 kubelet[2790]: I0813 01:47:53.327565 2790 kubelet.go:2405] "Pod admission denied" podUID="72e58df2-871f-4777-86a7-03660f65c472" pod="tigera-operator/tigera-operator-747864d56d-4fxkt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:53.446735 kubelet[2790]: I0813 01:47:53.446666 2790 kubelet.go:2405] "Pod admission denied" podUID="1e442773-1206-4a60-9437-ae91ec9a8d5f" pod="tigera-operator/tigera-operator-747864d56d-xddmn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:53.531796 kubelet[2790]: I0813 01:47:53.531498 2790 kubelet.go:2405] "Pod admission denied" podUID="90f16d50-7bd2-4905-bdd2-da265ceca278" pod="tigera-operator/tigera-operator-747864d56d-ccgr5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:53.639781 kubelet[2790]: I0813 01:47:53.639240 2790 kubelet.go:2405] "Pod admission denied" podUID="43c88c46-4e12-4ebc-b954-1160a8126dc8" pod="tigera-operator/tigera-operator-747864d56d-nnsmm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:53.727264 kubelet[2790]: I0813 01:47:53.727207 2790 kubelet.go:2405] "Pod admission denied" podUID="9c5188f1-4d08-4dbe-b1af-282745e7e652" pod="tigera-operator/tigera-operator-747864d56d-527hk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:53.831089 kubelet[2790]: I0813 01:47:53.830097 2790 kubelet.go:2405] "Pod admission denied" podUID="527b80a7-28c0-4857-8a56-c7138bcf24e0" pod="tigera-operator/tigera-operator-747864d56d-p2ztw" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:53.847874 kubelet[2790]: I0813 01:47:53.847838 2790 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:47:53.848046 kubelet[2790]: I0813 01:47:53.848034 2790 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:47:53.857124 kubelet[2790]: I0813 01:47:53.857089 2790 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:47:53.897543 kubelet[2790]: I0813 01:47:53.897514 2790 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:47:53.898020 kubelet[2790]: I0813 01:47:53.897941 2790 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-674b8bbfcf-vtdcd","kube-system/coredns-674b8bbfcf-6rlkc","calico-system/calico-kube-controllers-76ff444f8d-4xcg9","calico-system/csi-node-driver-c7jrc","calico-system/calico-node-tsmrf","calico-system/calico-typha-64bcb76cdd-m4xlg","kube-system/kube-controller-manager-172-232-7-67","kube-system/kube-proxy-mjdwx","kube-system/kube-apiserver-172-232-7-67","kube-system/kube-scheduler-172-232-7-67"] Aug 13 01:47:53.898020 kubelet[2790]: E0813 01:47:53.897983 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:47:53.898446 kubelet[2790]: E0813 01:47:53.898114 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:47:53.898446 kubelet[2790]: E0813 01:47:53.898126 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:47:53.898446 kubelet[2790]: E0813 01:47:53.898134 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:47:53.898446 kubelet[2790]: E0813 01:47:53.898140 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-tsmrf" Aug 13 01:47:53.898446 kubelet[2790]: E0813 01:47:53.898151 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-64bcb76cdd-m4xlg" Aug 13 01:47:53.898859 kubelet[2790]: E0813 01:47:53.898365 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-67" Aug 13 01:47:53.898859 kubelet[2790]: E0813 01:47:53.898729 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mjdwx" Aug 13 01:47:53.899006 kubelet[2790]: E0813 01:47:53.898952 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-67" Aug 13 01:47:53.899006 kubelet[2790]: E0813 01:47:53.898975 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-67" Aug 13 01:47:53.899006 kubelet[2790]: I0813 01:47:53.898987 2790 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:47:53.929165 kubelet[2790]: I0813 01:47:53.929110 2790 kubelet.go:2405] "Pod admission denied" podUID="c02af8f9-f581-4d72-a36e-e7867947ea14" pod="tigera-operator/tigera-operator-747864d56d-znmvh" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:54.027133 kubelet[2790]: I0813 01:47:54.027068 2790 kubelet.go:2405] "Pod admission denied" podUID="65e61bc9-6fbf-424d-a18a-beadeee73446" pod="tigera-operator/tigera-operator-747864d56d-qcgn8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:54.145486 kubelet[2790]: I0813 01:47:54.144600 2790 kubelet.go:2405] "Pod admission denied" podUID="39168118-886d-4979-ab23-08193f01a2af" pod="tigera-operator/tigera-operator-747864d56d-hgkr5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:54.227957 kubelet[2790]: I0813 01:47:54.227882 2790 kubelet.go:2405] "Pod admission denied" podUID="76fbc072-6be4-4eae-b10a-334944aa89d5" pod="tigera-operator/tigera-operator-747864d56d-rhkf9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:54.326244 kubelet[2790]: I0813 01:47:54.326184 2790 kubelet.go:2405] "Pod admission denied" podUID="4dcdbce3-8d86-4283-964b-4ae3ab76dee5" pod="tigera-operator/tigera-operator-747864d56d-47cnb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:54.528083 kubelet[2790]: I0813 01:47:54.528018 2790 kubelet.go:2405] "Pod admission denied" podUID="716d0a5c-059c-4dc2-86d0-fc9cbce2bf81" pod="tigera-operator/tigera-operator-747864d56d-lx6vr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:54.626698 kubelet[2790]: I0813 01:47:54.626640 2790 kubelet.go:2405] "Pod admission denied" podUID="1777e8ab-e7a5-4297-ac78-7a82e01af703" pod="tigera-operator/tigera-operator-747864d56d-5jwkt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:54.727080 kubelet[2790]: I0813 01:47:54.727013 2790 kubelet.go:2405] "Pod admission denied" podUID="56b6c0e0-601e-4717-8288-fe0885d230f1" pod="tigera-operator/tigera-operator-747864d56d-r46l9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:54.831367 kubelet[2790]: I0813 01:47:54.831220 2790 kubelet.go:2405] "Pod admission denied" podUID="1219d8cd-4ecd-4874-b732-a06721ddf976" pod="tigera-operator/tigera-operator-747864d56d-2sxh6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:54.929960 kubelet[2790]: I0813 01:47:54.929914 2790 kubelet.go:2405] "Pod admission denied" podUID="c537efdf-aa18-44fc-9a1b-3075f5c76f8e" pod="tigera-operator/tigera-operator-747864d56d-7kml9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:55.039920 kubelet[2790]: I0813 01:47:55.039858 2790 kubelet.go:2405] "Pod admission denied" podUID="c60ae342-acd7-4677-9e31-4a819d3c3446" pod="tigera-operator/tigera-operator-747864d56d-qjnht" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:55.098456 kubelet[2790]: I0813 01:47:55.098310 2790 kubelet.go:2405] "Pod admission denied" podUID="708a3ece-33bb-4e0e-9542-1bb632d83fe9" pod="tigera-operator/tigera-operator-747864d56d-6tntd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:55.234592 kubelet[2790]: I0813 01:47:55.234505 2790 kubelet.go:2405] "Pod admission denied" podUID="230b2805-84da-45eb-8aed-1e1de2f50a56" pod="tigera-operator/tigera-operator-747864d56d-klclh" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:55.281551 kubelet[2790]: I0813 01:47:55.281484 2790 kubelet.go:2405] "Pod admission denied" podUID="8fd1d9f8-c27b-4c27-bd9e-18496c139725" pod="tigera-operator/tigera-operator-747864d56d-xwg6h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:55.388637 kubelet[2790]: I0813 01:47:55.387336 2790 kubelet.go:2405] "Pod admission denied" podUID="dc4bc599-3b95-4d67-a825-01c8acdcbc85" pod="tigera-operator/tigera-operator-747864d56d-5n66r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:55.592261 kubelet[2790]: I0813 01:47:55.592172 2790 kubelet.go:2405] "Pod admission denied" podUID="f87421ad-de3d-45b7-9b96-0447f40bf3a5" pod="tigera-operator/tigera-operator-747864d56d-lsx7b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:55.861507 kubelet[2790]: I0813 01:47:55.861435 2790 kubelet.go:2405] "Pod admission denied" podUID="f0eafe03-01a1-4f89-b4b7-6782042d8cf1" pod="tigera-operator/tigera-operator-747864d56d-n2wsw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:55.986433 kubelet[2790]: I0813 01:47:55.986372 2790 kubelet.go:2405] "Pod admission denied" podUID="2e720e94-5cdd-47a1-ab90-f6d8069919f8" pod="tigera-operator/tigera-operator-747864d56d-cp8jh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:56.082839 kubelet[2790]: I0813 01:47:56.082786 2790 kubelet.go:2405] "Pod admission denied" podUID="f0562bac-10e0-4657-814b-807d2bc2b5f4" pod="tigera-operator/tigera-operator-747864d56d-6zc4r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:56.209239 kubelet[2790]: I0813 01:47:56.209084 2790 kubelet.go:2405] "Pod admission denied" podUID="c67401e0-6c70-485a-be94-9cffb6a6f71e" pod="tigera-operator/tigera-operator-747864d56d-wcgnk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:56.363627 kubelet[2790]: I0813 01:47:56.363581 2790 kubelet.go:2405] "Pod admission denied" podUID="c55558f5-fa6c-4841-865f-c16df95cd2de" pod="tigera-operator/tigera-operator-747864d56d-96pwh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:56.508791 kubelet[2790]: I0813 01:47:56.508719 2790 kubelet.go:2405] "Pod admission denied" podUID="c0a6d30f-0659-40a0-aae6-735ff983bc0d" pod="tigera-operator/tigera-operator-747864d56d-hrpdl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:56.587638 kubelet[2790]: I0813 01:47:56.587585 2790 kubelet.go:2405] "Pod admission denied" podUID="186137dd-acec-4a71-b0cb-4db09a4e627a" pod="tigera-operator/tigera-operator-747864d56d-ltzvd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:56.722522 kubelet[2790]: I0813 01:47:56.722306 2790 kubelet.go:2405] "Pod admission denied" podUID="d45eccc2-3f96-452c-b736-979ddf41bcd8" pod="tigera-operator/tigera-operator-747864d56d-kqhv2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:56.840238 kubelet[2790]: I0813 01:47:56.839304 2790 kubelet.go:2405] "Pod admission denied" podUID="5ad4cb92-f4ae-4987-a20f-5cdac6581e42" pod="tigera-operator/tigera-operator-747864d56d-wbzph" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:56.992602 kubelet[2790]: I0813 01:47:56.992544 2790 kubelet.go:2405] "Pod admission denied" podUID="15d6f872-55c1-49d2-9767-c91aed279ce4" pod="tigera-operator/tigera-operator-747864d56d-n6fqs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:57.084139 kubelet[2790]: I0813 01:47:57.084081 2790 kubelet.go:2405] "Pod admission denied" podUID="53b65b86-2fd0-4830-b942-56a153f46e42" pod="tigera-operator/tigera-operator-747864d56d-wztpq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:57.192279 kubelet[2790]: I0813 01:47:57.191265 2790 kubelet.go:2405] "Pod admission denied" podUID="f867d18e-f70b-4bfa-9ae2-11b402d96086" pod="tigera-operator/tigera-operator-747864d56d-9vwfj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:57.375161 kubelet[2790]: I0813 01:47:57.374720 2790 kubelet.go:2405] "Pod admission denied" podUID="3d90867b-9048-4ca1-9c78-e9044a32fed8" pod="tigera-operator/tigera-operator-747864d56d-qgmpp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:57.494334 kubelet[2790]: I0813 01:47:57.494222 2790 kubelet.go:2405] "Pod admission denied" podUID="08052f90-64cf-4918-a0af-014b8eb275b6" pod="tigera-operator/tigera-operator-747864d56d-qqzsn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:57.661319 kubelet[2790]: I0813 01:47:57.657873 2790 kubelet.go:2405] "Pod admission denied" podUID="112f9910-1bb4-43ac-a97b-520ee8000b2f" pod="tigera-operator/tigera-operator-747864d56d-fgcmd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:57.800908 kubelet[2790]: I0813 01:47:57.800371 2790 kubelet.go:2405] "Pod admission denied" podUID="10e49735-5312-482d-a285-57e4776dc21e" pod="tigera-operator/tigera-operator-747864d56d-6948l" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:57.897236 kubelet[2790]: I0813 01:47:57.895877 2790 kubelet.go:2405] "Pod admission denied" podUID="05b071e3-0074-4844-9753-d1c0652caffe" pod="tigera-operator/tigera-operator-747864d56d-cqsfp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:58.188478 kubelet[2790]: I0813 01:47:58.187700 2790 kubelet.go:2405] "Pod admission denied" podUID="72f3e6f9-8c17-49be-ac1f-fbe861c83c5f" pod="tigera-operator/tigera-operator-747864d56d-vlltf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:58.329344 kubelet[2790]: I0813 01:47:58.329256 2790 kubelet.go:2405] "Pod admission denied" podUID="97c5dea0-20a1-42da-b7ae-bc0d966faae3" pod="tigera-operator/tigera-operator-747864d56d-qc9xs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:58.494634 kubelet[2790]: I0813 01:47:58.494553 2790 kubelet.go:2405] "Pod admission denied" podUID="2d4fb52b-9f90-4839-a77c-9e2528198ed7" pod="tigera-operator/tigera-operator-747864d56d-v5mxx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:58.653787 kubelet[2790]: I0813 01:47:58.653511 2790 kubelet.go:2405] "Pod admission denied" podUID="f36aada7-0a53-411c-879d-66b35c94e1b1" pod="tigera-operator/tigera-operator-747864d56d-58v7q" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:58.802623 kubelet[2790]: I0813 01:47:58.802416 2790 kubelet.go:2405] "Pod admission denied" podUID="a12b625c-5372-4f42-bcd3-7f3da6b5c851" pod="tigera-operator/tigera-operator-747864d56d-cqfm8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:58.966097 kubelet[2790]: I0813 01:47:58.956985 2790 kubelet.go:2405] "Pod admission denied" podUID="9ea03012-4d86-4ede-9380-48973655e2c8" pod="tigera-operator/tigera-operator-747864d56d-d77d4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:59.115310 kubelet[2790]: I0813 01:47:59.113310 2790 kubelet.go:2405] "Pod admission denied" podUID="9fdae3a8-ee9b-4324-b421-f0eab872ffbe" pod="tigera-operator/tigera-operator-747864d56d-f8p85" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:59.249891 kubelet[2790]: I0813 01:47:59.246702 2790 kubelet.go:2405] "Pod admission denied" podUID="8c4dec01-045a-4596-8067-c8c737ddff56" pod="tigera-operator/tigera-operator-747864d56d-7npvc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:59.419778 kubelet[2790]: I0813 01:47:59.419571 2790 kubelet.go:2405] "Pod admission denied" podUID="f75969dc-51ea-4f69-88a6-2be2d036ed90" pod="tigera-operator/tigera-operator-747864d56d-plv86" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:59.552923 kubelet[2790]: I0813 01:47:59.552843 2790 kubelet.go:2405] "Pod admission denied" podUID="684f9e23-95d5-4261-96f6-4de9df23e0d2" pod="tigera-operator/tigera-operator-747864d56d-g7bbc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:59.734964 kubelet[2790]: I0813 01:47:59.734897 2790 kubelet.go:2405] "Pod admission denied" podUID="36a07f3d-fe03-4a53-90f9-1e79f633a69f" pod="tigera-operator/tigera-operator-747864d56d-4zm6t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:59.872317 kubelet[2790]: I0813 01:47:59.872076 2790 kubelet.go:2405] "Pod admission denied" podUID="e8f58faa-3122-4dfd-8ad3-2747a420e06b" pod="tigera-operator/tigera-operator-747864d56d-j2682" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:00.095803 kubelet[2790]: I0813 01:48:00.095550 2790 kubelet.go:2405] "Pod admission denied" podUID="e00c3470-88ba-4778-8fe8-91dc42b32093" pod="tigera-operator/tigera-operator-747864d56d-x7nbt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:00.274407 kubelet[2790]: I0813 01:48:00.274316 2790 kubelet.go:2405] "Pod admission denied" podUID="de48b1c7-ba23-4b77-ace8-d1bf7695e9b2" pod="tigera-operator/tigera-operator-747864d56d-mx22p" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:00.427782 kubelet[2790]: I0813 01:48:00.427160 2790 kubelet.go:2405] "Pod admission denied" podUID="cb694e26-6285-40d4-b66c-25211139b125" pod="tigera-operator/tigera-operator-747864d56d-nd864" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:00.613873 kubelet[2790]: I0813 01:48:00.613721 2790 kubelet.go:2405] "Pod admission denied" podUID="83efb3dd-3911-41c2-a7dd-da1d8ebb33ff" pod="tigera-operator/tigera-operator-747864d56d-nj5tx" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:48:00.693084 kubelet[2790]: I0813 01:48:00.692184 2790 kubelet.go:2405] "Pod admission denied" podUID="b603174c-ae71-486d-88f3-447c74ef56c3" pod="tigera-operator/tigera-operator-747864d56d-k7hn7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:00.696898 containerd[1556]: time="2025-08-13T01:48:00.696112906Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76ff444f8d-4xcg9,Uid:f88563f6-5704-426b-aecc-303b3869ce30,Namespace:calico-system,Attempt:0,}" Aug 13 01:48:00.696898 containerd[1556]: time="2025-08-13T01:48:00.696710866Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7jrc,Uid:4296a7ed-e75a-4d74-935a-9017b9a86286,Namespace:calico-system,Attempt:0,}" Aug 13 01:48:00.703509 containerd[1556]: time="2025-08-13T01:48:00.703299589Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 01:48:00.890423 kubelet[2790]: I0813 01:48:00.887766 2790 kubelet.go:2405] "Pod admission denied" podUID="4e67a610-3e64-4b76-b504-5d282e4649b1" pod="tigera-operator/tigera-operator-747864d56d-zsjvn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:00.897702 containerd[1556]: time="2025-08-13T01:48:00.897408250Z" level=error msg="Failed to destroy network for sandbox \"60aa586c481376e6fb85294fb67a83a528b5efd7a3dbcfdafc082d434402eaf5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:00.904299 systemd[1]: run-netns-cni\x2d8383cb32\x2df4d4\x2d3be3\x2dd031\x2dbf0609be55cf.mount: Deactivated successfully. Aug 13 01:48:00.911038 containerd[1556]: time="2025-08-13T01:48:00.910512127Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76ff444f8d-4xcg9,Uid:f88563f6-5704-426b-aecc-303b3869ce30,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"60aa586c481376e6fb85294fb67a83a528b5efd7a3dbcfdafc082d434402eaf5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:00.912372 kubelet[2790]: E0813 01:48:00.912304 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60aa586c481376e6fb85294fb67a83a528b5efd7a3dbcfdafc082d434402eaf5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:00.914714 kubelet[2790]: E0813 01:48:00.914684 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60aa586c481376e6fb85294fb67a83a528b5efd7a3dbcfdafc082d434402eaf5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:48:00.915492 kubelet[2790]: E0813 01:48:00.914959 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"60aa586c481376e6fb85294fb67a83a528b5efd7a3dbcfdafc082d434402eaf5\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:48:00.915711 kubelet[2790]: E0813 01:48:00.915677 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76ff444f8d-4xcg9_calico-system(f88563f6-5704-426b-aecc-303b3869ce30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76ff444f8d-4xcg9_calico-system(f88563f6-5704-426b-aecc-303b3869ce30)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"60aa586c481376e6fb85294fb67a83a528b5efd7a3dbcfdafc082d434402eaf5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" podUID="f88563f6-5704-426b-aecc-303b3869ce30" Aug 13 01:48:00.916776 containerd[1556]: time="2025-08-13T01:48:00.916024351Z" level=error msg="Failed to destroy network for sandbox \"95f9655634b7b0fbda8ea4e38faf1e15d244a7cddc51501de69bbc829d50c5ad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:00.923126 systemd[1]: run-netns-cni\x2de854f32d\x2d3c29\x2df973\x2d64d0\x2db10d0cafab43.mount: Deactivated successfully. Aug 13 01:48:00.926986 containerd[1556]: time="2025-08-13T01:48:00.925624272Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7jrc,Uid:4296a7ed-e75a-4d74-935a-9017b9a86286,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"95f9655634b7b0fbda8ea4e38faf1e15d244a7cddc51501de69bbc829d50c5ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:00.935436 kubelet[2790]: E0813 01:48:00.935383 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95f9655634b7b0fbda8ea4e38faf1e15d244a7cddc51501de69bbc829d50c5ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:00.937431 kubelet[2790]: E0813 01:48:00.935974 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95f9655634b7b0fbda8ea4e38faf1e15d244a7cddc51501de69bbc829d50c5ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:48:00.937431 kubelet[2790]: E0813 01:48:00.936005 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"95f9655634b7b0fbda8ea4e38faf1e15d244a7cddc51501de69bbc829d50c5ad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:48:00.937431 kubelet[2790]: E0813 01:48:00.936062 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-c7jrc_calico-system(4296a7ed-e75a-4d74-935a-9017b9a86286)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-c7jrc_calico-system(4296a7ed-e75a-4d74-935a-9017b9a86286)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"95f9655634b7b0fbda8ea4e38faf1e15d244a7cddc51501de69bbc829d50c5ad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-c7jrc" podUID="4296a7ed-e75a-4d74-935a-9017b9a86286" Aug 13 01:48:00.981364 kubelet[2790]: I0813 01:48:00.981280 2790 kubelet.go:2405] "Pod admission denied" podUID="09febbe3-87cc-4e90-9f6d-f6004b3a09c7" pod="tigera-operator/tigera-operator-747864d56d-nncs9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:01.042771 kubelet[2790]: I0813 01:48:01.042582 2790 kubelet.go:2405] "Pod admission denied" podUID="e9ff76b4-88a9-481f-be7e-5984ce0a945e" pod="tigera-operator/tigera-operator-747864d56d-6269v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:01.133125 kubelet[2790]: I0813 01:48:01.133058 2790 kubelet.go:2405] "Pod admission denied" podUID="8092f444-be9c-478f-827c-53acb4909b9e" pod="tigera-operator/tigera-operator-747864d56d-wx8g9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:01.231939 kubelet[2790]: I0813 01:48:01.231776 2790 kubelet.go:2405] "Pod admission denied" podUID="c3fbf73f-3a09-459e-8a74-edced06e7388" pod="tigera-operator/tigera-operator-747864d56d-6rdlg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:01.347114 kubelet[2790]: I0813 01:48:01.346483 2790 kubelet.go:2405] "Pod admission denied" podUID="d81ed065-fc6e-4856-b9df-a0a0d8a63526" pod="tigera-operator/tigera-operator-747864d56d-xmcr6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:01.530892 kubelet[2790]: I0813 01:48:01.530603 2790 kubelet.go:2405] "Pod admission denied" podUID="8d7a84ac-93da-4c01-b7b7-368c085c6dfc" pod="tigera-operator/tigera-operator-747864d56d-kxd6j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:01.640117 kubelet[2790]: I0813 01:48:01.639957 2790 kubelet.go:2405] "Pod admission denied" podUID="996deebe-6d37-4fc3-91ac-82822db8b7db" pod="tigera-operator/tigera-operator-747864d56d-mdblh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:01.785780 kubelet[2790]: I0813 01:48:01.785518 2790 kubelet.go:2405] "Pod admission denied" podUID="bccf0c0d-ce6f-41fd-b08d-6fd87ca7b75e" pod="tigera-operator/tigera-operator-747864d56d-b4n6b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:01.850459 kubelet[2790]: I0813 01:48:01.850411 2790 kubelet.go:2405] "Pod admission denied" podUID="42ac2c3e-e7d0-4552-aee4-25a08d400f96" pod="tigera-operator/tigera-operator-747864d56d-ftxq7" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:48:02.005814 kubelet[2790]: I0813 01:48:02.005178 2790 kubelet.go:2405] "Pod admission denied" podUID="86c18ac9-b061-462f-bfc4-4f9700f1954b" pod="tigera-operator/tigera-operator-747864d56d-wt2p7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:02.134382 kubelet[2790]: I0813 01:48:02.134143 2790 kubelet.go:2405] "Pod admission denied" podUID="91112445-2913-4a11-8f6c-daa60ece9f58" pod="tigera-operator/tigera-operator-747864d56d-qjv6g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:02.186909 kubelet[2790]: I0813 01:48:02.186845 2790 kubelet.go:2405] "Pod admission denied" podUID="8a50ad01-41c1-406e-be0a-dde9d759b12e" pod="tigera-operator/tigera-operator-747864d56d-q8wrd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:02.293090 kubelet[2790]: I0813 01:48:02.293022 2790 kubelet.go:2405] "Pod admission denied" podUID="2902ab9a-a1d2-4a6f-b7d4-0b1814815110" pod="tigera-operator/tigera-operator-747864d56d-hsrxm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:02.489707 kubelet[2790]: I0813 01:48:02.489626 2790 kubelet.go:2405] "Pod admission denied" podUID="21766c3e-cf05-4822-b4da-b7d1710063cc" pod="tigera-operator/tigera-operator-747864d56d-n7pnd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:02.610957 kubelet[2790]: I0813 01:48:02.609629 2790 kubelet.go:2405] "Pod admission denied" podUID="2df21fea-c2e8-427e-a723-41be14d6b27c" pod="tigera-operator/tigera-operator-747864d56d-xhgtk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:02.742954 kubelet[2790]: I0813 01:48:02.742682 2790 kubelet.go:2405] "Pod admission denied" podUID="2977cdbb-3489-455e-836c-d6fa6169841a" pod="tigera-operator/tigera-operator-747864d56d-6fb9x" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:02.808042 kubelet[2790]: I0813 01:48:02.807981 2790 kubelet.go:2405] "Pod admission denied" podUID="cedc6598-44b0-4265-9c55-347b2ef52132" pod="tigera-operator/tigera-operator-747864d56d-8m7rj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:02.941505 kubelet[2790]: I0813 01:48:02.941455 2790 kubelet.go:2405] "Pod admission denied" podUID="db7527bc-c8fb-478b-a44b-f64a672db226" pod="tigera-operator/tigera-operator-747864d56d-v2hpw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:03.042227 kubelet[2790]: I0813 01:48:03.042076 2790 kubelet.go:2405] "Pod admission denied" podUID="b8ebdfe2-dee4-49b5-ab0c-11edf7c13532" pod="tigera-operator/tigera-operator-747864d56d-rrnv4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:03.164706 kubelet[2790]: I0813 01:48:03.164659 2790 kubelet.go:2405] "Pod admission denied" podUID="cdf28b5d-1104-483d-8fab-ae5872b0f084" pod="tigera-operator/tigera-operator-747864d56d-zt44w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:03.294540 kubelet[2790]: I0813 01:48:03.294230 2790 kubelet.go:2405] "Pod admission denied" podUID="bee774f3-159c-4a1e-aa34-26b2b7c782be" pod="tigera-operator/tigera-operator-747864d56d-82s7v" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:48:03.387146 kubelet[2790]: I0813 01:48:03.387104 2790 kubelet.go:2405] "Pod admission denied" podUID="64d2946b-9848-41df-9f11-1abecf8d2ad5" pod="tigera-operator/tigera-operator-747864d56d-c24td" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:03.480997 kubelet[2790]: I0813 01:48:03.480934 2790 kubelet.go:2405] "Pod admission denied" podUID="235bc418-dc32-41d4-b177-8f23c6b87c15" pod="tigera-operator/tigera-operator-747864d56d-fplbr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:03.587801 kubelet[2790]: I0813 01:48:03.587422 2790 kubelet.go:2405] "Pod admission denied" podUID="54346273-8328-4fb5-8b69-a08de123aa28" pod="tigera-operator/tigera-operator-747864d56d-29xg6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:03.594923 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2096206682.mount: Deactivated successfully. Aug 13 01:48:03.598538 containerd[1556]: time="2025-08-13T01:48:03.597668562Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2096206682: write /var/lib/containerd/tmpmounts/containerd-mount2096206682/usr/bin/calico-node: no space left on device" Aug 13 01:48:03.598538 containerd[1556]: time="2025-08-13T01:48:03.598494101Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 01:48:03.599664 kubelet[2790]: E0813 01:48:03.599624 2790 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2096206682: write /var/lib/containerd/tmpmounts/containerd-mount2096206682/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 01:48:03.599846 kubelet[2790]: E0813 01:48:03.599669 2790 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2096206682: write /var/lib/containerd/tmpmounts/containerd-mount2096206682/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 01:48:03.600294 kubelet[2790]: E0813 01:48:03.600016 2790 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSGOLDMANESERVER,Value:goldmane.calico-system.svc:7443,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSFLUSHINTERVAL,Value:15,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:first-found,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.l
ock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-hzvb2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},StopSignal:nil,},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-tsmrf_calico-system(517ffc51-1a34-4ced-acf5-d8e5da6a1838): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2096206682: write /var/lib/containerd/tmpmounts/containerd-mount2096206682/usr/bin/calico-node: no space left on device" logger="UnhandledError" Aug 13 01:48:03.601920 kubelet[2790]: E0813 01:48:03.601890 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2096206682: write /var/lib/containerd/tmpmounts/containerd-mount2096206682/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-tsmrf" podUID="517ffc51-1a34-4ced-acf5-d8e5da6a1838" Aug 13 01:48:03.781562 kubelet[2790]: I0813 01:48:03.781492 2790 kubelet.go:2405] "Pod admission denied" podUID="41ccc0c6-f495-4a9d-b6bb-11abf4f3a74c" pod="tigera-operator/tigera-operator-747864d56d-x2r9z" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:03.876452 kubelet[2790]: I0813 01:48:03.876304 2790 kubelet.go:2405] "Pod admission denied" podUID="1d2711de-3284-4935-bd82-775b07ac5792" pod="tigera-operator/tigera-operator-747864d56d-pjrt9" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:48:03.939807 kubelet[2790]: I0813 01:48:03.939766 2790 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:48:03.939807 kubelet[2790]: I0813 01:48:03.939812 2790 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:48:03.945055 kubelet[2790]: I0813 01:48:03.944912 2790 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:48:03.947784 kubelet[2790]: I0813 01:48:03.947508 2790 image_gc_manager.go:514] "Removing image to free bytes" imageID="sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1" size=58938593 runtimeHandler="" Aug 13 01:48:03.948402 containerd[1556]: time="2025-08-13T01:48:03.948363336Z" level=info msg="RemoveImage \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Aug 13 01:48:03.949846 containerd[1556]: time="2025-08-13T01:48:03.949761855Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd:3.5.21-0\"" Aug 13 01:48:03.950300 containerd[1556]: time="2025-08-13T01:48:03.950252595Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\"" Aug 13 01:48:03.950729 containerd[1556]: time="2025-08-13T01:48:03.950700614Z" level=info msg="RemoveImage \"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\" returns successfully" Aug 13 01:48:03.950888 containerd[1556]: time="2025-08-13T01:48:03.950841484Z" level=info msg="ImageDelete event name:\"sha256:499038711c0816eda03a1ad96a8eb0440c005baa6949698223c6176b7f5077e1\"" Aug 13 01:48:03.951021 kubelet[2790]: I0813 01:48:03.950991 2790 image_gc_manager.go:514] "Removing image to free bytes" imageID="sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b" size=20939036 runtimeHandler="" Aug 13 01:48:03.951251 containerd[1556]: time="2025-08-13T01:48:03.951141294Z" level=info msg="RemoveImage \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Aug 13 01:48:03.952402 containerd[1556]: time="2025-08-13T01:48:03.952243053Z" level=info msg="ImageDelete event name:\"registry.k8s.io/coredns/coredns:v1.12.0\"" Aug 13 01:48:03.953342 containerd[1556]: time="2025-08-13T01:48:03.953245772Z" level=info msg="ImageDelete event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\"" Aug 13 01:48:03.954018 containerd[1556]: time="2025-08-13T01:48:03.953818031Z" level=info msg="RemoveImage \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" returns successfully" Aug 13 01:48:03.954282 containerd[1556]: time="2025-08-13T01:48:03.954229251Z" level=info msg="ImageDelete event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Aug 13 01:48:03.956721 kubelet[2790]: I0813 01:48:03.956666 2790 kubelet.go:2405] "Pod admission denied" podUID="a3f5d8ed-d29f-4133-82ef-2651513b4079" pod="tigera-operator/tigera-operator-747864d56d-2pglm" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:48:03.978313 kubelet[2790]: I0813 01:48:03.978271 2790 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:48:03.978502 kubelet[2790]: I0813 01:48:03.978368 2790 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-674b8bbfcf-vtdcd","kube-system/coredns-674b8bbfcf-6rlkc","calico-system/calico-kube-controllers-76ff444f8d-4xcg9","calico-system/calico-node-tsmrf","calico-system/csi-node-driver-c7jrc","calico-system/calico-typha-64bcb76cdd-m4xlg","kube-system/kube-controller-manager-172-232-7-67","kube-system/kube-proxy-mjdwx","kube-system/kube-apiserver-172-232-7-67","kube-system/kube-scheduler-172-232-7-67"] Aug 13 01:48:03.978502 kubelet[2790]: E0813 01:48:03.978400 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:48:03.978502 kubelet[2790]: E0813 01:48:03.978412 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:48:03.978502 kubelet[2790]: E0813 01:48:03.978419 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:48:03.978502 kubelet[2790]: E0813 01:48:03.978426 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-tsmrf" Aug 13 01:48:03.978502 kubelet[2790]: E0813 01:48:03.978432 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:48:03.978502 kubelet[2790]: E0813 01:48:03.978442 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-64bcb76cdd-m4xlg" Aug 13 01:48:03.978502 kubelet[2790]: E0813 01:48:03.978452 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-67" Aug 13 01:48:03.978502 kubelet[2790]: E0813 01:48:03.978459 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mjdwx" Aug 13 01:48:03.978502 kubelet[2790]: E0813 01:48:03.978468 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-67" Aug 13 01:48:03.978502 kubelet[2790]: E0813 01:48:03.978476 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-67" Aug 13 01:48:03.978502 kubelet[2790]: I0813 01:48:03.978488 2790 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:48:04.085663 kubelet[2790]: I0813 01:48:04.085589 2790 kubelet.go:2405] "Pod admission denied" podUID="d470cc32-c5f5-4382-b2f0-614847ba41ee" pod="tigera-operator/tigera-operator-747864d56d-8hh54" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:04.205860 kubelet[2790]: I0813 01:48:04.205594 2790 kubelet.go:2405] "Pod admission denied" podUID="c5f0ec1f-c61f-4cba-9651-3e22a3c714e1" pod="tigera-operator/tigera-operator-747864d56d-xzvws" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:04.279217 kubelet[2790]: I0813 01:48:04.279128 2790 kubelet.go:2405] "Pod admission denied" podUID="de2e7a41-e115-4a83-8a3f-617b35a2ed8c" pod="tigera-operator/tigera-operator-747864d56d-lrcv9" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:48:04.389771 kubelet[2790]: I0813 01:48:04.389581 2790 kubelet.go:2405] "Pod admission denied" podUID="0adf9ca1-d99d-4125-bab7-07d59a304801" pod="tigera-operator/tigera-operator-747864d56d-qqpvf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:04.479721 kubelet[2790]: I0813 01:48:04.479658 2790 kubelet.go:2405] "Pod admission denied" podUID="81c2da80-4d09-4687-9157-053c2751d126" pod="tigera-operator/tigera-operator-747864d56d-m45k7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:04.576933 kubelet[2790]: I0813 01:48:04.576870 2790 kubelet.go:2405] "Pod admission denied" podUID="005b9c7b-3656-4647-be2b-582611511dfa" pod="tigera-operator/tigera-operator-747864d56d-q26cr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:04.678831 kubelet[2790]: I0813 01:48:04.678696 2790 kubelet.go:2405] "Pod admission denied" podUID="fe75bf69-6b70-4958-947b-d4816b522664" pod="tigera-operator/tigera-operator-747864d56d-ssr7s" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:04.696515 kubelet[2790]: E0813 01:48:04.696113 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:48:04.697168 containerd[1556]: time="2025-08-13T01:48:04.697137318Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6rlkc,Uid:21a6ba02-58d5-43c1-a7de-9e24560a65f6,Namespace:kube-system,Attempt:0,}" Aug 13 01:48:04.787873 containerd[1556]: time="2025-08-13T01:48:04.787643430Z" level=error msg="Failed to destroy network for sandbox \"16c89e3ff6399e87d4c110a5717ee91baa81fb67132d1aaa9869154c6bf435bf\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:04.790822 containerd[1556]: time="2025-08-13T01:48:04.790089237Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6rlkc,Uid:21a6ba02-58d5-43c1-a7de-9e24560a65f6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"16c89e3ff6399e87d4c110a5717ee91baa81fb67132d1aaa9869154c6bf435bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:04.792439 kubelet[2790]: I0813 01:48:04.792303 2790 kubelet.go:2405] "Pod admission denied" podUID="46a11035-a1ed-455b-98f3-fc7bdc1a5dde" pod="tigera-operator/tigera-operator-747864d56d-492cr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:04.792679 systemd[1]: run-netns-cni\x2ddd8dd4b6\x2dc399\x2d99a1\x2d6b7d\x2d480f900a6097.mount: Deactivated successfully. 
Aug 13 01:48:04.793834 kubelet[2790]: E0813 01:48:04.793260 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16c89e3ff6399e87d4c110a5717ee91baa81fb67132d1aaa9869154c6bf435bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:04.793834 kubelet[2790]: E0813 01:48:04.793314 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16c89e3ff6399e87d4c110a5717ee91baa81fb67132d1aaa9869154c6bf435bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:48:04.793834 kubelet[2790]: E0813 01:48:04.793342 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"16c89e3ff6399e87d4c110a5717ee91baa81fb67132d1aaa9869154c6bf435bf\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:48:04.793834 kubelet[2790]: E0813 01:48:04.793386 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-6rlkc_kube-system(21a6ba02-58d5-43c1-a7de-9e24560a65f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-6rlkc_kube-system(21a6ba02-58d5-43c1-a7de-9e24560a65f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"16c89e3ff6399e87d4c110a5717ee91baa81fb67132d1aaa9869154c6bf435bf\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6rlkc" podUID="21a6ba02-58d5-43c1-a7de-9e24560a65f6" Aug 13 01:48:04.993514 kubelet[2790]: I0813 01:48:04.993432 2790 kubelet.go:2405] "Pod admission denied" podUID="287d4cb0-897f-4ba5-a01f-8e02d92c4f26" pod="tigera-operator/tigera-operator-747864d56d-v4776" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:05.079516 kubelet[2790]: I0813 01:48:05.079331 2790 kubelet.go:2405] "Pod admission denied" podUID="a3ee4a0b-fa46-43e2-9b13-ebb38534fd73" pod="tigera-operator/tigera-operator-747864d56d-6kbzc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:05.178692 kubelet[2790]: I0813 01:48:05.178633 2790 kubelet.go:2405] "Pod admission denied" podUID="e899330a-419f-40c5-aca7-3e528296d394" pod="tigera-operator/tigera-operator-747864d56d-n4qg8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:05.278102 kubelet[2790]: I0813 01:48:05.278034 2790 kubelet.go:2405] "Pod admission denied" podUID="26e69723-67d3-46ff-b198-0459f2313495" pod="tigera-operator/tigera-operator-747864d56d-flg8p" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:48:05.380936 kubelet[2790]: I0813 01:48:05.380439 2790 kubelet.go:2405] "Pod admission denied" podUID="121fb838-cfaf-463b-9296-b5ff2f77e278" pod="tigera-operator/tigera-operator-747864d56d-hjp99" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:05.489771 kubelet[2790]: I0813 01:48:05.488854 2790 kubelet.go:2405] "Pod admission denied" podUID="5ff811e6-5b79-4da4-b6b6-1ee1a774a41f" pod="tigera-operator/tigera-operator-747864d56d-pprxd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:05.581613 kubelet[2790]: I0813 01:48:05.581555 2790 kubelet.go:2405] "Pod admission denied" podUID="a88cbea5-cd0a-4546-8806-f45dd57590c7" pod="tigera-operator/tigera-operator-747864d56d-sft59" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:05.694894 kubelet[2790]: E0813 01:48:05.694441 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:48:05.695365 containerd[1556]: time="2025-08-13T01:48:05.695021055Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vtdcd,Uid:caa5836a-45f9-496b-86c1-95f6e1b6da17,Namespace:kube-system,Attempt:0,}" Aug 13 01:48:05.756773 containerd[1556]: time="2025-08-13T01:48:05.756701176Z" level=error msg="Failed to destroy network for sandbox \"357c85d8e77c6c1197689ce26f8d81bcce64c555ab1253993084506a0ea14a3c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:05.759237 containerd[1556]: time="2025-08-13T01:48:05.759172514Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vtdcd,Uid:caa5836a-45f9-496b-86c1-95f6e1b6da17,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"357c85d8e77c6c1197689ce26f8d81bcce64c555ab1253993084506a0ea14a3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:05.761186 kubelet[2790]: E0813 01:48:05.759412 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"357c85d8e77c6c1197689ce26f8d81bcce64c555ab1253993084506a0ea14a3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:05.761186 kubelet[2790]: E0813 01:48:05.759474 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"357c85d8e77c6c1197689ce26f8d81bcce64c555ab1253993084506a0ea14a3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:48:05.761186 kubelet[2790]: E0813 01:48:05.759499 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"357c85d8e77c6c1197689ce26f8d81bcce64c555ab1253993084506a0ea14a3c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:48:05.761186 kubelet[2790]: E0813 01:48:05.759548 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-vtdcd_kube-system(caa5836a-45f9-496b-86c1-95f6e1b6da17)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-vtdcd_kube-system(caa5836a-45f9-496b-86c1-95f6e1b6da17)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"357c85d8e77c6c1197689ce26f8d81bcce64c555ab1253993084506a0ea14a3c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-vtdcd" podUID="caa5836a-45f9-496b-86c1-95f6e1b6da17" Aug 13 01:48:05.762557 systemd[1]: run-netns-cni\x2d390ad7cb\x2dd27e\x2d1123\x2d394f\x2d2c3371bdee60.mount: Deactivated successfully. Aug 13 01:48:05.785983 kubelet[2790]: I0813 01:48:05.785929 2790 kubelet.go:2405] "Pod admission denied" podUID="c089f92f-ede5-495c-bb3d-1e20675de26d" pod="tigera-operator/tigera-operator-747864d56d-j4pd7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:05.879620 kubelet[2790]: I0813 01:48:05.879566 2790 kubelet.go:2405] "Pod admission denied" podUID="3a2703d9-216c-4e83-8dca-2f4f15d1fbaa" pod="tigera-operator/tigera-operator-747864d56d-bdh5n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:05.928024 kubelet[2790]: I0813 01:48:05.927951 2790 kubelet.go:2405] "Pod admission denied" podUID="b94d3a1c-0b06-491b-bef1-e9dae3e104c5" pod="tigera-operator/tigera-operator-747864d56d-mkg5j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:06.037086 kubelet[2790]: I0813 01:48:06.036900 2790 kubelet.go:2405] "Pod admission denied" podUID="157908bc-3f86-425f-a1f7-6af9eb44582a" pod="tigera-operator/tigera-operator-747864d56d-z84ps" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:06.126704 kubelet[2790]: I0813 01:48:06.126644 2790 kubelet.go:2405] "Pod admission denied" podUID="baf13114-947e-4aa0-b53a-1f133c2d7925" pod="tigera-operator/tigera-operator-747864d56d-rs5g7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:06.227606 kubelet[2790]: I0813 01:48:06.227551 2790 kubelet.go:2405] "Pod admission denied" podUID="1a9f9e63-9714-4a88-9998-1c0346162472" pod="tigera-operator/tigera-operator-747864d56d-v96gb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:06.327779 kubelet[2790]: I0813 01:48:06.326820 2790 kubelet.go:2405] "Pod admission denied" podUID="628c0b25-1050-47d2-88e2-d43aaff6f383" pod="tigera-operator/tigera-operator-747864d56d-zvkst" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:06.427645 kubelet[2790]: I0813 01:48:06.427587 2790 kubelet.go:2405] "Pod admission denied" podUID="0f74dbd7-fec9-47a8-ab9c-4a588b5da244" pod="tigera-operator/tigera-operator-747864d56d-wps4v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:06.638436 kubelet[2790]: I0813 01:48:06.637290 2790 kubelet.go:2405] "Pod admission denied" podUID="6cb656f4-160c-4176-94e5-26566a864bd5" pod="tigera-operator/tigera-operator-747864d56d-wvdvl" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:48:06.727535 kubelet[2790]: I0813 01:48:06.727466 2790 kubelet.go:2405] "Pod admission denied" podUID="2a7815c5-0f9d-44ef-bc32-ceaddb6cc4cb" pod="tigera-operator/tigera-operator-747864d56d-gkzk6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:06.830299 kubelet[2790]: I0813 01:48:06.830236 2790 kubelet.go:2405] "Pod admission denied" podUID="81dae685-7453-4b44-8437-8c334111f23c" pod="tigera-operator/tigera-operator-747864d56d-62hkm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:06.931628 kubelet[2790]: I0813 01:48:06.931465 2790 kubelet.go:2405] "Pod admission denied" podUID="f3dd6a1f-6e54-46c0-9082-c44733cac4d0" pod="tigera-operator/tigera-operator-747864d56d-9vws2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:07.028550 kubelet[2790]: I0813 01:48:07.028489 2790 kubelet.go:2405] "Pod admission denied" podUID="e560f4db-c127-4325-a8e6-d0abf9d192b6" pod="tigera-operator/tigera-operator-747864d56d-rzq5w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:07.241445 kubelet[2790]: I0813 01:48:07.241392 2790 kubelet.go:2405] "Pod admission denied" podUID="a73f1eb8-0888-4f0f-b5eb-52afc591bfe6" pod="tigera-operator/tigera-operator-747864d56d-tzdsp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:07.328275 kubelet[2790]: I0813 01:48:07.328217 2790 kubelet.go:2405] "Pod admission denied" podUID="5485b5c7-c2a5-448b-bbcd-2a9e188c7834" pod="tigera-operator/tigera-operator-747864d56d-gj7l5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:07.430000 kubelet[2790]: I0813 01:48:07.429941 2790 kubelet.go:2405] "Pod admission denied" podUID="b4d6b943-0b93-4252-8f88-aed38b6b2316" pod="tigera-operator/tigera-operator-747864d56d-ptmhk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:07.631800 kubelet[2790]: I0813 01:48:07.630834 2790 kubelet.go:2405] "Pod admission denied" podUID="c751536b-e8e3-4914-a13f-a3891b64246d" pod="tigera-operator/tigera-operator-747864d56d-5f92s" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:07.730238 kubelet[2790]: I0813 01:48:07.730154 2790 kubelet.go:2405] "Pod admission denied" podUID="f7e35fe9-69c3-4e2f-828e-93b1518a5989" pod="tigera-operator/tigera-operator-747864d56d-gqs25" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:07.832373 kubelet[2790]: I0813 01:48:07.832300 2790 kubelet.go:2405] "Pod admission denied" podUID="7d14bfdd-4583-4751-9ccf-4ee54adfdd4e" pod="tigera-operator/tigera-operator-747864d56d-k9tq5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:08.028393 kubelet[2790]: I0813 01:48:08.028323 2790 kubelet.go:2405] "Pod admission denied" podUID="9b82e08d-9649-448b-a850-2521e4d2777b" pod="tigera-operator/tigera-operator-747864d56d-44fz6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:08.130423 kubelet[2790]: I0813 01:48:08.130352 2790 kubelet.go:2405] "Pod admission denied" podUID="16fb0210-2b77-4439-9a8e-67318d413233" pod="tigera-operator/tigera-operator-747864d56d-q86xm" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:48:08.180206 kubelet[2790]: I0813 01:48:08.180145 2790 kubelet.go:2405] "Pod admission denied" podUID="94fb58b1-f3c4-4465-823c-8c03f32b64f7" pod="tigera-operator/tigera-operator-747864d56d-29dv4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:08.283658 kubelet[2790]: I0813 01:48:08.283474 2790 kubelet.go:2405] "Pod admission denied" podUID="e31b2f0f-bb78-4e6f-920f-a86f79b7b59a" pod="tigera-operator/tigera-operator-747864d56d-g4lmk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:08.480726 kubelet[2790]: I0813 01:48:08.480622 2790 kubelet.go:2405] "Pod admission denied" podUID="ffff20c8-d699-4ed5-a14d-5bdb32416aea" pod="tigera-operator/tigera-operator-747864d56d-qc49v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:08.583701 kubelet[2790]: I0813 01:48:08.583541 2790 kubelet.go:2405] "Pod admission denied" podUID="aab1e4e3-155f-492a-a152-1544fea8d3da" pod="tigera-operator/tigera-operator-747864d56d-bnk85" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:08.700717 kubelet[2790]: I0813 01:48:08.700664 2790 kubelet.go:2405] "Pod admission denied" podUID="aa2da01a-a4cd-4d0b-8dd0-2e2fe584d867" pod="tigera-operator/tigera-operator-747864d56d-9x7n4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:08.791768 kubelet[2790]: I0813 01:48:08.791606 2790 kubelet.go:2405] "Pod admission denied" podUID="6153b99a-83fb-42c6-8bbc-fa5f7fb4090f" pod="tigera-operator/tigera-operator-747864d56d-tz2jj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:08.879526 kubelet[2790]: I0813 01:48:08.879248 2790 kubelet.go:2405] "Pod admission denied" podUID="1e894bf7-73c6-4b4a-8f8d-1c6276cd63c7" pod="tigera-operator/tigera-operator-747864d56d-49ckq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:09.082222 kubelet[2790]: I0813 01:48:09.082135 2790 kubelet.go:2405] "Pod admission denied" podUID="2e19548d-49d8-4019-ac07-be55bb04fd4e" pod="tigera-operator/tigera-operator-747864d56d-5scbl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:09.181963 kubelet[2790]: I0813 01:48:09.181784 2790 kubelet.go:2405] "Pod admission denied" podUID="eb1ef5db-316d-4c49-b918-ba7965827991" pod="tigera-operator/tigera-operator-747864d56d-5jfmj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:09.282690 kubelet[2790]: I0813 01:48:09.282603 2790 kubelet.go:2405] "Pod admission denied" podUID="ac09a61b-d2f2-41ad-a7a6-f223fc6268e3" pod="tigera-operator/tigera-operator-747864d56d-qxz5b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:09.394771 kubelet[2790]: I0813 01:48:09.393475 2790 kubelet.go:2405] "Pod admission denied" podUID="3301491f-98e6-4823-b67c-fcc4f9084a73" pod="tigera-operator/tigera-operator-747864d56d-l25c8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:09.481571 kubelet[2790]: I0813 01:48:09.481517 2790 kubelet.go:2405] "Pod admission denied" podUID="31326c32-c82b-4234-a32b-3dfdcc025bc0" pod="tigera-operator/tigera-operator-747864d56d-vqx6r" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:48:09.599882 kubelet[2790]: I0813 01:48:09.599826 2790 kubelet.go:2405] "Pod admission denied" podUID="312f2a6c-e61a-4f36-a3fc-c6ad036f2474" pod="tigera-operator/tigera-operator-747864d56d-vhcqh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:09.735913 kubelet[2790]: I0813 01:48:09.735592 2790 kubelet.go:2405] "Pod admission denied" podUID="b6b69ffe-e571-4cc9-9741-9cbeba0df0b2" pod="tigera-operator/tigera-operator-747864d56d-fjk7h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:09.834298 kubelet[2790]: I0813 01:48:09.834218 2790 kubelet.go:2405] "Pod admission denied" podUID="2757f0c8-8ab6-4f16-b68b-780225617034" pod="tigera-operator/tigera-operator-747864d56d-xv9rg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:09.950772 kubelet[2790]: I0813 01:48:09.950188 2790 kubelet.go:2405] "Pod admission denied" podUID="53959cdd-d884-41da-877f-6f1de95a02c1" pod="tigera-operator/tigera-operator-747864d56d-zxmnc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:10.079906 kubelet[2790]: I0813 01:48:10.079764 2790 kubelet.go:2405] "Pod admission denied" podUID="55b5b2f6-7e7a-4937-b890-43fa85579a2b" pod="tigera-operator/tigera-operator-747864d56d-q642q" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:10.248027 kubelet[2790]: I0813 01:48:10.247947 2790 kubelet.go:2405] "Pod admission denied" podUID="7b8bd0d1-e2fe-416d-937f-12f817378eea" pod="tigera-operator/tigera-operator-747864d56d-kfcsl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:10.318978 kubelet[2790]: I0813 01:48:10.318909 2790 kubelet.go:2405] "Pod admission denied" podUID="722202dc-7e28-4997-abe3-5c1dda350edc" pod="tigera-operator/tigera-operator-747864d56d-xjk6h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:10.431119 kubelet[2790]: I0813 01:48:10.430821 2790 kubelet.go:2405] "Pod admission denied" podUID="6685c289-c0b4-4418-b5ec-3c76456c38c0" pod="tigera-operator/tigera-operator-747864d56d-7j8gp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:10.539771 kubelet[2790]: I0813 01:48:10.538719 2790 kubelet.go:2405] "Pod admission denied" podUID="053284e4-51bd-4867-9342-557bbb543c43" pod="tigera-operator/tigera-operator-747864d56d-j7g88" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:10.727181 kubelet[2790]: I0813 01:48:10.727118 2790 kubelet.go:2405] "Pod admission denied" podUID="7b6cb7a6-d65e-4bc3-b7d0-571e79c2bc13" pod="tigera-operator/tigera-operator-747864d56d-vxhmz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:10.830772 kubelet[2790]: I0813 01:48:10.830259 2790 kubelet.go:2405] "Pod admission denied" podUID="8e8d661f-6364-4f29-9aa5-7648b9c994c7" pod="tigera-operator/tigera-operator-747864d56d-p9cnf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:10.932619 kubelet[2790]: I0813 01:48:10.932439 2790 kubelet.go:2405] "Pod admission denied" podUID="9dcc5a9e-7f5f-4d2c-97cf-5d589a73967c" pod="tigera-operator/tigera-operator-747864d56d-gfcvv" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:48:11.134851 kubelet[2790]: I0813 01:48:11.134536 2790 kubelet.go:2405] "Pod admission denied" podUID="8e5294f3-df5d-43c6-9557-d1b4797ca7a7" pod="tigera-operator/tigera-operator-747864d56d-789vz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:11.229551 kubelet[2790]: I0813 01:48:11.229483 2790 kubelet.go:2405] "Pod admission denied" podUID="b29ba998-6c9e-4709-847b-fe3157a74307" pod="tigera-operator/tigera-operator-747864d56d-g47q2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:11.328957 kubelet[2790]: I0813 01:48:11.328894 2790 kubelet.go:2405] "Pod admission denied" podUID="7e4b6df1-2c07-4750-8200-92378a502587" pod="tigera-operator/tigera-operator-747864d56d-p6hpp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:11.532517 kubelet[2790]: I0813 01:48:11.532444 2790 kubelet.go:2405] "Pod admission denied" podUID="94364073-4a6f-4752-b93a-e5e4a53d43d5" pod="tigera-operator/tigera-operator-747864d56d-x9hvt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:11.628353 kubelet[2790]: I0813 01:48:11.628283 2790 kubelet.go:2405] "Pod admission denied" podUID="f4feba5f-d0f7-48c6-aade-9e7d27e57b33" pod="tigera-operator/tigera-operator-747864d56d-pz6sb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:11.735432 kubelet[2790]: I0813 01:48:11.735355 2790 kubelet.go:2405] "Pod admission denied" podUID="3337bd8a-98a3-4e5c-b321-1a44e5f16bcc" pod="tigera-operator/tigera-operator-747864d56d-c5xfl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:11.832409 kubelet[2790]: I0813 01:48:11.832250 2790 kubelet.go:2405] "Pod admission denied" podUID="a09c5af5-e9d9-4e55-96a1-728b5105aece" pod="tigera-operator/tigera-operator-747864d56d-4hqkl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:11.931868 kubelet[2790]: I0813 01:48:11.931795 2790 kubelet.go:2405] "Pod admission denied" podUID="b19b1365-5b69-49d3-b870-963010b23a0e" pod="tigera-operator/tigera-operator-747864d56d-rkvd8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:12.032491 kubelet[2790]: I0813 01:48:12.032423 2790 kubelet.go:2405] "Pod admission denied" podUID="08397a24-dabe-4bcf-94ca-deb19cc95fd9" pod="tigera-operator/tigera-operator-747864d56d-qdv24" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:12.098608 kubelet[2790]: I0813 01:48:12.097768 2790 kubelet.go:2405] "Pod admission denied" podUID="aea872e2-9042-4822-acc6-800baa18e2e1" pod="tigera-operator/tigera-operator-747864d56d-5kmcw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:12.180206 kubelet[2790]: I0813 01:48:12.180140 2790 kubelet.go:2405] "Pod admission denied" podUID="84a07918-e6b9-4004-b186-457c0e63f522" pod="tigera-operator/tigera-operator-747864d56d-crpjr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:12.284775 kubelet[2790]: I0813 01:48:12.283995 2790 kubelet.go:2405] "Pod admission denied" podUID="524d258f-9b5a-430b-804f-41055f9db2f3" pod="tigera-operator/tigera-operator-747864d56d-z7w72" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:48:12.380421 kubelet[2790]: I0813 01:48:12.380278 2790 kubelet.go:2405] "Pod admission denied" podUID="1a81afd1-cc9a-4d30-bb13-2252b7b75960" pod="tigera-operator/tigera-operator-747864d56d-jzh9g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:12.483987 kubelet[2790]: I0813 01:48:12.483932 2790 kubelet.go:2405] "Pod admission denied" podUID="0f2410a1-dfb8-4b48-bc54-2d2d9e40ff6b" pod="tigera-operator/tigera-operator-747864d56d-79dgs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:12.538440 kubelet[2790]: I0813 01:48:12.538379 2790 kubelet.go:2405] "Pod admission denied" podUID="51bfd7a6-cad2-4b62-b0d8-88521cfc0cc8" pod="tigera-operator/tigera-operator-747864d56d-vbmj4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:12.637937 kubelet[2790]: I0813 01:48:12.637246 2790 kubelet.go:2405] "Pod admission denied" podUID="0de6c3ab-b5c2-4e44-9b20-8a0dc3686a86" pod="tigera-operator/tigera-operator-747864d56d-g8k7b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:12.831272 kubelet[2790]: I0813 01:48:12.831208 2790 kubelet.go:2405] "Pod admission denied" podUID="b9ebbacd-7964-475d-a82d-7f3c4fd53ab6" pod="tigera-operator/tigera-operator-747864d56d-8n9qf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:12.936884 kubelet[2790]: I0813 01:48:12.936541 2790 kubelet.go:2405] "Pod admission denied" podUID="517225bb-6d54-49c4-ad25-d598fee579c3" pod="tigera-operator/tigera-operator-747864d56d-xkf8k" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:13.227460 kubelet[2790]: I0813 01:48:13.227362 2790 kubelet.go:2405] "Pod admission denied" podUID="4106ce14-02ad-4971-a3a5-6c02d64eb6ff" pod="tigera-operator/tigera-operator-747864d56d-qdqzx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:13.280786 kubelet[2790]: I0813 01:48:13.280019 2790 kubelet.go:2405] "Pod admission denied" podUID="16d1641f-f77a-4c4c-85df-8a1f0dcd8291" pod="tigera-operator/tigera-operator-747864d56d-2p2pv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:13.341703 kubelet[2790]: I0813 01:48:13.341653 2790 kubelet.go:2405] "Pod admission denied" podUID="20e99c13-2869-475e-8482-ba3780770d1d" pod="tigera-operator/tigera-operator-747864d56d-j86mz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:13.428798 kubelet[2790]: I0813 01:48:13.428679 2790 kubelet.go:2405] "Pod admission denied" podUID="c154eba7-0fad-44a1-91c3-b0729bc905f2" pod="tigera-operator/tigera-operator-747864d56d-zghqn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:13.533806 kubelet[2790]: I0813 01:48:13.533632 2790 kubelet.go:2405] "Pod admission denied" podUID="3dc2cb9a-cfb0-4a1c-8770-fc63365ef3f5" pod="tigera-operator/tigera-operator-747864d56d-5tjvc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:13.695159 containerd[1556]: time="2025-08-13T01:48:13.695073358Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76ff444f8d-4xcg9,Uid:f88563f6-5704-426b-aecc-303b3869ce30,Namespace:calico-system,Attempt:0,}" Aug 13 01:48:13.742510 kubelet[2790]: I0813 01:48:13.742386 2790 kubelet.go:2405] "Pod admission denied" podUID="66313027-bd7a-400d-9c9a-65183143195b" pod="tigera-operator/tigera-operator-747864d56d-sw5vz" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:48:13.785813 containerd[1556]: time="2025-08-13T01:48:13.784847609Z" level=error msg="Failed to destroy network for sandbox \"c6f5aa6960fb56b8dec5aedb5900be0df384a6cb0a933fa594a8e01a722f6dbb\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:13.787531 containerd[1556]: time="2025-08-13T01:48:13.787282727Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76ff444f8d-4xcg9,Uid:f88563f6-5704-426b-aecc-303b3869ce30,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6f5aa6960fb56b8dec5aedb5900be0df384a6cb0a933fa594a8e01a722f6dbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:13.789078 systemd[1]: run-netns-cni\x2d9c763d30\x2dfded\x2ddb7c\x2df3df\x2df41581b51dd7.mount: Deactivated successfully. Aug 13 01:48:13.789451 kubelet[2790]: E0813 01:48:13.789313 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6f5aa6960fb56b8dec5aedb5900be0df384a6cb0a933fa594a8e01a722f6dbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:13.789451 kubelet[2790]: E0813 01:48:13.789371 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6f5aa6960fb56b8dec5aedb5900be0df384a6cb0a933fa594a8e01a722f6dbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:48:13.789451 kubelet[2790]: E0813 01:48:13.789398 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c6f5aa6960fb56b8dec5aedb5900be0df384a6cb0a933fa594a8e01a722f6dbb\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:48:13.789567 kubelet[2790]: E0813 01:48:13.789455 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76ff444f8d-4xcg9_calico-system(f88563f6-5704-426b-aecc-303b3869ce30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76ff444f8d-4xcg9_calico-system(f88563f6-5704-426b-aecc-303b3869ce30)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c6f5aa6960fb56b8dec5aedb5900be0df384a6cb0a933fa594a8e01a722f6dbb\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" podUID="f88563f6-5704-426b-aecc-303b3869ce30" Aug 13 01:48:13.838980 kubelet[2790]: I0813 01:48:13.838917 2790 kubelet.go:2405] "Pod admission denied" 
podUID="a6b705ba-0175-4899-b1ba-fe035dfe9bc3" pod="tigera-operator/tigera-operator-747864d56d-pc7vh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:13.931180 kubelet[2790]: I0813 01:48:13.931110 2790 kubelet.go:2405] "Pod admission denied" podUID="514d0904-759c-424c-86ed-082c11e2bc2d" pod="tigera-operator/tigera-operator-747864d56d-8rbhr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:14.129998 kubelet[2790]: I0813 01:48:14.129867 2790 kubelet.go:2405] "Pod admission denied" podUID="6ddf5b82-e40c-45ee-976e-af5719f1d6aa" pod="tigera-operator/tigera-operator-747864d56d-zs8wh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:14.226766 kubelet[2790]: I0813 01:48:14.226693 2790 kubelet.go:2405] "Pod admission denied" podUID="32a34286-4634-4d00-a420-c8fb670a0b21" pod="tigera-operator/tigera-operator-747864d56d-tqx4b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:14.335785 kubelet[2790]: I0813 01:48:14.334863 2790 kubelet.go:2405] "Pod admission denied" podUID="780d7e3d-00a0-45de-8fce-6e37340e3d79" pod="tigera-operator/tigera-operator-747864d56d-8rckc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:14.430909 kubelet[2790]: I0813 01:48:14.430737 2790 kubelet.go:2405] "Pod admission denied" podUID="f17fd258-fc21-4092-ae0b-e12823d21f32" pod="tigera-operator/tigera-operator-747864d56d-82vn7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:14.535387 kubelet[2790]: I0813 01:48:14.533862 2790 kubelet.go:2405] "Pod admission denied" podUID="10c57bce-8c2a-4c17-a82e-ec8ce12bd6a0" pod="tigera-operator/tigera-operator-747864d56d-b5hwm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:14.628641 kubelet[2790]: I0813 01:48:14.628572 2790 kubelet.go:2405] "Pod admission denied" podUID="cec96b16-b41c-4baf-bb94-0de222eae771" pod="tigera-operator/tigera-operator-747864d56d-89xtz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:14.686112 kubelet[2790]: I0813 01:48:14.684819 2790 kubelet.go:2405] "Pod admission denied" podUID="e4375d07-13d8-498e-b944-ed51ac1c82cf" pod="tigera-operator/tigera-operator-747864d56d-d5hn9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:14.702989 containerd[1556]: time="2025-08-13T01:48:14.702831026Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7jrc,Uid:4296a7ed-e75a-4d74-935a-9017b9a86286,Namespace:calico-system,Attempt:0,}" Aug 13 01:48:14.788815 kubelet[2790]: I0813 01:48:14.788754 2790 kubelet.go:2405] "Pod admission denied" podUID="05ab2ee0-390f-45da-a196-2b5d22a11f5c" pod="tigera-operator/tigera-operator-747864d56d-2p7vh" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:48:14.795780 containerd[1556]: time="2025-08-13T01:48:14.795676345Z" level=error msg="Failed to destroy network for sandbox \"b0dcf74d186ffe988330bbd0293bd39974b639166882f71ce926467bfbbdb594\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:14.797497 containerd[1556]: time="2025-08-13T01:48:14.797417513Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7jrc,Uid:4296a7ed-e75a-4d74-935a-9017b9a86286,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0dcf74d186ffe988330bbd0293bd39974b639166882f71ce926467bfbbdb594\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:14.797964 kubelet[2790]: E0813 01:48:14.797926 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0dcf74d186ffe988330bbd0293bd39974b639166882f71ce926467bfbbdb594\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:14.798053 kubelet[2790]: E0813 01:48:14.797980 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0dcf74d186ffe988330bbd0293bd39974b639166882f71ce926467bfbbdb594\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:48:14.798053 kubelet[2790]: E0813 01:48:14.798012 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b0dcf74d186ffe988330bbd0293bd39974b639166882f71ce926467bfbbdb594\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:48:14.798104 kubelet[2790]: E0813 01:48:14.798057 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-c7jrc_calico-system(4296a7ed-e75a-4d74-935a-9017b9a86286)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-c7jrc_calico-system(4296a7ed-e75a-4d74-935a-9017b9a86286)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b0dcf74d186ffe988330bbd0293bd39974b639166882f71ce926467bfbbdb594\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-c7jrc" podUID="4296a7ed-e75a-4d74-935a-9017b9a86286" Aug 13 01:48:14.800190 systemd[1]: run-netns-cni\x2d4b02ac47\x2daff9\x2dff87\x2d4187\x2df2a508a75019.mount: Deactivated successfully. 
Aug 13 01:48:14.988147 kubelet[2790]: I0813 01:48:14.988062 2790 kubelet.go:2405] "Pod admission denied" podUID="68f8a079-bb0d-49ac-925d-c630ae9e1c40" pod="tigera-operator/tigera-operator-747864d56d-h8cdd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:15.089732 kubelet[2790]: I0813 01:48:15.089635 2790 kubelet.go:2405] "Pod admission denied" podUID="7871dc51-efe8-44c6-a203-54c3bfae659f" pod="tigera-operator/tigera-operator-747864d56d-hpzfp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:15.178854 kubelet[2790]: I0813 01:48:15.178786 2790 kubelet.go:2405] "Pod admission denied" podUID="27d16120-6290-4831-ad82-906b0e09148a" pod="tigera-operator/tigera-operator-747864d56d-rkrf2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:15.301368 kubelet[2790]: I0813 01:48:15.299989 2790 kubelet.go:2405] "Pod admission denied" podUID="b7f9cde5-a878-4bc5-b83d-ed9e039d346f" pod="tigera-operator/tigera-operator-747864d56d-rbx58" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:15.432852 kubelet[2790]: I0813 01:48:15.432786 2790 kubelet.go:2405] "Pod admission denied" podUID="231f19d7-ac1c-42b8-90ac-2c786a550aed" pod="tigera-operator/tigera-operator-747864d56d-cvhfw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:15.533560 kubelet[2790]: I0813 01:48:15.533495 2790 kubelet.go:2405] "Pod admission denied" podUID="6d6fe5a3-e537-49e1-af2d-622afb5200e3" pod="tigera-operator/tigera-operator-747864d56d-p5rf5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:15.640279 kubelet[2790]: I0813 01:48:15.640119 2790 kubelet.go:2405] "Pod admission denied" podUID="8eb77ea0-aaa2-4796-8480-9393f6839512" pod="tigera-operator/tigera-operator-747864d56d-wcm9r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:15.734480 kubelet[2790]: I0813 01:48:15.734403 2790 kubelet.go:2405] "Pod admission denied" podUID="096ab2aa-e9ea-43c0-8fdc-80a988bb1e04" pod="tigera-operator/tigera-operator-747864d56d-6zqfb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:15.843279 kubelet[2790]: I0813 01:48:15.843218 2790 kubelet.go:2405] "Pod admission denied" podUID="7b2466ce-ea63-43d1-abd7-241597b21ccf" pod="tigera-operator/tigera-operator-747864d56d-l9xht" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:15.932038 kubelet[2790]: I0813 01:48:15.931896 2790 kubelet.go:2405] "Pod admission denied" podUID="709f2739-23d6-4787-9657-d9f4753a559f" pod="tigera-operator/tigera-operator-747864d56d-57hx4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:16.133774 kubelet[2790]: I0813 01:48:16.133702 2790 kubelet.go:2405] "Pod admission denied" podUID="4f3d8d6d-9c0a-4a84-80d0-639742a9b29d" pod="tigera-operator/tigera-operator-747864d56d-c7tnw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:16.230897 kubelet[2790]: I0813 01:48:16.230831 2790 kubelet.go:2405] "Pod admission denied" podUID="e84b5064-50ea-4882-b874-f179c3f7a07d" pod="tigera-operator/tigera-operator-747864d56d-987df" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:48:16.318796 kubelet[2790]: I0813 01:48:16.318522 2790 kubelet.go:2405] "Pod admission denied" podUID="c81c75b8-78c6-4069-beb0-507257564fa7" pod="tigera-operator/tigera-operator-747864d56d-czf8n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:16.431473 kubelet[2790]: I0813 01:48:16.431412 2790 kubelet.go:2405] "Pod admission denied" podUID="3e4ae4f4-5e1d-494b-afdd-a74df3c893ef" pod="tigera-operator/tigera-operator-747864d56d-wnmrj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:16.534850 kubelet[2790]: I0813 01:48:16.534483 2790 kubelet.go:2405] "Pod admission denied" podUID="6e8e9562-07a8-4f13-8941-9de8a6d060c8" pod="tigera-operator/tigera-operator-747864d56d-8flh6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:16.701215 kubelet[2790]: E0813 01:48:16.701159 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2096206682: write /var/lib/containerd/tmpmounts/containerd-mount2096206682/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-tsmrf" podUID="517ffc51-1a34-4ced-acf5-d8e5da6a1838" Aug 13 01:48:16.732769 kubelet[2790]: I0813 01:48:16.732278 2790 kubelet.go:2405] "Pod admission denied" podUID="a9addf18-26ef-4d27-8fc6-95e2d3ee24d2" pod="tigera-operator/tigera-operator-747864d56d-7gpjp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:16.834925 kubelet[2790]: I0813 01:48:16.834762 2790 kubelet.go:2405] "Pod admission denied" podUID="5fa4680e-f162-447c-ba29-50fa981f7c83" pod="tigera-operator/tigera-operator-747864d56d-gbppp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:16.934340 kubelet[2790]: I0813 01:48:16.934261 2790 kubelet.go:2405] "Pod admission denied" podUID="3d2a0122-c1f9-4012-9c0b-3b3db4ee00d5" pod="tigera-operator/tigera-operator-747864d56d-t7w9h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:17.132952 kubelet[2790]: I0813 01:48:17.132733 2790 kubelet.go:2405] "Pod admission denied" podUID="1b5fc1c1-e020-49dc-b0e1-8e9a5f67660d" pod="tigera-operator/tigera-operator-747864d56d-f5cpj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:17.230313 kubelet[2790]: I0813 01:48:17.230252 2790 kubelet.go:2405] "Pod admission denied" podUID="926667e2-f945-4048-bd23-acf440a1fc46" pod="tigera-operator/tigera-operator-747864d56d-6slcn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:17.342275 kubelet[2790]: I0813 01:48:17.341555 2790 kubelet.go:2405] "Pod admission denied" podUID="0c509cfe-e97b-4711-8914-5d31bf63537d" pod="tigera-operator/tigera-operator-747864d56d-zqc98" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:17.533976 kubelet[2790]: I0813 01:48:17.533912 2790 kubelet.go:2405] "Pod admission denied" podUID="cc16b155-b9d1-4e98-9159-4c21e8a5353d" pod="tigera-operator/tigera-operator-747864d56d-kdwvw" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:48:17.632539 kubelet[2790]: I0813 01:48:17.632478 2790 kubelet.go:2405] "Pod admission denied" podUID="b0dc1724-d9d9-44a8-9207-c9ff8ef4493a" pod="tigera-operator/tigera-operator-747864d56d-k4j6c" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:17.694794 kubelet[2790]: E0813 01:48:17.694491 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:48:17.694794 kubelet[2790]: E0813 01:48:17.694491 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:48:17.695289 containerd[1556]: time="2025-08-13T01:48:17.695254328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vtdcd,Uid:caa5836a-45f9-496b-86c1-95f6e1b6da17,Namespace:kube-system,Attempt:0,}" Aug 13 01:48:17.696221 containerd[1556]: time="2025-08-13T01:48:17.695726167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6rlkc,Uid:21a6ba02-58d5-43c1-a7de-9e24560a65f6,Namespace:kube-system,Attempt:0,}" Aug 13 01:48:17.759108 kubelet[2790]: I0813 01:48:17.758991 2790 kubelet.go:2405] "Pod admission denied" podUID="1f3cc074-b059-4016-8d9e-adc903463c0f" pod="tigera-operator/tigera-operator-747864d56d-dbfjr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:17.837549 containerd[1556]: time="2025-08-13T01:48:17.837229037Z" level=error msg="Failed to destroy network for sandbox \"49ecb111fb2d921e07d3a9e1676e4006cda0efafc9d0949748c0fcd13a5bd58c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:17.843314 systemd[1]: run-netns-cni\x2d10a2fccb\x2d7f8f\x2d131c\x2d069c\x2dbab4450718b6.mount: Deactivated successfully. 
Aug 13 01:48:17.845586 containerd[1556]: time="2025-08-13T01:48:17.845529920Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6rlkc,Uid:21a6ba02-58d5-43c1-a7de-9e24560a65f6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"49ecb111fb2d921e07d3a9e1676e4006cda0efafc9d0949748c0fcd13a5bd58c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:17.847605 kubelet[2790]: E0813 01:48:17.846264 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49ecb111fb2d921e07d3a9e1676e4006cda0efafc9d0949748c0fcd13a5bd58c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:17.847605 kubelet[2790]: E0813 01:48:17.846336 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49ecb111fb2d921e07d3a9e1676e4006cda0efafc9d0949748c0fcd13a5bd58c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:48:17.847605 kubelet[2790]: E0813 01:48:17.846364 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49ecb111fb2d921e07d3a9e1676e4006cda0efafc9d0949748c0fcd13a5bd58c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:48:17.847605 kubelet[2790]: E0813 01:48:17.846421 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-6rlkc_kube-system(21a6ba02-58d5-43c1-a7de-9e24560a65f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-6rlkc_kube-system(21a6ba02-58d5-43c1-a7de-9e24560a65f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"49ecb111fb2d921e07d3a9e1676e4006cda0efafc9d0949748c0fcd13a5bd58c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6rlkc" podUID="21a6ba02-58d5-43c1-a7de-9e24560a65f6" Aug 13 01:48:17.854053 kubelet[2790]: I0813 01:48:17.854000 2790 kubelet.go:2405] "Pod admission denied" podUID="a0d8c418-d998-4239-a91e-3a529219c5d6" pod="tigera-operator/tigera-operator-747864d56d-dmrpk" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:48:17.863025 containerd[1556]: time="2025-08-13T01:48:17.862381196Z" level=error msg="Failed to destroy network for sandbox \"296b9e87a36b58fd63300f96158b630838ba77e8ea1ca036b990e44eecbc50bc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:17.866187 systemd[1]: run-netns-cni\x2d91e46fd2\x2db4df\x2d9edf\x2d1432\x2dd52d237d1a60.mount: Deactivated successfully. Aug 13 01:48:17.868419 containerd[1556]: time="2025-08-13T01:48:17.867393202Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vtdcd,Uid:caa5836a-45f9-496b-86c1-95f6e1b6da17,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"296b9e87a36b58fd63300f96158b630838ba77e8ea1ca036b990e44eecbc50bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:17.868571 kubelet[2790]: E0813 01:48:17.867815 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"296b9e87a36b58fd63300f96158b630838ba77e8ea1ca036b990e44eecbc50bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:17.868571 kubelet[2790]: E0813 01:48:17.867879 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"296b9e87a36b58fd63300f96158b630838ba77e8ea1ca036b990e44eecbc50bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:48:17.868571 kubelet[2790]: E0813 01:48:17.867900 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"296b9e87a36b58fd63300f96158b630838ba77e8ea1ca036b990e44eecbc50bc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:48:17.868571 kubelet[2790]: E0813 01:48:17.867969 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-vtdcd_kube-system(caa5836a-45f9-496b-86c1-95f6e1b6da17)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-vtdcd_kube-system(caa5836a-45f9-496b-86c1-95f6e1b6da17)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"296b9e87a36b58fd63300f96158b630838ba77e8ea1ca036b990e44eecbc50bc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-vtdcd" podUID="caa5836a-45f9-496b-86c1-95f6e1b6da17" Aug 13 01:48:17.983393 kubelet[2790]: I0813 01:48:17.983323 2790 kubelet.go:2405] "Pod admission denied" podUID="10dd4035-b279-48da-86c0-24c6a4f98d1b" pod="tigera-operator/tigera-operator-747864d56d-cbzgp" reason="Evicted" 
message="The node had condition: [DiskPressure]. " Aug 13 01:48:18.087472 kubelet[2790]: I0813 01:48:18.087403 2790 kubelet.go:2405] "Pod admission denied" podUID="1c2624b8-0d08-440d-ae42-92ead74640e1" pod="tigera-operator/tigera-operator-747864d56d-wp5db" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:18.183313 kubelet[2790]: I0813 01:48:18.182342 2790 kubelet.go:2405] "Pod admission denied" podUID="5f4ca9d0-1649-44f5-ac80-cb788f4d29c5" pod="tigera-operator/tigera-operator-747864d56d-2s6lc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:18.283724 kubelet[2790]: I0813 01:48:18.283667 2790 kubelet.go:2405] "Pod admission denied" podUID="311ad91b-11cb-4668-9e32-0d47a6132bbf" pod="tigera-operator/tigera-operator-747864d56d-hpdmw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:18.485577 kubelet[2790]: I0813 01:48:18.485472 2790 kubelet.go:2405] "Pod admission denied" podUID="44ebc57d-0d8a-40ff-bc5d-d4a3a6cca721" pod="tigera-operator/tigera-operator-747864d56d-6hprr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:18.584993 kubelet[2790]: I0813 01:48:18.584907 2790 kubelet.go:2405] "Pod admission denied" podUID="028f2c25-55d0-4ab8-83e6-dd6ddf6390a2" pod="tigera-operator/tigera-operator-747864d56d-9tk9v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:18.656692 kubelet[2790]: I0813 01:48:18.656490 2790 kubelet.go:2405] "Pod admission denied" podUID="764f4581-d24a-4cea-884d-86b4d82a17b0" pod="tigera-operator/tigera-operator-747864d56d-m6746" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:18.780048 kubelet[2790]: I0813 01:48:18.779877 2790 kubelet.go:2405] "Pod admission denied" podUID="35aefd44-25da-4be7-9b8b-27b040d4d92a" pod="tigera-operator/tigera-operator-747864d56d-5vnrt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:18.907394 kubelet[2790]: I0813 01:48:18.906431 2790 kubelet.go:2405] "Pod admission denied" podUID="0ba40058-6d44-4768-ab11-b315e2b0b166" pod="tigera-operator/tigera-operator-747864d56d-ktxrp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:18.989071 kubelet[2790]: I0813 01:48:18.988955 2790 kubelet.go:2405] "Pod admission denied" podUID="921e31ca-5e5f-4dab-92c3-2cb927dc310c" pod="tigera-operator/tigera-operator-747864d56d-jfslz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:19.088246 kubelet[2790]: I0813 01:48:19.088096 2790 kubelet.go:2405] "Pod admission denied" podUID="860f9106-ac54-4add-889a-a4aab96cf7c9" pod="tigera-operator/tigera-operator-747864d56d-8tnrg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:19.182345 kubelet[2790]: I0813 01:48:19.182257 2790 kubelet.go:2405] "Pod admission denied" podUID="b0fa95bc-f6fb-43d9-bec9-3380c8a3a0cd" pod="tigera-operator/tigera-operator-747864d56d-v8zkm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:19.232321 kubelet[2790]: I0813 01:48:19.232276 2790 kubelet.go:2405] "Pod admission denied" podUID="86377fe9-3b46-4833-bdf5-a4e19ca2ff8d" pod="tigera-operator/tigera-operator-747864d56d-hqdbq" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:48:19.340770 kubelet[2790]: I0813 01:48:19.340273 2790 kubelet.go:2405] "Pod admission denied" podUID="0bdca767-e4f1-4a81-b492-5ac062e2088a" pod="tigera-operator/tigera-operator-747864d56d-pg72g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:19.435350 kubelet[2790]: I0813 01:48:19.435274 2790 kubelet.go:2405] "Pod admission denied" podUID="f20f29f5-a195-4a5f-8d3f-51e586a9de7c" pod="tigera-operator/tigera-operator-747864d56d-nsbd6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:19.533830 kubelet[2790]: I0813 01:48:19.533762 2790 kubelet.go:2405] "Pod admission denied" podUID="89b37430-f2b9-449f-abe8-012154ae6297" pod="tigera-operator/tigera-operator-747864d56d-wg7tb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:19.635811 kubelet[2790]: I0813 01:48:19.635459 2790 kubelet.go:2405] "Pod admission denied" podUID="d527aad2-7b69-49ab-ac87-f62cc9c70837" pod="tigera-operator/tigera-operator-747864d56d-z4pz9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:19.733029 kubelet[2790]: I0813 01:48:19.732935 2790 kubelet.go:2405] "Pod admission denied" podUID="064ad7d9-9202-4292-bc4a-86621e054bb2" pod="tigera-operator/tigera-operator-747864d56d-lwxzv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:19.932657 kubelet[2790]: I0813 01:48:19.932511 2790 kubelet.go:2405] "Pod admission denied" podUID="9c590d26-a19f-4c3e-b3e5-2bd4c9f1992c" pod="tigera-operator/tigera-operator-747864d56d-sp576" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:20.035442 kubelet[2790]: I0813 01:48:20.035172 2790 kubelet.go:2405] "Pod admission denied" podUID="273d9217-46b7-48f2-b125-80b29a6b08e8" pod="tigera-operator/tigera-operator-747864d56d-r4pxx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:20.149771 kubelet[2790]: I0813 01:48:20.149383 2790 kubelet.go:2405] "Pod admission denied" podUID="99d565c1-a464-4405-ba05-8b20cfba14aa" pod="tigera-operator/tigera-operator-747864d56d-rs9nk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:20.231242 kubelet[2790]: I0813 01:48:20.231178 2790 kubelet.go:2405] "Pod admission denied" podUID="21c1f5b8-8d5e-4459-be78-ec87b7078b16" pod="tigera-operator/tigera-operator-747864d56d-97ttr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:20.332209 kubelet[2790]: I0813 01:48:20.332130 2790 kubelet.go:2405] "Pod admission denied" podUID="fe45b5c7-8551-4d16-ae09-94bbf18ed3fa" pod="tigera-operator/tigera-operator-747864d56d-dkb5t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:20.551613 kubelet[2790]: I0813 01:48:20.551386 2790 kubelet.go:2405] "Pod admission denied" podUID="a481761f-e5f4-4f72-a73d-dbdee3ca038a" pod="tigera-operator/tigera-operator-747864d56d-tcfnx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:20.782291 kubelet[2790]: I0813 01:48:20.782187 2790 kubelet.go:2405] "Pod admission denied" podUID="c72a201c-50fe-4ed8-b06f-1349df8ea9e0" pod="tigera-operator/tigera-operator-747864d56d-pdf6r" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:48:20.899456 kubelet[2790]: I0813 01:48:20.898086 2790 kubelet.go:2405] "Pod admission denied" podUID="d4e1f16d-1ed3-4bd8-836f-c7ccba652c3a" pod="tigera-operator/tigera-operator-747864d56d-48gpj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:20.987340 kubelet[2790]: I0813 01:48:20.987292 2790 kubelet.go:2405] "Pod admission denied" podUID="8be64e87-d972-4ef4-ab75-531e29a59a22" pod="tigera-operator/tigera-operator-747864d56d-rwwhm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:21.095025 kubelet[2790]: I0813 01:48:21.094683 2790 kubelet.go:2405] "Pod admission denied" podUID="7c7f9889-05be-46ba-8b0b-9634a05b33fd" pod="tigera-operator/tigera-operator-747864d56d-f6j2p" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:48:22.076950 systemd[1]: Started sshd@9-172.232.7.67:22-147.75.109.163:49180.service - OpenSSH per-connection server daemon (147.75.109.163:49180). Aug 13 01:48:22.443116 sshd[4808]: Accepted publickey for core from 147.75.109.163 port 49180 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:48:22.445669 sshd-session[4808]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:48:22.454485 systemd-logind[1532]: New session 10 of user core. Aug 13 01:48:22.459924 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 01:48:22.789927 sshd[4810]: Connection closed by 147.75.109.163 port 49180 Aug 13 01:48:22.790598 sshd-session[4808]: pam_unix(sshd:session): session closed for user core Aug 13 01:48:22.794804 systemd-logind[1532]: Session 10 logged out. Waiting for processes to exit. Aug 13 01:48:22.795204 systemd[1]: sshd@9-172.232.7.67:22-147.75.109.163:49180.service: Deactivated successfully. Aug 13 01:48:22.797757 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 01:48:22.800423 systemd-logind[1532]: Removed session 10. Aug 13 01:48:25.694622 kubelet[2790]: E0813 01:48:25.694554 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:48:27.855687 systemd[1]: Started sshd@10-172.232.7.67:22-147.75.109.163:49196.service - OpenSSH per-connection server daemon (147.75.109.163:49196). Aug 13 01:48:28.210484 sshd[4822]: Accepted publickey for core from 147.75.109.163 port 49196 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:48:28.212007 sshd-session[4822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:48:28.216802 systemd-logind[1532]: New session 11 of user core. Aug 13 01:48:28.227880 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 13 01:48:28.527159 sshd[4824]: Connection closed by 147.75.109.163 port 49196 Aug 13 01:48:28.528500 sshd-session[4822]: pam_unix(sshd:session): session closed for user core Aug 13 01:48:28.534256 systemd[1]: sshd@10-172.232.7.67:22-147.75.109.163:49196.service: Deactivated successfully. Aug 13 01:48:28.536860 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 01:48:28.538138 systemd-logind[1532]: Session 11 logged out. Waiting for processes to exit. Aug 13 01:48:28.539914 systemd-logind[1532]: Removed session 11. 
Aug 13 01:48:28.697784 containerd[1556]: time="2025-08-13T01:48:28.696788227Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76ff444f8d-4xcg9,Uid:f88563f6-5704-426b-aecc-303b3869ce30,Namespace:calico-system,Attempt:0,}" Aug 13 01:48:28.790905 containerd[1556]: time="2025-08-13T01:48:28.790726684Z" level=error msg="Failed to destroy network for sandbox \"92c26837e191749021ff0248dee6bc4841d1f72948e8161f040da90fc618c1a9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:28.795014 containerd[1556]: time="2025-08-13T01:48:28.794941851Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76ff444f8d-4xcg9,Uid:f88563f6-5704-426b-aecc-303b3869ce30,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"92c26837e191749021ff0248dee6bc4841d1f72948e8161f040da90fc618c1a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:28.795395 kubelet[2790]: E0813 01:48:28.795323 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92c26837e191749021ff0248dee6bc4841d1f72948e8161f040da90fc618c1a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:28.795915 kubelet[2790]: E0813 01:48:28.795399 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92c26837e191749021ff0248dee6bc4841d1f72948e8161f040da90fc618c1a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:48:28.795915 kubelet[2790]: E0813 01:48:28.795432 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"92c26837e191749021ff0248dee6bc4841d1f72948e8161f040da90fc618c1a9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:48:28.795915 kubelet[2790]: E0813 01:48:28.795499 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76ff444f8d-4xcg9_calico-system(f88563f6-5704-426b-aecc-303b3869ce30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76ff444f8d-4xcg9_calico-system(f88563f6-5704-426b-aecc-303b3869ce30)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"92c26837e191749021ff0248dee6bc4841d1f72948e8161f040da90fc618c1a9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" podUID="f88563f6-5704-426b-aecc-303b3869ce30" 
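[editor's note] Every RunPodSandbox failure in this section reduces to the same Calico CNI precondition: /var/lib/calico/nodename does not exist. That file is written by the calico/node container when it starts, and calico-node-tsmrf cannot start because its image pull fails with "no space left on device", so sandbox creation keeps failing for coredns, calico-kube-controllers and csi-node-driver alike. A small Python sketch of the readiness check implied by the error text (path taken from the log; this is not the plugin's actual implementation):

from pathlib import Path

NODENAME_FILE = Path("/var/lib/calico/nodename")  # written by calico/node at startup

def calico_node_registered() -> bool:
    if NODENAME_FILE.is_file():
        print("calico/node registered this node as", NODENAME_FILE.read_text().strip())
        return True
    print(f"{NODENAME_FILE} is missing; calico/node has not run, so CNI add/delete will keep failing")
    return False

if __name__ == "__main__":
    calico_node_registered()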
Aug 13 01:48:28.799632 systemd[1]: run-netns-cni\x2d3075dc20\x2df614\x2d5465\x2d9aba\x2d93521ad78525.mount: Deactivated successfully. Aug 13 01:48:29.696632 containerd[1556]: time="2025-08-13T01:48:29.696548115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7jrc,Uid:4296a7ed-e75a-4d74-935a-9017b9a86286,Namespace:calico-system,Attempt:0,}" Aug 13 01:48:29.698561 kubelet[2790]: E0813 01:48:29.698516 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2096206682: write /var/lib/containerd/tmpmounts/containerd-mount2096206682/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-tsmrf" podUID="517ffc51-1a34-4ced-acf5-d8e5da6a1838" Aug 13 01:48:29.829596 containerd[1556]: time="2025-08-13T01:48:29.829514113Z" level=error msg="Failed to destroy network for sandbox \"fc05847e397fa97443d2eb4fbb524aeb62dc705ee1fd3a8e54fd84f9e0f48b35\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:29.832668 systemd[1]: run-netns-cni\x2d05e9213c\x2db377\x2d3d86\x2d3cad\x2d8a5257f30127.mount: Deactivated successfully. Aug 13 01:48:29.833814 containerd[1556]: time="2025-08-13T01:48:29.833465530Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7jrc,Uid:4296a7ed-e75a-4d74-935a-9017b9a86286,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc05847e397fa97443d2eb4fbb524aeb62dc705ee1fd3a8e54fd84f9e0f48b35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:29.835104 kubelet[2790]: E0813 01:48:29.834262 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc05847e397fa97443d2eb4fbb524aeb62dc705ee1fd3a8e54fd84f9e0f48b35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:29.835104 kubelet[2790]: E0813 01:48:29.834326 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc05847e397fa97443d2eb4fbb524aeb62dc705ee1fd3a8e54fd84f9e0f48b35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:48:29.835104 kubelet[2790]: E0813 01:48:29.834348 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fc05847e397fa97443d2eb4fbb524aeb62dc705ee1fd3a8e54fd84f9e0f48b35\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:48:29.835104 kubelet[2790]: E0813 01:48:29.834397 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-c7jrc_calico-system(4296a7ed-e75a-4d74-935a-9017b9a86286)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-c7jrc_calico-system(4296a7ed-e75a-4d74-935a-9017b9a86286)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fc05847e397fa97443d2eb4fbb524aeb62dc705ee1fd3a8e54fd84f9e0f48b35\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-c7jrc" podUID="4296a7ed-e75a-4d74-935a-9017b9a86286" Aug 13 01:48:31.695313 kubelet[2790]: E0813 01:48:31.695233 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:48:31.697843 containerd[1556]: time="2025-08-13T01:48:31.697446585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6rlkc,Uid:21a6ba02-58d5-43c1-a7de-9e24560a65f6,Namespace:kube-system,Attempt:0,}" Aug 13 01:48:31.764326 containerd[1556]: time="2025-08-13T01:48:31.764272884Z" level=error msg="Failed to destroy network for sandbox \"3ab83e887717d1e0f5233e5bfe434cb379fdd6a4e7bf13a543b6a7e3d2c7476d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:31.767312 systemd[1]: run-netns-cni\x2d13a7854e\x2d75dd\x2d2424\x2d1a47\x2ddac1fe12d3b8.mount: Deactivated successfully. 
Aug 13 01:48:31.768168 containerd[1556]: time="2025-08-13T01:48:31.768050731Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6rlkc,Uid:21a6ba02-58d5-43c1-a7de-9e24560a65f6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ab83e887717d1e0f5233e5bfe434cb379fdd6a4e7bf13a543b6a7e3d2c7476d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:31.768648 kubelet[2790]: E0813 01:48:31.768561 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ab83e887717d1e0f5233e5bfe434cb379fdd6a4e7bf13a543b6a7e3d2c7476d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:31.768735 kubelet[2790]: E0813 01:48:31.768704 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ab83e887717d1e0f5233e5bfe434cb379fdd6a4e7bf13a543b6a7e3d2c7476d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:48:31.769652 kubelet[2790]: E0813 01:48:31.768732 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3ab83e887717d1e0f5233e5bfe434cb379fdd6a4e7bf13a543b6a7e3d2c7476d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:48:31.769652 kubelet[2790]: E0813 01:48:31.768869 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-6rlkc_kube-system(21a6ba02-58d5-43c1-a7de-9e24560a65f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-6rlkc_kube-system(21a6ba02-58d5-43c1-a7de-9e24560a65f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3ab83e887717d1e0f5233e5bfe434cb379fdd6a4e7bf13a543b6a7e3d2c7476d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6rlkc" podUID="21a6ba02-58d5-43c1-a7de-9e24560a65f6" Aug 13 01:48:32.695104 kubelet[2790]: E0813 01:48:32.694583 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:48:32.695927 containerd[1556]: time="2025-08-13T01:48:32.695639310Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vtdcd,Uid:caa5836a-45f9-496b-86c1-95f6e1b6da17,Namespace:kube-system,Attempt:0,}" Aug 13 01:48:32.757876 containerd[1556]: time="2025-08-13T01:48:32.757819003Z" level=error msg="Failed to destroy network for sandbox \"4226efb7b4dddb1d7784ff6b1fc039f0e199acb244d90b083d29655059a8a219\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:32.761396 containerd[1556]: time="2025-08-13T01:48:32.761304250Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vtdcd,Uid:caa5836a-45f9-496b-86c1-95f6e1b6da17,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4226efb7b4dddb1d7784ff6b1fc039f0e199acb244d90b083d29655059a8a219\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:32.762078 kubelet[2790]: E0813 01:48:32.761698 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4226efb7b4dddb1d7784ff6b1fc039f0e199acb244d90b083d29655059a8a219\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:32.762078 kubelet[2790]: E0813 01:48:32.761969 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4226efb7b4dddb1d7784ff6b1fc039f0e199acb244d90b083d29655059a8a219\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:48:32.762078 kubelet[2790]: E0813 01:48:32.761998 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4226efb7b4dddb1d7784ff6b1fc039f0e199acb244d90b083d29655059a8a219\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:48:32.762614 systemd[1]: run-netns-cni\x2d8e7b357b\x2d3ebf\x2de71a\x2dfad4\x2dc1f98c4bd9ec.mount: Deactivated successfully. Aug 13 01:48:32.763701 kubelet[2790]: E0813 01:48:32.763046 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-vtdcd_kube-system(caa5836a-45f9-496b-86c1-95f6e1b6da17)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-vtdcd_kube-system(caa5836a-45f9-496b-86c1-95f6e1b6da17)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4226efb7b4dddb1d7784ff6b1fc039f0e199acb244d90b083d29655059a8a219\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-vtdcd" podUID="caa5836a-45f9-496b-86c1-95f6e1b6da17" Aug 13 01:48:33.588689 systemd[1]: Started sshd@11-172.232.7.67:22-147.75.109.163:37254.service - OpenSSH per-connection server daemon (147.75.109.163:37254). 
Aug 13 01:48:33.923882 sshd[4944]: Accepted publickey for core from 147.75.109.163 port 37254 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:48:33.925654 sshd-session[4944]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:48:33.932821 systemd-logind[1532]: New session 12 of user core. Aug 13 01:48:33.937920 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 13 01:48:34.229118 sshd[4946]: Connection closed by 147.75.109.163 port 37254 Aug 13 01:48:34.229891 sshd-session[4944]: pam_unix(sshd:session): session closed for user core Aug 13 01:48:34.234590 systemd-logind[1532]: Session 12 logged out. Waiting for processes to exit. Aug 13 01:48:34.235581 systemd[1]: sshd@11-172.232.7.67:22-147.75.109.163:37254.service: Deactivated successfully. Aug 13 01:48:34.237616 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 01:48:34.240427 systemd-logind[1532]: Removed session 12. Aug 13 01:48:34.292835 systemd[1]: Started sshd@12-172.232.7.67:22-147.75.109.163:37258.service - OpenSSH per-connection server daemon (147.75.109.163:37258). Aug 13 01:48:34.648009 sshd[4959]: Accepted publickey for core from 147.75.109.163 port 37258 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:48:34.649596 sshd-session[4959]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:48:34.655896 systemd-logind[1532]: New session 13 of user core. Aug 13 01:48:34.660905 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 01:48:35.019516 sshd[4961]: Connection closed by 147.75.109.163 port 37258 Aug 13 01:48:35.019019 sshd-session[4959]: pam_unix(sshd:session): session closed for user core Aug 13 01:48:35.025559 systemd-logind[1532]: Session 13 logged out. Waiting for processes to exit. Aug 13 01:48:35.026421 systemd[1]: sshd@12-172.232.7.67:22-147.75.109.163:37258.service: Deactivated successfully. Aug 13 01:48:35.030222 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 01:48:35.033070 systemd-logind[1532]: Removed session 13. Aug 13 01:48:35.079584 systemd[1]: Started sshd@13-172.232.7.67:22-147.75.109.163:37268.service - OpenSSH per-connection server daemon (147.75.109.163:37268). Aug 13 01:48:35.424784 sshd[4971]: Accepted publickey for core from 147.75.109.163 port 37268 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:48:35.426877 sshd-session[4971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:48:35.435931 systemd-logind[1532]: New session 14 of user core. Aug 13 01:48:35.440950 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 13 01:48:35.694977 kubelet[2790]: E0813 01:48:35.694857 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:48:35.737796 sshd[4973]: Connection closed by 147.75.109.163 port 37268 Aug 13 01:48:35.738850 sshd-session[4971]: pam_unix(sshd:session): session closed for user core Aug 13 01:48:35.743591 systemd[1]: sshd@13-172.232.7.67:22-147.75.109.163:37268.service: Deactivated successfully. Aug 13 01:48:35.747222 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 01:48:35.748300 systemd-logind[1532]: Session 14 logged out. Waiting for processes to exit. Aug 13 01:48:35.750130 systemd-logind[1532]: Removed session 14. 
Aug 13 01:48:39.695602 containerd[1556]: time="2025-08-13T01:48:39.695555916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76ff444f8d-4xcg9,Uid:f88563f6-5704-426b-aecc-303b3869ce30,Namespace:calico-system,Attempt:0,}" Aug 13 01:48:39.761612 containerd[1556]: time="2025-08-13T01:48:39.761529199Z" level=error msg="Failed to destroy network for sandbox \"4b1750ee4c293eed7a0d6732c00648846700459d87f0030d9827d14f28e5e952\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:39.764979 containerd[1556]: time="2025-08-13T01:48:39.764875076Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76ff444f8d-4xcg9,Uid:f88563f6-5704-426b-aecc-303b3869ce30,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b1750ee4c293eed7a0d6732c00648846700459d87f0030d9827d14f28e5e952\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:39.765334 kubelet[2790]: E0813 01:48:39.765278 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b1750ee4c293eed7a0d6732c00648846700459d87f0030d9827d14f28e5e952\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:39.765799 kubelet[2790]: E0813 01:48:39.765362 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b1750ee4c293eed7a0d6732c00648846700459d87f0030d9827d14f28e5e952\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:48:39.765799 kubelet[2790]: E0813 01:48:39.765393 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b1750ee4c293eed7a0d6732c00648846700459d87f0030d9827d14f28e5e952\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:48:39.765799 kubelet[2790]: E0813 01:48:39.765459 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76ff444f8d-4xcg9_calico-system(f88563f6-5704-426b-aecc-303b3869ce30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76ff444f8d-4xcg9_calico-system(f88563f6-5704-426b-aecc-303b3869ce30)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b1750ee4c293eed7a0d6732c00648846700459d87f0030d9827d14f28e5e952\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" podUID="f88563f6-5704-426b-aecc-303b3869ce30" 
Aug 13 01:48:39.767093 systemd[1]: run-netns-cni\x2de61176a5\x2d5e24\x2db769\x2d87ee\x2dbdef2516e186.mount: Deactivated successfully. Aug 13 01:48:40.806925 systemd[1]: Started sshd@14-172.232.7.67:22-147.75.109.163:42102.service - OpenSSH per-connection server daemon (147.75.109.163:42102). Aug 13 01:48:41.145892 sshd[5011]: Accepted publickey for core from 147.75.109.163 port 42102 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:48:41.148459 sshd-session[5011]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:48:41.159417 systemd-logind[1532]: New session 15 of user core. Aug 13 01:48:41.165132 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 13 01:48:41.534893 sshd[5013]: Connection closed by 147.75.109.163 port 42102 Aug 13 01:48:41.535614 sshd-session[5011]: pam_unix(sshd:session): session closed for user core Aug 13 01:48:41.540055 systemd[1]: sshd@14-172.232.7.67:22-147.75.109.163:42102.service: Deactivated successfully. Aug 13 01:48:41.542309 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 01:48:41.543386 systemd-logind[1532]: Session 15 logged out. Waiting for processes to exit. Aug 13 01:48:41.545575 systemd-logind[1532]: Removed session 15. Aug 13 01:48:41.599986 systemd[1]: Started sshd@15-172.232.7.67:22-147.75.109.163:42114.service - OpenSSH per-connection server daemon (147.75.109.163:42114). Aug 13 01:48:41.945928 sshd[5025]: Accepted publickey for core from 147.75.109.163 port 42114 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:48:41.947583 sshd-session[5025]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:48:41.953722 systemd-logind[1532]: New session 16 of user core. Aug 13 01:48:41.957970 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 13 01:48:42.342041 sshd[5027]: Connection closed by 147.75.109.163 port 42114 Aug 13 01:48:42.342674 sshd-session[5025]: pam_unix(sshd:session): session closed for user core Aug 13 01:48:42.346860 systemd[1]: sshd@15-172.232.7.67:22-147.75.109.163:42114.service: Deactivated successfully. Aug 13 01:48:42.349022 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 01:48:42.350884 systemd-logind[1532]: Session 16 logged out. Waiting for processes to exit. Aug 13 01:48:42.352449 systemd-logind[1532]: Removed session 16. Aug 13 01:48:42.402216 systemd[1]: Started sshd@16-172.232.7.67:22-147.75.109.163:42116.service - OpenSSH per-connection server daemon (147.75.109.163:42116). 
Aug 13 01:48:42.695898 kubelet[2790]: E0813 01:48:42.695123 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:48:42.698713 containerd[1556]: time="2025-08-13T01:48:42.697824139Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6rlkc,Uid:21a6ba02-58d5-43c1-a7de-9e24560a65f6,Namespace:kube-system,Attempt:0,}" Aug 13 01:48:42.698713 containerd[1556]: time="2025-08-13T01:48:42.698295679Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7jrc,Uid:4296a7ed-e75a-4d74-935a-9017b9a86286,Namespace:calico-system,Attempt:0,}" Aug 13 01:48:42.703276 kubelet[2790]: E0813 01:48:42.703229 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2096206682: write /var/lib/containerd/tmpmounts/containerd-mount2096206682/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-tsmrf" podUID="517ffc51-1a34-4ced-acf5-d8e5da6a1838" Aug 13 01:48:42.746678 sshd[5037]: Accepted publickey for core from 147.75.109.163 port 42116 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:48:42.749349 sshd-session[5037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:48:42.755579 systemd-logind[1532]: New session 17 of user core. Aug 13 01:48:42.762908 systemd[1]: Started session-17.scope - Session 17 of User core. 
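[editor's note] The root cause threading through the section is the full disk: layer extraction fails with "no space left on device" and the kubelet's DiskPressure condition drives the admission denials above. A quick sketch for comparing free space against kubelet's default hard-eviction thresholds (nodefs.available<10%, imagefs.available<15%); the mount points are assumptions for illustration and the thresholds may be overridden in the kubelet configuration:

import shutil

# Paths are illustrative; on this node both may sit on the same root filesystem.
CHECKS = {
    "/var/lib/kubelet": 0.10,      # nodefs.available default threshold
    "/var/lib/containerd": 0.15,   # imagefs.available default threshold
}

def report():
    for path, threshold in CHECKS.items():
        usage = shutil.disk_usage(path)
        free_frac = usage.free / usage.total
        state = "below eviction threshold" if free_frac < threshold else "ok"
        print(f"{path}: {free_frac:.1%} free (threshold {threshold:.0%}) -> {state}")

if __name__ == "__main__":
    report()

Freeing space (pruning unused images, rotating logs, or growing the disk) is what lets the calico/node image pull succeed and the sandbox errors stop.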
Aug 13 01:48:42.789003 containerd[1556]: time="2025-08-13T01:48:42.788948994Z" level=error msg="Failed to destroy network for sandbox \"b80a023525cc1c6fd8f5f939b3b5092a736e01e06c6aef3f04f98214946fcf2c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:42.790882 containerd[1556]: time="2025-08-13T01:48:42.790728553Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7jrc,Uid:4296a7ed-e75a-4d74-935a-9017b9a86286,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b80a023525cc1c6fd8f5f939b3b5092a736e01e06c6aef3f04f98214946fcf2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:42.793145 kubelet[2790]: E0813 01:48:42.791328 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b80a023525cc1c6fd8f5f939b3b5092a736e01e06c6aef3f04f98214946fcf2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:42.793145 kubelet[2790]: E0813 01:48:42.791403 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b80a023525cc1c6fd8f5f939b3b5092a736e01e06c6aef3f04f98214946fcf2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:48:42.793145 kubelet[2790]: E0813 01:48:42.791443 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b80a023525cc1c6fd8f5f939b3b5092a736e01e06c6aef3f04f98214946fcf2c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:48:42.793145 kubelet[2790]: E0813 01:48:42.791499 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-c7jrc_calico-system(4296a7ed-e75a-4d74-935a-9017b9a86286)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-c7jrc_calico-system(4296a7ed-e75a-4d74-935a-9017b9a86286)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b80a023525cc1c6fd8f5f939b3b5092a736e01e06c6aef3f04f98214946fcf2c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-c7jrc" podUID="4296a7ed-e75a-4d74-935a-9017b9a86286" Aug 13 01:48:42.793516 systemd[1]: run-netns-cni\x2d15e7bb6d\x2dbd2a\x2d99ea\x2dadeb\x2d19e2488563aa.mount: Deactivated successfully. 
Aug 13 01:48:42.800712 containerd[1556]: time="2025-08-13T01:48:42.800663886Z" level=error msg="Failed to destroy network for sandbox \"a69dddf7619d1edd1a082226b74a118ed6a3ed7e71e303285b0dbc7e069de239\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:42.803144 systemd[1]: run-netns-cni\x2db95b58df\x2d9a77\x2de3b9\x2dcbf0\x2d6b27eea289d8.mount: Deactivated successfully. Aug 13 01:48:42.803946 containerd[1556]: time="2025-08-13T01:48:42.803898544Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6rlkc,Uid:21a6ba02-58d5-43c1-a7de-9e24560a65f6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a69dddf7619d1edd1a082226b74a118ed6a3ed7e71e303285b0dbc7e069de239\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:42.804362 kubelet[2790]: E0813 01:48:42.804303 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a69dddf7619d1edd1a082226b74a118ed6a3ed7e71e303285b0dbc7e069de239\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:42.804492 kubelet[2790]: E0813 01:48:42.804472 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a69dddf7619d1edd1a082226b74a118ed6a3ed7e71e303285b0dbc7e069de239\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:48:42.804637 kubelet[2790]: E0813 01:48:42.804552 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a69dddf7619d1edd1a082226b74a118ed6a3ed7e71e303285b0dbc7e069de239\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:48:42.804871 kubelet[2790]: E0813 01:48:42.804839 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-6rlkc_kube-system(21a6ba02-58d5-43c1-a7de-9e24560a65f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-6rlkc_kube-system(21a6ba02-58d5-43c1-a7de-9e24560a65f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a69dddf7619d1edd1a082226b74a118ed6a3ed7e71e303285b0dbc7e069de239\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6rlkc" podUID="21a6ba02-58d5-43c1-a7de-9e24560a65f6" Aug 13 01:48:43.656066 sshd[5090]: Connection closed by 147.75.109.163 port 42116 Aug 13 01:48:43.656885 sshd-session[5037]: pam_unix(sshd:session): session closed for user core Aug 13 01:48:43.661965 systemd[1]: 
sshd@16-172.232.7.67:22-147.75.109.163:42116.service: Deactivated successfully. Aug 13 01:48:43.664673 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 01:48:43.665638 systemd-logind[1532]: Session 17 logged out. Waiting for processes to exit. Aug 13 01:48:43.668672 systemd-logind[1532]: Removed session 17. Aug 13 01:48:43.720672 systemd[1]: Started sshd@17-172.232.7.67:22-147.75.109.163:42122.service - OpenSSH per-connection server daemon (147.75.109.163:42122). Aug 13 01:48:44.072875 sshd[5112]: Accepted publickey for core from 147.75.109.163 port 42122 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:48:44.074497 sshd-session[5112]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:48:44.079839 systemd-logind[1532]: New session 18 of user core. Aug 13 01:48:44.083889 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 13 01:48:44.484554 sshd[5114]: Connection closed by 147.75.109.163 port 42122 Aug 13 01:48:44.485296 sshd-session[5112]: pam_unix(sshd:session): session closed for user core Aug 13 01:48:44.490006 systemd[1]: sshd@17-172.232.7.67:22-147.75.109.163:42122.service: Deactivated successfully. Aug 13 01:48:44.492256 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 01:48:44.493568 systemd-logind[1532]: Session 18 logged out. Waiting for processes to exit. Aug 13 01:48:44.495393 systemd-logind[1532]: Removed session 18. Aug 13 01:48:44.545174 systemd[1]: Started sshd@18-172.232.7.67:22-147.75.109.163:42128.service - OpenSSH per-connection server daemon (147.75.109.163:42128). Aug 13 01:48:44.895962 sshd[5124]: Accepted publickey for core from 147.75.109.163 port 42128 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:48:44.897952 sshd-session[5124]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:48:44.907438 systemd-logind[1532]: New session 19 of user core. Aug 13 01:48:44.915947 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 13 01:48:45.213448 sshd[5128]: Connection closed by 147.75.109.163 port 42128 Aug 13 01:48:45.215125 sshd-session[5124]: pam_unix(sshd:session): session closed for user core Aug 13 01:48:45.222235 systemd[1]: sshd@18-172.232.7.67:22-147.75.109.163:42128.service: Deactivated successfully. Aug 13 01:48:45.223002 systemd-logind[1532]: Session 19 logged out. Waiting for processes to exit. Aug 13 01:48:45.225254 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 01:48:45.228573 systemd-logind[1532]: Removed session 19. 
Aug 13 01:48:47.695770 kubelet[2790]: E0813 01:48:47.695338 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:48:47.697355 containerd[1556]: time="2025-08-13T01:48:47.697279985Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vtdcd,Uid:caa5836a-45f9-496b-86c1-95f6e1b6da17,Namespace:kube-system,Attempt:0,}" Aug 13 01:48:47.759630 containerd[1556]: time="2025-08-13T01:48:47.759557122Z" level=error msg="Failed to destroy network for sandbox \"2944f86e5dcba0d28a04be445ee890569ee10fed3d17fa5fed564ef42cce5364\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:47.763528 containerd[1556]: time="2025-08-13T01:48:47.763471279Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vtdcd,Uid:caa5836a-45f9-496b-86c1-95f6e1b6da17,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2944f86e5dcba0d28a04be445ee890569ee10fed3d17fa5fed564ef42cce5364\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:47.763969 kubelet[2790]: E0813 01:48:47.763906 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2944f86e5dcba0d28a04be445ee890569ee10fed3d17fa5fed564ef42cce5364\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:47.764046 kubelet[2790]: E0813 01:48:47.764011 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2944f86e5dcba0d28a04be445ee890569ee10fed3d17fa5fed564ef42cce5364\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:48:47.764089 kubelet[2790]: E0813 01:48:47.764058 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2944f86e5dcba0d28a04be445ee890569ee10fed3d17fa5fed564ef42cce5364\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:48:47.765678 kubelet[2790]: E0813 01:48:47.764191 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-vtdcd_kube-system(caa5836a-45f9-496b-86c1-95f6e1b6da17)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-vtdcd_kube-system(caa5836a-45f9-496b-86c1-95f6e1b6da17)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2944f86e5dcba0d28a04be445ee890569ee10fed3d17fa5fed564ef42cce5364\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-vtdcd" podUID="caa5836a-45f9-496b-86c1-95f6e1b6da17" Aug 13 01:48:47.764878 systemd[1]: run-netns-cni\x2ddc8599ad\x2dd198\x2d176f\x2d1e75\x2de92abae8febf.mount: Deactivated successfully. Aug 13 01:48:50.285049 systemd[1]: Started sshd@19-172.232.7.67:22-147.75.109.163:41298.service - OpenSSH per-connection server daemon (147.75.109.163:41298). Aug 13 01:48:50.634039 sshd[5166]: Accepted publickey for core from 147.75.109.163 port 41298 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:48:50.635874 sshd-session[5166]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:48:50.643507 systemd-logind[1532]: New session 20 of user core. Aug 13 01:48:50.648899 systemd[1]: Started session-20.scope - Session 20 of User core. Aug 13 01:48:50.969439 sshd[5168]: Connection closed by 147.75.109.163 port 41298 Aug 13 01:48:50.970082 sshd-session[5166]: pam_unix(sshd:session): session closed for user core Aug 13 01:48:50.975071 systemd[1]: sshd@19-172.232.7.67:22-147.75.109.163:41298.service: Deactivated successfully. Aug 13 01:48:50.978554 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 01:48:50.980559 systemd-logind[1532]: Session 20 logged out. Waiting for processes to exit. Aug 13 01:48:50.984004 systemd-logind[1532]: Removed session 20. Aug 13 01:48:53.696687 containerd[1556]: time="2025-08-13T01:48:53.696508611Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76ff444f8d-4xcg9,Uid:f88563f6-5704-426b-aecc-303b3869ce30,Namespace:calico-system,Attempt:0,}" Aug 13 01:48:53.699157 kubelet[2790]: E0813 01:48:53.699102 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2096206682: write /var/lib/containerd/tmpmounts/containerd-mount2096206682/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-tsmrf" podUID="517ffc51-1a34-4ced-acf5-d8e5da6a1838" Aug 13 01:48:53.771624 containerd[1556]: time="2025-08-13T01:48:53.771554538Z" level=error msg="Failed to destroy network for sandbox \"0f34d473048a25ca05b823ca2d2bf0c75b0d477f64b99136e13c684f9d173e2d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:53.774129 systemd[1]: run-netns-cni\x2daf7d0eef\x2d81b6\x2d2b04\x2d6357\x2db104dcc95488.mount: Deactivated successfully. 
Aug 13 01:48:53.774990 containerd[1556]: time="2025-08-13T01:48:53.774946397Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76ff444f8d-4xcg9,Uid:f88563f6-5704-426b-aecc-303b3869ce30,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f34d473048a25ca05b823ca2d2bf0c75b0d477f64b99136e13c684f9d173e2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:53.775990 kubelet[2790]: E0813 01:48:53.775933 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f34d473048a25ca05b823ca2d2bf0c75b0d477f64b99136e13c684f9d173e2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:53.776171 kubelet[2790]: E0813 01:48:53.776130 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f34d473048a25ca05b823ca2d2bf0c75b0d477f64b99136e13c684f9d173e2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:48:53.776318 kubelet[2790]: E0813 01:48:53.776252 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0f34d473048a25ca05b823ca2d2bf0c75b0d477f64b99136e13c684f9d173e2d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:48:53.776572 kubelet[2790]: E0813 01:48:53.776413 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76ff444f8d-4xcg9_calico-system(f88563f6-5704-426b-aecc-303b3869ce30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76ff444f8d-4xcg9_calico-system(f88563f6-5704-426b-aecc-303b3869ce30)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0f34d473048a25ca05b823ca2d2bf0c75b0d477f64b99136e13c684f9d173e2d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" podUID="f88563f6-5704-426b-aecc-303b3869ce30" Aug 13 01:48:54.696665 containerd[1556]: time="2025-08-13T01:48:54.695817813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7jrc,Uid:4296a7ed-e75a-4d74-935a-9017b9a86286,Namespace:calico-system,Attempt:0,}" Aug 13 01:48:54.697907 kubelet[2790]: E0813 01:48:54.697882 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:48:54.765469 containerd[1556]: time="2025-08-13T01:48:54.765368420Z" level=error msg="Failed to destroy network for sandbox 
\"751415ba271086e459158023229772a28b0bc9ae7697f8e97868c1da1c2b234c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:54.768538 containerd[1556]: time="2025-08-13T01:48:54.768417651Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7jrc,Uid:4296a7ed-e75a-4d74-935a-9017b9a86286,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"751415ba271086e459158023229772a28b0bc9ae7697f8e97868c1da1c2b234c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:54.769862 kubelet[2790]: E0813 01:48:54.768898 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"751415ba271086e459158023229772a28b0bc9ae7697f8e97868c1da1c2b234c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:54.769862 kubelet[2790]: E0813 01:48:54.768966 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"751415ba271086e459158023229772a28b0bc9ae7697f8e97868c1da1c2b234c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:48:54.769862 kubelet[2790]: E0813 01:48:54.768988 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"751415ba271086e459158023229772a28b0bc9ae7697f8e97868c1da1c2b234c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:48:54.769862 kubelet[2790]: E0813 01:48:54.769040 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-c7jrc_calico-system(4296a7ed-e75a-4d74-935a-9017b9a86286)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-c7jrc_calico-system(4296a7ed-e75a-4d74-935a-9017b9a86286)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"751415ba271086e459158023229772a28b0bc9ae7697f8e97868c1da1c2b234c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-c7jrc" podUID="4296a7ed-e75a-4d74-935a-9017b9a86286" Aug 13 01:48:54.771531 systemd[1]: run-netns-cni\x2d2327409d\x2d6cce\x2dffe6\x2dd3a2\x2dd6872c2e52ef.mount: Deactivated successfully. Aug 13 01:48:56.035520 systemd[1]: Started sshd@20-172.232.7.67:22-147.75.109.163:41314.service - OpenSSH per-connection server daemon (147.75.109.163:41314). 
Aug 13 01:48:56.391130 sshd[5231]: Accepted publickey for core from 147.75.109.163 port 41314 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:48:56.393279 sshd-session[5231]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:48:56.403284 systemd-logind[1532]: New session 21 of user core. Aug 13 01:48:56.408910 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 13 01:48:56.726278 sshd[5233]: Connection closed by 147.75.109.163 port 41314 Aug 13 01:48:56.730214 sshd-session[5231]: pam_unix(sshd:session): session closed for user core Aug 13 01:48:56.734713 systemd-logind[1532]: Session 21 logged out. Waiting for processes to exit. Aug 13 01:48:56.735893 systemd[1]: sshd@20-172.232.7.67:22-147.75.109.163:41314.service: Deactivated successfully. Aug 13 01:48:56.738734 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 01:48:56.742015 systemd-logind[1532]: Removed session 21. Aug 13 01:48:58.697234 kubelet[2790]: E0813 01:48:58.696093 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:48:58.699918 kubelet[2790]: E0813 01:48:58.696340 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:48:58.700047 containerd[1556]: time="2025-08-13T01:48:58.699282733Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6rlkc,Uid:21a6ba02-58d5-43c1-a7de-9e24560a65f6,Namespace:kube-system,Attempt:0,}" Aug 13 01:48:58.797185 containerd[1556]: time="2025-08-13T01:48:58.797110514Z" level=error msg="Failed to destroy network for sandbox \"65c1777f86e1738f94a2bbc5520b7f597d66baf0f2247fa80e348c686f791055\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:58.800205 containerd[1556]: time="2025-08-13T01:48:58.800082057Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6rlkc,Uid:21a6ba02-58d5-43c1-a7de-9e24560a65f6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"65c1777f86e1738f94a2bbc5520b7f597d66baf0f2247fa80e348c686f791055\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:58.800697 kubelet[2790]: E0813 01:48:58.800622 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65c1777f86e1738f94a2bbc5520b7f597d66baf0f2247fa80e348c686f791055\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:58.800891 kubelet[2790]: E0813 01:48:58.800864 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65c1777f86e1738f94a2bbc5520b7f597d66baf0f2247fa80e348c686f791055\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted 
/var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:48:58.801031 kubelet[2790]: E0813 01:48:58.800933 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"65c1777f86e1738f94a2bbc5520b7f597d66baf0f2247fa80e348c686f791055\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:48:58.801111 kubelet[2790]: E0813 01:48:58.801083 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-6rlkc_kube-system(21a6ba02-58d5-43c1-a7de-9e24560a65f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-6rlkc_kube-system(21a6ba02-58d5-43c1-a7de-9e24560a65f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"65c1777f86e1738f94a2bbc5520b7f597d66baf0f2247fa80e348c686f791055\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6rlkc" podUID="21a6ba02-58d5-43c1-a7de-9e24560a65f6" Aug 13 01:48:58.801305 systemd[1]: run-netns-cni\x2d0efeaa29\x2d6092\x2da8e4\x2d6aee\x2def96b2479ae3.mount: Deactivated successfully. Aug 13 01:48:59.694723 kubelet[2790]: E0813 01:48:59.694684 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:48:59.695513 containerd[1556]: time="2025-08-13T01:48:59.695314371Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vtdcd,Uid:caa5836a-45f9-496b-86c1-95f6e1b6da17,Namespace:kube-system,Attempt:0,}" Aug 13 01:48:59.752864 containerd[1556]: time="2025-08-13T01:48:59.752782886Z" level=error msg="Failed to destroy network for sandbox \"82ead1dd62c93e401f660d6c882421e883b3be0ee83e070378954599b9ea10ed\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:59.757270 systemd[1]: run-netns-cni\x2ddc376102\x2d9efc\x2d49bb\x2ddcfb\x2d2649fc6bc1a0.mount: Deactivated successfully. 
Aug 13 01:48:59.758091 containerd[1556]: time="2025-08-13T01:48:59.758013695Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vtdcd,Uid:caa5836a-45f9-496b-86c1-95f6e1b6da17,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"82ead1dd62c93e401f660d6c882421e883b3be0ee83e070378954599b9ea10ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:59.758556 kubelet[2790]: E0813 01:48:59.758516 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82ead1dd62c93e401f660d6c882421e883b3be0ee83e070378954599b9ea10ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:59.759090 kubelet[2790]: E0813 01:48:59.759055 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82ead1dd62c93e401f660d6c882421e883b3be0ee83e070378954599b9ea10ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:48:59.759149 kubelet[2790]: E0813 01:48:59.759114 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"82ead1dd62c93e401f660d6c882421e883b3be0ee83e070378954599b9ea10ed\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:48:59.759243 kubelet[2790]: E0813 01:48:59.759196 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-vtdcd_kube-system(caa5836a-45f9-496b-86c1-95f6e1b6da17)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-vtdcd_kube-system(caa5836a-45f9-496b-86c1-95f6e1b6da17)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"82ead1dd62c93e401f660d6c882421e883b3be0ee83e070378954599b9ea10ed\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-vtdcd" podUID="caa5836a-45f9-496b-86c1-95f6e1b6da17" Aug 13 01:49:01.792349 systemd[1]: Started sshd@21-172.232.7.67:22-147.75.109.163:47894.service - OpenSSH per-connection server daemon (147.75.109.163:47894). Aug 13 01:49:02.145862 sshd[5301]: Accepted publickey for core from 147.75.109.163 port 47894 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:49:02.147481 sshd-session[5301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:49:02.153802 systemd-logind[1532]: New session 22 of user core. Aug 13 01:49:02.157928 systemd[1]: Started session-22.scope - Session 22 of User core. 
Aug 13 01:49:02.476560 sshd[5303]: Connection closed by 147.75.109.163 port 47894 Aug 13 01:49:02.478066 sshd-session[5301]: pam_unix(sshd:session): session closed for user core Aug 13 01:49:02.485488 systemd[1]: sshd@21-172.232.7.67:22-147.75.109.163:47894.service: Deactivated successfully. Aug 13 01:49:02.489096 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 01:49:02.491320 systemd-logind[1532]: Session 22 logged out. Waiting for processes to exit. Aug 13 01:49:02.493783 systemd-logind[1532]: Removed session 22. Aug 13 01:49:04.696613 kubelet[2790]: E0813 01:49:04.696135 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2096206682: write /var/lib/containerd/tmpmounts/containerd-mount2096206682/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-tsmrf" podUID="517ffc51-1a34-4ced-acf5-d8e5da6a1838" Aug 13 01:49:05.696203 containerd[1556]: time="2025-08-13T01:49:05.695883215Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76ff444f8d-4xcg9,Uid:f88563f6-5704-426b-aecc-303b3869ce30,Namespace:calico-system,Attempt:0,}" Aug 13 01:49:05.769809 containerd[1556]: time="2025-08-13T01:49:05.769730834Z" level=error msg="Failed to destroy network for sandbox \"1d0ae1300ad74df12f3b716570f3329901d4bc7e32f305eb861f50d68fe63fef\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:49:05.772910 systemd[1]: run-netns-cni\x2dd3fd1fc5\x2d3f78\x2d35f5\x2dd4c5\x2d915a88d43e54.mount: Deactivated successfully. 
Aug 13 01:49:05.774105 containerd[1556]: time="2025-08-13T01:49:05.772687068Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76ff444f8d-4xcg9,Uid:f88563f6-5704-426b-aecc-303b3869ce30,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d0ae1300ad74df12f3b716570f3329901d4bc7e32f305eb861f50d68fe63fef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:49:05.774199 kubelet[2790]: E0813 01:49:05.773358 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d0ae1300ad74df12f3b716570f3329901d4bc7e32f305eb861f50d68fe63fef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:49:05.774199 kubelet[2790]: E0813 01:49:05.773489 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d0ae1300ad74df12f3b716570f3329901d4bc7e32f305eb861f50d68fe63fef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:49:05.774199 kubelet[2790]: E0813 01:49:05.773524 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d0ae1300ad74df12f3b716570f3329901d4bc7e32f305eb861f50d68fe63fef\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:49:05.774199 kubelet[2790]: E0813 01:49:05.773656 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76ff444f8d-4xcg9_calico-system(f88563f6-5704-426b-aecc-303b3869ce30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76ff444f8d-4xcg9_calico-system(f88563f6-5704-426b-aecc-303b3869ce30)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d0ae1300ad74df12f3b716570f3329901d4bc7e32f305eb861f50d68fe63fef\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" podUID="f88563f6-5704-426b-aecc-303b3869ce30" Aug 13 01:49:07.541955 systemd[1]: Started sshd@22-172.232.7.67:22-147.75.109.163:47904.service - OpenSSH per-connection server daemon (147.75.109.163:47904). Aug 13 01:49:07.885739 sshd[5343]: Accepted publickey for core from 147.75.109.163 port 47904 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:49:07.888232 sshd-session[5343]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:49:07.894794 systemd-logind[1532]: New session 23 of user core. Aug 13 01:49:07.899920 systemd[1]: Started session-23.scope - Session 23 of User core. 
Aug 13 01:49:08.214906 sshd[5345]: Connection closed by 147.75.109.163 port 47904 Aug 13 01:49:08.215598 sshd-session[5343]: pam_unix(sshd:session): session closed for user core Aug 13 01:49:08.220271 systemd-logind[1532]: Session 23 logged out. Waiting for processes to exit. Aug 13 01:49:08.221269 systemd[1]: sshd@22-172.232.7.67:22-147.75.109.163:47904.service: Deactivated successfully. Aug 13 01:49:08.224342 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 01:49:08.227211 systemd-logind[1532]: Removed session 23. Aug 13 01:49:10.695784 containerd[1556]: time="2025-08-13T01:49:10.695542545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7jrc,Uid:4296a7ed-e75a-4d74-935a-9017b9a86286,Namespace:calico-system,Attempt:0,}" Aug 13 01:49:10.762323 containerd[1556]: time="2025-08-13T01:49:10.762251543Z" level=error msg="Failed to destroy network for sandbox \"104522766bba83678be552641740e8b3b5f0d0201ca9f954fd13fd6cb3e7615a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:49:10.764484 systemd[1]: run-netns-cni\x2da8154e92\x2dac08\x2d5a07\x2d7be8\x2d1faf12e150e2.mount: Deactivated successfully. Aug 13 01:49:10.765949 containerd[1556]: time="2025-08-13T01:49:10.765827035Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7jrc,Uid:4296a7ed-e75a-4d74-935a-9017b9a86286,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"104522766bba83678be552641740e8b3b5f0d0201ca9f954fd13fd6cb3e7615a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:49:10.766977 kubelet[2790]: E0813 01:49:10.766260 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"104522766bba83678be552641740e8b3b5f0d0201ca9f954fd13fd6cb3e7615a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:49:10.766977 kubelet[2790]: E0813 01:49:10.766324 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"104522766bba83678be552641740e8b3b5f0d0201ca9f954fd13fd6cb3e7615a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:49:10.766977 kubelet[2790]: E0813 01:49:10.766348 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"104522766bba83678be552641740e8b3b5f0d0201ca9f954fd13fd6cb3e7615a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:49:10.766977 kubelet[2790]: E0813 01:49:10.766401 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-c7jrc_calico-system(4296a7ed-e75a-4d74-935a-9017b9a86286)\" with 
CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-c7jrc_calico-system(4296a7ed-e75a-4d74-935a-9017b9a86286)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"104522766bba83678be552641740e8b3b5f0d0201ca9f954fd13fd6cb3e7615a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-c7jrc" podUID="4296a7ed-e75a-4d74-935a-9017b9a86286" Aug 13 01:49:12.694633 kubelet[2790]: E0813 01:49:12.694559 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:49:12.696995 containerd[1556]: time="2025-08-13T01:49:12.696955146Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6rlkc,Uid:21a6ba02-58d5-43c1-a7de-9e24560a65f6,Namespace:kube-system,Attempt:0,}" Aug 13 01:49:12.766068 containerd[1556]: time="2025-08-13T01:49:12.766010341Z" level=error msg="Failed to destroy network for sandbox \"45288b0a9c594ffca05ebeb2efcf7e586b723db4f331099593b2f98437909378\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:49:12.768646 containerd[1556]: time="2025-08-13T01:49:12.768579308Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6rlkc,Uid:21a6ba02-58d5-43c1-a7de-9e24560a65f6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"45288b0a9c594ffca05ebeb2efcf7e586b723db4f331099593b2f98437909378\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:49:12.769349 kubelet[2790]: E0813 01:49:12.768909 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45288b0a9c594ffca05ebeb2efcf7e586b723db4f331099593b2f98437909378\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:49:12.769349 kubelet[2790]: E0813 01:49:12.769001 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45288b0a9c594ffca05ebeb2efcf7e586b723db4f331099593b2f98437909378\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:49:12.769349 kubelet[2790]: E0813 01:49:12.769027 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45288b0a9c594ffca05ebeb2efcf7e586b723db4f331099593b2f98437909378\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:49:12.769349 kubelet[2790]: E0813 01:49:12.769100 2790 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-6rlkc_kube-system(21a6ba02-58d5-43c1-a7de-9e24560a65f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-6rlkc_kube-system(21a6ba02-58d5-43c1-a7de-9e24560a65f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"45288b0a9c594ffca05ebeb2efcf7e586b723db4f331099593b2f98437909378\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6rlkc" podUID="21a6ba02-58d5-43c1-a7de-9e24560a65f6" Aug 13 01:49:12.769528 systemd[1]: run-netns-cni\x2df3838255\x2d876d\x2d97ed\x2d56a9\x2dba5b1321524e.mount: Deactivated successfully. Aug 13 01:49:13.279343 systemd[1]: Started sshd@23-172.232.7.67:22-147.75.109.163:59628.service - OpenSSH per-connection server daemon (147.75.109.163:59628). Aug 13 01:49:13.610227 sshd[5410]: Accepted publickey for core from 147.75.109.163 port 59628 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:49:13.612499 sshd-session[5410]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:49:13.619901 systemd-logind[1532]: New session 24 of user core. Aug 13 01:49:13.626026 systemd[1]: Started session-24.scope - Session 24 of User core. Aug 13 01:49:13.945942 sshd[5412]: Connection closed by 147.75.109.163 port 59628 Aug 13 01:49:13.946783 sshd-session[5410]: pam_unix(sshd:session): session closed for user core Aug 13 01:49:13.951233 systemd-logind[1532]: Session 24 logged out. Waiting for processes to exit. Aug 13 01:49:13.952634 systemd[1]: sshd@23-172.232.7.67:22-147.75.109.163:59628.service: Deactivated successfully. Aug 13 01:49:13.958076 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 01:49:13.963889 systemd-logind[1532]: Removed session 24. Aug 13 01:49:14.699638 kubelet[2790]: E0813 01:49:14.699604 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:49:14.700815 containerd[1556]: time="2025-08-13T01:49:14.700222724Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vtdcd,Uid:caa5836a-45f9-496b-86c1-95f6e1b6da17,Namespace:kube-system,Attempt:0,}" Aug 13 01:49:14.757211 containerd[1556]: time="2025-08-13T01:49:14.757151626Z" level=error msg="Failed to destroy network for sandbox \"d97e9d593647609848eb6de54f38b41d615366a2836cc862e4b2ba02d180d600\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:49:14.759395 systemd[1]: run-netns-cni\x2dd7ae99e1\x2d43a9\x2d15b7\x2d08b1\x2d4cd0557088d8.mount: Deactivated successfully. 
Aug 13 01:49:14.760317 containerd[1556]: time="2025-08-13T01:49:14.760164011Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vtdcd,Uid:caa5836a-45f9-496b-86c1-95f6e1b6da17,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d97e9d593647609848eb6de54f38b41d615366a2836cc862e4b2ba02d180d600\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:49:14.761440 kubelet[2790]: E0813 01:49:14.761393 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d97e9d593647609848eb6de54f38b41d615366a2836cc862e4b2ba02d180d600\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:49:14.761541 kubelet[2790]: E0813 01:49:14.761464 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d97e9d593647609848eb6de54f38b41d615366a2836cc862e4b2ba02d180d600\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:49:14.761541 kubelet[2790]: E0813 01:49:14.761495 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d97e9d593647609848eb6de54f38b41d615366a2836cc862e4b2ba02d180d600\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:49:14.761660 kubelet[2790]: E0813 01:49:14.761548 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-vtdcd_kube-system(caa5836a-45f9-496b-86c1-95f6e1b6da17)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-vtdcd_kube-system(caa5836a-45f9-496b-86c1-95f6e1b6da17)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d97e9d593647609848eb6de54f38b41d615366a2836cc862e4b2ba02d180d600\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-vtdcd" podUID="caa5836a-45f9-496b-86c1-95f6e1b6da17" Aug 13 01:49:15.696486 kubelet[2790]: E0813 01:49:15.696431 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2096206682: write /var/lib/containerd/tmpmounts/containerd-mount2096206682/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-tsmrf" podUID="517ffc51-1a34-4ced-acf5-d8e5da6a1838" Aug 13 01:49:16.700786 
kubelet[2790]: E0813 01:49:16.700625 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:49:19.030897 systemd[1]: Started sshd@24-172.232.7.67:22-147.75.109.163:50804.service - OpenSSH per-connection server daemon (147.75.109.163:50804). Aug 13 01:49:19.456936 sshd[5452]: Accepted publickey for core from 147.75.109.163 port 50804 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:49:19.461639 sshd-session[5452]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:49:19.482610 systemd-logind[1532]: New session 25 of user core. Aug 13 01:49:19.494314 systemd[1]: Started session-25.scope - Session 25 of User core. Aug 13 01:49:19.699018 containerd[1556]: time="2025-08-13T01:49:19.698162729Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76ff444f8d-4xcg9,Uid:f88563f6-5704-426b-aecc-303b3869ce30,Namespace:calico-system,Attempt:0,}" Aug 13 01:49:19.894948 containerd[1556]: time="2025-08-13T01:49:19.894854183Z" level=error msg="Failed to destroy network for sandbox \"9dc3f9b11d1c424e352b9942ea41fd40471becedc38c9bb18f99a926c3806ed3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:49:19.902845 systemd[1]: run-netns-cni\x2d0cb132ae\x2df558\x2dd781\x2d2859\x2d27f61e1d5510.mount: Deactivated successfully. Aug 13 01:49:19.909921 containerd[1556]: time="2025-08-13T01:49:19.909597885Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76ff444f8d-4xcg9,Uid:f88563f6-5704-426b-aecc-303b3869ce30,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dc3f9b11d1c424e352b9942ea41fd40471becedc38c9bb18f99a926c3806ed3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:49:19.910988 kubelet[2790]: E0813 01:49:19.910820 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dc3f9b11d1c424e352b9942ea41fd40471becedc38c9bb18f99a926c3806ed3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:49:19.913109 kubelet[2790]: E0813 01:49:19.911099 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dc3f9b11d1c424e352b9942ea41fd40471becedc38c9bb18f99a926c3806ed3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:49:19.913109 kubelet[2790]: E0813 01:49:19.911463 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9dc3f9b11d1c424e352b9942ea41fd40471becedc38c9bb18f99a926c3806ed3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:49:19.913109 kubelet[2790]: E0813 01:49:19.911930 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76ff444f8d-4xcg9_calico-system(f88563f6-5704-426b-aecc-303b3869ce30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76ff444f8d-4xcg9_calico-system(f88563f6-5704-426b-aecc-303b3869ce30)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9dc3f9b11d1c424e352b9942ea41fd40471becedc38c9bb18f99a926c3806ed3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" podUID="f88563f6-5704-426b-aecc-303b3869ce30" Aug 13 01:49:19.961979 sshd[5454]: Connection closed by 147.75.109.163 port 50804 Aug 13 01:49:19.963924 sshd-session[5452]: pam_unix(sshd:session): session closed for user core Aug 13 01:49:19.973062 systemd[1]: sshd@24-172.232.7.67:22-147.75.109.163:50804.service: Deactivated successfully. Aug 13 01:49:19.977786 systemd[1]: session-25.scope: Deactivated successfully. Aug 13 01:49:19.981686 systemd-logind[1532]: Session 25 logged out. Waiting for processes to exit. Aug 13 01:49:19.986431 systemd-logind[1532]: Removed session 25. Aug 13 01:49:23.695001 kubelet[2790]: E0813 01:49:23.694954 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:49:23.696090 containerd[1556]: time="2025-08-13T01:49:23.695999190Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6rlkc,Uid:21a6ba02-58d5-43c1-a7de-9e24560a65f6,Namespace:kube-system,Attempt:0,}" Aug 13 01:49:23.767040 containerd[1556]: time="2025-08-13T01:49:23.766983678Z" level=error msg="Failed to destroy network for sandbox \"40e8f59d6547e6ea26115287a83d8a6ec55cfe8002a3fc562ed70eb187df9eb4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:49:23.770132 containerd[1556]: time="2025-08-13T01:49:23.770086884Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6rlkc,Uid:21a6ba02-58d5-43c1-a7de-9e24560a65f6,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"40e8f59d6547e6ea26115287a83d8a6ec55cfe8002a3fc562ed70eb187df9eb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:49:23.770821 kubelet[2790]: E0813 01:49:23.770304 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40e8f59d6547e6ea26115287a83d8a6ec55cfe8002a3fc562ed70eb187df9eb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:49:23.770821 kubelet[2790]: E0813 01:49:23.770354 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown 
desc = failed to setup network for sandbox \"40e8f59d6547e6ea26115287a83d8a6ec55cfe8002a3fc562ed70eb187df9eb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:49:23.770821 kubelet[2790]: E0813 01:49:23.770377 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"40e8f59d6547e6ea26115287a83d8a6ec55cfe8002a3fc562ed70eb187df9eb4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:49:23.770821 kubelet[2790]: E0813 01:49:23.770436 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-6rlkc_kube-system(21a6ba02-58d5-43c1-a7de-9e24560a65f6)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-6rlkc_kube-system(21a6ba02-58d5-43c1-a7de-9e24560a65f6)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"40e8f59d6547e6ea26115287a83d8a6ec55cfe8002a3fc562ed70eb187df9eb4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-6rlkc" podUID="21a6ba02-58d5-43c1-a7de-9e24560a65f6" Aug 13 01:49:23.770807 systemd[1]: run-netns-cni\x2d6f18b7fa\x2db958\x2d57bd\x2dcc88\x2d6c288f1d3619.mount: Deactivated successfully. Aug 13 01:49:25.034070 systemd[1]: Started sshd@25-172.232.7.67:22-147.75.109.163:50818.service - OpenSSH per-connection server daemon (147.75.109.163:50818). Aug 13 01:49:25.386860 sshd[5521]: Accepted publickey for core from 147.75.109.163 port 50818 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:49:25.388614 sshd-session[5521]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:49:25.395543 systemd-logind[1532]: New session 26 of user core. Aug 13 01:49:25.403879 systemd[1]: Started session-26.scope - Session 26 of User core. Aug 13 01:49:25.696963 containerd[1556]: time="2025-08-13T01:49:25.696067151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7jrc,Uid:4296a7ed-e75a-4d74-935a-9017b9a86286,Namespace:calico-system,Attempt:0,}" Aug 13 01:49:25.708255 sshd[5523]: Connection closed by 147.75.109.163 port 50818 Aug 13 01:49:25.709638 sshd-session[5521]: pam_unix(sshd:session): session closed for user core Aug 13 01:49:25.715542 systemd[1]: sshd@25-172.232.7.67:22-147.75.109.163:50818.service: Deactivated successfully. Aug 13 01:49:25.719343 systemd[1]: session-26.scope: Deactivated successfully. Aug 13 01:49:25.721892 systemd-logind[1532]: Session 26 logged out. Waiting for processes to exit. Aug 13 01:49:25.723453 systemd-logind[1532]: Removed session 26. 
Aug 13 01:49:25.770083 containerd[1556]: time="2025-08-13T01:49:25.770031603Z" level=error msg="Failed to destroy network for sandbox \"ac2554fd2cee030c7493b6f9af1a6e71933ea7497a7df42051c5fb1fa9f4837a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:49:25.773345 systemd[1]: run-netns-cni\x2d75f552b3\x2d22ce\x2d03d7\x2dc266\x2de8ce31070079.mount: Deactivated successfully. Aug 13 01:49:25.773694 containerd[1556]: time="2025-08-13T01:49:25.773642117Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7jrc,Uid:4296a7ed-e75a-4d74-935a-9017b9a86286,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac2554fd2cee030c7493b6f9af1a6e71933ea7497a7df42051c5fb1fa9f4837a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:49:25.774123 kubelet[2790]: E0813 01:49:25.773941 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac2554fd2cee030c7493b6f9af1a6e71933ea7497a7df42051c5fb1fa9f4837a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:49:25.774123 kubelet[2790]: E0813 01:49:25.774003 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac2554fd2cee030c7493b6f9af1a6e71933ea7497a7df42051c5fb1fa9f4837a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:49:25.774123 kubelet[2790]: E0813 01:49:25.774032 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac2554fd2cee030c7493b6f9af1a6e71933ea7497a7df42051c5fb1fa9f4837a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:49:25.774123 kubelet[2790]: E0813 01:49:25.774081 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-c7jrc_calico-system(4296a7ed-e75a-4d74-935a-9017b9a86286)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-c7jrc_calico-system(4296a7ed-e75a-4d74-935a-9017b9a86286)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac2554fd2cee030c7493b6f9af1a6e71933ea7497a7df42051c5fb1fa9f4837a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-c7jrc" podUID="4296a7ed-e75a-4d74-935a-9017b9a86286" Aug 13 01:49:27.694979 kubelet[2790]: E0813 01:49:27.694933 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 
172.232.0.18" Aug 13 01:49:27.695606 containerd[1556]: time="2025-08-13T01:49:27.695504914Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vtdcd,Uid:caa5836a-45f9-496b-86c1-95f6e1b6da17,Namespace:kube-system,Attempt:0,}" Aug 13 01:49:27.755340 containerd[1556]: time="2025-08-13T01:49:27.755273992Z" level=error msg="Failed to destroy network for sandbox \"7f11ef349382db8c52dab42db03c3308b72def83082616e9ebf21ca111c59ed8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:49:27.757897 systemd[1]: run-netns-cni\x2d6af839f2\x2d5bdb\x2d6e81\x2dd6cb\x2db4586daa668a.mount: Deactivated successfully. Aug 13 01:49:27.759647 containerd[1556]: time="2025-08-13T01:49:27.759274276Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vtdcd,Uid:caa5836a-45f9-496b-86c1-95f6e1b6da17,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f11ef349382db8c52dab42db03c3308b72def83082616e9ebf21ca111c59ed8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:49:27.759775 kubelet[2790]: E0813 01:49:27.759542 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f11ef349382db8c52dab42db03c3308b72def83082616e9ebf21ca111c59ed8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:49:27.759775 kubelet[2790]: E0813 01:49:27.759608 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f11ef349382db8c52dab42db03c3308b72def83082616e9ebf21ca111c59ed8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:49:27.759775 kubelet[2790]: E0813 01:49:27.759706 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7f11ef349382db8c52dab42db03c3308b72def83082616e9ebf21ca111c59ed8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:49:27.759905 kubelet[2790]: E0813 01:49:27.759851 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-vtdcd_kube-system(caa5836a-45f9-496b-86c1-95f6e1b6da17)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-vtdcd_kube-system(caa5836a-45f9-496b-86c1-95f6e1b6da17)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7f11ef349382db8c52dab42db03c3308b72def83082616e9ebf21ca111c59ed8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-vtdcd" 
podUID="caa5836a-45f9-496b-86c1-95f6e1b6da17" Aug 13 01:49:30.698884 containerd[1556]: time="2025-08-13T01:49:30.698721105Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 01:49:30.765676 systemd[1]: Started sshd@26-172.232.7.67:22-147.75.109.163:53906.service - OpenSSH per-connection server daemon (147.75.109.163:53906). Aug 13 01:49:31.108550 sshd[5589]: Accepted publickey for core from 147.75.109.163 port 53906 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:49:31.110571 sshd-session[5589]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:49:31.115956 systemd-logind[1532]: New session 27 of user core. Aug 13 01:49:31.122897 systemd[1]: Started session-27.scope - Session 27 of User core. Aug 13 01:49:31.470993 sshd[5591]: Connection closed by 147.75.109.163 port 53906 Aug 13 01:49:31.472107 sshd-session[5589]: pam_unix(sshd:session): session closed for user core Aug 13 01:49:31.481324 systemd[1]: sshd@26-172.232.7.67:22-147.75.109.163:53906.service: Deactivated successfully. Aug 13 01:49:31.483426 systemd-logind[1532]: Session 27 logged out. Waiting for processes to exit. Aug 13 01:49:31.487350 systemd[1]: session-27.scope: Deactivated successfully. Aug 13 01:49:31.495234 systemd-logind[1532]: Removed session 27. Aug 13 01:49:32.697972 containerd[1556]: time="2025-08-13T01:49:32.697881553Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76ff444f8d-4xcg9,Uid:f88563f6-5704-426b-aecc-303b3869ce30,Namespace:calico-system,Attempt:0,}" Aug 13 01:49:32.816769 containerd[1556]: time="2025-08-13T01:49:32.816588580Z" level=error msg="Failed to destroy network for sandbox \"c567ec85c424aee793196ca752d6ce80d7741a520d0bc1dea2cc9d817f89ef40\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:49:32.820634 systemd[1]: run-netns-cni\x2d077eaa69\x2d699d\x2df31e\x2dfed4\x2d8fa2590b0468.mount: Deactivated successfully. 
Aug 13 01:49:32.822446 containerd[1556]: time="2025-08-13T01:49:32.822410017Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76ff444f8d-4xcg9,Uid:f88563f6-5704-426b-aecc-303b3869ce30,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c567ec85c424aee793196ca752d6ce80d7741a520d0bc1dea2cc9d817f89ef40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:49:32.822988 kubelet[2790]: E0813 01:49:32.822949 2790 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c567ec85c424aee793196ca752d6ce80d7741a520d0bc1dea2cc9d817f89ef40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:49:32.824056 kubelet[2790]: E0813 01:49:32.823448 2790 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c567ec85c424aee793196ca752d6ce80d7741a520d0bc1dea2cc9d817f89ef40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:49:32.824056 kubelet[2790]: E0813 01:49:32.823507 2790 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c567ec85c424aee793196ca752d6ce80d7741a520d0bc1dea2cc9d817f89ef40\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:49:32.824056 kubelet[2790]: E0813 01:49:32.823985 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-76ff444f8d-4xcg9_calico-system(f88563f6-5704-426b-aecc-303b3869ce30)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-76ff444f8d-4xcg9_calico-system(f88563f6-5704-426b-aecc-303b3869ce30)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c567ec85c424aee793196ca752d6ce80d7741a520d0bc1dea2cc9d817f89ef40\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" podUID="f88563f6-5704-426b-aecc-303b3869ce30" Aug 13 01:49:34.185504 kubelet[2790]: I0813 01:49:34.185417 2790 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:49:34.187814 kubelet[2790]: I0813 01:49:34.186529 2790 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:49:34.191383 kubelet[2790]: I0813 01:49:34.191207 2790 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:49:34.193972 kubelet[2790]: I0813 01:49:34.193897 2790 image_gc_manager.go:514] "Removing image to free bytes" imageID="sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93" size=25052538 
runtimeHandler="" Aug 13 01:49:34.194216 containerd[1556]: time="2025-08-13T01:49:34.194179333Z" level=info msg="RemoveImage \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 01:49:34.196601 containerd[1556]: time="2025-08-13T01:49:34.196545974Z" level=info msg="ImageDelete event name:\"quay.io/tigera/operator:v1.38.3\"" Aug 13 01:49:34.197406 containerd[1556]: time="2025-08-13T01:49:34.197383610Z" level=info msg="ImageDelete event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\"" Aug 13 01:49:34.198275 containerd[1556]: time="2025-08-13T01:49:34.198250807Z" level=info msg="RemoveImage \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" returns successfully" Aug 13 01:49:34.198344 containerd[1556]: time="2025-08-13T01:49:34.198325877Z" level=info msg="ImageDelete event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 01:49:34.216357 kubelet[2790]: I0813 01:49:34.216319 2790 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:49:34.216730 kubelet[2790]: I0813 01:49:34.216656 2790 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-76ff444f8d-4xcg9","kube-system/coredns-674b8bbfcf-vtdcd","kube-system/coredns-674b8bbfcf-6rlkc","calico-system/calico-node-tsmrf","calico-system/csi-node-driver-c7jrc","calico-system/calico-typha-64bcb76cdd-m4xlg","kube-system/kube-controller-manager-172-232-7-67","kube-system/kube-proxy-mjdwx","kube-system/kube-apiserver-172-232-7-67","kube-system/kube-scheduler-172-232-7-67"] Aug 13 01:49:34.217004 kubelet[2790]: E0813 01:49:34.216891 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:49:34.217004 kubelet[2790]: E0813 01:49:34.216944 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:49:34.217004 kubelet[2790]: E0813 01:49:34.216952 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:49:34.217004 kubelet[2790]: E0813 01:49:34.216965 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-tsmrf" Aug 13 01:49:34.217004 kubelet[2790]: E0813 01:49:34.216972 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:49:34.217263 kubelet[2790]: E0813 01:49:34.217098 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-64bcb76cdd-m4xlg" Aug 13 01:49:34.217263 kubelet[2790]: E0813 01:49:34.217110 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-67" Aug 13 01:49:34.217263 kubelet[2790]: E0813 01:49:34.217118 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mjdwx" Aug 13 01:49:34.217263 kubelet[2790]: E0813 01:49:34.217125 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-67" Aug 13 01:49:34.217263 kubelet[2790]: E0813 01:49:34.217132 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-67" Aug 13 
01:49:34.217263 kubelet[2790]: I0813 01:49:34.217154 2790 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:49:35.739839 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3330885441.mount: Deactivated successfully. Aug 13 01:49:35.770632 containerd[1556]: time="2025-08-13T01:49:35.770547280Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:49:35.771491 containerd[1556]: time="2025-08-13T01:49:35.771290157Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 01:49:35.772094 containerd[1556]: time="2025-08-13T01:49:35.772062024Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:49:35.773482 containerd[1556]: time="2025-08-13T01:49:35.773449518Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:49:35.774102 containerd[1556]: time="2025-08-13T01:49:35.774077046Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 5.074201176s" Aug 13 01:49:35.774191 containerd[1556]: time="2025-08-13T01:49:35.774172516Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 13 01:49:35.804279 containerd[1556]: time="2025-08-13T01:49:35.804237250Z" level=info msg="CreateContainer within sandbox \"19f136ecf27677a48dbcfabc2d82ff742c8f56ba9769515ce641ecafbfefd0f8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 01:49:35.819384 containerd[1556]: time="2025-08-13T01:49:35.819350272Z" level=info msg="Container 6da5154e125e31750839c134b00edd40c42a3264d34e846351af2022803e2f22: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:49:35.824086 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount404510748.mount: Deactivated successfully. Aug 13 01:49:35.837100 containerd[1556]: time="2025-08-13T01:49:35.837026483Z" level=info msg="CreateContainer within sandbox \"19f136ecf27677a48dbcfabc2d82ff742c8f56ba9769515ce641ecafbfefd0f8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"6da5154e125e31750839c134b00edd40c42a3264d34e846351af2022803e2f22\"" Aug 13 01:49:35.838390 containerd[1556]: time="2025-08-13T01:49:35.838274398Z" level=info msg="StartContainer for \"6da5154e125e31750839c134b00edd40c42a3264d34e846351af2022803e2f22\"" Aug 13 01:49:35.841374 containerd[1556]: time="2025-08-13T01:49:35.841317387Z" level=info msg="connecting to shim 6da5154e125e31750839c134b00edd40c42a3264d34e846351af2022803e2f22" address="unix:///run/containerd/s/c15a429e3e4a3dd148596df52b2faebcc1e6f8eb8c8eb6510ce6f33ab5356af0" protocol=ttrpc version=3 Aug 13 01:49:35.877151 systemd[1]: Started cri-containerd-6da5154e125e31750839c134b00edd40c42a3264d34e846351af2022803e2f22.scope - libcontainer container 6da5154e125e31750839c134b00edd40c42a3264d34e846351af2022803e2f22. 
Aug 13 01:49:35.955944 containerd[1556]: time="2025-08-13T01:49:35.955899895Z" level=info msg="StartContainer for \"6da5154e125e31750839c134b00edd40c42a3264d34e846351af2022803e2f22\" returns successfully" Aug 13 01:49:36.060556 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 01:49:36.061059 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Aug 13 01:49:36.539979 systemd[1]: Started sshd@27-172.232.7.67:22-147.75.109.163:53910.service - OpenSSH per-connection server daemon (147.75.109.163:53910). Aug 13 01:49:36.608801 kubelet[2790]: I0813 01:49:36.608176 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-tsmrf" podStartSLOduration=1.864187671 podStartE2EDuration="3m18.608124884s" podCreationTimestamp="2025-08-13 01:46:18 +0000 UTC" firstStartedPulling="2025-08-13 01:46:19.032432914 +0000 UTC m=+22.487201279" lastFinishedPulling="2025-08-13 01:49:35.776370127 +0000 UTC m=+219.231138492" observedRunningTime="2025-08-13 01:49:36.527142363 +0000 UTC m=+219.981910728" watchObservedRunningTime="2025-08-13 01:49:36.608124884 +0000 UTC m=+220.062893239" Aug 13 01:49:36.668426 containerd[1556]: time="2025-08-13T01:49:36.668155395Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6da5154e125e31750839c134b00edd40c42a3264d34e846351af2022803e2f22\" id:\"4c2cdc2fe24dfc277e7d0fdf489f06836f489710221b7c78dbbf462310b39e96\" pid:5712 exit_status:1 exited_at:{seconds:1755049776 nanos:667389718}" Aug 13 01:49:36.919707 sshd[5714]: Accepted publickey for core from 147.75.109.163 port 53910 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:49:36.922165 sshd-session[5714]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:49:36.928150 systemd-logind[1532]: New session 28 of user core. Aug 13 01:49:36.934975 systemd[1]: Started session-28.scope - Session 28 of User core. Aug 13 01:49:37.258492 sshd[5726]: Connection closed by 147.75.109.163 port 53910 Aug 13 01:49:37.259463 sshd-session[5714]: pam_unix(sshd:session): session closed for user core Aug 13 01:49:37.263128 systemd[1]: sshd@27-172.232.7.67:22-147.75.109.163:53910.service: Deactivated successfully. Aug 13 01:49:37.265679 systemd[1]: session-28.scope: Deactivated successfully. Aug 13 01:49:37.267561 systemd-logind[1532]: Session 28 logged out. Waiting for processes to exit. Aug 13 01:49:37.269792 systemd-logind[1532]: Removed session 28. 
Aug 13 01:49:37.697115 kubelet[2790]: E0813 01:49:37.696547 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:49:37.698040 kubelet[2790]: E0813 01:49:37.697952 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:49:37.698979 containerd[1556]: time="2025-08-13T01:49:37.698934270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6rlkc,Uid:21a6ba02-58d5-43c1-a7de-9e24560a65f6,Namespace:kube-system,Attempt:0,}" Aug 13 01:49:37.982972 systemd-networkd[1462]: cali6a7362ede62: Link UP Aug 13 01:49:37.984901 systemd-networkd[1462]: cali6a7362ede62: Gained carrier Aug 13 01:49:38.014500 containerd[1556]: 2025-08-13 01:49:37.770 [INFO][5847] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Aug 13 01:49:38.014500 containerd[1556]: 2025-08-13 01:49:37.812 [INFO][5847] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--7--67-k8s-coredns--674b8bbfcf--6rlkc-eth0 coredns-674b8bbfcf- kube-system 21a6ba02-58d5-43c1-a7de-9e24560a65f6 862 0 2025-08-13 01:46:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-232-7-67 coredns-674b8bbfcf-6rlkc eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali6a7362ede62 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="24598e97c2b289cd71ac0107d6ce568c58c99322fabd02def457d617b2510895" Namespace="kube-system" Pod="coredns-674b8bbfcf-6rlkc" WorkloadEndpoint="172--232--7--67-k8s-coredns--674b8bbfcf--6rlkc-" Aug 13 01:49:38.014500 containerd[1556]: 2025-08-13 01:49:37.813 [INFO][5847] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="24598e97c2b289cd71ac0107d6ce568c58c99322fabd02def457d617b2510895" Namespace="kube-system" Pod="coredns-674b8bbfcf-6rlkc" WorkloadEndpoint="172--232--7--67-k8s-coredns--674b8bbfcf--6rlkc-eth0" Aug 13 01:49:38.014500 containerd[1556]: 2025-08-13 01:49:37.891 [INFO][5860] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="24598e97c2b289cd71ac0107d6ce568c58c99322fabd02def457d617b2510895" HandleID="k8s-pod-network.24598e97c2b289cd71ac0107d6ce568c58c99322fabd02def457d617b2510895" Workload="172--232--7--67-k8s-coredns--674b8bbfcf--6rlkc-eth0" Aug 13 01:49:38.014975 containerd[1556]: 2025-08-13 01:49:37.892 [INFO][5860] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="24598e97c2b289cd71ac0107d6ce568c58c99322fabd02def457d617b2510895" HandleID="k8s-pod-network.24598e97c2b289cd71ac0107d6ce568c58c99322fabd02def457d617b2510895" Workload="172--232--7--67-k8s-coredns--674b8bbfcf--6rlkc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc000304330), Attrs:map[string]string{"namespace":"kube-system", "node":"172-232-7-67", "pod":"coredns-674b8bbfcf-6rlkc", "timestamp":"2025-08-13 01:49:37.891322973 +0000 UTC"}, Hostname:"172-232-7-67", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:49:38.014975 containerd[1556]: 2025-08-13 
01:49:37.892 [INFO][5860] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:49:38.014975 containerd[1556]: 2025-08-13 01:49:37.893 [INFO][5860] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:49:38.014975 containerd[1556]: 2025-08-13 01:49:37.893 [INFO][5860] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-7-67' Aug 13 01:49:38.014975 containerd[1556]: 2025-08-13 01:49:37.905 [INFO][5860] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.24598e97c2b289cd71ac0107d6ce568c58c99322fabd02def457d617b2510895" host="172-232-7-67" Aug 13 01:49:38.014975 containerd[1556]: 2025-08-13 01:49:37.923 [INFO][5860] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-7-67" Aug 13 01:49:38.014975 containerd[1556]: 2025-08-13 01:49:37.934 [INFO][5860] ipam/ipam.go 511: Trying affinity for 192.168.94.192/26 host="172-232-7-67" Aug 13 01:49:38.014975 containerd[1556]: 2025-08-13 01:49:37.936 [INFO][5860] ipam/ipam.go 158: Attempting to load block cidr=192.168.94.192/26 host="172-232-7-67" Aug 13 01:49:38.014975 containerd[1556]: 2025-08-13 01:49:37.940 [INFO][5860] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.94.192/26 host="172-232-7-67" Aug 13 01:49:38.014975 containerd[1556]: 2025-08-13 01:49:37.941 [INFO][5860] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.94.192/26 handle="k8s-pod-network.24598e97c2b289cd71ac0107d6ce568c58c99322fabd02def457d617b2510895" host="172-232-7-67" Aug 13 01:49:38.015313 containerd[1556]: 2025-08-13 01:49:37.943 [INFO][5860] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.24598e97c2b289cd71ac0107d6ce568c58c99322fabd02def457d617b2510895 Aug 13 01:49:38.015313 containerd[1556]: 2025-08-13 01:49:37.949 [INFO][5860] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.94.192/26 handle="k8s-pod-network.24598e97c2b289cd71ac0107d6ce568c58c99322fabd02def457d617b2510895" host="172-232-7-67" Aug 13 01:49:38.015313 containerd[1556]: 2025-08-13 01:49:37.955 [INFO][5860] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.94.193/26] block=192.168.94.192/26 handle="k8s-pod-network.24598e97c2b289cd71ac0107d6ce568c58c99322fabd02def457d617b2510895" host="172-232-7-67" Aug 13 01:49:38.015313 containerd[1556]: 2025-08-13 01:49:37.955 [INFO][5860] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.94.193/26] handle="k8s-pod-network.24598e97c2b289cd71ac0107d6ce568c58c99322fabd02def457d617b2510895" host="172-232-7-67" Aug 13 01:49:38.015313 containerd[1556]: 2025-08-13 01:49:37.955 [INFO][5860] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 01:49:38.015313 containerd[1556]: 2025-08-13 01:49:37.955 [INFO][5860] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.193/26] IPv6=[] ContainerID="24598e97c2b289cd71ac0107d6ce568c58c99322fabd02def457d617b2510895" HandleID="k8s-pod-network.24598e97c2b289cd71ac0107d6ce568c58c99322fabd02def457d617b2510895" Workload="172--232--7--67-k8s-coredns--674b8bbfcf--6rlkc-eth0" Aug 13 01:49:38.015573 containerd[1556]: 2025-08-13 01:49:37.966 [INFO][5847] cni-plugin/k8s.go 418: Populated endpoint ContainerID="24598e97c2b289cd71ac0107d6ce568c58c99322fabd02def457d617b2510895" Namespace="kube-system" Pod="coredns-674b8bbfcf-6rlkc" WorkloadEndpoint="172--232--7--67-k8s-coredns--674b8bbfcf--6rlkc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--7--67-k8s-coredns--674b8bbfcf--6rlkc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"21a6ba02-58d5-43c1-a7de-9e24560a65f6", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 46, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-7-67", ContainerID:"", Pod:"coredns-674b8bbfcf-6rlkc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6a7362ede62", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:49:38.015573 containerd[1556]: 2025-08-13 01:49:37.966 [INFO][5847] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.94.193/32] ContainerID="24598e97c2b289cd71ac0107d6ce568c58c99322fabd02def457d617b2510895" Namespace="kube-system" Pod="coredns-674b8bbfcf-6rlkc" WorkloadEndpoint="172--232--7--67-k8s-coredns--674b8bbfcf--6rlkc-eth0" Aug 13 01:49:38.015573 containerd[1556]: 2025-08-13 01:49:37.967 [INFO][5847] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6a7362ede62 ContainerID="24598e97c2b289cd71ac0107d6ce568c58c99322fabd02def457d617b2510895" Namespace="kube-system" Pod="coredns-674b8bbfcf-6rlkc" WorkloadEndpoint="172--232--7--67-k8s-coredns--674b8bbfcf--6rlkc-eth0" Aug 13 01:49:38.015573 containerd[1556]: 2025-08-13 01:49:37.985 [INFO][5847] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="24598e97c2b289cd71ac0107d6ce568c58c99322fabd02def457d617b2510895" Namespace="kube-system" Pod="coredns-674b8bbfcf-6rlkc" 
WorkloadEndpoint="172--232--7--67-k8s-coredns--674b8bbfcf--6rlkc-eth0" Aug 13 01:49:38.015573 containerd[1556]: 2025-08-13 01:49:37.986 [INFO][5847] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="24598e97c2b289cd71ac0107d6ce568c58c99322fabd02def457d617b2510895" Namespace="kube-system" Pod="coredns-674b8bbfcf-6rlkc" WorkloadEndpoint="172--232--7--67-k8s-coredns--674b8bbfcf--6rlkc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--7--67-k8s-coredns--674b8bbfcf--6rlkc-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"21a6ba02-58d5-43c1-a7de-9e24560a65f6", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 46, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-7-67", ContainerID:"24598e97c2b289cd71ac0107d6ce568c58c99322fabd02def457d617b2510895", Pod:"coredns-674b8bbfcf-6rlkc", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali6a7362ede62", MAC:"86:6a:c2:15:23:b8", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:49:38.015573 containerd[1556]: 2025-08-13 01:49:38.010 [INFO][5847] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="24598e97c2b289cd71ac0107d6ce568c58c99322fabd02def457d617b2510895" Namespace="kube-system" Pod="coredns-674b8bbfcf-6rlkc" WorkloadEndpoint="172--232--7--67-k8s-coredns--674b8bbfcf--6rlkc-eth0" Aug 13 01:49:38.067831 containerd[1556]: time="2025-08-13T01:49:38.067506210Z" level=info msg="connecting to shim 24598e97c2b289cd71ac0107d6ce568c58c99322fabd02def457d617b2510895" address="unix:///run/containerd/s/245495ca6c924825bcfd32a05d99ff4addb8a81b321da0a09cea85447ec1724d" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:49:38.108527 containerd[1556]: time="2025-08-13T01:49:38.108479517Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6da5154e125e31750839c134b00edd40c42a3264d34e846351af2022803e2f22\" id:\"5a35351b052a455cf84b879d7ade4217f6680cc04ba54c5cd4518568dc3e837d\" pid:5830 exit_status:1 exited_at:{seconds:1755049778 nanos:107657030}" Aug 13 01:49:38.137047 systemd[1]: Started cri-containerd-24598e97c2b289cd71ac0107d6ce568c58c99322fabd02def457d617b2510895.scope - libcontainer container 24598e97c2b289cd71ac0107d6ce568c58c99322fabd02def457d617b2510895. 
Aug 13 01:49:38.232664 containerd[1556]: time="2025-08-13T01:49:38.232614823Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-6rlkc,Uid:21a6ba02-58d5-43c1-a7de-9e24560a65f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"24598e97c2b289cd71ac0107d6ce568c58c99322fabd02def457d617b2510895\"" Aug 13 01:49:38.234981 kubelet[2790]: E0813 01:49:38.234882 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:49:38.237818 containerd[1556]: time="2025-08-13T01:49:38.237686235Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Aug 13 01:49:38.622809 systemd-networkd[1462]: vxlan.calico: Link UP Aug 13 01:49:38.622824 systemd-networkd[1462]: vxlan.calico: Gained carrier Aug 13 01:49:38.699920 containerd[1556]: time="2025-08-13T01:49:38.699730028Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7jrc,Uid:4296a7ed-e75a-4d74-935a-9017b9a86286,Namespace:calico-system,Attempt:0,}" Aug 13 01:49:38.861000 systemd-networkd[1462]: cali7402a3c247b: Link UP Aug 13 01:49:38.864092 systemd-networkd[1462]: cali7402a3c247b: Gained carrier Aug 13 01:49:38.891233 containerd[1556]: 2025-08-13 01:49:38.766 [INFO][5988] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--7--67-k8s-csi--node--driver--c7jrc-eth0 csi-node-driver- calico-system 4296a7ed-e75a-4d74-935a-9017b9a86286 743 0 2025-08-13 01:46:18 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-232-7-67 csi-node-driver-c7jrc eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali7402a3c247b [] [] }} ContainerID="84ace2e411909811bff93c863d8ea1496aeb6af8cdfd300c2b87237c3e95e335" Namespace="calico-system" Pod="csi-node-driver-c7jrc" WorkloadEndpoint="172--232--7--67-k8s-csi--node--driver--c7jrc-" Aug 13 01:49:38.891233 containerd[1556]: 2025-08-13 01:49:38.766 [INFO][5988] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="84ace2e411909811bff93c863d8ea1496aeb6af8cdfd300c2b87237c3e95e335" Namespace="calico-system" Pod="csi-node-driver-c7jrc" WorkloadEndpoint="172--232--7--67-k8s-csi--node--driver--c7jrc-eth0" Aug 13 01:49:38.891233 containerd[1556]: 2025-08-13 01:49:38.805 [INFO][6000] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="84ace2e411909811bff93c863d8ea1496aeb6af8cdfd300c2b87237c3e95e335" HandleID="k8s-pod-network.84ace2e411909811bff93c863d8ea1496aeb6af8cdfd300c2b87237c3e95e335" Workload="172--232--7--67-k8s-csi--node--driver--c7jrc-eth0" Aug 13 01:49:38.891233 containerd[1556]: 2025-08-13 01:49:38.806 [INFO][6000] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="84ace2e411909811bff93c863d8ea1496aeb6af8cdfd300c2b87237c3e95e335" HandleID="k8s-pod-network.84ace2e411909811bff93c863d8ea1496aeb6af8cdfd300c2b87237c3e95e335" Workload="172--232--7--67-k8s-csi--node--driver--c7jrc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f640), Attrs:map[string]string{"namespace":"calico-system", "node":"172-232-7-67", "pod":"csi-node-driver-c7jrc", "timestamp":"2025-08-13 01:49:38.805509063 +0000 UTC"}, 
Hostname:"172-232-7-67", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:49:38.891233 containerd[1556]: 2025-08-13 01:49:38.806 [INFO][6000] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:49:38.891233 containerd[1556]: 2025-08-13 01:49:38.806 [INFO][6000] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:49:38.891233 containerd[1556]: 2025-08-13 01:49:38.806 [INFO][6000] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-7-67' Aug 13 01:49:38.891233 containerd[1556]: 2025-08-13 01:49:38.817 [INFO][6000] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.84ace2e411909811bff93c863d8ea1496aeb6af8cdfd300c2b87237c3e95e335" host="172-232-7-67" Aug 13 01:49:38.891233 containerd[1556]: 2025-08-13 01:49:38.827 [INFO][6000] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-7-67" Aug 13 01:49:38.891233 containerd[1556]: 2025-08-13 01:49:38.832 [INFO][6000] ipam/ipam.go 511: Trying affinity for 192.168.94.192/26 host="172-232-7-67" Aug 13 01:49:38.891233 containerd[1556]: 2025-08-13 01:49:38.834 [INFO][6000] ipam/ipam.go 158: Attempting to load block cidr=192.168.94.192/26 host="172-232-7-67" Aug 13 01:49:38.891233 containerd[1556]: 2025-08-13 01:49:38.836 [INFO][6000] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.94.192/26 host="172-232-7-67" Aug 13 01:49:38.891233 containerd[1556]: 2025-08-13 01:49:38.837 [INFO][6000] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.94.192/26 handle="k8s-pod-network.84ace2e411909811bff93c863d8ea1496aeb6af8cdfd300c2b87237c3e95e335" host="172-232-7-67" Aug 13 01:49:38.891233 containerd[1556]: 2025-08-13 01:49:38.839 [INFO][6000] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.84ace2e411909811bff93c863d8ea1496aeb6af8cdfd300c2b87237c3e95e335 Aug 13 01:49:38.891233 containerd[1556]: 2025-08-13 01:49:38.844 [INFO][6000] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.94.192/26 handle="k8s-pod-network.84ace2e411909811bff93c863d8ea1496aeb6af8cdfd300c2b87237c3e95e335" host="172-232-7-67" Aug 13 01:49:38.891233 containerd[1556]: 2025-08-13 01:49:38.852 [INFO][6000] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.94.194/26] block=192.168.94.192/26 handle="k8s-pod-network.84ace2e411909811bff93c863d8ea1496aeb6af8cdfd300c2b87237c3e95e335" host="172-232-7-67" Aug 13 01:49:38.891233 containerd[1556]: 2025-08-13 01:49:38.852 [INFO][6000] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.94.194/26] handle="k8s-pod-network.84ace2e411909811bff93c863d8ea1496aeb6af8cdfd300c2b87237c3e95e335" host="172-232-7-67" Aug 13 01:49:38.891233 containerd[1556]: 2025-08-13 01:49:38.852 [INFO][6000] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 01:49:38.891233 containerd[1556]: 2025-08-13 01:49:38.852 [INFO][6000] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.194/26] IPv6=[] ContainerID="84ace2e411909811bff93c863d8ea1496aeb6af8cdfd300c2b87237c3e95e335" HandleID="k8s-pod-network.84ace2e411909811bff93c863d8ea1496aeb6af8cdfd300c2b87237c3e95e335" Workload="172--232--7--67-k8s-csi--node--driver--c7jrc-eth0" Aug 13 01:49:38.891858 containerd[1556]: 2025-08-13 01:49:38.855 [INFO][5988] cni-plugin/k8s.go 418: Populated endpoint ContainerID="84ace2e411909811bff93c863d8ea1496aeb6af8cdfd300c2b87237c3e95e335" Namespace="calico-system" Pod="csi-node-driver-c7jrc" WorkloadEndpoint="172--232--7--67-k8s-csi--node--driver--c7jrc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--7--67-k8s-csi--node--driver--c7jrc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4296a7ed-e75a-4d74-935a-9017b9a86286", ResourceVersion:"743", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 46, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-7-67", ContainerID:"", Pod:"csi-node-driver-c7jrc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.94.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7402a3c247b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:49:38.891858 containerd[1556]: 2025-08-13 01:49:38.855 [INFO][5988] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.94.194/32] ContainerID="84ace2e411909811bff93c863d8ea1496aeb6af8cdfd300c2b87237c3e95e335" Namespace="calico-system" Pod="csi-node-driver-c7jrc" WorkloadEndpoint="172--232--7--67-k8s-csi--node--driver--c7jrc-eth0" Aug 13 01:49:38.891858 containerd[1556]: 2025-08-13 01:49:38.856 [INFO][5988] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7402a3c247b ContainerID="84ace2e411909811bff93c863d8ea1496aeb6af8cdfd300c2b87237c3e95e335" Namespace="calico-system" Pod="csi-node-driver-c7jrc" WorkloadEndpoint="172--232--7--67-k8s-csi--node--driver--c7jrc-eth0" Aug 13 01:49:38.891858 containerd[1556]: 2025-08-13 01:49:38.864 [INFO][5988] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="84ace2e411909811bff93c863d8ea1496aeb6af8cdfd300c2b87237c3e95e335" Namespace="calico-system" Pod="csi-node-driver-c7jrc" WorkloadEndpoint="172--232--7--67-k8s-csi--node--driver--c7jrc-eth0" Aug 13 01:49:38.891858 containerd[1556]: 2025-08-13 01:49:38.865 [INFO][5988] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="84ace2e411909811bff93c863d8ea1496aeb6af8cdfd300c2b87237c3e95e335" Namespace="calico-system" 
Pod="csi-node-driver-c7jrc" WorkloadEndpoint="172--232--7--67-k8s-csi--node--driver--c7jrc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--7--67-k8s-csi--node--driver--c7jrc-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"4296a7ed-e75a-4d74-935a-9017b9a86286", ResourceVersion:"743", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 46, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-7-67", ContainerID:"84ace2e411909811bff93c863d8ea1496aeb6af8cdfd300c2b87237c3e95e335", Pod:"csi-node-driver-c7jrc", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.94.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali7402a3c247b", MAC:"ea:8b:1b:79:e6:31", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:49:38.891858 containerd[1556]: 2025-08-13 01:49:38.886 [INFO][5988] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="84ace2e411909811bff93c863d8ea1496aeb6af8cdfd300c2b87237c3e95e335" Namespace="calico-system" Pod="csi-node-driver-c7jrc" WorkloadEndpoint="172--232--7--67-k8s-csi--node--driver--c7jrc-eth0" Aug 13 01:49:38.960219 containerd[1556]: time="2025-08-13T01:49:38.960162145Z" level=info msg="connecting to shim 84ace2e411909811bff93c863d8ea1496aeb6af8cdfd300c2b87237c3e95e335" address="unix:///run/containerd/s/23202ee974fdca4cc7ec2d027550a968b8fae8717f9052f9ac96c4c30ac92a86" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:49:39.052988 systemd[1]: Started cri-containerd-84ace2e411909811bff93c863d8ea1496aeb6af8cdfd300c2b87237c3e95e335.scope - libcontainer container 84ace2e411909811bff93c863d8ea1496aeb6af8cdfd300c2b87237c3e95e335. Aug 13 01:49:39.116550 containerd[1556]: time="2025-08-13T01:49:39.116433025Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-c7jrc,Uid:4296a7ed-e75a-4d74-935a-9017b9a86286,Namespace:calico-system,Attempt:0,} returns sandbox id \"84ace2e411909811bff93c863d8ea1496aeb6af8cdfd300c2b87237c3e95e335\"" Aug 13 01:49:39.135465 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4215929576.mount: Deactivated successfully. 
Aug 13 01:49:39.424947 systemd-networkd[1462]: cali6a7362ede62: Gained IPv6LL Aug 13 01:49:40.129142 systemd-networkd[1462]: cali7402a3c247b: Gained IPv6LL Aug 13 01:49:40.132340 systemd-networkd[1462]: vxlan.calico: Gained IPv6LL Aug 13 01:49:40.695186 kubelet[2790]: E0813 01:49:40.695134 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:49:40.697344 containerd[1556]: time="2025-08-13T01:49:40.696049742Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vtdcd,Uid:caa5836a-45f9-496b-86c1-95f6e1b6da17,Namespace:kube-system,Attempt:0,}" Aug 13 01:49:40.915055 systemd-networkd[1462]: cali50a8e0b3793: Link UP Aug 13 01:49:40.917398 systemd-networkd[1462]: cali50a8e0b3793: Gained carrier Aug 13 01:49:40.937100 containerd[1556]: 2025-08-13 01:49:40.777 [INFO][6144] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--7--67-k8s-coredns--674b8bbfcf--vtdcd-eth0 coredns-674b8bbfcf- kube-system caa5836a-45f9-496b-86c1-95f6e1b6da17 866 0 2025-08-13 01:46:02 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-232-7-67 coredns-674b8bbfcf-vtdcd eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali50a8e0b3793 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="93abab50b2b5a5f3c48f4faa69dbf09f526362c5d625d09e246d751f1ea05302" Namespace="kube-system" Pod="coredns-674b8bbfcf-vtdcd" WorkloadEndpoint="172--232--7--67-k8s-coredns--674b8bbfcf--vtdcd-" Aug 13 01:49:40.937100 containerd[1556]: 2025-08-13 01:49:40.778 [INFO][6144] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="93abab50b2b5a5f3c48f4faa69dbf09f526362c5d625d09e246d751f1ea05302" Namespace="kube-system" Pod="coredns-674b8bbfcf-vtdcd" WorkloadEndpoint="172--232--7--67-k8s-coredns--674b8bbfcf--vtdcd-eth0" Aug 13 01:49:40.937100 containerd[1556]: 2025-08-13 01:49:40.849 [INFO][6156] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="93abab50b2b5a5f3c48f4faa69dbf09f526362c5d625d09e246d751f1ea05302" HandleID="k8s-pod-network.93abab50b2b5a5f3c48f4faa69dbf09f526362c5d625d09e246d751f1ea05302" Workload="172--232--7--67-k8s-coredns--674b8bbfcf--vtdcd-eth0" Aug 13 01:49:40.937100 containerd[1556]: 2025-08-13 01:49:40.850 [INFO][6156] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="93abab50b2b5a5f3c48f4faa69dbf09f526362c5d625d09e246d751f1ea05302" HandleID="k8s-pod-network.93abab50b2b5a5f3c48f4faa69dbf09f526362c5d625d09e246d751f1ea05302" Workload="172--232--7--67-k8s-coredns--674b8bbfcf--vtdcd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c6ff0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-232-7-67", "pod":"coredns-674b8bbfcf-vtdcd", "timestamp":"2025-08-13 01:49:40.849068632 +0000 UTC"}, Hostname:"172-232-7-67", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:49:40.937100 containerd[1556]: 2025-08-13 01:49:40.850 [INFO][6156] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. 
Aug 13 01:49:40.937100 containerd[1556]: 2025-08-13 01:49:40.850 [INFO][6156] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:49:40.937100 containerd[1556]: 2025-08-13 01:49:40.850 [INFO][6156] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-7-67' Aug 13 01:49:40.937100 containerd[1556]: 2025-08-13 01:49:40.861 [INFO][6156] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.93abab50b2b5a5f3c48f4faa69dbf09f526362c5d625d09e246d751f1ea05302" host="172-232-7-67" Aug 13 01:49:40.937100 containerd[1556]: 2025-08-13 01:49:40.866 [INFO][6156] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-7-67" Aug 13 01:49:40.937100 containerd[1556]: 2025-08-13 01:49:40.873 [INFO][6156] ipam/ipam.go 511: Trying affinity for 192.168.94.192/26 host="172-232-7-67" Aug 13 01:49:40.937100 containerd[1556]: 2025-08-13 01:49:40.876 [INFO][6156] ipam/ipam.go 158: Attempting to load block cidr=192.168.94.192/26 host="172-232-7-67" Aug 13 01:49:40.937100 containerd[1556]: 2025-08-13 01:49:40.879 [INFO][6156] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.94.192/26 host="172-232-7-67" Aug 13 01:49:40.937100 containerd[1556]: 2025-08-13 01:49:40.879 [INFO][6156] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.94.192/26 handle="k8s-pod-network.93abab50b2b5a5f3c48f4faa69dbf09f526362c5d625d09e246d751f1ea05302" host="172-232-7-67" Aug 13 01:49:40.937100 containerd[1556]: 2025-08-13 01:49:40.881 [INFO][6156] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.93abab50b2b5a5f3c48f4faa69dbf09f526362c5d625d09e246d751f1ea05302 Aug 13 01:49:40.937100 containerd[1556]: 2025-08-13 01:49:40.896 [INFO][6156] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.94.192/26 handle="k8s-pod-network.93abab50b2b5a5f3c48f4faa69dbf09f526362c5d625d09e246d751f1ea05302" host="172-232-7-67" Aug 13 01:49:40.937100 containerd[1556]: 2025-08-13 01:49:40.903 [INFO][6156] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.94.195/26] block=192.168.94.192/26 handle="k8s-pod-network.93abab50b2b5a5f3c48f4faa69dbf09f526362c5d625d09e246d751f1ea05302" host="172-232-7-67" Aug 13 01:49:40.937100 containerd[1556]: 2025-08-13 01:49:40.903 [INFO][6156] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.94.195/26] handle="k8s-pod-network.93abab50b2b5a5f3c48f4faa69dbf09f526362c5d625d09e246d751f1ea05302" host="172-232-7-67" Aug 13 01:49:40.937100 containerd[1556]: 2025-08-13 01:49:40.903 [INFO][6156] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 01:49:40.937100 containerd[1556]: 2025-08-13 01:49:40.903 [INFO][6156] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.195/26] IPv6=[] ContainerID="93abab50b2b5a5f3c48f4faa69dbf09f526362c5d625d09e246d751f1ea05302" HandleID="k8s-pod-network.93abab50b2b5a5f3c48f4faa69dbf09f526362c5d625d09e246d751f1ea05302" Workload="172--232--7--67-k8s-coredns--674b8bbfcf--vtdcd-eth0" Aug 13 01:49:40.937706 containerd[1556]: 2025-08-13 01:49:40.906 [INFO][6144] cni-plugin/k8s.go 418: Populated endpoint ContainerID="93abab50b2b5a5f3c48f4faa69dbf09f526362c5d625d09e246d751f1ea05302" Namespace="kube-system" Pod="coredns-674b8bbfcf-vtdcd" WorkloadEndpoint="172--232--7--67-k8s-coredns--674b8bbfcf--vtdcd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--7--67-k8s-coredns--674b8bbfcf--vtdcd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"caa5836a-45f9-496b-86c1-95f6e1b6da17", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 46, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-7-67", ContainerID:"", Pod:"coredns-674b8bbfcf-vtdcd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali50a8e0b3793", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:49:40.937706 containerd[1556]: 2025-08-13 01:49:40.906 [INFO][6144] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.94.195/32] ContainerID="93abab50b2b5a5f3c48f4faa69dbf09f526362c5d625d09e246d751f1ea05302" Namespace="kube-system" Pod="coredns-674b8bbfcf-vtdcd" WorkloadEndpoint="172--232--7--67-k8s-coredns--674b8bbfcf--vtdcd-eth0" Aug 13 01:49:40.937706 containerd[1556]: 2025-08-13 01:49:40.906 [INFO][6144] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali50a8e0b3793 ContainerID="93abab50b2b5a5f3c48f4faa69dbf09f526362c5d625d09e246d751f1ea05302" Namespace="kube-system" Pod="coredns-674b8bbfcf-vtdcd" WorkloadEndpoint="172--232--7--67-k8s-coredns--674b8bbfcf--vtdcd-eth0" Aug 13 01:49:40.937706 containerd[1556]: 2025-08-13 01:49:40.918 [INFO][6144] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="93abab50b2b5a5f3c48f4faa69dbf09f526362c5d625d09e246d751f1ea05302" Namespace="kube-system" Pod="coredns-674b8bbfcf-vtdcd" 
WorkloadEndpoint="172--232--7--67-k8s-coredns--674b8bbfcf--vtdcd-eth0" Aug 13 01:49:40.937706 containerd[1556]: 2025-08-13 01:49:40.919 [INFO][6144] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="93abab50b2b5a5f3c48f4faa69dbf09f526362c5d625d09e246d751f1ea05302" Namespace="kube-system" Pod="coredns-674b8bbfcf-vtdcd" WorkloadEndpoint="172--232--7--67-k8s-coredns--674b8bbfcf--vtdcd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--7--67-k8s-coredns--674b8bbfcf--vtdcd-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"caa5836a-45f9-496b-86c1-95f6e1b6da17", ResourceVersion:"866", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 46, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-7-67", ContainerID:"93abab50b2b5a5f3c48f4faa69dbf09f526362c5d625d09e246d751f1ea05302", Pod:"coredns-674b8bbfcf-vtdcd", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.94.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali50a8e0b3793", MAC:"ae:27:c4:49:36:a4", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:49:40.937706 containerd[1556]: 2025-08-13 01:49:40.932 [INFO][6144] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="93abab50b2b5a5f3c48f4faa69dbf09f526362c5d625d09e246d751f1ea05302" Namespace="kube-system" Pod="coredns-674b8bbfcf-vtdcd" WorkloadEndpoint="172--232--7--67-k8s-coredns--674b8bbfcf--vtdcd-eth0" Aug 13 01:49:41.014490 containerd[1556]: time="2025-08-13T01:49:41.014422657Z" level=info msg="connecting to shim 93abab50b2b5a5f3c48f4faa69dbf09f526362c5d625d09e246d751f1ea05302" address="unix:///run/containerd/s/c2845d39da955306f065c7f204d86198d561b7ed7960995dbb1f054bc1c4421f" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:49:41.061448 systemd[1]: Started cri-containerd-93abab50b2b5a5f3c48f4faa69dbf09f526362c5d625d09e246d751f1ea05302.scope - libcontainer container 93abab50b2b5a5f3c48f4faa69dbf09f526362c5d625d09e246d751f1ea05302. 
Aug 13 01:49:41.150926 containerd[1556]: time="2025-08-13T01:49:41.150886583Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-vtdcd,Uid:caa5836a-45f9-496b-86c1-95f6e1b6da17,Namespace:kube-system,Attempt:0,} returns sandbox id \"93abab50b2b5a5f3c48f4faa69dbf09f526362c5d625d09e246d751f1ea05302\"" Aug 13 01:49:41.154679 kubelet[2790]: E0813 01:49:41.154604 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:49:42.165798 containerd[1556]: time="2025-08-13T01:49:42.165333804Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:49:42.166447 containerd[1556]: time="2025-08-13T01:49:42.166408060Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=20942238" Aug 13 01:49:42.167187 containerd[1556]: time="2025-08-13T01:49:42.167161988Z" level=info msg="ImageCreate event name:\"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:49:42.169574 containerd[1556]: time="2025-08-13T01:49:42.169511469Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:49:42.170591 containerd[1556]: time="2025-08-13T01:49:42.170554916Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 3.932578573s" Aug 13 01:49:42.170647 containerd[1556]: time="2025-08-13T01:49:42.170592235Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Aug 13 01:49:42.172237 containerd[1556]: time="2025-08-13T01:49:42.172200940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 01:49:42.176009 containerd[1556]: time="2025-08-13T01:49:42.175952756Z" level=info msg="CreateContainer within sandbox \"24598e97c2b289cd71ac0107d6ce568c58c99322fabd02def457d617b2510895\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 01:49:42.188847 containerd[1556]: time="2025-08-13T01:49:42.188516321Z" level=info msg="Container 442a5e38faa0a6480f6d4e836bd033bae486c55efc775519182d5d87a01f27bd: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:49:42.201137 containerd[1556]: time="2025-08-13T01:49:42.201086646Z" level=info msg="CreateContainer within sandbox \"24598e97c2b289cd71ac0107d6ce568c58c99322fabd02def457d617b2510895\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"442a5e38faa0a6480f6d4e836bd033bae486c55efc775519182d5d87a01f27bd\"" Aug 13 01:49:42.201605 containerd[1556]: time="2025-08-13T01:49:42.201570084Z" level=info msg="StartContainer for \"442a5e38faa0a6480f6d4e836bd033bae486c55efc775519182d5d87a01f27bd\"" Aug 13 01:49:42.203240 containerd[1556]: time="2025-08-13T01:49:42.203179539Z" level=info msg="connecting to shim 442a5e38faa0a6480f6d4e836bd033bae486c55efc775519182d5d87a01f27bd" 
address="unix:///run/containerd/s/245495ca6c924825bcfd32a05d99ff4addb8a81b321da0a09cea85447ec1724d" protocol=ttrpc version=3 Aug 13 01:49:42.233234 systemd[1]: Started cri-containerd-442a5e38faa0a6480f6d4e836bd033bae486c55efc775519182d5d87a01f27bd.scope - libcontainer container 442a5e38faa0a6480f6d4e836bd033bae486c55efc775519182d5d87a01f27bd. Aug 13 01:49:42.240906 systemd-networkd[1462]: cali50a8e0b3793: Gained IPv6LL Aug 13 01:49:42.296992 containerd[1556]: time="2025-08-13T01:49:42.296855013Z" level=info msg="StartContainer for \"442a5e38faa0a6480f6d4e836bd033bae486c55efc775519182d5d87a01f27bd\" returns successfully" Aug 13 01:49:42.326635 systemd[1]: Started sshd@28-172.232.7.67:22-147.75.109.163:36928.service - OpenSSH per-connection server daemon (147.75.109.163:36928). Aug 13 01:49:42.517604 kubelet[2790]: E0813 01:49:42.517567 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:49:42.551769 kubelet[2790]: I0813 01:49:42.551201 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-6rlkc" podStartSLOduration=216.616192588 podStartE2EDuration="3m40.551182181s" podCreationTimestamp="2025-08-13 01:46:02 +0000 UTC" firstStartedPulling="2025-08-13 01:49:38.237091717 +0000 UTC m=+221.691860082" lastFinishedPulling="2025-08-13 01:49:42.17208132 +0000 UTC m=+225.626849675" observedRunningTime="2025-08-13 01:49:42.52931338 +0000 UTC m=+225.984081755" watchObservedRunningTime="2025-08-13 01:49:42.551182181 +0000 UTC m=+226.005950536" Aug 13 01:49:42.689397 sshd[6255]: Accepted publickey for core from 147.75.109.163 port 36928 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:49:42.693204 sshd-session[6255]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:49:42.703629 systemd-logind[1532]: New session 29 of user core. Aug 13 01:49:42.709906 systemd[1]: Started session-29.scope - Session 29 of User core. Aug 13 01:49:43.069551 sshd[6264]: Connection closed by 147.75.109.163 port 36928 Aug 13 01:49:43.070029 sshd-session[6255]: pam_unix(sshd:session): session closed for user core Aug 13 01:49:43.075181 systemd-logind[1532]: Session 29 logged out. Waiting for processes to exit. Aug 13 01:49:43.076578 systemd[1]: sshd@28-172.232.7.67:22-147.75.109.163:36928.service: Deactivated successfully. Aug 13 01:49:43.080529 systemd[1]: session-29.scope: Deactivated successfully. Aug 13 01:49:43.084853 systemd-logind[1532]: Removed session 29. 
Aug 13 01:49:43.520152 kubelet[2790]: E0813 01:49:43.520115 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:49:44.162642 containerd[1556]: time="2025-08-13T01:49:44.162580733Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:49:44.163792 containerd[1556]: time="2025-08-13T01:49:44.163530439Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Aug 13 01:49:44.165211 containerd[1556]: time="2025-08-13T01:49:44.164240167Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:49:44.166120 containerd[1556]: time="2025-08-13T01:49:44.166085330Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:49:44.166921 containerd[1556]: time="2025-08-13T01:49:44.166873438Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.994642078s" Aug 13 01:49:44.167032 containerd[1556]: time="2025-08-13T01:49:44.167008107Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Aug 13 01:49:44.170768 containerd[1556]: time="2025-08-13T01:49:44.168795871Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Aug 13 01:49:44.172566 containerd[1556]: time="2025-08-13T01:49:44.172508778Z" level=info msg="CreateContainer within sandbox \"84ace2e411909811bff93c863d8ea1496aeb6af8cdfd300c2b87237c3e95e335\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 01:49:44.185405 containerd[1556]: time="2025-08-13T01:49:44.185339833Z" level=info msg="Container 8b87e87ad5e36bdfcb5b8645cdff8570c82b7a33f1fc0e2cc1bde59e7a2b593e: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:49:44.198764 containerd[1556]: time="2025-08-13T01:49:44.198220228Z" level=info msg="CreateContainer within sandbox \"84ace2e411909811bff93c863d8ea1496aeb6af8cdfd300c2b87237c3e95e335\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"8b87e87ad5e36bdfcb5b8645cdff8570c82b7a33f1fc0e2cc1bde59e7a2b593e\"" Aug 13 01:49:44.199805 containerd[1556]: time="2025-08-13T01:49:44.199528693Z" level=info msg="StartContainer for \"8b87e87ad5e36bdfcb5b8645cdff8570c82b7a33f1fc0e2cc1bde59e7a2b593e\"" Aug 13 01:49:44.201410 containerd[1556]: time="2025-08-13T01:49:44.201384836Z" level=info msg="connecting to shim 8b87e87ad5e36bdfcb5b8645cdff8570c82b7a33f1fc0e2cc1bde59e7a2b593e" address="unix:///run/containerd/s/23202ee974fdca4cc7ec2d027550a968b8fae8717f9052f9ac96c4c30ac92a86" protocol=ttrpc version=3 Aug 13 01:49:44.256501 systemd[1]: Started cri-containerd-8b87e87ad5e36bdfcb5b8645cdff8570c82b7a33f1fc0e2cc1bde59e7a2b593e.scope - libcontainer container 8b87e87ad5e36bdfcb5b8645cdff8570c82b7a33f1fc0e2cc1bde59e7a2b593e. 
Aug 13 01:49:44.351532 kubelet[2790]: I0813 01:49:44.351461 2790 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:49:44.352095 kubelet[2790]: I0813 01:49:44.351629 2790 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:49:44.358638 kubelet[2790]: I0813 01:49:44.358585 2790 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:49:44.360257 containerd[1556]: time="2025-08-13T01:49:44.360198849Z" level=info msg="StartContainer for \"8b87e87ad5e36bdfcb5b8645cdff8570c82b7a33f1fc0e2cc1bde59e7a2b593e\" returns successfully" Aug 13 01:49:44.397095 kubelet[2790]: I0813 01:49:44.396878 2790 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:49:44.397095 kubelet[2790]: I0813 01:49:44.397014 2790 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-674b8bbfcf-vtdcd","calico-system/calico-kube-controllers-76ff444f8d-4xcg9","calico-system/csi-node-driver-c7jrc","calico-system/calico-typha-64bcb76cdd-m4xlg","kube-system/coredns-674b8bbfcf-6rlkc","calico-system/calico-node-tsmrf","kube-system/kube-controller-manager-172-232-7-67","kube-system/kube-proxy-mjdwx","kube-system/kube-apiserver-172-232-7-67","kube-system/kube-scheduler-172-232-7-67"] Aug 13 01:49:44.397593 kubelet[2790]: E0813 01:49:44.397325 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:49:44.397593 kubelet[2790]: E0813 01:49:44.397342 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:49:44.397593 kubelet[2790]: E0813 01:49:44.397352 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:49:44.397593 kubelet[2790]: E0813 01:49:44.397363 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-64bcb76cdd-m4xlg" Aug 13 01:49:44.397593 kubelet[2790]: E0813 01:49:44.397372 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:49:44.397593 kubelet[2790]: E0813 01:49:44.397501 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-tsmrf" Aug 13 01:49:44.397593 kubelet[2790]: E0813 01:49:44.397514 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-67" Aug 13 01:49:44.397593 kubelet[2790]: E0813 01:49:44.397522 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mjdwx" Aug 13 01:49:44.397593 kubelet[2790]: E0813 01:49:44.397529 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-67" Aug 13 01:49:44.397926 kubelet[2790]: E0813 01:49:44.397656 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-67" Aug 13 01:49:44.397926 kubelet[2790]: I0813 01:49:44.397670 2790 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:49:44.407692 containerd[1556]: time="2025-08-13T01:49:44.407634562Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Aug 13 01:49:44.409636 containerd[1556]: time="2025-08-13T01:49:44.409593095Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=0" Aug 13 01:49:44.412428 containerd[1556]: time="2025-08-13T01:49:44.412366185Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"20939036\" in 243.540404ms" Aug 13 01:49:44.412428 containerd[1556]: time="2025-08-13T01:49:44.412419095Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:1cf5f116067c67da67f97bff78c4bbc76913f59057c18627b96facaced73ea0b\"" Aug 13 01:49:44.415063 containerd[1556]: time="2025-08-13T01:49:44.414957896Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 01:49:44.418174 containerd[1556]: time="2025-08-13T01:49:44.418106295Z" level=info msg="CreateContainer within sandbox \"93abab50b2b5a5f3c48f4faa69dbf09f526362c5d625d09e246d751f1ea05302\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 01:49:44.431078 containerd[1556]: time="2025-08-13T01:49:44.431012280Z" level=info msg="Container d667bd63212ed8b60b0bcc2737f539531a26c980c0d22470005c49986f69f149: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:49:44.435402 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2489655915.mount: Deactivated successfully. Aug 13 01:49:44.440029 containerd[1556]: time="2025-08-13T01:49:44.439637110Z" level=info msg="CreateContainer within sandbox \"93abab50b2b5a5f3c48f4faa69dbf09f526362c5d625d09e246d751f1ea05302\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d667bd63212ed8b60b0bcc2737f539531a26c980c0d22470005c49986f69f149\"" Aug 13 01:49:44.440915 containerd[1556]: time="2025-08-13T01:49:44.440818695Z" level=info msg="StartContainer for \"d667bd63212ed8b60b0bcc2737f539531a26c980c0d22470005c49986f69f149\"" Aug 13 01:49:44.452510 containerd[1556]: time="2025-08-13T01:49:44.452389045Z" level=info msg="connecting to shim d667bd63212ed8b60b0bcc2737f539531a26c980c0d22470005c49986f69f149" address="unix:///run/containerd/s/c2845d39da955306f065c7f204d86198d561b7ed7960995dbb1f054bc1c4421f" protocol=ttrpc version=3 Aug 13 01:49:44.501041 systemd[1]: Started cri-containerd-d667bd63212ed8b60b0bcc2737f539531a26c980c0d22470005c49986f69f149.scope - libcontainer container d667bd63212ed8b60b0bcc2737f539531a26c980c0d22470005c49986f69f149. 
Aug 13 01:49:44.539814 kubelet[2790]: E0813 01:49:44.539723 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:49:44.593155 containerd[1556]: time="2025-08-13T01:49:44.591608226Z" level=info msg="StartContainer for \"d667bd63212ed8b60b0bcc2737f539531a26c980c0d22470005c49986f69f149\" returns successfully" Aug 13 01:49:45.481129 containerd[1556]: time="2025-08-13T01:49:45.481082208Z" level=error msg="failed to cleanup \"extract-899895443-v0Nj sha256:a6200c63e2a03c9e19bca689383dae051e67c8fbd246c7e3961b6330b68b8256\"" error="write /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db: no space left on device" Aug 13 01:49:45.481790 containerd[1556]: time="2025-08-13T01:49:45.481722725Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\": failed to extract layer sha256:fc0260a65ddba357b1d129f8ee26e320e324b952c3f6454255c10ab49e1b985e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/94/fs/usr/bin/node-driver-registrar: no space left on device" Aug 13 01:49:45.481999 containerd[1556]: time="2025-08-13T01:49:45.481776265Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Aug 13 01:49:45.482081 kubelet[2790]: E0813 01:49:45.482037 2790 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\": failed to extract layer sha256:fc0260a65ddba357b1d129f8ee26e320e324b952c3f6454255c10ab49e1b985e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/94/fs/usr/bin/node-driver-registrar: no space left on device" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2" Aug 13 01:49:45.482188 kubelet[2790]: E0813 01:49:45.482103 2790 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\": failed to extract layer sha256:fc0260a65ddba357b1d129f8ee26e320e324b952c3f6454255c10ab49e1b985e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/94/fs/usr/bin/node-driver-registrar: no space left on device" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2" Aug 13 01:49:45.482553 kubelet[2790]: E0813 01:49:45.482422 2790 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v76c4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-c7jrc_calico-system(4296a7ed-e75a-4d74-935a-9017b9a86286): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\": failed to extract layer sha256:fc0260a65ddba357b1d129f8ee26e320e324b952c3f6454255c10ab49e1b985e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/94/fs/usr/bin/node-driver-registrar: no space left on device" logger="UnhandledError" Aug 13 01:49:45.484602 kubelet[2790]: E0813 01:49:45.484543 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\\\": failed to extract layer sha256:fc0260a65ddba357b1d129f8ee26e320e324b952c3f6454255c10ab49e1b985e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/94/fs/usr/bin/node-driver-registrar: no space left on device\"" pod="calico-system/csi-node-driver-c7jrc" podUID="4296a7ed-e75a-4d74-935a-9017b9a86286" Aug 13 01:49:45.543473 kubelet[2790]: E0813 01:49:45.543436 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:49:45.545857 kubelet[2790]: E0813 01:49:45.545814 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\\\": failed to extract layer sha256:fc0260a65ddba357b1d129f8ee26e320e324b952c3f6454255c10ab49e1b985e: write 
/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/94/fs/usr/bin/node-driver-registrar: no space left on device\"" pod="calico-system/csi-node-driver-c7jrc" podUID="4296a7ed-e75a-4d74-935a-9017b9a86286" Aug 13 01:49:45.613049 kubelet[2790]: I0813 01:49:45.612834 2790 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-vtdcd" podStartSLOduration=220.355737171 podStartE2EDuration="3m43.61281714s" podCreationTimestamp="2025-08-13 01:46:02 +0000 UTC" firstStartedPulling="2025-08-13 01:49:41.156235043 +0000 UTC m=+224.611003398" lastFinishedPulling="2025-08-13 01:49:44.413315012 +0000 UTC m=+227.868083367" observedRunningTime="2025-08-13 01:49:45.583553541 +0000 UTC m=+229.038321906" watchObservedRunningTime="2025-08-13 01:49:45.61281714 +0000 UTC m=+229.067585495" Aug 13 01:49:46.545054 kubelet[2790]: E0813 01:49:46.544933 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:49:47.548372 kubelet[2790]: E0813 01:49:47.548339 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:49:47.694448 containerd[1556]: time="2025-08-13T01:49:47.694402115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76ff444f8d-4xcg9,Uid:f88563f6-5704-426b-aecc-303b3869ce30,Namespace:calico-system,Attempt:0,}" Aug 13 01:49:47.881996 systemd-networkd[1462]: cali146ee0dd383: Link UP Aug 13 01:49:47.884603 systemd-networkd[1462]: cali146ee0dd383: Gained carrier Aug 13 01:49:47.913773 containerd[1556]: 2025-08-13 01:49:47.752 [INFO][6367] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--7--67-k8s-calico--kube--controllers--76ff444f8d--4xcg9-eth0 calico-kube-controllers-76ff444f8d- calico-system f88563f6-5704-426b-aecc-303b3869ce30 854 0 2025-08-13 01:46:18 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:76ff444f8d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-232-7-67 calico-kube-controllers-76ff444f8d-4xcg9 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali146ee0dd383 [] [] }} ContainerID="665924e046191010acb468379983726e88fc3f6a5733155b6a53e3f631cfdc48" Namespace="calico-system" Pod="calico-kube-controllers-76ff444f8d-4xcg9" WorkloadEndpoint="172--232--7--67-k8s-calico--kube--controllers--76ff444f8d--4xcg9-" Aug 13 01:49:47.913773 containerd[1556]: 2025-08-13 01:49:47.753 [INFO][6367] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="665924e046191010acb468379983726e88fc3f6a5733155b6a53e3f631cfdc48" Namespace="calico-system" Pod="calico-kube-controllers-76ff444f8d-4xcg9" WorkloadEndpoint="172--232--7--67-k8s-calico--kube--controllers--76ff444f8d--4xcg9-eth0" Aug 13 01:49:47.913773 containerd[1556]: 2025-08-13 01:49:47.796 [INFO][6378] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="665924e046191010acb468379983726e88fc3f6a5733155b6a53e3f631cfdc48" HandleID="k8s-pod-network.665924e046191010acb468379983726e88fc3f6a5733155b6a53e3f631cfdc48" 
Workload="172--232--7--67-k8s-calico--kube--controllers--76ff444f8d--4xcg9-eth0" Aug 13 01:49:47.913773 containerd[1556]: 2025-08-13 01:49:47.796 [INFO][6378] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="665924e046191010acb468379983726e88fc3f6a5733155b6a53e3f631cfdc48" HandleID="k8s-pod-network.665924e046191010acb468379983726e88fc3f6a5733155b6a53e3f631cfdc48" Workload="172--232--7--67-k8s-calico--kube--controllers--76ff444f8d--4xcg9-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002c7920), Attrs:map[string]string{"namespace":"calico-system", "node":"172-232-7-67", "pod":"calico-kube-controllers-76ff444f8d-4xcg9", "timestamp":"2025-08-13 01:49:47.796540447 +0000 UTC"}, Hostname:"172-232-7-67", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:49:47.913773 containerd[1556]: 2025-08-13 01:49:47.796 [INFO][6378] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:49:47.913773 containerd[1556]: 2025-08-13 01:49:47.796 [INFO][6378] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:49:47.913773 containerd[1556]: 2025-08-13 01:49:47.796 [INFO][6378] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-7-67' Aug 13 01:49:47.913773 containerd[1556]: 2025-08-13 01:49:47.815 [INFO][6378] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.665924e046191010acb468379983726e88fc3f6a5733155b6a53e3f631cfdc48" host="172-232-7-67" Aug 13 01:49:47.913773 containerd[1556]: 2025-08-13 01:49:47.821 [INFO][6378] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-7-67" Aug 13 01:49:47.913773 containerd[1556]: 2025-08-13 01:49:47.829 [INFO][6378] ipam/ipam.go 511: Trying affinity for 192.168.94.192/26 host="172-232-7-67" Aug 13 01:49:47.913773 containerd[1556]: 2025-08-13 01:49:47.832 [INFO][6378] ipam/ipam.go 158: Attempting to load block cidr=192.168.94.192/26 host="172-232-7-67" Aug 13 01:49:47.913773 containerd[1556]: 2025-08-13 01:49:47.837 [INFO][6378] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.94.192/26 host="172-232-7-67" Aug 13 01:49:47.913773 containerd[1556]: 2025-08-13 01:49:47.837 [INFO][6378] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.94.192/26 handle="k8s-pod-network.665924e046191010acb468379983726e88fc3f6a5733155b6a53e3f631cfdc48" host="172-232-7-67" Aug 13 01:49:47.913773 containerd[1556]: 2025-08-13 01:49:47.841 [INFO][6378] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.665924e046191010acb468379983726e88fc3f6a5733155b6a53e3f631cfdc48 Aug 13 01:49:47.913773 containerd[1556]: 2025-08-13 01:49:47.849 [INFO][6378] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.94.192/26 handle="k8s-pod-network.665924e046191010acb468379983726e88fc3f6a5733155b6a53e3f631cfdc48" host="172-232-7-67" Aug 13 01:49:47.913773 containerd[1556]: 2025-08-13 01:49:47.867 [INFO][6378] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.94.196/26] block=192.168.94.192/26 handle="k8s-pod-network.665924e046191010acb468379983726e88fc3f6a5733155b6a53e3f631cfdc48" host="172-232-7-67" Aug 13 01:49:47.913773 containerd[1556]: 2025-08-13 01:49:47.868 [INFO][6378] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.94.196/26] handle="k8s-pod-network.665924e046191010acb468379983726e88fc3f6a5733155b6a53e3f631cfdc48" host="172-232-7-67" 
Aug 13 01:49:47.913773 containerd[1556]: 2025-08-13 01:49:47.868 [INFO][6378] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. Aug 13 01:49:47.913773 containerd[1556]: 2025-08-13 01:49:47.868 [INFO][6378] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.94.196/26] IPv6=[] ContainerID="665924e046191010acb468379983726e88fc3f6a5733155b6a53e3f631cfdc48" HandleID="k8s-pod-network.665924e046191010acb468379983726e88fc3f6a5733155b6a53e3f631cfdc48" Workload="172--232--7--67-k8s-calico--kube--controllers--76ff444f8d--4xcg9-eth0" Aug 13 01:49:47.917075 containerd[1556]: 2025-08-13 01:49:47.873 [INFO][6367] cni-plugin/k8s.go 418: Populated endpoint ContainerID="665924e046191010acb468379983726e88fc3f6a5733155b6a53e3f631cfdc48" Namespace="calico-system" Pod="calico-kube-controllers-76ff444f8d-4xcg9" WorkloadEndpoint="172--232--7--67-k8s-calico--kube--controllers--76ff444f8d--4xcg9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--7--67-k8s-calico--kube--controllers--76ff444f8d--4xcg9-eth0", GenerateName:"calico-kube-controllers-76ff444f8d-", Namespace:"calico-system", SelfLink:"", UID:"f88563f6-5704-426b-aecc-303b3869ce30", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 46, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76ff444f8d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-7-67", ContainerID:"", Pod:"calico-kube-controllers-76ff444f8d-4xcg9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.94.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali146ee0dd383", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:49:47.917075 containerd[1556]: 2025-08-13 01:49:47.873 [INFO][6367] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.94.196/32] ContainerID="665924e046191010acb468379983726e88fc3f6a5733155b6a53e3f631cfdc48" Namespace="calico-system" Pod="calico-kube-controllers-76ff444f8d-4xcg9" WorkloadEndpoint="172--232--7--67-k8s-calico--kube--controllers--76ff444f8d--4xcg9-eth0" Aug 13 01:49:47.917075 containerd[1556]: 2025-08-13 01:49:47.873 [INFO][6367] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali146ee0dd383 ContainerID="665924e046191010acb468379983726e88fc3f6a5733155b6a53e3f631cfdc48" Namespace="calico-system" Pod="calico-kube-controllers-76ff444f8d-4xcg9" WorkloadEndpoint="172--232--7--67-k8s-calico--kube--controllers--76ff444f8d--4xcg9-eth0" Aug 13 01:49:47.917075 containerd[1556]: 2025-08-13 01:49:47.886 [INFO][6367] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="665924e046191010acb468379983726e88fc3f6a5733155b6a53e3f631cfdc48" Namespace="calico-system" Pod="calico-kube-controllers-76ff444f8d-4xcg9" 
WorkloadEndpoint="172--232--7--67-k8s-calico--kube--controllers--76ff444f8d--4xcg9-eth0" Aug 13 01:49:47.917075 containerd[1556]: 2025-08-13 01:49:47.887 [INFO][6367] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="665924e046191010acb468379983726e88fc3f6a5733155b6a53e3f631cfdc48" Namespace="calico-system" Pod="calico-kube-controllers-76ff444f8d-4xcg9" WorkloadEndpoint="172--232--7--67-k8s-calico--kube--controllers--76ff444f8d--4xcg9-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--7--67-k8s-calico--kube--controllers--76ff444f8d--4xcg9-eth0", GenerateName:"calico-kube-controllers-76ff444f8d-", Namespace:"calico-system", SelfLink:"", UID:"f88563f6-5704-426b-aecc-303b3869ce30", ResourceVersion:"854", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 46, 18, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"76ff444f8d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-7-67", ContainerID:"665924e046191010acb468379983726e88fc3f6a5733155b6a53e3f631cfdc48", Pod:"calico-kube-controllers-76ff444f8d-4xcg9", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.94.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali146ee0dd383", MAC:"f2:e0:86:11:55:f9", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:49:47.917075 containerd[1556]: 2025-08-13 01:49:47.905 [INFO][6367] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="665924e046191010acb468379983726e88fc3f6a5733155b6a53e3f631cfdc48" Namespace="calico-system" Pod="calico-kube-controllers-76ff444f8d-4xcg9" WorkloadEndpoint="172--232--7--67-k8s-calico--kube--controllers--76ff444f8d--4xcg9-eth0" Aug 13 01:49:47.956591 containerd[1556]: time="2025-08-13T01:49:47.955203216Z" level=info msg="connecting to shim 665924e046191010acb468379983726e88fc3f6a5733155b6a53e3f631cfdc48" address="unix:///run/containerd/s/afe8decd56f59bc359066fdfaca3fc67b025bc684c5417b4d78d48f08dd3bcb8" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:49:47.996945 systemd[1]: Started cri-containerd-665924e046191010acb468379983726e88fc3f6a5733155b6a53e3f631cfdc48.scope - libcontainer container 665924e046191010acb468379983726e88fc3f6a5733155b6a53e3f631cfdc48. 
Aug 13 01:49:48.064518 containerd[1556]: time="2025-08-13T01:49:48.064447545Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-76ff444f8d-4xcg9,Uid:f88563f6-5704-426b-aecc-303b3869ce30,Namespace:calico-system,Attempt:0,} returns sandbox id \"665924e046191010acb468379983726e88fc3f6a5733155b6a53e3f631cfdc48\"" Aug 13 01:49:48.070258 containerd[1556]: time="2025-08-13T01:49:48.070035186Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 01:49:48.134705 systemd[1]: Started sshd@29-172.232.7.67:22-147.75.109.163:46168.service - OpenSSH per-connection server daemon (147.75.109.163:46168). Aug 13 01:49:48.479542 sshd[6442]: Accepted publickey for core from 147.75.109.163 port 46168 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:49:48.481507 sshd-session[6442]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:49:48.487661 systemd-logind[1532]: New session 30 of user core. Aug 13 01:49:48.498090 systemd[1]: Started session-30.scope - Session 30 of User core. Aug 13 01:49:48.822926 sshd[6444]: Connection closed by 147.75.109.163 port 46168 Aug 13 01:49:48.824090 sshd-session[6442]: pam_unix(sshd:session): session closed for user core Aug 13 01:49:48.829606 systemd[1]: sshd@29-172.232.7.67:22-147.75.109.163:46168.service: Deactivated successfully. Aug 13 01:49:48.832701 systemd[1]: session-30.scope: Deactivated successfully. Aug 13 01:49:48.833899 systemd-logind[1532]: Session 30 logged out. Waiting for processes to exit. Aug 13 01:49:48.837703 systemd-logind[1532]: Removed session 30. Aug 13 01:49:49.042641 containerd[1556]: time="2025-08-13T01:49:49.042560255Z" level=error msg="failed to cleanup \"extract-807806567-O5m2 sha256:8c887db5e1c1509bbe47d7287572f140b60a8c0adc0202d6183f3ae0c5f0b532\"" error="write /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db: no space left on device" Aug 13 01:49:49.043735 containerd[1556]: time="2025-08-13T01:49:49.043652442Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" Aug 13 01:49:49.044555 containerd[1556]: time="2025-08-13T01:49:49.043889881Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=33558762" Aug 13 01:49:49.044717 kubelet[2790]: E0813 01:49:49.044148 2790 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 01:49:49.044717 kubelet[2790]: E0813 01:49:49.044197 2790 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 01:49:49.044717 
kubelet[2790]: E0813 01:49:49.044405 2790 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nqghm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-76ff444f8d-4xcg9_calico-system(f88563f6-5704-426b-aecc-303b3869ce30): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" logger="UnhandledError" Aug 13 01:49:49.045781 kubelet[2790]: E0813 01:49:49.045718 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device\"" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" podUID="f88563f6-5704-426b-aecc-303b3869ce30" Aug 
13 01:49:49.556638 kubelet[2790]: E0813 01:49:49.556511 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device\"" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" podUID="f88563f6-5704-426b-aecc-303b3869ce30" Aug 13 01:49:49.729056 systemd-networkd[1462]: cali146ee0dd383: Gained IPv6LL Aug 13 01:49:53.887431 systemd[1]: Started sshd@30-172.232.7.67:22-147.75.109.163:46184.service - OpenSSH per-connection server daemon (147.75.109.163:46184). Aug 13 01:49:54.230598 sshd[6464]: Accepted publickey for core from 147.75.109.163 port 46184 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:49:54.232902 sshd-session[6464]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:49:54.240809 systemd-logind[1532]: New session 31 of user core. Aug 13 01:49:54.248927 systemd[1]: Started session-31.scope - Session 31 of User core. Aug 13 01:49:54.438032 kubelet[2790]: I0813 01:49:54.437931 2790 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:49:54.439482 kubelet[2790]: I0813 01:49:54.439441 2790 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:49:54.445283 kubelet[2790]: I0813 01:49:54.445250 2790 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:49:54.484442 kubelet[2790]: I0813 01:49:54.483486 2790 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:49:54.485132 kubelet[2790]: I0813 01:49:54.485069 2790 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-76ff444f8d-4xcg9","calico-system/calico-typha-64bcb76cdd-m4xlg","kube-system/coredns-674b8bbfcf-6rlkc","kube-system/coredns-674b8bbfcf-vtdcd","calico-system/calico-node-tsmrf","kube-system/kube-controller-manager-172-232-7-67","kube-system/kube-proxy-mjdwx","kube-system/kube-apiserver-172-232-7-67","calico-system/csi-node-driver-c7jrc","kube-system/kube-scheduler-172-232-7-67"] Aug 13 01:49:54.485640 kubelet[2790]: E0813 01:49:54.485561 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:49:54.485640 kubelet[2790]: E0813 01:49:54.485590 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-64bcb76cdd-m4xlg" Aug 13 01:49:54.485640 kubelet[2790]: E0813 01:49:54.485600 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:49:54.485640 kubelet[2790]: E0813 01:49:54.485609 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:49:54.486296 kubelet[2790]: E0813 01:49:54.486143 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-tsmrf" Aug 13 01:49:54.486296 kubelet[2790]: E0813 01:49:54.486165 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical 
pod" pod="kube-system/kube-controller-manager-172-232-7-67" Aug 13 01:49:54.486296 kubelet[2790]: E0813 01:49:54.486175 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mjdwx" Aug 13 01:49:54.486296 kubelet[2790]: E0813 01:49:54.486182 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-67" Aug 13 01:49:54.486296 kubelet[2790]: E0813 01:49:54.486190 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:49:54.486296 kubelet[2790]: E0813 01:49:54.486198 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-67" Aug 13 01:49:54.486672 kubelet[2790]: I0813 01:49:54.486595 2790 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:49:54.567910 sshd[6466]: Connection closed by 147.75.109.163 port 46184 Aug 13 01:49:54.569672 sshd-session[6464]: pam_unix(sshd:session): session closed for user core Aug 13 01:49:54.573940 systemd-logind[1532]: Session 31 logged out. Waiting for processes to exit. Aug 13 01:49:54.575217 systemd[1]: sshd@30-172.232.7.67:22-147.75.109.163:46184.service: Deactivated successfully. Aug 13 01:49:54.577976 systemd[1]: session-31.scope: Deactivated successfully. Aug 13 01:49:54.581524 systemd-logind[1532]: Removed session 31. Aug 13 01:49:57.696693 containerd[1556]: time="2025-08-13T01:49:57.696611335Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 01:49:58.569874 containerd[1556]: time="2025-08-13T01:49:58.569809389Z" level=error msg="failed to cleanup \"extract-172331388-Y-2P sha256:a6200c63e2a03c9e19bca689383dae051e67c8fbd246c7e3961b6330b68b8256\"" error="write /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db: no space left on device" Aug 13 01:49:58.570680 containerd[1556]: time="2025-08-13T01:49:58.570622077Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\": failed to extract layer sha256:fc0260a65ddba357b1d129f8ee26e320e324b952c3f6454255c10ab49e1b985e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/97/fs/usr/bin/node-driver-registrar: no space left on device" Aug 13 01:49:58.570767 containerd[1556]: time="2025-08-13T01:49:58.570716057Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Aug 13 01:49:58.571237 kubelet[2790]: E0813 01:49:58.571167 2790 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\": failed to extract layer sha256:fc0260a65ddba357b1d129f8ee26e320e324b952c3f6454255c10ab49e1b985e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/97/fs/usr/bin/node-driver-registrar: no space left on device" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2" Aug 13 01:49:58.571630 kubelet[2790]: E0813 01:49:58.571240 2790 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\": failed to extract layer sha256:fc0260a65ddba357b1d129f8ee26e320e324b952c3f6454255c10ab49e1b985e: write 
/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/97/fs/usr/bin/node-driver-registrar: no space left on device" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2" Aug 13 01:49:58.571630 kubelet[2790]: E0813 01:49:58.571380 2790 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v76c4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-c7jrc_calico-system(4296a7ed-e75a-4d74-935a-9017b9a86286): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\": failed to extract layer sha256:fc0260a65ddba357b1d129f8ee26e320e324b952c3f6454255c10ab49e1b985e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/97/fs/usr/bin/node-driver-registrar: no space left on device" logger="UnhandledError" Aug 13 01:49:58.572676 kubelet[2790]: E0813 01:49:58.572625 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\\\": failed to extract layer sha256:fc0260a65ddba357b1d129f8ee26e320e324b952c3f6454255c10ab49e1b985e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/97/fs/usr/bin/node-driver-registrar: no space left on device\"" pod="calico-system/csi-node-driver-c7jrc" podUID="4296a7ed-e75a-4d74-935a-9017b9a86286" Aug 13 01:49:58.695481 kubelet[2790]: E0813 01:49:58.695019 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:49:59.638256 systemd[1]: Started 
sshd@31-172.232.7.67:22-147.75.109.163:37620.service - OpenSSH per-connection server daemon (147.75.109.163:37620). Aug 13 01:49:59.984255 sshd[6493]: Accepted publickey for core from 147.75.109.163 port 37620 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:49:59.985021 sshd-session[6493]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:49:59.994195 systemd-logind[1532]: New session 32 of user core. Aug 13 01:49:59.999099 systemd[1]: Started session-32.scope - Session 32 of User core. Aug 13 01:50:00.316438 sshd[6495]: Connection closed by 147.75.109.163 port 37620 Aug 13 01:50:00.317564 sshd-session[6493]: pam_unix(sshd:session): session closed for user core Aug 13 01:50:00.321450 systemd-logind[1532]: Session 32 logged out. Waiting for processes to exit. Aug 13 01:50:00.322184 systemd[1]: sshd@31-172.232.7.67:22-147.75.109.163:37620.service: Deactivated successfully. Aug 13 01:50:00.326180 systemd[1]: session-32.scope: Deactivated successfully. Aug 13 01:50:00.330603 systemd-logind[1532]: Removed session 32. Aug 13 01:50:02.702127 containerd[1556]: time="2025-08-13T01:50:02.699622243Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 01:50:03.714195 containerd[1556]: time="2025-08-13T01:50:03.714122674Z" level=error msg="failed to cleanup \"extract-233979151-KUU0 sha256:8c887db5e1c1509bbe47d7287572f140b60a8c0adc0202d6183f3ae0c5f0b532\"" error="write /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db: no space left on device" Aug 13 01:50:03.715580 containerd[1556]: time="2025-08-13T01:50:03.715520270Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" Aug 13 01:50:03.715724 containerd[1556]: time="2025-08-13T01:50:03.715671830Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=33558762" Aug 13 01:50:03.716840 kubelet[2790]: E0813 01:50:03.716782 2790 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 01:50:03.717304 kubelet[2790]: E0813 01:50:03.716869 2790 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 01:50:03.717485 kubelet[2790]: E0813 01:50:03.717409 2790 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nqghm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-76ff444f8d-4xcg9_calico-system(f88563f6-5704-426b-aecc-303b3869ce30): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" logger="UnhandledError" Aug 13 01:50:03.719162 kubelet[2790]: E0813 01:50:03.718722 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device\"" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" podUID="f88563f6-5704-426b-aecc-303b3869ce30" Aug 13 01:50:04.511468 kubelet[2790]: I0813 01:50:04.511319 2790 eviction_manager.go:376] "Eviction 
manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:50:04.511468 kubelet[2790]: I0813 01:50:04.511363 2790 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:50:04.514613 kubelet[2790]: I0813 01:50:04.514577 2790 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:50:04.538435 kubelet[2790]: I0813 01:50:04.538395 2790 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:50:04.538984 kubelet[2790]: I0813 01:50:04.538697 2790 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-76ff444f8d-4xcg9","calico-system/calico-typha-64bcb76cdd-m4xlg","kube-system/coredns-674b8bbfcf-vtdcd","kube-system/coredns-674b8bbfcf-6rlkc","calico-system/calico-node-tsmrf","kube-system/kube-controller-manager-172-232-7-67","kube-system/kube-proxy-mjdwx","kube-system/kube-apiserver-172-232-7-67","calico-system/csi-node-driver-c7jrc","kube-system/kube-scheduler-172-232-7-67"] Aug 13 01:50:04.539078 kubelet[2790]: E0813 01:50:04.538791 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:50:04.539078 kubelet[2790]: E0813 01:50:04.539017 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-64bcb76cdd-m4xlg" Aug 13 01:50:04.539078 kubelet[2790]: E0813 01:50:04.539028 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:50:04.539078 kubelet[2790]: E0813 01:50:04.539038 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:50:04.539078 kubelet[2790]: E0813 01:50:04.539048 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-tsmrf" Aug 13 01:50:04.539235 kubelet[2790]: E0813 01:50:04.539056 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-67" Aug 13 01:50:04.539235 kubelet[2790]: E0813 01:50:04.539094 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mjdwx" Aug 13 01:50:04.539235 kubelet[2790]: E0813 01:50:04.539102 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-67" Aug 13 01:50:04.539235 kubelet[2790]: E0813 01:50:04.539110 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:50:04.539235 kubelet[2790]: E0813 01:50:04.539117 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-67" Aug 13 01:50:04.539235 kubelet[2790]: I0813 01:50:04.539138 2790 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:50:04.695612 kubelet[2790]: E0813 01:50:04.694830 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:50:05.382620 systemd[1]: Started sshd@32-172.232.7.67:22-147.75.109.163:37622.service - OpenSSH per-connection server daemon (147.75.109.163:37622). 
Aug 13 01:50:05.728494 sshd[6510]: Accepted publickey for core from 147.75.109.163 port 37622 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:50:05.732540 sshd-session[6510]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:50:05.741673 systemd-logind[1532]: New session 33 of user core. Aug 13 01:50:05.746989 systemd[1]: Started session-33.scope - Session 33 of User core. Aug 13 01:50:06.082789 sshd[6512]: Connection closed by 147.75.109.163 port 37622 Aug 13 01:50:06.084435 sshd-session[6510]: pam_unix(sshd:session): session closed for user core Aug 13 01:50:06.091211 systemd[1]: sshd@32-172.232.7.67:22-147.75.109.163:37622.service: Deactivated successfully. Aug 13 01:50:06.095805 systemd[1]: session-33.scope: Deactivated successfully. Aug 13 01:50:06.097868 systemd-logind[1532]: Session 33 logged out. Waiting for processes to exit. Aug 13 01:50:06.102711 systemd-logind[1532]: Removed session 33. Aug 13 01:50:07.606557 containerd[1556]: time="2025-08-13T01:50:07.606511182Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6da5154e125e31750839c134b00edd40c42a3264d34e846351af2022803e2f22\" id:\"9012cb0f7ec8f7f790593abec03d4baf5c27e5c02a903bb3aa18e7496dce89a6\" pid:6535 exited_at:{seconds:1755049807 nanos:606008754}" Aug 13 01:50:10.697970 kubelet[2790]: E0813 01:50:10.697899 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\\\": failed to extract layer sha256:fc0260a65ddba357b1d129f8ee26e320e324b952c3f6454255c10ab49e1b985e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/97/fs/usr/bin/node-driver-registrar: no space left on device\"" pod="calico-system/csi-node-driver-c7jrc" podUID="4296a7ed-e75a-4d74-935a-9017b9a86286" Aug 13 01:50:11.143113 systemd[1]: Started sshd@33-172.232.7.67:22-147.75.109.163:38472.service - OpenSSH per-connection server daemon (147.75.109.163:38472). Aug 13 01:50:11.486797 sshd[6553]: Accepted publickey for core from 147.75.109.163 port 38472 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:50:11.488032 sshd-session[6553]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:50:11.494034 systemd-logind[1532]: New session 34 of user core. Aug 13 01:50:11.498894 systemd[1]: Started session-34.scope - Session 34 of User core. Aug 13 01:50:11.798701 sshd[6555]: Connection closed by 147.75.109.163 port 38472 Aug 13 01:50:11.799196 sshd-session[6553]: pam_unix(sshd:session): session closed for user core Aug 13 01:50:11.804129 systemd-logind[1532]: Session 34 logged out. Waiting for processes to exit. Aug 13 01:50:11.805394 systemd[1]: sshd@33-172.232.7.67:22-147.75.109.163:38472.service: Deactivated successfully. Aug 13 01:50:11.807829 systemd[1]: session-34.scope: Deactivated successfully. Aug 13 01:50:11.810583 systemd-logind[1532]: Removed session 34. 
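The recurring ErrImagePull / ImagePullBackOff failures above are all ENOSPC writes under /var/lib/containerd, and the layers involved are only about 14.7 MB (node-driver-registrar) and 33.6 MB (kube-controllers) per the bytes-read figures. A minimal diagnostic sketch, assuming containerd's default root of /var/lib/containerd:

```python
import shutil

# Check the filesystem backing containerd's root (path is an assumption; adjust
# if containerd is configured with a different root directory).
path = "/var/lib/containerd"
usage = shutil.disk_usage(path)
free_mib = usage.free / (1 << 20)
print(f"{path}: {usage.used}/{usage.total} bytes used, {free_mib:.1f} MiB free")

# The failed pulls above needed roughly 15-34 MB of layer data each, so a
# failure at that size suggests very little headroom is left; pruning unused
# images or growing the volume is the likely remediation.
if free_mib < 500:
    print("low free space on the containerd volume")
```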
Aug 13 01:50:14.574114 kubelet[2790]: I0813 01:50:14.574034 2790 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:50:14.574114 kubelet[2790]: I0813 01:50:14.574119 2790 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:50:14.579389 kubelet[2790]: I0813 01:50:14.579360 2790 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:50:14.618154 kubelet[2790]: I0813 01:50:14.617961 2790 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:50:14.618705 kubelet[2790]: I0813 01:50:14.618642 2790 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-76ff444f8d-4xcg9","calico-system/calico-typha-64bcb76cdd-m4xlg","kube-system/coredns-674b8bbfcf-6rlkc","kube-system/coredns-674b8bbfcf-vtdcd","calico-system/calico-node-tsmrf","kube-system/kube-controller-manager-172-232-7-67","kube-system/kube-proxy-mjdwx","kube-system/kube-apiserver-172-232-7-67","calico-system/csi-node-driver-c7jrc","kube-system/kube-scheduler-172-232-7-67"] Aug 13 01:50:14.619022 kubelet[2790]: E0813 01:50:14.618713 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:50:14.619022 kubelet[2790]: E0813 01:50:14.618735 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-64bcb76cdd-m4xlg" Aug 13 01:50:14.619022 kubelet[2790]: E0813 01:50:14.618773 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:50:14.619022 kubelet[2790]: E0813 01:50:14.618786 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:50:14.619022 kubelet[2790]: E0813 01:50:14.618801 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-tsmrf" Aug 13 01:50:14.619022 kubelet[2790]: E0813 01:50:14.618813 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-67" Aug 13 01:50:14.619022 kubelet[2790]: E0813 01:50:14.618825 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mjdwx" Aug 13 01:50:14.619022 kubelet[2790]: E0813 01:50:14.618835 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-67" Aug 13 01:50:14.619022 kubelet[2790]: E0813 01:50:14.618860 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:50:14.619022 kubelet[2790]: E0813 01:50:14.618872 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-67" Aug 13 01:50:14.619022 kubelet[2790]: I0813 01:50:14.618887 2790 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:50:16.864822 systemd[1]: Started sshd@34-172.232.7.67:22-147.75.109.163:38484.service - OpenSSH per-connection server daemon (147.75.109.163:38484). 
Aug 13 01:50:17.216791 sshd[6567]: Accepted publickey for core from 147.75.109.163 port 38484 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:50:17.218345 sshd-session[6567]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:50:17.225579 systemd-logind[1532]: New session 35 of user core. Aug 13 01:50:17.228942 systemd[1]: Started session-35.scope - Session 35 of User core. Aug 13 01:50:17.547931 sshd[6569]: Connection closed by 147.75.109.163 port 38484 Aug 13 01:50:17.548929 sshd-session[6567]: pam_unix(sshd:session): session closed for user core Aug 13 01:50:17.553849 systemd-logind[1532]: Session 35 logged out. Waiting for processes to exit. Aug 13 01:50:17.554513 systemd[1]: sshd@34-172.232.7.67:22-147.75.109.163:38484.service: Deactivated successfully. Aug 13 01:50:17.558593 systemd[1]: session-35.scope: Deactivated successfully. Aug 13 01:50:17.560724 systemd-logind[1532]: Removed session 35. Aug 13 01:50:17.695858 kubelet[2790]: E0813 01:50:17.695683 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device\"" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" podUID="f88563f6-5704-426b-aecc-303b3869ce30" Aug 13 01:50:21.694601 kubelet[2790]: E0813 01:50:21.694481 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:50:22.612599 systemd[1]: Started sshd@35-172.232.7.67:22-147.75.109.163:40186.service - OpenSSH per-connection server daemon (147.75.109.163:40186). Aug 13 01:50:22.971064 sshd[6587]: Accepted publickey for core from 147.75.109.163 port 40186 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:50:22.972678 sshd-session[6587]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:50:22.982556 systemd-logind[1532]: New session 36 of user core. Aug 13 01:50:22.986403 systemd[1]: Started session-36.scope - Session 36 of User core. Aug 13 01:50:23.319860 sshd[6589]: Connection closed by 147.75.109.163 port 40186 Aug 13 01:50:23.320605 sshd-session[6587]: pam_unix(sshd:session): session closed for user core Aug 13 01:50:23.330955 systemd-logind[1532]: Session 36 logged out. Waiting for processes to exit. Aug 13 01:50:23.332324 systemd[1]: sshd@35-172.232.7.67:22-147.75.109.163:40186.service: Deactivated successfully. Aug 13 01:50:23.335603 systemd[1]: session-36.scope: Deactivated successfully. Aug 13 01:50:23.339087 systemd-logind[1532]: Removed session 36. 
Aug 13 01:50:23.696613 containerd[1556]: time="2025-08-13T01:50:23.696218874Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 01:50:24.704081 kubelet[2790]: I0813 01:50:24.704003 2790 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:50:24.704081 kubelet[2790]: I0813 01:50:24.704049 2790 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:50:24.710054 kubelet[2790]: I0813 01:50:24.710029 2790 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:50:24.762312 kubelet[2790]: I0813 01:50:24.762261 2790 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:50:24.762731 kubelet[2790]: I0813 01:50:24.762455 2790 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-76ff444f8d-4xcg9","calico-system/calico-typha-64bcb76cdd-m4xlg","kube-system/coredns-674b8bbfcf-6rlkc","kube-system/coredns-674b8bbfcf-vtdcd","calico-system/calico-node-tsmrf","kube-system/kube-controller-manager-172-232-7-67","kube-system/kube-proxy-mjdwx","kube-system/kube-apiserver-172-232-7-67","calico-system/csi-node-driver-c7jrc","kube-system/kube-scheduler-172-232-7-67"] Aug 13 01:50:24.762731 kubelet[2790]: E0813 01:50:24.762491 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:50:24.762731 kubelet[2790]: E0813 01:50:24.762504 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-64bcb76cdd-m4xlg" Aug 13 01:50:24.762731 kubelet[2790]: E0813 01:50:24.762514 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:50:24.762731 kubelet[2790]: E0813 01:50:24.762523 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:50:24.762731 kubelet[2790]: E0813 01:50:24.762534 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-tsmrf" Aug 13 01:50:24.762731 kubelet[2790]: E0813 01:50:24.762543 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-67" Aug 13 01:50:24.762731 kubelet[2790]: E0813 01:50:24.762551 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mjdwx" Aug 13 01:50:24.762731 kubelet[2790]: E0813 01:50:24.762558 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-67" Aug 13 01:50:24.762731 kubelet[2790]: E0813 01:50:24.762570 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:50:24.762731 kubelet[2790]: E0813 01:50:24.762577 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-67" Aug 13 01:50:24.762731 kubelet[2790]: I0813 01:50:24.762587 2790 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:50:24.799353 containerd[1556]: time="2025-08-13T01:50:24.799254848Z" level=error msg="failed to cleanup \"extract-245294989-uBQ8 sha256:a6200c63e2a03c9e19bca689383dae051e67c8fbd246c7e3961b6330b68b8256\"" error="write 
/var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db: no space left on device" Aug 13 01:50:24.800437 containerd[1556]: time="2025-08-13T01:50:24.800104126Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\": failed to extract layer sha256:fc0260a65ddba357b1d129f8ee26e320e324b952c3f6454255c10ab49e1b985e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/99/fs/usr/bin/node-driver-registrar: no space left on device" Aug 13 01:50:24.800437 containerd[1556]: time="2025-08-13T01:50:24.800238136Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Aug 13 01:50:24.801040 kubelet[2790]: E0813 01:50:24.800738 2790 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\": failed to extract layer sha256:fc0260a65ddba357b1d129f8ee26e320e324b952c3f6454255c10ab49e1b985e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/99/fs/usr/bin/node-driver-registrar: no space left on device" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2" Aug 13 01:50:24.801040 kubelet[2790]: E0813 01:50:24.800803 2790 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\": failed to extract layer sha256:fc0260a65ddba357b1d129f8ee26e320e324b952c3f6454255c10ab49e1b985e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/99/fs/usr/bin/node-driver-registrar: no space left on device" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2" Aug 13 01:50:24.801374 kubelet[2790]: E0813 01:50:24.801307 2790 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v76c4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-c7jrc_calico-system(4296a7ed-e75a-4d74-935a-9017b9a86286): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\": failed to extract layer sha256:fc0260a65ddba357b1d129f8ee26e320e324b952c3f6454255c10ab49e1b985e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/99/fs/usr/bin/node-driver-registrar: no space left on device" logger="UnhandledError" Aug 13 01:50:24.802529 kubelet[2790]: E0813 01:50:24.802485 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\\\": failed to extract layer sha256:fc0260a65ddba357b1d129f8ee26e320e324b952c3f6454255c10ab49e1b985e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/99/fs/usr/bin/node-driver-registrar: no space left on device\"" pod="calico-system/csi-node-driver-c7jrc" podUID="4296a7ed-e75a-4d74-935a-9017b9a86286" Aug 13 01:50:26.701160 kubelet[2790]: E0813 01:50:26.700803 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:50:28.383163 systemd[1]: Started sshd@36-172.232.7.67:22-147.75.109.163:52386.service - OpenSSH per-connection server daemon (147.75.109.163:52386). Aug 13 01:50:28.732029 sshd[6608]: Accepted publickey for core from 147.75.109.163 port 52386 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:50:28.733023 sshd-session[6608]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:50:28.740124 systemd-logind[1532]: New session 37 of user core. 
Aug 13 01:50:28.746956 systemd[1]: Started session-37.scope - Session 37 of User core. Aug 13 01:50:29.062654 sshd[6610]: Connection closed by 147.75.109.163 port 52386 Aug 13 01:50:29.063104 sshd-session[6608]: pam_unix(sshd:session): session closed for user core Aug 13 01:50:29.069479 systemd[1]: sshd@36-172.232.7.67:22-147.75.109.163:52386.service: Deactivated successfully. Aug 13 01:50:29.072340 systemd[1]: session-37.scope: Deactivated successfully. Aug 13 01:50:29.074555 systemd-logind[1532]: Session 37 logged out. Waiting for processes to exit. Aug 13 01:50:29.078010 systemd-logind[1532]: Removed session 37. Aug 13 01:50:32.699309 containerd[1556]: time="2025-08-13T01:50:32.698791006Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 01:50:33.483770 containerd[1556]: time="2025-08-13T01:50:33.483683956Z" level=error msg="failed to cleanup \"extract-322436277-4iGK sha256:8c887db5e1c1509bbe47d7287572f140b60a8c0adc0202d6183f3ae0c5f0b532\"" error="write /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db: no space left on device" Aug 13 01:50:33.484845 containerd[1556]: time="2025-08-13T01:50:33.484787844Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" Aug 13 01:50:33.485842 containerd[1556]: time="2025-08-13T01:50:33.484998953Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=33558762" Aug 13 01:50:33.485922 kubelet[2790]: E0813 01:50:33.485249 2790 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 01:50:33.485922 kubelet[2790]: E0813 01:50:33.485321 2790 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 01:50:33.485922 kubelet[2790]: E0813 01:50:33.485472 2790 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nqghm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-76ff444f8d-4xcg9_calico-system(f88563f6-5704-426b-aecc-303b3869ce30): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" logger="UnhandledError" Aug 13 01:50:33.486840 kubelet[2790]: E0813 01:50:33.486692 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device\"" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" podUID="f88563f6-5704-426b-aecc-303b3869ce30" Aug 13 01:50:34.127152 systemd[1]: Started sshd@37-172.232.7.67:22-147.75.109.163:52400.service - OpenSSH 
per-connection server daemon (147.75.109.163:52400). Aug 13 01:50:34.475097 sshd[6624]: Accepted publickey for core from 147.75.109.163 port 52400 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:50:34.476688 sshd-session[6624]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:50:34.484950 systemd-logind[1532]: New session 38 of user core. Aug 13 01:50:34.493934 systemd[1]: Started session-38.scope - Session 38 of User core. Aug 13 01:50:34.793620 kubelet[2790]: I0813 01:50:34.793318 2790 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:50:34.793620 kubelet[2790]: I0813 01:50:34.793397 2790 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:50:34.796250 kubelet[2790]: I0813 01:50:34.796209 2790 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:50:34.802089 sshd[6626]: Connection closed by 147.75.109.163 port 52400 Aug 13 01:50:34.802429 sshd-session[6624]: pam_unix(sshd:session): session closed for user core Aug 13 01:50:34.810154 systemd[1]: sshd@37-172.232.7.67:22-147.75.109.163:52400.service: Deactivated successfully. Aug 13 01:50:34.814219 systemd[1]: session-38.scope: Deactivated successfully. Aug 13 01:50:34.816524 systemd-logind[1532]: Session 38 logged out. Waiting for processes to exit. Aug 13 01:50:34.819279 systemd-logind[1532]: Removed session 38. Aug 13 01:50:34.829705 kubelet[2790]: I0813 01:50:34.829501 2790 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:50:34.830221 kubelet[2790]: I0813 01:50:34.829984 2790 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-76ff444f8d-4xcg9","calico-system/calico-typha-64bcb76cdd-m4xlg","kube-system/coredns-674b8bbfcf-6rlkc","kube-system/coredns-674b8bbfcf-vtdcd","calico-system/calico-node-tsmrf","kube-system/kube-controller-manager-172-232-7-67","kube-system/kube-proxy-mjdwx","kube-system/kube-apiserver-172-232-7-67","calico-system/csi-node-driver-c7jrc","kube-system/kube-scheduler-172-232-7-67"] Aug 13 01:50:34.830221 kubelet[2790]: E0813 01:50:34.830026 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:50:34.830221 kubelet[2790]: E0813 01:50:34.830041 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-64bcb76cdd-m4xlg" Aug 13 01:50:34.830221 kubelet[2790]: E0813 01:50:34.830050 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:50:34.830221 kubelet[2790]: E0813 01:50:34.830058 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:50:34.830221 kubelet[2790]: E0813 01:50:34.830068 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-tsmrf" Aug 13 01:50:34.830221 kubelet[2790]: E0813 01:50:34.830076 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-67" Aug 13 01:50:34.830221 kubelet[2790]: E0813 01:50:34.830085 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mjdwx" Aug 13 01:50:34.830221 kubelet[2790]: E0813 01:50:34.830092 2790 
eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-67" Aug 13 01:50:34.830221 kubelet[2790]: E0813 01:50:34.830101 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:50:34.830221 kubelet[2790]: E0813 01:50:34.830110 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-67" Aug 13 01:50:34.830221 kubelet[2790]: I0813 01:50:34.830119 2790 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:50:37.111686 update_engine[1537]: I20250813 01:50:37.110900 1537 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Aug 13 01:50:37.111686 update_engine[1537]: I20250813 01:50:37.111063 1537 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Aug 13 01:50:37.112599 update_engine[1537]: I20250813 01:50:37.111825 1537 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Aug 13 01:50:37.114037 update_engine[1537]: I20250813 01:50:37.113946 1537 omaha_request_params.cc:62] Current group set to beta Aug 13 01:50:37.115224 update_engine[1537]: I20250813 01:50:37.115180 1537 update_attempter.cc:499] Already updated boot flags. Skipping. Aug 13 01:50:37.115224 update_engine[1537]: I20250813 01:50:37.115208 1537 update_attempter.cc:643] Scheduling an action processor start. Aug 13 01:50:37.115480 update_engine[1537]: I20250813 01:50:37.115235 1537 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Aug 13 01:50:37.115480 update_engine[1537]: I20250813 01:50:37.115324 1537 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Aug 13 01:50:37.117580 update_engine[1537]: I20250813 01:50:37.115800 1537 omaha_request_action.cc:271] Posting an Omaha request to disabled Aug 13 01:50:37.117580 update_engine[1537]: I20250813 01:50:37.115825 1537 omaha_request_action.cc:272] Request: Aug 13 01:50:37.117580 update_engine[1537]: Aug 13 01:50:37.117580 update_engine[1537]: Aug 13 01:50:37.117580 update_engine[1537]: Aug 13 01:50:37.117580 update_engine[1537]: Aug 13 01:50:37.117580 update_engine[1537]: Aug 13 01:50:37.117580 update_engine[1537]: Aug 13 01:50:37.117580 update_engine[1537]: Aug 13 01:50:37.117580 update_engine[1537]: Aug 13 01:50:37.117580 update_engine[1537]: I20250813 01:50:37.115839 1537 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 01:50:37.120581 update_engine[1537]: I20250813 01:50:37.120545 1537 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 01:50:37.121506 update_engine[1537]: I20250813 01:50:37.121316 1537 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Aug 13 01:50:37.123315 locksmithd[1568]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Aug 13 01:50:37.141251 update_engine[1537]: E20250813 01:50:37.140976 1537 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 01:50:37.141251 update_engine[1537]: I20250813 01:50:37.141141 1537 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Aug 13 01:50:37.608631 containerd[1556]: time="2025-08-13T01:50:37.608534882Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6da5154e125e31750839c134b00edd40c42a3264d34e846351af2022803e2f22\" id:\"ad9666db15c39cf65bdc2ad5346aaef8d24f5da9a088d272dc67951e910fd175\" pid:6648 exited_at:{seconds:1755049837 nanos:607883584}" Aug 13 01:50:37.695114 kubelet[2790]: E0813 01:50:37.695048 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\\\": failed to extract layer sha256:fc0260a65ddba357b1d129f8ee26e320e324b952c3f6454255c10ab49e1b985e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/99/fs/usr/bin/node-driver-registrar: no space left on device\"" pod="calico-system/csi-node-driver-c7jrc" podUID="4296a7ed-e75a-4d74-935a-9017b9a86286" Aug 13 01:50:39.890871 systemd[1]: Started sshd@38-172.232.7.67:22-147.75.109.163:60034.service - OpenSSH per-connection server daemon (147.75.109.163:60034). Aug 13 01:50:40.314506 sshd[6661]: Accepted publickey for core from 147.75.109.163 port 60034 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:50:40.318033 sshd-session[6661]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:50:40.350852 systemd-logind[1532]: New session 39 of user core. Aug 13 01:50:40.357397 systemd[1]: Started session-39.scope - Session 39 of User core. Aug 13 01:50:40.777877 sshd[6663]: Connection closed by 147.75.109.163 port 60034 Aug 13 01:50:40.780419 sshd-session[6661]: pam_unix(sshd:session): session closed for user core Aug 13 01:50:40.790460 systemd[1]: sshd@38-172.232.7.67:22-147.75.109.163:60034.service: Deactivated successfully. Aug 13 01:50:40.799724 systemd[1]: session-39.scope: Deactivated successfully. Aug 13 01:50:40.803386 systemd-logind[1532]: Session 39 logged out. Waiting for processes to exit. Aug 13 01:50:40.807594 systemd-logind[1532]: Removed session 39. 
Aug 13 01:50:44.858335 kubelet[2790]: I0813 01:50:44.858261 2790 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:50:44.859540 kubelet[2790]: I0813 01:50:44.858406 2790 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:50:44.863109 kubelet[2790]: I0813 01:50:44.863068 2790 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:50:44.884953 kubelet[2790]: I0813 01:50:44.884361 2790 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:50:44.884953 kubelet[2790]: I0813 01:50:44.884630 2790 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-76ff444f8d-4xcg9","calico-system/calico-typha-64bcb76cdd-m4xlg","kube-system/coredns-674b8bbfcf-vtdcd","kube-system/coredns-674b8bbfcf-6rlkc","calico-system/calico-node-tsmrf","kube-system/kube-controller-manager-172-232-7-67","kube-system/kube-proxy-mjdwx","kube-system/kube-apiserver-172-232-7-67","calico-system/csi-node-driver-c7jrc","kube-system/kube-scheduler-172-232-7-67"] Aug 13 01:50:44.884953 kubelet[2790]: E0813 01:50:44.884762 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:50:44.884953 kubelet[2790]: E0813 01:50:44.884791 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-64bcb76cdd-m4xlg" Aug 13 01:50:44.884953 kubelet[2790]: E0813 01:50:44.884807 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:50:44.884953 kubelet[2790]: E0813 01:50:44.884818 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:50:44.884953 kubelet[2790]: E0813 01:50:44.884833 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-tsmrf" Aug 13 01:50:44.884953 kubelet[2790]: E0813 01:50:44.884844 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-67" Aug 13 01:50:44.884953 kubelet[2790]: E0813 01:50:44.884855 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mjdwx" Aug 13 01:50:44.884953 kubelet[2790]: E0813 01:50:44.884866 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-67" Aug 13 01:50:44.884953 kubelet[2790]: E0813 01:50:44.884878 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:50:44.884953 kubelet[2790]: E0813 01:50:44.884891 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-67" Aug 13 01:50:44.884953 kubelet[2790]: I0813 01:50:44.884905 2790 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:50:45.839518 systemd[1]: Started sshd@39-172.232.7.67:22-147.75.109.163:60044.service - OpenSSH per-connection server daemon (147.75.109.163:60044). 
Aug 13 01:50:46.190861 sshd[6676]: Accepted publickey for core from 147.75.109.163 port 60044 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:50:46.192957 sshd-session[6676]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:50:46.198818 systemd-logind[1532]: New session 40 of user core. Aug 13 01:50:46.205896 systemd[1]: Started session-40.scope - Session 40 of User core. Aug 13 01:50:46.512035 sshd[6678]: Connection closed by 147.75.109.163 port 60044 Aug 13 01:50:46.513870 sshd-session[6676]: pam_unix(sshd:session): session closed for user core Aug 13 01:50:46.518869 systemd[1]: sshd@39-172.232.7.67:22-147.75.109.163:60044.service: Deactivated successfully. Aug 13 01:50:46.523329 systemd[1]: session-40.scope: Deactivated successfully. Aug 13 01:50:46.524839 systemd-logind[1532]: Session 40 logged out. Waiting for processes to exit. Aug 13 01:50:46.528362 systemd-logind[1532]: Removed session 40. Aug 13 01:50:46.702730 kubelet[2790]: E0813 01:50:46.702636 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device\"" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" podUID="f88563f6-5704-426b-aecc-303b3869ce30" Aug 13 01:50:47.109055 update_engine[1537]: I20250813 01:50:47.108969 1537 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 01:50:47.109477 update_engine[1537]: I20250813 01:50:47.109298 1537 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 01:50:47.109696 update_engine[1537]: I20250813 01:50:47.109670 1537 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Aug 13 01:50:47.110609 update_engine[1537]: E20250813 01:50:47.110576 1537 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 01:50:47.110711 update_engine[1537]: I20250813 01:50:47.110632 1537 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Aug 13 01:50:47.694727 kubelet[2790]: E0813 01:50:47.694672 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:50:50.405041 containerd[1556]: time="2025-08-13T01:50:50.404662122Z" level=warning msg="container event discarded" container=4fddfd637777d66d603483af1c8de6c7a6dc7638d6086e55764e9a22040f8ca4 type=CONTAINER_CREATED_EVENT Aug 13 01:50:50.416582 containerd[1556]: time="2025-08-13T01:50:50.416506887Z" level=warning msg="container event discarded" container=4fddfd637777d66d603483af1c8de6c7a6dc7638d6086e55764e9a22040f8ca4 type=CONTAINER_STARTED_EVENT Aug 13 01:50:50.458947 containerd[1556]: time="2025-08-13T01:50:50.458871690Z" level=warning msg="container event discarded" container=2d101dcb31f49c538452eb0456a3edf189b30f2a97b14d874d92a20a2c6c54ec type=CONTAINER_CREATED_EVENT Aug 13 01:50:50.480206 containerd[1556]: time="2025-08-13T01:50:50.480128377Z" level=warning msg="container event discarded" container=82ad7c8e7965f9f675f02efc4ef995213b33ab2cc1a63e6c47a15e7ef03c4350 type=CONTAINER_CREATED_EVENT Aug 13 01:50:50.480206 containerd[1556]: time="2025-08-13T01:50:50.480176247Z" level=warning msg="container event discarded" container=82ad7c8e7965f9f675f02efc4ef995213b33ab2cc1a63e6c47a15e7ef03c4350 type=CONTAINER_STARTED_EVENT Aug 13 01:50:50.503785 containerd[1556]: time="2025-08-13T01:50:50.503515319Z" level=warning msg="container event discarded" container=07d1d0391857b928080a0b255dc5d0cc2f4f3767c606ccc25fa823c79046d812 type=CONTAINER_CREATED_EVENT Aug 13 01:50:50.503785 containerd[1556]: time="2025-08-13T01:50:50.503577659Z" level=warning msg="container event discarded" container=07d1d0391857b928080a0b255dc5d0cc2f4f3767c606ccc25fa823c79046d812 type=CONTAINER_STARTED_EVENT Aug 13 01:50:50.503785 containerd[1556]: time="2025-08-13T01:50:50.503586498Z" level=warning msg="container event discarded" container=82bd2ae4a8f0759932f3b8b4744d9ea8052155a62594e8a9307d5e495a955593 type=CONTAINER_CREATED_EVENT Aug 13 01:50:50.531888 containerd[1556]: time="2025-08-13T01:50:50.531807401Z" level=warning msg="container event discarded" container=7bf5ef9179c1930d03bf50b33265eb181ceb25c1bc93bf8026e6beed853e02db type=CONTAINER_CREATED_EVENT Aug 13 01:50:50.641361 containerd[1556]: time="2025-08-13T01:50:50.641284766Z" level=warning msg="container event discarded" container=2d101dcb31f49c538452eb0456a3edf189b30f2a97b14d874d92a20a2c6c54ec type=CONTAINER_STARTED_EVENT Aug 13 01:50:50.661652 containerd[1556]: time="2025-08-13T01:50:50.661528794Z" level=warning msg="container event discarded" container=82bd2ae4a8f0759932f3b8b4744d9ea8052155a62594e8a9307d5e495a955593 type=CONTAINER_STARTED_EVENT Aug 13 01:50:50.699790 kubelet[2790]: E0813 01:50:50.699687 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\\\": failed to extract layer 
sha256:fc0260a65ddba357b1d129f8ee26e320e324b952c3f6454255c10ab49e1b985e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/99/fs/usr/bin/node-driver-registrar: no space left on device\"" pod="calico-system/csi-node-driver-c7jrc" podUID="4296a7ed-e75a-4d74-935a-9017b9a86286" Aug 13 01:50:50.731498 containerd[1556]: time="2025-08-13T01:50:50.731414811Z" level=warning msg="container event discarded" container=7bf5ef9179c1930d03bf50b33265eb181ceb25c1bc93bf8026e6beed853e02db type=CONTAINER_STARTED_EVENT Aug 13 01:50:51.576318 systemd[1]: Started sshd@40-172.232.7.67:22-147.75.109.163:44820.service - OpenSSH per-connection server daemon (147.75.109.163:44820). Aug 13 01:50:51.923485 sshd[6690]: Accepted publickey for core from 147.75.109.163 port 44820 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:50:51.925782 sshd-session[6690]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:50:51.932473 systemd-logind[1532]: New session 41 of user core. Aug 13 01:50:51.939082 systemd[1]: Started session-41.scope - Session 41 of User core. Aug 13 01:50:52.235914 sshd[6692]: Connection closed by 147.75.109.163 port 44820 Aug 13 01:50:52.237038 sshd-session[6690]: pam_unix(sshd:session): session closed for user core Aug 13 01:50:52.242011 systemd-logind[1532]: Session 41 logged out. Waiting for processes to exit. Aug 13 01:50:52.243178 systemd[1]: sshd@40-172.232.7.67:22-147.75.109.163:44820.service: Deactivated successfully. Aug 13 01:50:52.245720 systemd[1]: session-41.scope: Deactivated successfully. Aug 13 01:50:52.248428 systemd-logind[1532]: Removed session 41. Aug 13 01:50:54.918229 kubelet[2790]: I0813 01:50:54.918153 2790 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:50:54.918229 kubelet[2790]: I0813 01:50:54.918213 2790 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:50:54.921586 kubelet[2790]: I0813 01:50:54.921555 2790 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:50:54.949781 kubelet[2790]: I0813 01:50:54.949035 2790 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:50:54.949781 kubelet[2790]: I0813 01:50:54.949560 2790 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-76ff444f8d-4xcg9","calico-system/calico-typha-64bcb76cdd-m4xlg","kube-system/coredns-674b8bbfcf-vtdcd","kube-system/coredns-674b8bbfcf-6rlkc","calico-system/calico-node-tsmrf","kube-system/kube-controller-manager-172-232-7-67","kube-system/kube-proxy-mjdwx","kube-system/kube-apiserver-172-232-7-67","calico-system/csi-node-driver-c7jrc","kube-system/kube-scheduler-172-232-7-67"] Aug 13 01:50:54.949781 kubelet[2790]: E0813 01:50:54.949632 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:50:54.949781 kubelet[2790]: E0813 01:50:54.949648 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-64bcb76cdd-m4xlg" Aug 13 01:50:54.949781 kubelet[2790]: E0813 01:50:54.949687 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:50:54.949781 kubelet[2790]: E0813 01:50:54.949697 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" 
pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:50:54.949781 kubelet[2790]: E0813 01:50:54.949708 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-tsmrf" Aug 13 01:50:54.949781 kubelet[2790]: E0813 01:50:54.949717 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-67" Aug 13 01:50:54.949781 kubelet[2790]: E0813 01:50:54.949725 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mjdwx" Aug 13 01:50:54.949781 kubelet[2790]: E0813 01:50:54.949773 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-67" Aug 13 01:50:54.949781 kubelet[2790]: E0813 01:50:54.949787 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:50:54.949781 kubelet[2790]: E0813 01:50:54.949807 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-67" Aug 13 01:50:54.949781 kubelet[2790]: I0813 01:50:54.949819 2790 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:50:56.697964 kubelet[2790]: I0813 01:50:56.697851 2790 image_gc_manager.go:391] "Disk usage on image filesystem is over the high threshold, trying to free bytes down to the low threshold" usage=100 highThreshold=85 amountToFree=411531673 lowThreshold=80 Aug 13 01:50:56.698993 kubelet[2790]: E0813 01:50:56.698498 2790 kubelet.go:1596] "Image garbage collection failed multiple times in a row" err="Failed to garbage collect required amount of images. Attempted to free 411531673 bytes, but only found 0 bytes eligible to free." Aug 13 01:50:57.112733 update_engine[1537]: I20250813 01:50:57.112630 1537 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 01:50:57.113210 update_engine[1537]: I20250813 01:50:57.113003 1537 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 01:50:57.113395 update_engine[1537]: I20250813 01:50:57.113345 1537 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Aug 13 01:50:57.114367 update_engine[1537]: E20250813 01:50:57.114312 1537 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 01:50:57.114421 update_engine[1537]: I20250813 01:50:57.114370 1537 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Aug 13 01:50:57.301159 systemd[1]: Started sshd@41-172.232.7.67:22-147.75.109.163:44822.service - OpenSSH per-connection server daemon (147.75.109.163:44822). Aug 13 01:50:57.642362 sshd[6706]: Accepted publickey for core from 147.75.109.163 port 44822 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:50:57.644163 sshd-session[6706]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:50:57.650774 systemd-logind[1532]: New session 42 of user core. Aug 13 01:50:57.659940 systemd[1]: Started session-42.scope - Session 42 of User core. Aug 13 01:50:57.958842 sshd[6708]: Connection closed by 147.75.109.163 port 44822 Aug 13 01:50:57.959986 sshd-session[6706]: pam_unix(sshd:session): session closed for user core Aug 13 01:50:57.965264 systemd-logind[1532]: Session 42 logged out. Waiting for processes to exit. Aug 13 01:50:57.966108 systemd[1]: sshd@41-172.232.7.67:22-147.75.109.163:44822.service: Deactivated successfully. 
Aug 13 01:50:57.969035 systemd[1]: session-42.scope: Deactivated successfully. Aug 13 01:50:57.972022 systemd-logind[1532]: Removed session 42. Aug 13 01:50:58.699828 kubelet[2790]: E0813 01:50:58.699538 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device\"" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" podUID="f88563f6-5704-426b-aecc-303b3869ce30" Aug 13 01:51:00.696473 kubelet[2790]: E0813 01:51:00.696427 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:51:01.696671 kubelet[2790]: E0813 01:51:01.696517 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\\\": failed to extract layer sha256:fc0260a65ddba357b1d129f8ee26e320e324b952c3f6454255c10ab49e1b985e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/99/fs/usr/bin/node-driver-registrar: no space left on device\"" pod="calico-system/csi-node-driver-c7jrc" podUID="4296a7ed-e75a-4d74-935a-9017b9a86286" Aug 13 01:51:02.477348 containerd[1556]: time="2025-08-13T01:51:02.477261314Z" level=warning msg="container event discarded" container=29beb7298bc36dcae95fec8a27a3795971a53cdfe6abce9af7f444dc60415eac type=CONTAINER_CREATED_EVENT Aug 13 01:51:02.477348 containerd[1556]: time="2025-08-13T01:51:02.477328064Z" level=warning msg="container event discarded" container=29beb7298bc36dcae95fec8a27a3795971a53cdfe6abce9af7f444dc60415eac type=CONTAINER_STARTED_EVENT Aug 13 01:51:02.531012 containerd[1556]: time="2025-08-13T01:51:02.530874162Z" level=warning msg="container event discarded" container=81e52e943171741dd5182ebf33c08f4d53bb223cd9ae2fcb01fa836e3f6dc5f1 type=CONTAINER_CREATED_EVENT Aug 13 01:51:02.626321 containerd[1556]: time="2025-08-13T01:51:02.626186589Z" level=warning msg="container event discarded" container=d93ba90d490463a283c631823074ee5ab3b226108e5851a0130f31c332b1132f type=CONTAINER_CREATED_EVENT Aug 13 01:51:02.626321 containerd[1556]: time="2025-08-13T01:51:02.626252819Z" level=warning msg="container event discarded" container=d93ba90d490463a283c631823074ee5ab3b226108e5851a0130f31c332b1132f type=CONTAINER_STARTED_EVENT Aug 13 01:51:02.686618 containerd[1556]: time="2025-08-13T01:51:02.686537214Z" level=warning msg="container event discarded" container=81e52e943171741dd5182ebf33c08f4d53bb223cd9ae2fcb01fa836e3f6dc5f1 type=CONTAINER_STARTED_EVENT Aug 13 01:51:03.024844 systemd[1]: Started sshd@42-172.232.7.67:22-147.75.109.163:45710.service - OpenSSH per-connection server daemon (147.75.109.163:45710). 
Aug 13 01:51:03.365522 sshd[6728]: Accepted publickey for core from 147.75.109.163 port 45710 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:51:03.367875 sshd-session[6728]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:51:03.375853 systemd-logind[1532]: New session 43 of user core. Aug 13 01:51:03.380917 systemd[1]: Started session-43.scope - Session 43 of User core. Aug 13 01:51:03.695412 kubelet[2790]: E0813 01:51:03.695267 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:51:03.704921 sshd[6730]: Connection closed by 147.75.109.163 port 45710 Aug 13 01:51:03.705996 sshd-session[6728]: pam_unix(sshd:session): session closed for user core Aug 13 01:51:03.714664 systemd[1]: sshd@42-172.232.7.67:22-147.75.109.163:45710.service: Deactivated successfully. Aug 13 01:51:03.718460 systemd[1]: session-43.scope: Deactivated successfully. Aug 13 01:51:03.720358 systemd-logind[1532]: Session 43 logged out. Waiting for processes to exit. Aug 13 01:51:03.723614 systemd-logind[1532]: Removed session 43. Aug 13 01:51:04.996307 kubelet[2790]: I0813 01:51:04.996254 2790 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:51:04.996307 kubelet[2790]: I0813 01:51:04.996314 2790 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:51:05.000101 kubelet[2790]: I0813 01:51:05.000062 2790 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:51:05.022842 kubelet[2790]: I0813 01:51:05.022793 2790 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:51:05.023147 kubelet[2790]: I0813 01:51:05.023116 2790 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-76ff444f8d-4xcg9","calico-system/calico-typha-64bcb76cdd-m4xlg","kube-system/coredns-674b8bbfcf-6rlkc","kube-system/coredns-674b8bbfcf-vtdcd","calico-system/calico-node-tsmrf","kube-system/kube-controller-manager-172-232-7-67","kube-system/kube-proxy-mjdwx","kube-system/kube-apiserver-172-232-7-67","calico-system/csi-node-driver-c7jrc","kube-system/kube-scheduler-172-232-7-67"] Aug 13 01:51:05.023244 kubelet[2790]: E0813 01:51:05.023183 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:51:05.023244 kubelet[2790]: E0813 01:51:05.023201 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-64bcb76cdd-m4xlg" Aug 13 01:51:05.023244 kubelet[2790]: E0813 01:51:05.023210 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:51:05.023244 kubelet[2790]: E0813 01:51:05.023220 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:51:05.023399 kubelet[2790]: E0813 01:51:05.023251 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-tsmrf" Aug 13 01:51:05.023399 kubelet[2790]: E0813 01:51:05.023302 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-67" Aug 13 01:51:05.023399 kubelet[2790]: E0813 01:51:05.023340 
2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mjdwx" Aug 13 01:51:05.023399 kubelet[2790]: E0813 01:51:05.023350 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-67" Aug 13 01:51:05.023399 kubelet[2790]: E0813 01:51:05.023361 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:51:05.023399 kubelet[2790]: E0813 01:51:05.023369 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-67" Aug 13 01:51:05.023399 kubelet[2790]: I0813 01:51:05.023380 2790 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:51:05.240401 containerd[1556]: time="2025-08-13T01:51:05.240260991Z" level=warning msg="container event discarded" container=773634732375f5beb6ed85668c63244e6d495ae9e3ea516f9fd9b8e924e33198 type=CONTAINER_CREATED_EVENT Aug 13 01:51:05.468857 containerd[1556]: time="2025-08-13T01:51:05.468784270Z" level=warning msg="container event discarded" container=773634732375f5beb6ed85668c63244e6d495ae9e3ea516f9fd9b8e924e33198 type=CONTAINER_STARTED_EVENT Aug 13 01:51:07.112318 update_engine[1537]: I20250813 01:51:07.112210 1537 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 01:51:07.112999 update_engine[1537]: I20250813 01:51:07.112609 1537 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 01:51:07.113203 update_engine[1537]: I20250813 01:51:07.113109 1537 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Aug 13 01:51:07.113880 update_engine[1537]: E20250813 01:51:07.113838 1537 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 01:51:07.113970 update_engine[1537]: I20250813 01:51:07.113910 1537 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Aug 13 01:51:07.113970 update_engine[1537]: I20250813 01:51:07.113927 1537 omaha_request_action.cc:617] Omaha request response: Aug 13 01:51:07.114089 update_engine[1537]: E20250813 01:51:07.114069 1537 omaha_request_action.cc:636] Omaha request network transfer failed. Aug 13 01:51:07.114143 update_engine[1537]: I20250813 01:51:07.114123 1537 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Aug 13 01:51:07.114180 update_engine[1537]: I20250813 01:51:07.114130 1537 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Aug 13 01:51:07.114180 update_engine[1537]: I20250813 01:51:07.114152 1537 update_attempter.cc:306] Processing Done. Aug 13 01:51:07.114264 update_engine[1537]: E20250813 01:51:07.114207 1537 update_attempter.cc:619] Update failed. Aug 13 01:51:07.114264 update_engine[1537]: I20250813 01:51:07.114225 1537 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Aug 13 01:51:07.114264 update_engine[1537]: I20250813 01:51:07.114233 1537 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Aug 13 01:51:07.114264 update_engine[1537]: I20250813 01:51:07.114239 1537 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Aug 13 01:51:07.115160 update_engine[1537]: I20250813 01:51:07.114542 1537 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Aug 13 01:51:07.115160 update_engine[1537]: I20250813 01:51:07.114583 1537 omaha_request_action.cc:271] Posting an Omaha request to disabled Aug 13 01:51:07.115160 update_engine[1537]: I20250813 01:51:07.114590 1537 omaha_request_action.cc:272] Request: Aug 13 01:51:07.115160 update_engine[1537]: Aug 13 01:51:07.115160 update_engine[1537]: Aug 13 01:51:07.115160 update_engine[1537]: Aug 13 01:51:07.115160 update_engine[1537]: Aug 13 01:51:07.115160 update_engine[1537]: Aug 13 01:51:07.115160 update_engine[1537]: Aug 13 01:51:07.115160 update_engine[1537]: I20250813 01:51:07.114598 1537 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 01:51:07.115160 update_engine[1537]: I20250813 01:51:07.114851 1537 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 01:51:07.115160 update_engine[1537]: I20250813 01:51:07.115113 1537 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Aug 13 01:51:07.115566 locksmithd[1568]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Aug 13 01:51:07.115963 update_engine[1537]: E20250813 01:51:07.115828 1537 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 01:51:07.115963 update_engine[1537]: I20250813 01:51:07.115870 1537 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Aug 13 01:51:07.115963 update_engine[1537]: I20250813 01:51:07.115880 1537 omaha_request_action.cc:617] Omaha request response: Aug 13 01:51:07.115963 update_engine[1537]: I20250813 01:51:07.115890 1537 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Aug 13 01:51:07.115963 update_engine[1537]: I20250813 01:51:07.115897 1537 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Aug 13 01:51:07.115963 update_engine[1537]: I20250813 01:51:07.115904 1537 update_attempter.cc:306] Processing Done. Aug 13 01:51:07.115963 update_engine[1537]: I20250813 01:51:07.115914 1537 update_attempter.cc:310] Error event sent. Aug 13 01:51:07.115963 update_engine[1537]: I20250813 01:51:07.115942 1537 update_check_scheduler.cc:74] Next update check in 40m3s Aug 13 01:51:07.116515 locksmithd[1568]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Aug 13 01:51:07.598801 containerd[1556]: time="2025-08-13T01:51:07.598720587Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6da5154e125e31750839c134b00edd40c42a3264d34e846351af2022803e2f22\" id:\"bed1ce5e5754dad6ed92f83eae63ce5ca98a30d2a3fd90118331ca466c01d88a\" pid:6753 exited_at:{seconds:1755049867 nanos:597643839}" Aug 13 01:51:08.768038 systemd[1]: Started sshd@43-172.232.7.67:22-147.75.109.163:49112.service - OpenSSH per-connection server daemon (147.75.109.163:49112). Aug 13 01:51:09.107996 sshd[6769]: Accepted publickey for core from 147.75.109.163 port 49112 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:51:09.109275 sshd-session[6769]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:51:09.114935 systemd-logind[1532]: New session 44 of user core. Aug 13 01:51:09.119946 systemd[1]: Started session-44.scope - Session 44 of User core. 
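The update_engine block above is one failed Omaha check from start to finish: the request is posted to the literal endpoint "disabled" (the conventional way update checks are switched off on Flatcar), curl cannot resolve that as a hostname, the attempter records error 2000 / kActionCodeOmahaErrorInHTTPResponse, and the scheduler queues the next attempt for 40m3s later. As an illustrative helper, not part of update_engine itself, this Python sketch pulls that wait interval out of the scheduler line:

    import re
    from datetime import timedelta

    def parse_next_check(line: str) -> timedelta | None:
        """Extract the 'Next update check in XmYs' interval from an update_engine log line."""
        m = re.search(r"Next update check in (?:(\d+)m)?(?:(\d+)s)?", line)
        if not m or not any(m.groups()):
            return None
        return timedelta(minutes=int(m.group(1) or 0), seconds=int(m.group(2) or 0))

    line = "update_engine[1537]: I20250813 01:51:07.115942 1537 update_check_scheduler.cc:74] Next update check in 40m3s"
    print(parse_next_check(line))  # prints 0:40:03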
Aug 13 01:51:09.442172 sshd[6771]: Connection closed by 147.75.109.163 port 49112 Aug 13 01:51:09.444012 sshd-session[6769]: pam_unix(sshd:session): session closed for user core Aug 13 01:51:09.449888 systemd[1]: sshd@43-172.232.7.67:22-147.75.109.163:49112.service: Deactivated successfully. Aug 13 01:51:09.453218 systemd[1]: session-44.scope: Deactivated successfully. Aug 13 01:51:09.454226 systemd-logind[1532]: Session 44 logged out. Waiting for processes to exit. Aug 13 01:51:09.456564 systemd-logind[1532]: Removed session 44. Aug 13 01:51:09.698884 kubelet[2790]: E0813 01:51:09.698581 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device\"" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" podUID="f88563f6-5704-426b-aecc-303b3869ce30" Aug 13 01:51:12.696845 containerd[1556]: time="2025-08-13T01:51:12.696789663Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 01:51:13.497189 containerd[1556]: time="2025-08-13T01:51:13.497079991Z" level=error msg="failed to cleanup \"extract-191453035-7yAt sha256:a6200c63e2a03c9e19bca689383dae051e67c8fbd246c7e3961b6330b68b8256\"" error="write /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db: no space left on device" Aug 13 01:51:13.498112 containerd[1556]: time="2025-08-13T01:51:13.497981040Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\": failed to extract layer sha256:fc0260a65ddba357b1d129f8ee26e320e324b952c3f6454255c10ab49e1b985e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/101/fs/usr/bin/node-driver-registrar: no space left on device" Aug 13 01:51:13.498233 containerd[1556]: time="2025-08-13T01:51:13.498028810Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Aug 13 01:51:13.498466 kubelet[2790]: E0813 01:51:13.498328 2790 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\": failed to extract layer sha256:fc0260a65ddba357b1d129f8ee26e320e324b952c3f6454255c10ab49e1b985e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/101/fs/usr/bin/node-driver-registrar: no space left on device" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2" Aug 13 01:51:13.498466 kubelet[2790]: E0813 01:51:13.498407 2790 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\": failed to extract layer sha256:fc0260a65ddba357b1d129f8ee26e320e324b952c3f6454255c10ab49e1b985e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/101/fs/usr/bin/node-driver-registrar: no space left on device" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2" Aug 13 01:51:13.499228 kubelet[2790]: E0813 01:51:13.498596 2790 
kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-v76c4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-c7jrc_calico-system(4296a7ed-e75a-4d74-935a-9017b9a86286): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\": failed to extract layer sha256:fc0260a65ddba357b1d129f8ee26e320e324b952c3f6454255c10ab49e1b985e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/101/fs/usr/bin/node-driver-registrar: no space left on device" logger="UnhandledError" Aug 13 01:51:13.499984 kubelet[2790]: E0813 01:51:13.499948 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\\\": failed to extract layer sha256:fc0260a65ddba357b1d129f8ee26e320e324b952c3f6454255c10ab49e1b985e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/101/fs/usr/bin/node-driver-registrar: no space left on device\"" pod="calico-system/csi-node-driver-c7jrc" podUID="4296a7ed-e75a-4d74-935a-9017b9a86286" Aug 13 01:51:14.510770 systemd[1]: Started sshd@44-172.232.7.67:22-147.75.109.163:49126.service - OpenSSH per-connection server daemon (147.75.109.163:49126). Aug 13 01:51:14.871535 sshd[6801]: Accepted publickey for core from 147.75.109.163 port 49126 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:51:14.873710 sshd-session[6801]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:51:14.881032 systemd-logind[1532]: New session 45 of user core. 
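Both image pulls above bottom out in ENOSPC writes under /var/lib/containerd (the content ingest directory and the overlayfs snapshotter), which is the same ephemeral-storage pressure the eviction manager keeps reporting. A minimal check of the filesystem behind that path, assuming the default containerd root shown in the error messages, is:

    import shutil

    # Path taken from the error messages above; adjust if containerd's root differs.
    path = "/var/lib/containerd"

    usage = shutil.disk_usage(path)
    print(f"{path}: {usage.free / 2**30:.2f} GiB free of {usage.total / 2**30:.2f} GiB "
          f"({usage.used / usage.total * 100:.1f}% used)")
    if usage.free < 2**30:
        print("Under 1 GiB free: image pulls will keep failing with 'no space left on device'.")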
Aug 13 01:51:14.888092 systemd[1]: Started session-45.scope - Session 45 of User core. Aug 13 01:51:15.054728 kubelet[2790]: I0813 01:51:15.054481 2790 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:51:15.054728 kubelet[2790]: I0813 01:51:15.054771 2790 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:51:15.057289 kubelet[2790]: I0813 01:51:15.057260 2790 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:51:15.086621 kubelet[2790]: I0813 01:51:15.086554 2790 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:51:15.087634 kubelet[2790]: I0813 01:51:15.086721 2790 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-76ff444f8d-4xcg9","calico-system/calico-typha-64bcb76cdd-m4xlg","kube-system/coredns-674b8bbfcf-vtdcd","kube-system/coredns-674b8bbfcf-6rlkc","calico-system/calico-node-tsmrf","kube-system/kube-controller-manager-172-232-7-67","kube-system/kube-proxy-mjdwx","kube-system/kube-apiserver-172-232-7-67","calico-system/csi-node-driver-c7jrc","kube-system/kube-scheduler-172-232-7-67"] Aug 13 01:51:15.087634 kubelet[2790]: E0813 01:51:15.087314 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:51:15.087634 kubelet[2790]: E0813 01:51:15.087338 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-64bcb76cdd-m4xlg" Aug 13 01:51:15.087634 kubelet[2790]: E0813 01:51:15.087348 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:51:15.087634 kubelet[2790]: E0813 01:51:15.087361 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:51:15.087634 kubelet[2790]: E0813 01:51:15.087373 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-tsmrf" Aug 13 01:51:15.087634 kubelet[2790]: E0813 01:51:15.087380 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-67" Aug 13 01:51:15.087634 kubelet[2790]: E0813 01:51:15.087388 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mjdwx" Aug 13 01:51:15.087634 kubelet[2790]: E0813 01:51:15.087395 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-67" Aug 13 01:51:15.087634 kubelet[2790]: E0813 01:51:15.087404 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:51:15.087634 kubelet[2790]: E0813 01:51:15.087412 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-67" Aug 13 01:51:15.087634 kubelet[2790]: I0813 01:51:15.087437 2790 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:51:15.199899 sshd[6803]: Connection closed by 147.75.109.163 port 49126 Aug 13 01:51:15.202001 sshd-session[6801]: pam_unix(sshd:session): session closed for user core Aug 13 01:51:15.207137 systemd-logind[1532]: Session 45 logged out. Waiting for processes to exit. 
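The eviction-manager block repeats every ten seconds with the same outcome: container and image GC find nothing to delete, every ranked candidate is a Calico or control-plane pod, each is skipped as critical, and the pass ends with "unable to evict any pods from the node". A rough sketch of that skip-critical step, assuming a simple pod model and not reproducing kubelet's actual eviction_manager.go code, looks like:

    from dataclasses import dataclass

    @dataclass
    class Pod:
        name: str
        namespace: str
        critical: bool        # static/mirror pods or system-*-critical priority classes
        ephemeral_bytes: int  # ranked usage of the resource under pressure

    def pick_evictable(ranked: list[Pod]) -> list[Pod]:
        """Drop candidates the eviction manager refuses to evict."""
        evictable = []
        for pod in ranked:
            if pod.critical:
                # Corresponds to the "cannot evict a critical pod" messages above.
                continue
            evictable.append(pod)
        return evictable

    ranked = [
        Pod("calico-kube-controllers-76ff444f8d-4xcg9", "calico-system", True, 2_000_000),
        Pod("coredns-674b8bbfcf-6rlkc", "kube-system", True, 1_000_000),
    ]
    print(pick_evictable(ranked))  # [] -> "unable to evict any pods from the node"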
Aug 13 01:51:15.207444 systemd[1]: sshd@44-172.232.7.67:22-147.75.109.163:49126.service: Deactivated successfully. Aug 13 01:51:15.210018 systemd[1]: session-45.scope: Deactivated successfully. Aug 13 01:51:15.211910 systemd-logind[1532]: Removed session 45. Aug 13 01:51:17.694587 kubelet[2790]: E0813 01:51:17.694483 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:51:18.750031 containerd[1556]: time="2025-08-13T01:51:18.749841751Z" level=warning msg="container event discarded" container=a6df34dc1e5476403a75a033586a859ed0d1bbd1b6a5361e6314f40369ee5a54 type=CONTAINER_CREATED_EVENT Aug 13 01:51:18.750031 containerd[1556]: time="2025-08-13T01:51:18.750005241Z" level=warning msg="container event discarded" container=a6df34dc1e5476403a75a033586a859ed0d1bbd1b6a5361e6314f40369ee5a54 type=CONTAINER_STARTED_EVENT Aug 13 01:51:19.041124 containerd[1556]: time="2025-08-13T01:51:19.040927947Z" level=warning msg="container event discarded" container=19f136ecf27677a48dbcfabc2d82ff742c8f56ba9769515ce641ecafbfefd0f8 type=CONTAINER_CREATED_EVENT Aug 13 01:51:19.041124 containerd[1556]: time="2025-08-13T01:51:19.040996747Z" level=warning msg="container event discarded" container=19f136ecf27677a48dbcfabc2d82ff742c8f56ba9769515ce641ecafbfefd0f8 type=CONTAINER_STARTED_EVENT Aug 13 01:51:20.268514 systemd[1]: Started sshd@45-172.232.7.67:22-147.75.109.163:42788.service - OpenSSH per-connection server daemon (147.75.109.163:42788). Aug 13 01:51:20.623879 sshd[6815]: Accepted publickey for core from 147.75.109.163 port 42788 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:51:20.626766 sshd-session[6815]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:51:20.634804 systemd-logind[1532]: New session 46 of user core. Aug 13 01:51:20.640952 systemd[1]: Started session-46.scope - Session 46 of User core. Aug 13 01:51:20.696487 containerd[1556]: time="2025-08-13T01:51:20.696424169Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 01:51:20.953256 sshd[6817]: Connection closed by 147.75.109.163 port 42788 Aug 13 01:51:20.954097 sshd-session[6815]: pam_unix(sshd:session): session closed for user core Aug 13 01:51:20.961238 systemd-logind[1532]: Session 46 logged out. Waiting for processes to exit. Aug 13 01:51:20.962584 systemd[1]: sshd@45-172.232.7.67:22-147.75.109.163:42788.service: Deactivated successfully. Aug 13 01:51:20.965422 systemd[1]: session-46.scope: Deactivated successfully. Aug 13 01:51:20.967708 systemd-logind[1532]: Removed session 46. 
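The recurring dns.go error means the host's resolv.conf offers more nameservers than kubelet will pass into a pod: the limit is three (the glibc resolver ignores anything beyond that), so only 172.232.0.20, 172.232.0.15 and 172.232.0.18 are applied and the remainder are dropped. A sketch of that trim, with the fourth address below invented purely for illustration:

    MAX_NAMESERVERS = 3  # resolv.conf limit enforced by kubelet when building pod DNS config

    def apply_nameserver_limit(nameservers: list[str]) -> tuple[list[str], list[str]]:
        """Return (applied, omitted), keeping only the first MAX_NAMESERVERS entries."""
        return nameservers[:MAX_NAMESERVERS], nameservers[MAX_NAMESERVERS:]

    # The first three addresses appear in the log; the fourth is a made-up extra.
    host_servers = ["172.232.0.20", "172.232.0.15", "172.232.0.18", "192.0.2.1"]
    applied, omitted = apply_nameserver_limit(host_servers)
    print("applied:", " ".join(applied))
    print("omitted:", " ".join(omitted))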
Aug 13 01:51:21.302291 containerd[1556]: time="2025-08-13T01:51:21.302228280Z" level=warning msg="container event discarded" container=2b9b16d3a696259bfa160d4bdf11e63cf133237683db2f955c0725cf670b427d type=CONTAINER_CREATED_EVENT Aug 13 01:51:21.421087 containerd[1556]: time="2025-08-13T01:51:21.420919413Z" level=error msg="failed to cleanup \"extract-197807263-H1fE sha256:8c887db5e1c1509bbe47d7287572f140b60a8c0adc0202d6183f3ae0c5f0b532\"" error="write /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db: no space left on device" Aug 13 01:51:21.421925 containerd[1556]: time="2025-08-13T01:51:21.421825692Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" Aug 13 01:51:21.422151 containerd[1556]: time="2025-08-13T01:51:21.422033371Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=33558762" Aug 13 01:51:21.422365 kubelet[2790]: E0813 01:51:21.422296 2790 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 01:51:21.422837 kubelet[2790]: E0813 01:51:21.422376 2790 kuberuntime_image.go:42] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 01:51:21.422837 kubelet[2790]: E0813 01:51:21.422581 2790 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-nqghm,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-76ff444f8d-4xcg9_calico-system(f88563f6-5704-426b-aecc-303b3869ce30): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" logger="UnhandledError" Aug 13 01:51:21.424016 kubelet[2790]: E0813 01:51:21.423983 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device\"" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" podUID="f88563f6-5704-426b-aecc-303b3869ce30" Aug 13 01:51:21.625029 containerd[1556]: time="2025-08-13T01:51:21.624767377Z" level=warning msg="container 
event discarded" container=2b9b16d3a696259bfa160d4bdf11e63cf133237683db2f955c0725cf670b427d type=CONTAINER_STARTED_EVENT Aug 13 01:51:21.694525 kubelet[2790]: E0813 01:51:21.694467 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:51:22.114532 containerd[1556]: time="2025-08-13T01:51:22.114454335Z" level=warning msg="container event discarded" container=fa5a919e20223bac07f8f651375dcbb9305c368d27edd1d93dbdc2e9e7e64361 type=CONTAINER_CREATED_EVENT Aug 13 01:51:22.489352 containerd[1556]: time="2025-08-13T01:51:22.489280823Z" level=warning msg="container event discarded" container=fa5a919e20223bac07f8f651375dcbb9305c368d27edd1d93dbdc2e9e7e64361 type=CONTAINER_STARTED_EVENT Aug 13 01:51:22.646770 containerd[1556]: time="2025-08-13T01:51:22.646676480Z" level=warning msg="container event discarded" container=fa5a919e20223bac07f8f651375dcbb9305c368d27edd1d93dbdc2e9e7e64361 type=CONTAINER_STOPPED_EVENT Aug 13 01:51:25.111008 kubelet[2790]: I0813 01:51:25.110968 2790 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:51:25.111008 kubelet[2790]: I0813 01:51:25.111018 2790 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:51:25.113129 kubelet[2790]: I0813 01:51:25.113093 2790 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:51:25.141428 kubelet[2790]: I0813 01:51:25.141386 2790 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:51:25.141665 kubelet[2790]: I0813 01:51:25.141638 2790 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-76ff444f8d-4xcg9","calico-system/calico-typha-64bcb76cdd-m4xlg","kube-system/coredns-674b8bbfcf-vtdcd","kube-system/coredns-674b8bbfcf-6rlkc","calico-system/calico-node-tsmrf","kube-system/kube-controller-manager-172-232-7-67","kube-system/kube-proxy-mjdwx","kube-system/kube-apiserver-172-232-7-67","calico-system/csi-node-driver-c7jrc","kube-system/kube-scheduler-172-232-7-67"] Aug 13 01:51:25.141765 kubelet[2790]: E0813 01:51:25.141706 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:51:25.141765 kubelet[2790]: E0813 01:51:25.141721 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-64bcb76cdd-m4xlg" Aug 13 01:51:25.141765 kubelet[2790]: E0813 01:51:25.141730 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:51:25.141900 kubelet[2790]: E0813 01:51:25.141772 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:51:25.141900 kubelet[2790]: E0813 01:51:25.141787 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-tsmrf" Aug 13 01:51:25.141900 kubelet[2790]: E0813 01:51:25.141795 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-67" Aug 13 01:51:25.141900 kubelet[2790]: E0813 01:51:25.141803 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mjdwx" Aug 13 01:51:25.141900 kubelet[2790]: E0813 
01:51:25.141812 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-67" Aug 13 01:51:25.141900 kubelet[2790]: E0813 01:51:25.141822 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:51:25.141900 kubelet[2790]: E0813 01:51:25.141858 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-67" Aug 13 01:51:25.141900 kubelet[2790]: I0813 01:51:25.141869 2790 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:51:26.021863 systemd[1]: Started sshd@46-172.232.7.67:22-147.75.109.163:42790.service - OpenSSH per-connection server daemon (147.75.109.163:42790). Aug 13 01:51:26.366641 sshd[6830]: Accepted publickey for core from 147.75.109.163 port 42790 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:51:26.368455 sshd-session[6830]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:51:26.374640 systemd-logind[1532]: New session 47 of user core. Aug 13 01:51:26.378915 systemd[1]: Started session-47.scope - Session 47 of User core. Aug 13 01:51:26.709822 kubelet[2790]: E0813 01:51:26.709232 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\\\": failed to extract layer sha256:fc0260a65ddba357b1d129f8ee26e320e324b952c3f6454255c10ab49e1b985e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/101/fs/usr/bin/node-driver-registrar: no space left on device\"" pod="calico-system/csi-node-driver-c7jrc" podUID="4296a7ed-e75a-4d74-935a-9017b9a86286" Aug 13 01:51:26.737918 sshd[6832]: Connection closed by 147.75.109.163 port 42790 Aug 13 01:51:26.740033 sshd-session[6830]: pam_unix(sshd:session): session closed for user core Aug 13 01:51:26.749390 systemd[1]: sshd@46-172.232.7.67:22-147.75.109.163:42790.service: Deactivated successfully. Aug 13 01:51:26.753407 systemd[1]: session-47.scope: Deactivated successfully. Aug 13 01:51:26.756769 systemd-logind[1532]: Session 47 logged out. Waiting for processes to exit. Aug 13 01:51:26.760974 systemd-logind[1532]: Removed session 47. 
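Once a pull has failed hard with ErrImagePull, kubelet reports ImagePullBackOff, as above, and only retries on an exponential back-off. The 10-second base, doubling factor and 5-minute cap below are the commonly cited kubelet defaults, assumed here rather than read from this log; they only illustrate why the same back-off message keeps reappearing minutes apart:

    def backoff_delays(base: float = 10.0, factor: float = 2.0, cap: float = 300.0, attempts: int = 8):
        """Yield successive image-pull back-off delays in seconds, doubling up to a cap."""
        delay = base
        for _ in range(attempts):
            yield min(delay, cap)
            delay *= factor

    print([int(d) for d in backoff_delays()])  # [10, 20, 40, 80, 160, 300, 300, 300]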
Aug 13 01:51:27.012165 containerd[1556]: time="2025-08-13T01:51:27.012084788Z" level=warning msg="container event discarded" container=3538f29f84e2362f6432f3bf84aea32ac2549d9e4a8698ef54009978bd9bcd7f type=CONTAINER_CREATED_EVENT Aug 13 01:51:27.233633 containerd[1556]: time="2025-08-13T01:51:27.233509731Z" level=warning msg="container event discarded" container=3538f29f84e2362f6432f3bf84aea32ac2549d9e4a8698ef54009978bd9bcd7f type=CONTAINER_STARTED_EVENT Aug 13 01:51:28.695645 kubelet[2790]: E0813 01:51:28.694683 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:51:30.088627 containerd[1556]: time="2025-08-13T01:51:30.088557168Z" level=warning msg="container event discarded" container=3538f29f84e2362f6432f3bf84aea32ac2549d9e4a8698ef54009978bd9bcd7f type=CONTAINER_STOPPED_EVENT Aug 13 01:51:31.813907 systemd[1]: Started sshd@47-172.232.7.67:22-147.75.109.163:48524.service - OpenSSH per-connection server daemon (147.75.109.163:48524). Aug 13 01:51:32.159856 sshd[6843]: Accepted publickey for core from 147.75.109.163 port 48524 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:51:32.161049 sshd-session[6843]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:51:32.165898 systemd-logind[1532]: New session 48 of user core. Aug 13 01:51:32.171901 systemd[1]: Started session-48.scope - Session 48 of User core. Aug 13 01:51:32.471146 sshd[6845]: Connection closed by 147.75.109.163 port 48524 Aug 13 01:51:32.471794 sshd-session[6843]: pam_unix(sshd:session): session closed for user core Aug 13 01:51:32.476423 systemd-logind[1532]: Session 48 logged out. Waiting for processes to exit. Aug 13 01:51:32.477339 systemd[1]: sshd@47-172.232.7.67:22-147.75.109.163:48524.service: Deactivated successfully. Aug 13 01:51:32.480670 systemd[1]: session-48.scope: Deactivated successfully. Aug 13 01:51:32.485399 systemd-logind[1532]: Removed session 48. 
Aug 13 01:51:32.695582 kubelet[2790]: E0813 01:51:32.695540 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:51:35.177887 kubelet[2790]: I0813 01:51:35.177771 2790 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:51:35.177887 kubelet[2790]: I0813 01:51:35.177857 2790 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:51:35.183405 kubelet[2790]: I0813 01:51:35.183351 2790 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:51:35.210227 kubelet[2790]: I0813 01:51:35.210187 2790 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:51:35.210622 kubelet[2790]: I0813 01:51:35.210552 2790 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-76ff444f8d-4xcg9","calico-system/calico-typha-64bcb76cdd-m4xlg","kube-system/coredns-674b8bbfcf-6rlkc","kube-system/coredns-674b8bbfcf-vtdcd","calico-system/calico-node-tsmrf","kube-system/kube-controller-manager-172-232-7-67","kube-system/kube-proxy-mjdwx","kube-system/kube-apiserver-172-232-7-67","calico-system/csi-node-driver-c7jrc","kube-system/kube-scheduler-172-232-7-67"] Aug 13 01:51:35.210769 kubelet[2790]: E0813 01:51:35.210661 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:51:35.210769 kubelet[2790]: E0813 01:51:35.210683 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-64bcb76cdd-m4xlg" Aug 13 01:51:35.210769 kubelet[2790]: E0813 01:51:35.210702 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:51:35.210769 kubelet[2790]: E0813 01:51:35.210725 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:51:35.210769 kubelet[2790]: E0813 01:51:35.210736 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-tsmrf" Aug 13 01:51:35.210769 kubelet[2790]: E0813 01:51:35.210771 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-67" Aug 13 01:51:35.210980 kubelet[2790]: E0813 01:51:35.210797 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mjdwx" Aug 13 01:51:35.210980 kubelet[2790]: E0813 01:51:35.210805 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-67" Aug 13 01:51:35.210980 kubelet[2790]: E0813 01:51:35.210814 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:51:35.210980 kubelet[2790]: E0813 01:51:35.210821 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-67" Aug 13 01:51:35.210980 kubelet[2790]: I0813 01:51:35.210832 2790 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:51:35.695637 kubelet[2790]: E0813 01:51:35.695575 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device\"" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" podUID="f88563f6-5704-426b-aecc-303b3869ce30" Aug 13 01:51:37.538976 systemd[1]: Started sshd@48-172.232.7.67:22-147.75.109.163:48534.service - OpenSSH per-connection server daemon (147.75.109.163:48534). Aug 13 01:51:37.761771 containerd[1556]: time="2025-08-13T01:51:37.761665563Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6da5154e125e31750839c134b00edd40c42a3264d34e846351af2022803e2f22\" id:\"0206a68353bd95e0c290622121ea4ab9ab39dfd106f393d043a3ff44ba6fad61\" pid:6870 exited_at:{seconds:1755049897 nanos:761163373}" Aug 13 01:51:37.888364 sshd[6878]: Accepted publickey for core from 147.75.109.163 port 48534 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:51:37.891040 sshd-session[6878]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:51:37.900896 systemd-logind[1532]: New session 49 of user core. Aug 13 01:51:37.906105 systemd[1]: Started session-49.scope - Session 49 of User core. Aug 13 01:51:38.220865 sshd[6886]: Connection closed by 147.75.109.163 port 48534 Aug 13 01:51:38.221995 sshd-session[6878]: pam_unix(sshd:session): session closed for user core Aug 13 01:51:38.227973 systemd-logind[1532]: Session 49 logged out. Waiting for processes to exit. Aug 13 01:51:38.230166 systemd[1]: sshd@48-172.232.7.67:22-147.75.109.163:48534.service: Deactivated successfully. Aug 13 01:51:38.235693 systemd[1]: session-49.scope: Deactivated successfully. Aug 13 01:51:38.241610 systemd-logind[1532]: Removed session 49. 
Aug 13 01:51:39.697289 kubelet[2790]: E0813 01:51:39.695890 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\\\": failed to extract layer sha256:fc0260a65ddba357b1d129f8ee26e320e324b952c3f6454255c10ab49e1b985e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/101/fs/usr/bin/node-driver-registrar: no space left on device\"" pod="calico-system/csi-node-driver-c7jrc" podUID="4296a7ed-e75a-4d74-935a-9017b9a86286" Aug 13 01:51:42.374147 containerd[1556]: time="2025-08-13T01:51:42.374031551Z" level=warning msg="container event discarded" container=773634732375f5beb6ed85668c63244e6d495ae9e3ea516f9fd9b8e924e33198 type=CONTAINER_STOPPED_EVENT Aug 13 01:51:42.441451 containerd[1556]: time="2025-08-13T01:51:42.441378743Z" level=warning msg="container event discarded" container=d93ba90d490463a283c631823074ee5ab3b226108e5851a0130f31c332b1132f type=CONTAINER_STOPPED_EVENT Aug 13 01:51:42.986009 containerd[1556]: time="2025-08-13T01:51:42.985893208Z" level=warning msg="container event discarded" container=773634732375f5beb6ed85668c63244e6d495ae9e3ea516f9fd9b8e924e33198 type=CONTAINER_DELETED_EVENT Aug 13 01:51:43.288287 systemd[1]: Started sshd@49-172.232.7.67:22-147.75.109.163:35412.service - OpenSSH per-connection server daemon (147.75.109.163:35412). Aug 13 01:51:43.485379 containerd[1556]: time="2025-08-13T01:51:43.485305638Z" level=warning msg="container event discarded" container=d93ba90d490463a283c631823074ee5ab3b226108e5851a0130f31c332b1132f type=CONTAINER_DELETED_EVENT Aug 13 01:51:43.650340 sshd[6899]: Accepted publickey for core from 147.75.109.163 port 35412 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:51:43.651847 sshd-session[6899]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:51:43.657830 systemd-logind[1532]: New session 50 of user core. Aug 13 01:51:43.662910 systemd[1]: Started session-50.scope - Session 50 of User core. Aug 13 01:51:43.976241 sshd[6901]: Connection closed by 147.75.109.163 port 35412 Aug 13 01:51:43.976925 sshd-session[6899]: pam_unix(sshd:session): session closed for user core Aug 13 01:51:43.982398 systemd-logind[1532]: Session 50 logged out. Waiting for processes to exit. Aug 13 01:51:43.988084 systemd[1]: sshd@49-172.232.7.67:22-147.75.109.163:35412.service: Deactivated successfully. Aug 13 01:51:43.991719 systemd[1]: session-50.scope: Deactivated successfully. Aug 13 01:51:43.996405 systemd-logind[1532]: Removed session 50. 
Aug 13 01:51:45.238851 kubelet[2790]: I0813 01:51:45.238790 2790 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:51:45.239877 kubelet[2790]: I0813 01:51:45.239393 2790 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:51:45.241565 kubelet[2790]: I0813 01:51:45.241544 2790 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:51:45.261894 kubelet[2790]: I0813 01:51:45.261862 2790 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:51:45.262161 kubelet[2790]: I0813 01:51:45.262119 2790 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-76ff444f8d-4xcg9","calico-system/calico-typha-64bcb76cdd-m4xlg","kube-system/coredns-674b8bbfcf-6rlkc","kube-system/coredns-674b8bbfcf-vtdcd","calico-system/calico-node-tsmrf","kube-system/kube-controller-manager-172-232-7-67","kube-system/kube-proxy-mjdwx","kube-system/kube-apiserver-172-232-7-67","calico-system/csi-node-driver-c7jrc","kube-system/kube-scheduler-172-232-7-67"] Aug 13 01:51:45.262161 kubelet[2790]: E0813 01:51:45.262159 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:51:45.262279 kubelet[2790]: E0813 01:51:45.262173 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-64bcb76cdd-m4xlg" Aug 13 01:51:45.262279 kubelet[2790]: E0813 01:51:45.262182 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:51:45.262279 kubelet[2790]: E0813 01:51:45.262191 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:51:45.262279 kubelet[2790]: E0813 01:51:45.262201 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-tsmrf" Aug 13 01:51:45.262279 kubelet[2790]: E0813 01:51:45.262214 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-67" Aug 13 01:51:45.262279 kubelet[2790]: E0813 01:51:45.262222 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mjdwx" Aug 13 01:51:45.262279 kubelet[2790]: E0813 01:51:45.262229 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-67" Aug 13 01:51:45.262279 kubelet[2790]: E0813 01:51:45.262238 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:51:45.262279 kubelet[2790]: E0813 01:51:45.262246 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-67" Aug 13 01:51:45.262279 kubelet[2790]: I0813 01:51:45.262257 2790 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:51:48.696707 kubelet[2790]: E0813 01:51:48.696612 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to copy: write 
/var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device\"" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" podUID="f88563f6-5704-426b-aecc-303b3869ce30" Aug 13 01:51:49.040586 systemd[1]: Started sshd@50-172.232.7.67:22-147.75.109.163:55536.service - OpenSSH per-connection server daemon (147.75.109.163:55536). Aug 13 01:51:49.392870 sshd[6913]: Accepted publickey for core from 147.75.109.163 port 55536 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:51:49.394562 sshd-session[6913]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:51:49.400926 systemd-logind[1532]: New session 51 of user core. Aug 13 01:51:49.406915 systemd[1]: Started session-51.scope - Session 51 of User core. Aug 13 01:51:49.709962 sshd[6915]: Connection closed by 147.75.109.163 port 55536 Aug 13 01:51:49.711898 sshd-session[6913]: pam_unix(sshd:session): session closed for user core Aug 13 01:51:49.715560 systemd[1]: sshd@50-172.232.7.67:22-147.75.109.163:55536.service: Deactivated successfully. Aug 13 01:51:49.719180 systemd[1]: session-51.scope: Deactivated successfully. Aug 13 01:51:49.722521 systemd-logind[1532]: Session 51 logged out. Waiting for processes to exit. Aug 13 01:51:49.724106 systemd-logind[1532]: Removed session 51. Aug 13 01:51:50.696425 kubelet[2790]: E0813 01:51:50.696312 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\\\": failed to extract layer sha256:fc0260a65ddba357b1d129f8ee26e320e324b952c3f6454255c10ab49e1b985e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/101/fs/usr/bin/node-driver-registrar: no space left on device\"" pod="calico-system/csi-node-driver-c7jrc" podUID="4296a7ed-e75a-4d74-935a-9017b9a86286" Aug 13 01:51:54.779415 systemd[1]: Started sshd@51-172.232.7.67:22-147.75.109.163:55540.service - OpenSSH per-connection server daemon (147.75.109.163:55540). Aug 13 01:51:55.128793 sshd[6927]: Accepted publickey for core from 147.75.109.163 port 55540 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:51:55.130582 sshd-session[6927]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:51:55.135947 systemd-logind[1532]: New session 52 of user core. Aug 13 01:51:55.147971 systemd[1]: Started session-52.scope - Session 52 of User core. 
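The sshd and systemd-logind entries threaded through this whole stretch follow one shape per connection: Accepted publickey, session N opened, connection closed a fraction of a second later, session removed, with a new connection arriving every few seconds. If these need to be accounted for, an illustrative pairing of accept/close events by peer and port, over lines shaped like the ones above, is:

    import re
    from collections import OrderedDict

    accept_re = re.compile(r"Accepted publickey for (\S+) from (\S+) port (\d+)")
    close_re = re.compile(r"Connection closed by (\S+) port (\d+)")

    def summarize_sessions(lines):
        """Pair sshd accept/close events by (peer, port), in the order they were opened."""
        open_sessions = OrderedDict()
        summary = []
        for line in lines:
            if m := accept_re.search(line):
                user, peer, port = m.groups()
                open_sessions[(peer, port)] = user
            elif m := close_re.search(line):
                peer, port = m.groups()
                user = open_sessions.pop((peer, port), "unknown")
                summary.append(f"{user}@{peer}:{port} opened and closed")
        return summary

    sample = [
        "sshd[6913]: Accepted publickey for core from 147.75.109.163 port 55536 ssh2: RSA ...",
        "sshd[6915]: Connection closed by 147.75.109.163 port 55536",
    ]
    print(summarize_sessions(sample))  # ['core@147.75.109.163:55536 opened and closed']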
Aug 13 01:51:55.292321 kubelet[2790]: I0813 01:51:55.292280 2790 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:51:55.292321 kubelet[2790]: I0813 01:51:55.292334 2790 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:51:55.295179 kubelet[2790]: I0813 01:51:55.295156 2790 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:51:55.311979 kubelet[2790]: I0813 01:51:55.311945 2790 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:51:55.312146 kubelet[2790]: I0813 01:51:55.312106 2790 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-76ff444f8d-4xcg9","calico-system/calico-typha-64bcb76cdd-m4xlg","kube-system/coredns-674b8bbfcf-vtdcd","kube-system/coredns-674b8bbfcf-6rlkc","calico-system/calico-node-tsmrf","kube-system/kube-controller-manager-172-232-7-67","kube-system/kube-proxy-mjdwx","kube-system/kube-apiserver-172-232-7-67","calico-system/csi-node-driver-c7jrc","kube-system/kube-scheduler-172-232-7-67"] Aug 13 01:51:55.312146 kubelet[2790]: E0813 01:51:55.312144 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:51:55.312257 kubelet[2790]: E0813 01:51:55.312157 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-64bcb76cdd-m4xlg" Aug 13 01:51:55.312257 kubelet[2790]: E0813 01:51:55.312165 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:51:55.312257 kubelet[2790]: E0813 01:51:55.312173 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:51:55.312257 kubelet[2790]: E0813 01:51:55.312185 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-tsmrf" Aug 13 01:51:55.312257 kubelet[2790]: E0813 01:51:55.312195 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-67" Aug 13 01:51:55.312257 kubelet[2790]: E0813 01:51:55.312203 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mjdwx" Aug 13 01:51:55.312257 kubelet[2790]: E0813 01:51:55.312210 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-67" Aug 13 01:51:55.312257 kubelet[2790]: E0813 01:51:55.312218 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:51:55.312257 kubelet[2790]: E0813 01:51:55.312225 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-67" Aug 13 01:51:55.312257 kubelet[2790]: I0813 01:51:55.312236 2790 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:51:55.465674 sshd[6929]: Connection closed by 147.75.109.163 port 55540 Aug 13 01:51:55.466330 sshd-session[6927]: pam_unix(sshd:session): session closed for user core Aug 13 01:51:55.474042 systemd[1]: sshd@51-172.232.7.67:22-147.75.109.163:55540.service: Deactivated successfully. Aug 13 01:51:55.478602 systemd[1]: session-52.scope: Deactivated successfully. 
Aug 13 01:51:55.481431 systemd-logind[1532]: Session 52 logged out. Waiting for processes to exit. Aug 13 01:51:55.483794 systemd-logind[1532]: Removed session 52. Aug 13 01:51:58.696774 kubelet[2790]: E0813 01:51:58.696385 2790 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.20 172.232.0.15 172.232.0.18" Aug 13 01:52:00.556490 systemd[1]: Started sshd@52-172.232.7.67:22-147.75.109.163:40684.service - OpenSSH per-connection server daemon (147.75.109.163:40684). Aug 13 01:52:00.977988 sshd[6942]: Accepted publickey for core from 147.75.109.163 port 40684 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:52:00.981715 sshd-session[6942]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:52:01.019621 systemd-logind[1532]: New session 53 of user core. Aug 13 01:52:01.023841 systemd[1]: Started session-53.scope - Session 53 of User core. Aug 13 01:52:01.576114 sshd[6944]: Connection closed by 147.75.109.163 port 40684 Aug 13 01:52:01.576411 sshd-session[6942]: pam_unix(sshd:session): session closed for user core Aug 13 01:52:01.588768 systemd[1]: sshd@52-172.232.7.67:22-147.75.109.163:40684.service: Deactivated successfully. Aug 13 01:52:01.597280 systemd[1]: session-53.scope: Deactivated successfully. Aug 13 01:52:01.614024 systemd-logind[1532]: Session 53 logged out. Waiting for processes to exit. Aug 13 01:52:01.617155 systemd-logind[1532]: Removed session 53. Aug 13 01:52:03.697907 kubelet[2790]: E0813 01:52:03.697832 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device\"" pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" podUID="f88563f6-5704-426b-aecc-303b3869ce30" Aug 13 01:52:05.347280 kubelet[2790]: I0813 01:52:05.347233 2790 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:52:05.347280 kubelet[2790]: I0813 01:52:05.347313 2790 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:52:05.349441 kubelet[2790]: I0813 01:52:05.349415 2790 image_gc_manager.go:447] "Attempting to delete unused images" Aug 13 01:52:05.369053 kubelet[2790]: I0813 01:52:05.369026 2790 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:52:05.369196 kubelet[2790]: I0813 01:52:05.369164 2790 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-76ff444f8d-4xcg9","calico-system/calico-typha-64bcb76cdd-m4xlg","kube-system/coredns-674b8bbfcf-vtdcd","kube-system/coredns-674b8bbfcf-6rlkc","calico-system/calico-node-tsmrf","kube-system/kube-controller-manager-172-232-7-67","kube-system/kube-proxy-mjdwx","kube-system/kube-apiserver-172-232-7-67","calico-system/csi-node-driver-c7jrc","kube-system/kube-scheduler-172-232-7-67"] Aug 13 01:52:05.369267 kubelet[2790]: E0813 01:52:05.369203 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" 
pod="calico-system/calico-kube-controllers-76ff444f8d-4xcg9" Aug 13 01:52:05.369267 kubelet[2790]: E0813 01:52:05.369217 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-64bcb76cdd-m4xlg" Aug 13 01:52:05.369267 kubelet[2790]: E0813 01:52:05.369226 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-vtdcd" Aug 13 01:52:05.369267 kubelet[2790]: E0813 01:52:05.369236 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-674b8bbfcf-6rlkc" Aug 13 01:52:05.369267 kubelet[2790]: E0813 01:52:05.369247 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-tsmrf" Aug 13 01:52:05.369267 kubelet[2790]: E0813 01:52:05.369255 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-67" Aug 13 01:52:05.369267 kubelet[2790]: E0813 01:52:05.369262 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-mjdwx" Aug 13 01:52:05.369267 kubelet[2790]: E0813 01:52:05.369269 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-67" Aug 13 01:52:05.369480 kubelet[2790]: E0813 01:52:05.369277 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-c7jrc" Aug 13 01:52:05.369480 kubelet[2790]: E0813 01:52:05.369283 2790 eviction_manager.go:610] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-67" Aug 13 01:52:05.369480 kubelet[2790]: I0813 01:52:05.369294 2790 eviction_manager.go:439] "Eviction manager: unable to evict any pods from the node" Aug 13 01:52:05.695463 kubelet[2790]: E0813 01:52:05.695328 2790 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\\\": failed to extract layer sha256:fc0260a65ddba357b1d129f8ee26e320e324b952c3f6454255c10ab49e1b985e: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/101/fs/usr/bin/node-driver-registrar: no space left on device\"" pod="calico-system/csi-node-driver-c7jrc" podUID="4296a7ed-e75a-4d74-935a-9017b9a86286" Aug 13 01:52:06.634982 systemd[1]: Started sshd@53-172.232.7.67:22-147.75.109.163:40690.service - OpenSSH per-connection server daemon (147.75.109.163:40690). Aug 13 01:52:06.982635 sshd[6958]: Accepted publickey for core from 147.75.109.163 port 40690 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:52:06.984440 sshd-session[6958]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:52:06.993822 systemd-logind[1532]: New session 54 of user core. Aug 13 01:52:07.002983 systemd[1]: Started session-54.scope - Session 54 of User core. Aug 13 01:52:07.309033 sshd[6960]: Connection closed by 147.75.109.163 port 40690 Aug 13 01:52:07.310541 sshd-session[6958]: pam_unix(sshd:session): session closed for user core Aug 13 01:52:07.316149 systemd[1]: sshd@53-172.232.7.67:22-147.75.109.163:40690.service: Deactivated successfully. Aug 13 01:52:07.319149 systemd[1]: session-54.scope: Deactivated successfully. 
Aug 13 01:52:07.320377 systemd-logind[1532]: Session 54 logged out. Waiting for processes to exit. Aug 13 01:52:07.323269 systemd-logind[1532]: Removed session 54. Aug 13 01:52:07.610106 containerd[1556]: time="2025-08-13T01:52:07.609982866Z" level=info msg="TaskExit event in podsandbox handler container_id:\"6da5154e125e31750839c134b00edd40c42a3264d34e846351af2022803e2f22\" id:\"360360171154cdc7aacd08a4e09f769b318487a5dead49bfbe669fb281193550\" pid:6982 exited_at:{seconds:1755049927 nanos:609634076}"
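The containerd TaskExit events report exited_at as raw epoch seconds; the final one (1755049927) converts to 2025-08-13T01:52:07Z, which matches the journal timestamp on the same entry. For checking other events the same way:

    from datetime import datetime, timezone

    exited_at = 1_755_049_927  # "seconds" field from the last TaskExit event above
    print(datetime.fromtimestamp(exited_at, tz=timezone.utc).isoformat())
    # 2025-08-13T01:52:07+00:00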