Aug 13 01:44:42.858494 kernel: Linux version 6.12.40-flatcar (build@pony-truck.infra.kinvolk.io) (x86_64-cros-linux-gnu-gcc (Gentoo Hardened 14.2.1_p20241221 p7) 14.2.1 20241221, GNU ld (Gentoo 2.44 p1) 2.44.0) #1 SMP PREEMPT_DYNAMIC Tue Aug 12 21:42:48 -00 2025 Aug 13 01:44:42.858514 kernel: Command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21 Aug 13 01:44:42.858522 kernel: BIOS-provided physical RAM map: Aug 13 01:44:42.858530 kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009f7ff] usable Aug 13 01:44:42.858536 kernel: BIOS-e820: [mem 0x000000000009f800-0x000000000009ffff] reserved Aug 13 01:44:42.858541 kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved Aug 13 01:44:42.858548 kernel: BIOS-e820: [mem 0x0000000000100000-0x000000007ffdcfff] usable Aug 13 01:44:42.858553 kernel: BIOS-e820: [mem 0x000000007ffdd000-0x000000007fffffff] reserved Aug 13 01:44:42.858559 kernel: BIOS-e820: [mem 0x00000000b0000000-0x00000000bfffffff] reserved Aug 13 01:44:42.858564 kernel: BIOS-e820: [mem 0x00000000fed1c000-0x00000000fed1ffff] reserved Aug 13 01:44:42.858570 kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved Aug 13 01:44:42.858576 kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved Aug 13 01:44:42.858583 kernel: BIOS-e820: [mem 0x0000000100000000-0x000000017fffffff] usable Aug 13 01:44:42.858589 kernel: BIOS-e820: [mem 0x000000fd00000000-0x000000ffffffffff] reserved Aug 13 01:44:42.858595 kernel: NX (Execute Disable) protection: active Aug 13 01:44:42.858601 kernel: APIC: Static calls initialized Aug 13 01:44:42.858607 kernel: SMBIOS 2.8 present. 
Aug 13 01:44:42.858615 kernel: DMI: Linode Compute Instance/Standard PC (Q35 + ICH9, 2009), BIOS Not Specified Aug 13 01:44:42.858621 kernel: DMI: Memory slots populated: 1/1 Aug 13 01:44:42.858627 kernel: Hypervisor detected: KVM Aug 13 01:44:42.858633 kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00 Aug 13 01:44:42.858639 kernel: kvm-clock: using sched offset of 5774407179 cycles Aug 13 01:44:42.858645 kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns Aug 13 01:44:42.858651 kernel: tsc: Detected 1999.999 MHz processor Aug 13 01:44:42.858657 kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved Aug 13 01:44:42.858664 kernel: e820: remove [mem 0x000a0000-0x000fffff] usable Aug 13 01:44:42.858670 kernel: last_pfn = 0x180000 max_arch_pfn = 0x400000000 Aug 13 01:44:42.858678 kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs Aug 13 01:44:42.858684 kernel: x86/PAT: Configuration [0-7]: WB WC UC- UC WB WP UC- WT Aug 13 01:44:42.858690 kernel: last_pfn = 0x7ffdd max_arch_pfn = 0x400000000 Aug 13 01:44:42.858696 kernel: Using GB pages for direct mapping Aug 13 01:44:42.858702 kernel: ACPI: Early table checksum verification disabled Aug 13 01:44:42.858708 kernel: ACPI: RSDP 0x00000000000F5160 000014 (v00 BOCHS ) Aug 13 01:44:42.858714 kernel: ACPI: RSDT 0x000000007FFE2307 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 01:44:42.858720 kernel: ACPI: FACP 0x000000007FFE20F7 0000F4 (v03 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 01:44:42.858726 kernel: ACPI: DSDT 0x000000007FFE0040 0020B7 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 01:44:42.858734 kernel: ACPI: FACS 0x000000007FFE0000 000040 Aug 13 01:44:42.858740 kernel: ACPI: APIC 0x000000007FFE21EB 000080 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 01:44:42.858746 kernel: ACPI: HPET 0x000000007FFE226B 000038 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 01:44:42.858753 kernel: ACPI: MCFG 0x000000007FFE22A3 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 01:44:42.858761 kernel: ACPI: WAET 0x000000007FFE22DF 000028 (v01 BOCHS BXPC 00000001 BXPC 00000001) Aug 13 01:44:42.858768 kernel: ACPI: Reserving FACP table memory at [mem 0x7ffe20f7-0x7ffe21ea] Aug 13 01:44:42.858776 kernel: ACPI: Reserving DSDT table memory at [mem 0x7ffe0040-0x7ffe20f6] Aug 13 01:44:42.858782 kernel: ACPI: Reserving FACS table memory at [mem 0x7ffe0000-0x7ffe003f] Aug 13 01:44:42.858789 kernel: ACPI: Reserving APIC table memory at [mem 0x7ffe21eb-0x7ffe226a] Aug 13 01:44:42.858795 kernel: ACPI: Reserving HPET table memory at [mem 0x7ffe226b-0x7ffe22a2] Aug 13 01:44:42.858801 kernel: ACPI: Reserving MCFG table memory at [mem 0x7ffe22a3-0x7ffe22de] Aug 13 01:44:42.858808 kernel: ACPI: Reserving WAET table memory at [mem 0x7ffe22df-0x7ffe2306] Aug 13 01:44:42.858814 kernel: No NUMA configuration found Aug 13 01:44:42.858820 kernel: Faking a node at [mem 0x0000000000000000-0x000000017fffffff] Aug 13 01:44:42.858828 kernel: NODE_DATA(0) allocated [mem 0x17fff8dc0-0x17fffffff] Aug 13 01:44:42.858834 kernel: Zone ranges: Aug 13 01:44:42.858841 kernel: DMA [mem 0x0000000000001000-0x0000000000ffffff] Aug 13 01:44:42.858847 kernel: DMA32 [mem 0x0000000001000000-0x00000000ffffffff] Aug 13 01:44:42.858853 kernel: Normal [mem 0x0000000100000000-0x000000017fffffff] Aug 13 01:44:42.858860 kernel: Device empty Aug 13 01:44:42.858866 kernel: Movable zone start for each node Aug 13 01:44:42.858872 kernel: Early memory node ranges Aug 13 01:44:42.858878 kernel: 
node 0: [mem 0x0000000000001000-0x000000000009efff] Aug 13 01:44:42.858884 kernel: node 0: [mem 0x0000000000100000-0x000000007ffdcfff] Aug 13 01:44:42.858893 kernel: node 0: [mem 0x0000000100000000-0x000000017fffffff] Aug 13 01:44:42.858899 kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000017fffffff] Aug 13 01:44:42.858905 kernel: On node 0, zone DMA: 1 pages in unavailable ranges Aug 13 01:44:42.858911 kernel: On node 0, zone DMA: 97 pages in unavailable ranges Aug 13 01:44:42.858918 kernel: On node 0, zone Normal: 35 pages in unavailable ranges Aug 13 01:44:42.858924 kernel: ACPI: PM-Timer IO Port: 0x608 Aug 13 01:44:42.858930 kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1]) Aug 13 01:44:42.858937 kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23 Aug 13 01:44:42.858959 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl) Aug 13 01:44:42.858968 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level) Aug 13 01:44:42.858974 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level) Aug 13 01:44:42.858980 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level) Aug 13 01:44:42.858987 kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level) Aug 13 01:44:42.858993 kernel: ACPI: Using ACPI (MADT) for SMP configuration information Aug 13 01:44:42.858999 kernel: ACPI: HPET id: 0x8086a201 base: 0xfed00000 Aug 13 01:44:42.859006 kernel: TSC deadline timer available Aug 13 01:44:42.859012 kernel: CPU topo: Max. logical packages: 1 Aug 13 01:44:42.859018 kernel: CPU topo: Max. logical dies: 1 Aug 13 01:44:42.859026 kernel: CPU topo: Max. dies per package: 1 Aug 13 01:44:42.859032 kernel: CPU topo: Max. threads per core: 1 Aug 13 01:44:42.859039 kernel: CPU topo: Num. cores per package: 2 Aug 13 01:44:42.859045 kernel: CPU topo: Num. threads per package: 2 Aug 13 01:44:42.859051 kernel: CPU topo: Allowing 2 present CPUs plus 0 hotplug CPUs Aug 13 01:44:42.859057 kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write() Aug 13 01:44:42.859074 kernel: kvm-guest: KVM setup pv remote TLB flush Aug 13 01:44:42.859098 kernel: kvm-guest: setup PV sched yield Aug 13 01:44:42.859104 kernel: [mem 0xc0000000-0xfed1bfff] available for PCI devices Aug 13 01:44:42.859113 kernel: Booting paravirtualized kernel on KVM Aug 13 01:44:42.859119 kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns Aug 13 01:44:42.859126 kernel: setup_percpu: NR_CPUS:512 nr_cpumask_bits:2 nr_cpu_ids:2 nr_node_ids:1 Aug 13 01:44:42.859132 kernel: percpu: Embedded 60 pages/cpu s207832 r8192 d29736 u1048576 Aug 13 01:44:42.859138 kernel: pcpu-alloc: s207832 r8192 d29736 u1048576 alloc=1*2097152 Aug 13 01:44:42.859145 kernel: pcpu-alloc: [0] 0 1 Aug 13 01:44:42.859151 kernel: kvm-guest: PV spinlocks enabled Aug 13 01:44:42.859162 kernel: PV qspinlock hash table entries: 256 (order: 0, 4096 bytes, linear) Aug 13 01:44:42.859169 kernel: Kernel command line: rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21 Aug 13 01:44:42.859178 kernel: Unknown kernel command line parameters "BOOT_IMAGE=/flatcar/vmlinuz-a", will be passed to user space. 
Aug 13 01:44:42.859184 kernel: random: crng init done Aug 13 01:44:42.859190 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear) Aug 13 01:44:42.859197 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear) Aug 13 01:44:42.859203 kernel: Fallback order for Node 0: 0 Aug 13 01:44:42.859210 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1048443 Aug 13 01:44:42.859216 kernel: Policy zone: Normal Aug 13 01:44:42.859222 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Aug 13 01:44:42.859229 kernel: software IO TLB: area num 2. Aug 13 01:44:42.859237 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1 Aug 13 01:44:42.859243 kernel: ftrace: allocating 40098 entries in 157 pages Aug 13 01:44:42.859249 kernel: ftrace: allocated 157 pages with 5 groups Aug 13 01:44:42.859256 kernel: Dynamic Preempt: voluntary Aug 13 01:44:42.859262 kernel: rcu: Preemptible hierarchical RCU implementation. Aug 13 01:44:42.859269 kernel: rcu: RCU event tracing is enabled. Aug 13 01:44:42.859276 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2. Aug 13 01:44:42.859282 kernel: Trampoline variant of Tasks RCU enabled. Aug 13 01:44:42.859288 kernel: Rude variant of Tasks RCU enabled. Aug 13 01:44:42.859296 kernel: Tracing variant of Tasks RCU enabled. Aug 13 01:44:42.859303 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Aug 13 01:44:42.859309 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2 Aug 13 01:44:42.859316 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 01:44:42.859328 kernel: RCU Tasks Rude: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 01:44:42.859336 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2. Aug 13 01:44:42.859343 kernel: NR_IRQS: 33024, nr_irqs: 440, preallocated irqs: 16 Aug 13 01:44:42.859350 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Aug 13 01:44:42.859356 kernel: Console: colour VGA+ 80x25 Aug 13 01:44:42.859363 kernel: printk: legacy console [tty0] enabled Aug 13 01:44:42.859369 kernel: printk: legacy console [ttyS0] enabled Aug 13 01:44:42.859377 kernel: ACPI: Core revision 20240827 Aug 13 01:44:42.859384 kernel: clocksource: hpet: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 19112604467 ns Aug 13 01:44:42.859391 kernel: APIC: Switch to symmetric I/O mode setup Aug 13 01:44:42.859397 kernel: x2apic enabled Aug 13 01:44:42.859404 kernel: APIC: Switched APIC routing to: physical x2apic Aug 13 01:44:42.859412 kernel: kvm-guest: APIC: send_IPI_mask() replaced with kvm_send_ipi_mask() Aug 13 01:44:42.859419 kernel: kvm-guest: APIC: send_IPI_mask_allbutself() replaced with kvm_send_ipi_mask_allbutself() Aug 13 01:44:42.859426 kernel: kvm-guest: setup PV IPIs Aug 13 01:44:42.859432 kernel: ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1 Aug 13 01:44:42.859439 kernel: clocksource: tsc-early: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns Aug 13 01:44:42.859446 kernel: Calibrating delay loop (skipped) preset value.. 
3999.99 BogoMIPS (lpj=1999999) Aug 13 01:44:42.859452 kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated Aug 13 01:44:42.859459 kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127 Aug 13 01:44:42.859466 kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0 Aug 13 01:44:42.859474 kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization Aug 13 01:44:42.859481 kernel: Spectre V2 : Mitigation: Retpolines Aug 13 01:44:42.859487 kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT Aug 13 01:44:42.859494 kernel: Spectre V2 : Enabling Restricted Speculation for firmware calls Aug 13 01:44:42.859501 kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier Aug 13 01:44:42.859507 kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl Aug 13 01:44:42.859514 kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied! Aug 13 01:44:42.859521 kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options. Aug 13 01:44:42.859529 kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode Aug 13 01:44:42.859536 kernel: Transient Scheduler Attacks: Forcing mitigation on in a VM Aug 13 01:44:42.859543 kernel: Transient Scheduler Attacks: Vulnerable: Clear CPU buffers attempted, no microcode Aug 13 01:44:42.859549 kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers' Aug 13 01:44:42.859556 kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers' Aug 13 01:44:42.859562 kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers' Aug 13 01:44:42.859569 kernel: x86/fpu: Supporting XSAVE feature 0x200: 'Protection Keys User registers' Aug 13 01:44:42.859576 kernel: x86/fpu: xstate_offset[2]: 576, xstate_sizes[2]: 256 Aug 13 01:44:42.859582 kernel: x86/fpu: xstate_offset[9]: 832, xstate_sizes[9]: 8 Aug 13 01:44:42.859591 kernel: x86/fpu: Enabled xstate features 0x207, context size is 840 bytes, using 'compacted' format. Aug 13 01:44:42.859597 kernel: Freeing SMP alternatives memory: 32K Aug 13 01:44:42.859604 kernel: pid_max: default: 32768 minimum: 301 Aug 13 01:44:42.859611 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Aug 13 01:44:42.859617 kernel: landlock: Up and running. Aug 13 01:44:42.859624 kernel: SELinux: Initializing. Aug 13 01:44:42.859630 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 13 01:44:42.859637 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear) Aug 13 01:44:42.859644 kernel: smpboot: CPU0: AMD EPYC 7713 64-Core Processor (family: 0x19, model: 0x1, stepping: 0x1) Aug 13 01:44:42.859652 kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver. Aug 13 01:44:42.859658 kernel: ... version: 0 Aug 13 01:44:42.859665 kernel: ... bit width: 48 Aug 13 01:44:42.859672 kernel: ... generic registers: 6 Aug 13 01:44:42.859678 kernel: ... value mask: 0000ffffffffffff Aug 13 01:44:42.859685 kernel: ... max period: 00007fffffffffff Aug 13 01:44:42.859691 kernel: ... fixed-purpose events: 0 Aug 13 01:44:42.859698 kernel: ... event mask: 000000000000003f Aug 13 01:44:42.859704 kernel: signal: max sigframe size: 3376 Aug 13 01:44:42.859713 kernel: rcu: Hierarchical SRCU implementation. Aug 13 01:44:42.859720 kernel: rcu: Max phase no-delay instances is 400. 
Aug 13 01:44:42.859726 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Aug 13 01:44:42.859733 kernel: smp: Bringing up secondary CPUs ... Aug 13 01:44:42.859740 kernel: smpboot: x86: Booting SMP configuration: Aug 13 01:44:42.859746 kernel: .... node #0, CPUs: #1 Aug 13 01:44:42.859753 kernel: smp: Brought up 1 node, 2 CPUs Aug 13 01:44:42.859759 kernel: smpboot: Total of 2 processors activated (7999.99 BogoMIPS) Aug 13 01:44:42.859766 kernel: Memory: 3961808K/4193772K available (14336K kernel code, 2430K rwdata, 9960K rodata, 54444K init, 2524K bss, 227288K reserved, 0K cma-reserved) Aug 13 01:44:42.859775 kernel: devtmpfs: initialized Aug 13 01:44:42.859781 kernel: x86/mm: Memory block size: 128MB Aug 13 01:44:42.859788 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Aug 13 01:44:42.859794 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear) Aug 13 01:44:42.859801 kernel: pinctrl core: initialized pinctrl subsystem Aug 13 01:44:42.859808 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Aug 13 01:44:42.859814 kernel: audit: initializing netlink subsys (disabled) Aug 13 01:44:42.859821 kernel: audit: type=2000 audit(1755049480.697:1): state=initialized audit_enabled=0 res=1 Aug 13 01:44:42.859828 kernel: thermal_sys: Registered thermal governor 'step_wise' Aug 13 01:44:42.859836 kernel: thermal_sys: Registered thermal governor 'user_space' Aug 13 01:44:42.859842 kernel: cpuidle: using governor menu Aug 13 01:44:42.859859 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Aug 13 01:44:42.859866 kernel: dca service started, version 1.12.1 Aug 13 01:44:42.859889 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] (base 0xb0000000) for domain 0000 [bus 00-ff] Aug 13 01:44:42.859896 kernel: PCI: ECAM [mem 0xb0000000-0xbfffffff] reserved as E820 entry Aug 13 01:44:42.859903 kernel: PCI: Using configuration type 1 for base access Aug 13 01:44:42.859909 kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible. 
Aug 13 01:44:42.859916 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Aug 13 01:44:42.859925 kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page Aug 13 01:44:42.859932 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Aug 13 01:44:42.885180 kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page Aug 13 01:44:42.885192 kernel: ACPI: Added _OSI(Module Device) Aug 13 01:44:42.885200 kernel: ACPI: Added _OSI(Processor Device) Aug 13 01:44:42.885207 kernel: ACPI: Added _OSI(Processor Aggregator Device) Aug 13 01:44:42.885214 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Aug 13 01:44:42.885221 kernel: ACPI: Interpreter enabled Aug 13 01:44:42.885228 kernel: ACPI: PM: (supports S0 S3 S5) Aug 13 01:44:42.885239 kernel: ACPI: Using IOAPIC for interrupt routing Aug 13 01:44:42.885246 kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug Aug 13 01:44:42.885253 kernel: PCI: Using E820 reservations for host bridge windows Aug 13 01:44:42.885260 kernel: ACPI: Enabled 2 GPEs in block 00 to 3F Aug 13 01:44:42.885267 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Aug 13 01:44:42.885429 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Aug 13 01:44:42.885544 kernel: acpi PNP0A08:00: _OSC: platform does not support [PCIeHotplug LTR] Aug 13 01:44:42.885657 kernel: acpi PNP0A08:00: _OSC: OS now controls [PME AER PCIeCapability] Aug 13 01:44:42.885667 kernel: PCI host bridge to bus 0000:00 Aug 13 01:44:42.885779 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0x0cf7 window] Aug 13 01:44:42.885880 kernel: pci_bus 0000:00: root bus resource [io 0x0d00-0xffff window] Aug 13 01:44:42.886009 kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window] Aug 13 01:44:42.886110 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xafffffff window] Aug 13 01:44:42.886206 kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window] Aug 13 01:44:42.886302 kernel: pci_bus 0000:00: root bus resource [mem 0x180000000-0x97fffffff window] Aug 13 01:44:42.886403 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Aug 13 01:44:42.886526 kernel: pci 0000:00:00.0: [8086:29c0] type 00 class 0x060000 conventional PCI endpoint Aug 13 01:44:42.886646 kernel: pci 0000:00:01.0: [1234:1111] type 00 class 0x030000 conventional PCI endpoint Aug 13 01:44:42.886754 kernel: pci 0000:00:01.0: BAR 0 [mem 0xfd000000-0xfdffffff pref] Aug 13 01:44:42.886860 kernel: pci 0000:00:01.0: BAR 2 [mem 0xfebd0000-0xfebd0fff] Aug 13 01:44:42.887000 kernel: pci 0000:00:01.0: ROM [mem 0xfebc0000-0xfebcffff pref] Aug 13 01:44:42.887116 kernel: pci 0000:00:01.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff] Aug 13 01:44:42.887237 kernel: pci 0000:00:02.0: [1af4:1004] type 00 class 0x010000 conventional PCI endpoint Aug 13 01:44:42.887345 kernel: pci 0000:00:02.0: BAR 0 [io 0xc000-0xc03f] Aug 13 01:44:42.887451 kernel: pci 0000:00:02.0: BAR 1 [mem 0xfebd1000-0xfebd1fff] Aug 13 01:44:42.887768 kernel: pci 0000:00:02.0: BAR 4 [mem 0xfe000000-0xfe003fff 64bit pref] Aug 13 01:44:42.888162 kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint Aug 13 01:44:42.888281 kernel: pci 0000:00:03.0: BAR 0 [io 0xc040-0xc07f] Aug 13 01:44:42.888449 kernel: pci 0000:00:03.0: BAR 1 [mem 0xfebd2000-0xfebd2fff] Aug 13 01:44:42.888567 kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe004000-0xfe007fff 64bit 
pref] Aug 13 01:44:42.888680 kernel: pci 0000:00:03.0: ROM [mem 0xfeb80000-0xfebbffff pref] Aug 13 01:44:42.888801 kernel: pci 0000:00:1f.0: [8086:2918] type 00 class 0x060100 conventional PCI endpoint Aug 13 01:44:42.888913 kernel: pci 0000:00:1f.0: quirk: [io 0x0600-0x067f] claimed by ICH6 ACPI/GPIO/TCO Aug 13 01:44:42.889064 kernel: pci 0000:00:1f.2: [8086:2922] type 00 class 0x010601 conventional PCI endpoint Aug 13 01:44:42.889186 kernel: pci 0000:00:1f.2: BAR 4 [io 0xc0c0-0xc0df] Aug 13 01:44:42.889300 kernel: pci 0000:00:1f.2: BAR 5 [mem 0xfebd3000-0xfebd3fff] Aug 13 01:44:42.889421 kernel: pci 0000:00:1f.3: [8086:2930] type 00 class 0x0c0500 conventional PCI endpoint Aug 13 01:44:42.889535 kernel: pci 0000:00:1f.3: BAR 4 [io 0x0700-0x073f] Aug 13 01:44:42.889544 kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10 Aug 13 01:44:42.889551 kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10 Aug 13 01:44:42.889558 kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11 Aug 13 01:44:42.889565 kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11 Aug 13 01:44:42.889574 kernel: ACPI: PCI: Interrupt link LNKE configured for IRQ 10 Aug 13 01:44:42.889581 kernel: ACPI: PCI: Interrupt link LNKF configured for IRQ 10 Aug 13 01:44:42.889587 kernel: ACPI: PCI: Interrupt link LNKG configured for IRQ 11 Aug 13 01:44:42.889594 kernel: ACPI: PCI: Interrupt link LNKH configured for IRQ 11 Aug 13 01:44:42.889600 kernel: ACPI: PCI: Interrupt link GSIA configured for IRQ 16 Aug 13 01:44:42.889607 kernel: ACPI: PCI: Interrupt link GSIB configured for IRQ 17 Aug 13 01:44:42.889614 kernel: ACPI: PCI: Interrupt link GSIC configured for IRQ 18 Aug 13 01:44:42.889620 kernel: ACPI: PCI: Interrupt link GSID configured for IRQ 19 Aug 13 01:44:42.889627 kernel: ACPI: PCI: Interrupt link GSIE configured for IRQ 20 Aug 13 01:44:42.889636 kernel: ACPI: PCI: Interrupt link GSIF configured for IRQ 21 Aug 13 01:44:42.889642 kernel: ACPI: PCI: Interrupt link GSIG configured for IRQ 22 Aug 13 01:44:42.889649 kernel: ACPI: PCI: Interrupt link GSIH configured for IRQ 23 Aug 13 01:44:42.889655 kernel: iommu: Default domain type: Translated Aug 13 01:44:42.889662 kernel: iommu: DMA domain TLB invalidation policy: lazy mode Aug 13 01:44:42.889669 kernel: PCI: Using ACPI for IRQ routing Aug 13 01:44:42.889675 kernel: PCI: pci_cache_line_size set to 64 bytes Aug 13 01:44:42.889682 kernel: e820: reserve RAM buffer [mem 0x0009f800-0x0009ffff] Aug 13 01:44:42.889688 kernel: e820: reserve RAM buffer [mem 0x7ffdd000-0x7fffffff] Aug 13 01:44:42.889800 kernel: pci 0000:00:01.0: vgaarb: setting as boot VGA device Aug 13 01:44:42.889911 kernel: pci 0000:00:01.0: vgaarb: bridge control possible Aug 13 01:44:42.890057 kernel: pci 0000:00:01.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none Aug 13 01:44:42.890069 kernel: vgaarb: loaded Aug 13 01:44:42.890076 kernel: hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0 Aug 13 01:44:42.890082 kernel: hpet0: 3 comparators, 64-bit 100.000000 MHz counter Aug 13 01:44:42.890089 kernel: clocksource: Switched to clocksource kvm-clock Aug 13 01:44:42.890096 kernel: VFS: Disk quotas dquot_6.6.0 Aug 13 01:44:42.890106 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Aug 13 01:44:42.890113 kernel: pnp: PnP ACPI init Aug 13 01:44:42.890235 kernel: system 00:04: [mem 0xb0000000-0xbfffffff window] has been reserved Aug 13 01:44:42.890246 kernel: pnp: PnP ACPI: found 5 devices Aug 13 01:44:42.890253 kernel: clocksource: acpi_pm: mask: 0xffffff 
max_cycles: 0xffffff, max_idle_ns: 2085701024 ns Aug 13 01:44:42.890260 kernel: NET: Registered PF_INET protocol family Aug 13 01:44:42.890266 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Aug 13 01:44:42.890273 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Aug 13 01:44:42.890282 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Aug 13 01:44:42.890289 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Aug 13 01:44:42.890296 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Aug 13 01:44:42.890302 kernel: TCP: Hash tables configured (established 32768 bind 32768) Aug 13 01:44:42.890309 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 13 01:44:42.890316 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Aug 13 01:44:42.890322 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Aug 13 01:44:42.890329 kernel: NET: Registered PF_XDP protocol family Aug 13 01:44:42.890435 kernel: pci_bus 0000:00: resource 4 [io 0x0000-0x0cf7 window] Aug 13 01:44:42.890543 kernel: pci_bus 0000:00: resource 5 [io 0x0d00-0xffff window] Aug 13 01:44:42.890647 kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window] Aug 13 01:44:42.890750 kernel: pci_bus 0000:00: resource 7 [mem 0x80000000-0xafffffff window] Aug 13 01:44:42.890852 kernel: pci_bus 0000:00: resource 8 [mem 0xc0000000-0xfebfffff window] Aug 13 01:44:42.890988 kernel: pci_bus 0000:00: resource 9 [mem 0x180000000-0x97fffffff window] Aug 13 01:44:42.890999 kernel: PCI: CLS 0 bytes, default 64 Aug 13 01:44:42.891006 kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB) Aug 13 01:44:42.891023 kernel: software IO TLB: mapped [mem 0x000000007bfdd000-0x000000007ffdd000] (64MB) Aug 13 01:44:42.891034 kernel: clocksource: tsc: mask: 0xffffffffffffffff max_cycles: 0x39a85afc727, max_idle_ns: 881590685098 ns Aug 13 01:44:42.891041 kernel: Initialise system trusted keyrings Aug 13 01:44:42.891048 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Aug 13 01:44:42.891054 kernel: Key type asymmetric registered Aug 13 01:44:42.891061 kernel: Asymmetric key parser 'x509' registered Aug 13 01:44:42.891067 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250) Aug 13 01:44:42.891074 kernel: io scheduler mq-deadline registered Aug 13 01:44:42.891081 kernel: io scheduler kyber registered Aug 13 01:44:42.891087 kernel: io scheduler bfq registered Aug 13 01:44:42.891096 kernel: ioatdma: Intel(R) QuickData Technology Driver 5.00 Aug 13 01:44:42.891103 kernel: ACPI: \_SB_.GSIG: Enabled at IRQ 22 Aug 13 01:44:42.891110 kernel: ACPI: \_SB_.GSIH: Enabled at IRQ 23 Aug 13 01:44:42.891116 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Aug 13 01:44:42.891123 kernel: 00:02: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A Aug 13 01:44:42.891130 kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12 Aug 13 01:44:42.891136 kernel: serio: i8042 KBD port at 0x60,0x64 irq 1 Aug 13 01:44:42.891143 kernel: serio: i8042 AUX port at 0x60,0x64 irq 12 Aug 13 01:44:42.891260 kernel: rtc_cmos 00:03: RTC can wake from S4 Aug 13 01:44:42.891275 kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input0 Aug 13 01:44:42.891375 kernel: rtc_cmos 00:03: registered as rtc0 Aug 13 01:44:42.891477 kernel: rtc_cmos 00:03: setting system clock to 
2025-08-13T01:44:42 UTC (1755049482) Aug 13 01:44:42.891583 kernel: rtc_cmos 00:03: alarms up to one day, y3k, 242 bytes nvram, hpet irqs Aug 13 01:44:42.891592 kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled Aug 13 01:44:42.891599 kernel: NET: Registered PF_INET6 protocol family Aug 13 01:44:42.891605 kernel: Segment Routing with IPv6 Aug 13 01:44:42.891612 kernel: In-situ OAM (IOAM) with IPv6 Aug 13 01:44:42.891621 kernel: NET: Registered PF_PACKET protocol family Aug 13 01:44:42.891628 kernel: Key type dns_resolver registered Aug 13 01:44:42.891635 kernel: IPI shorthand broadcast: enabled Aug 13 01:44:42.891641 kernel: sched_clock: Marking stable (2784003293, 228318619)->(3050482275, -38160363) Aug 13 01:44:42.891648 kernel: registered taskstats version 1 Aug 13 01:44:42.891654 kernel: Loading compiled-in X.509 certificates Aug 13 01:44:42.891661 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.40-flatcar: dee0b464d3f7f8d09744a2392f69dde258bc95c0' Aug 13 01:44:42.891668 kernel: Demotion targets for Node 0: null Aug 13 01:44:42.891675 kernel: Key type .fscrypt registered Aug 13 01:44:42.891683 kernel: Key type fscrypt-provisioning registered Aug 13 01:44:42.891690 kernel: ima: No TPM chip found, activating TPM-bypass! Aug 13 01:44:42.891697 kernel: ima: Allocated hash algorithm: sha1 Aug 13 01:44:42.891703 kernel: ima: No architecture policies found Aug 13 01:44:42.891710 kernel: clk: Disabling unused clocks Aug 13 01:44:42.891716 kernel: Warning: unable to open an initial console. Aug 13 01:44:42.891723 kernel: Freeing unused kernel image (initmem) memory: 54444K Aug 13 01:44:42.891730 kernel: Write protecting the kernel read-only data: 24576k Aug 13 01:44:42.891737 kernel: Freeing unused kernel image (rodata/data gap) memory: 280K Aug 13 01:44:42.891745 kernel: Run /init as init process Aug 13 01:44:42.891751 kernel: with arguments: Aug 13 01:44:42.891758 kernel: /init Aug 13 01:44:42.891764 kernel: with environment: Aug 13 01:44:42.891771 kernel: HOME=/ Aug 13 01:44:42.891790 kernel: TERM=linux Aug 13 01:44:42.891799 kernel: BOOT_IMAGE=/flatcar/vmlinuz-a Aug 13 01:44:42.891807 systemd[1]: Successfully made /usr/ read-only. Aug 13 01:44:42.891818 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 13 01:44:42.891826 systemd[1]: Detected virtualization kvm. Aug 13 01:44:42.891833 systemd[1]: Detected architecture x86-64. Aug 13 01:44:42.891840 systemd[1]: Running in initrd. Aug 13 01:44:42.891847 systemd[1]: No hostname configured, using default hostname. Aug 13 01:44:42.891855 systemd[1]: Hostname set to . Aug 13 01:44:42.891864 systemd[1]: Initializing machine ID from random generator. Aug 13 01:44:42.891871 systemd[1]: Queued start job for default target initrd.target. Aug 13 01:44:42.891881 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 01:44:42.891888 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 01:44:42.891896 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... 
Aug 13 01:44:42.891904 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 01:44:42.891911 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Aug 13 01:44:42.891919 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Aug 13 01:44:42.891927 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132... Aug 13 01:44:42.891937 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr... Aug 13 01:44:42.891974 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 01:44:42.891982 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 01:44:42.891990 systemd[1]: Reached target paths.target - Path Units. Aug 13 01:44:42.891997 systemd[1]: Reached target slices.target - Slice Units. Aug 13 01:44:42.892005 systemd[1]: Reached target swap.target - Swaps. Aug 13 01:44:42.892012 systemd[1]: Reached target timers.target - Timer Units. Aug 13 01:44:42.892019 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 01:44:42.892029 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 01:44:42.892037 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Aug 13 01:44:42.892044 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Aug 13 01:44:42.892052 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 01:44:42.892059 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 01:44:42.892066 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 01:44:42.892074 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 01:44:42.892083 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Aug 13 01:44:42.892091 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 01:44:42.892098 systemd[1]: Finished network-cleanup.service - Network Cleanup. Aug 13 01:44:42.892106 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Aug 13 01:44:42.892113 systemd[1]: Starting systemd-fsck-usr.service... Aug 13 01:44:42.892121 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 01:44:42.892128 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 01:44:42.892138 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 01:44:42.892145 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Aug 13 01:44:42.892153 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 01:44:42.892160 systemd[1]: Finished systemd-fsck-usr.service. Aug 13 01:44:42.892170 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 01:44:42.892198 systemd-journald[206]: Collecting audit messages is disabled. Aug 13 01:44:42.892215 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 01:44:42.892223 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. 
Update your scripts to load br_netfilter if you need this. Aug 13 01:44:42.892233 systemd-journald[206]: Journal started Aug 13 01:44:42.892250 systemd-journald[206]: Runtime Journal (/run/log/journal/83d87bea46c247799f6a5e54b5b73fcb) is 8M, max 78.5M, 70.5M free. Aug 13 01:44:42.854109 systemd-modules-load[207]: Inserted module 'overlay' Aug 13 01:44:42.942409 kernel: Bridge firewalling registered Aug 13 01:44:42.942433 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 01:44:42.894973 systemd-modules-load[207]: Inserted module 'br_netfilter' Aug 13 01:44:42.943140 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 01:44:42.944169 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 01:44:42.947069 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Aug 13 01:44:42.950087 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 01:44:42.953041 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 01:44:42.958451 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 01:44:42.970283 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 01:44:42.971767 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 01:44:42.974174 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 01:44:42.976104 systemd-tmpfiles[226]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Aug 13 01:44:42.977047 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Aug 13 01:44:42.984062 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 01:44:42.988055 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 01:44:42.996637 dracut-cmdline[242]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 rootflags=rw mount.usrflags=ro BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyS0,115200n8 console=tty0 flatcar.first_boot=detected flatcar.oem.id=akamai verity.usrhash=215bdedb8de38f6b96ec4f9db80853e25015f60454b867e319fdcb9244320a21 Aug 13 01:44:43.028181 systemd-resolved[244]: Positive Trust Anchors: Aug 13 01:44:43.028804 systemd-resolved[244]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 01:44:43.028831 systemd-resolved[244]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 01:44:43.033929 systemd-resolved[244]: Defaulting to hostname 'linux'. Aug 13 01:44:43.034893 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 01:44:43.035719 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Aug 13 01:44:43.077976 kernel: SCSI subsystem initialized Aug 13 01:44:43.085966 kernel: Loading iSCSI transport class v2.0-870. Aug 13 01:44:43.095980 kernel: iscsi: registered transport (tcp) Aug 13 01:44:43.115432 kernel: iscsi: registered transport (qla4xxx) Aug 13 01:44:43.115471 kernel: QLogic iSCSI HBA Driver Aug 13 01:44:43.133104 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 01:44:43.155649 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 01:44:43.157923 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 01:44:43.203296 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Aug 13 01:44:43.205784 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Aug 13 01:44:43.256965 kernel: raid6: avx2x4 gen() 33834 MB/s Aug 13 01:44:43.274964 kernel: raid6: avx2x2 gen() 31957 MB/s Aug 13 01:44:43.293290 kernel: raid6: avx2x1 gen() 23228 MB/s Aug 13 01:44:43.293310 kernel: raid6: using algorithm avx2x4 gen() 33834 MB/s Aug 13 01:44:43.312279 kernel: raid6: .... xor() 5011 MB/s, rmw enabled Aug 13 01:44:43.312323 kernel: raid6: using avx2x2 recovery algorithm Aug 13 01:44:43.330975 kernel: xor: automatically using best checksumming function avx Aug 13 01:44:43.465985 kernel: Btrfs loaded, zoned=no, fsverity=no Aug 13 01:44:43.472541 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Aug 13 01:44:43.474654 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 01:44:43.502793 systemd-udevd[453]: Using default interface naming scheme 'v255'. Aug 13 01:44:43.508061 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 01:44:43.511759 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Aug 13 01:44:43.530357 dracut-pre-trigger[460]: rd.md=0: removing MD RAID activation Aug 13 01:44:43.556293 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 01:44:43.559062 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 01:44:43.626835 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 01:44:43.629468 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Aug 13 01:44:43.687960 kernel: cryptd: max_cpu_qlen set to 1000 Aug 13 01:44:43.700960 kernel: virtio_scsi virtio0: 2/0/0 default/read/poll queues Aug 13 01:44:43.710976 kernel: libata version 3.00 loaded. Aug 13 01:44:43.720040 kernel: scsi host0: Virtio SCSI HBA Aug 13 01:44:43.837826 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Aug 13 01:44:43.909835 kernel: scsi 0:0:0:0: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Aug 13 01:44:43.910084 kernel: AES CTR mode by8 optimization enabled Aug 13 01:44:43.910096 kernel: ahci 0000:00:1f.2: version 3.0 Aug 13 01:44:43.910243 kernel: ACPI: \_SB_.GSIA: Enabled at IRQ 16 Aug 13 01:44:43.910255 kernel: ahci 0000:00:1f.2: AHCI vers 0001.0000, 32 command slots, 1.5 Gbps, SATA mode Aug 13 01:44:43.910403 kernel: ahci 0000:00:1f.2: 6/6 ports implemented (port mask 0x3f) Aug 13 01:44:43.910582 kernel: ahci 0000:00:1f.2: flags: 64bit ncq only Aug 13 01:44:43.910745 kernel: scsi host1: ahci Aug 13 01:44:43.910918 kernel: input: ImExPS/2 Generic Explorer Mouse as /devices/platform/i8042/serio1/input/input2 Aug 13 01:44:43.910935 kernel: scsi host2: ahci Aug 13 01:44:43.911110 kernel: scsi host3: ahci Aug 13 01:44:43.911247 kernel: scsi host4: ahci Aug 13 01:44:43.911377 kernel: scsi host5: ahci Aug 13 01:44:43.911607 kernel: scsi host6: ahci Aug 13 01:44:43.911741 kernel: ata1: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3100 irq 46 lpm-pol 0 Aug 13 01:44:43.911752 kernel: ata2: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3180 irq 46 lpm-pol 0 Aug 13 01:44:43.911762 kernel: ata3: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3200 irq 46 lpm-pol 0 Aug 13 01:44:43.911771 kernel: ata4: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3280 irq 46 lpm-pol 0 Aug 13 01:44:43.911780 kernel: ata5: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3300 irq 46 lpm-pol 0 Aug 13 01:44:43.911789 kernel: ata6: SATA max UDMA/133 abar m4096@0xfebd3000 port 0xfebd3380 irq 46 lpm-pol 0 Aug 13 01:44:43.841267 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 01:44:43.910417 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 01:44:43.913161 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 01:44:43.914096 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 13 01:44:43.987390 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 01:44:44.190743 kernel: ata3: SATA link down (SStatus 0 SControl 300) Aug 13 01:44:44.190805 kernel: ata1: SATA link down (SStatus 0 SControl 300) Aug 13 01:44:44.190827 kernel: ata4: SATA link down (SStatus 0 SControl 300) Aug 13 01:44:44.190836 kernel: ata2: SATA link down (SStatus 0 SControl 300) Aug 13 01:44:44.197966 kernel: ata6: SATA link down (SStatus 0 SControl 300) Aug 13 01:44:44.197991 kernel: ata5: SATA link down (SStatus 0 SControl 300) Aug 13 01:44:44.217966 kernel: sd 0:0:0:0: Power-on or device reset occurred Aug 13 01:44:44.218154 kernel: sd 0:0:0:0: [sda] 9297920 512-byte logical blocks: (4.76 GB/4.43 GiB) Aug 13 01:44:44.242747 kernel: sd 0:0:0:0: [sda] Write Protect is off Aug 13 01:44:44.242905 kernel: sd 0:0:0:0: [sda] Mode Sense: 63 00 00 08 Aug 13 01:44:44.243067 kernel: sd 0:0:0:0: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Aug 13 01:44:44.255428 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Aug 13 01:44:44.255451 kernel: GPT:9289727 != 9297919 Aug 13 01:44:44.255461 kernel: GPT:Alternate GPT header not at the end of the disk. Aug 13 01:44:44.256822 kernel: GPT:9289727 != 9297919 Aug 13 01:44:44.258073 kernel: GPT: Use GNU Parted to correct GPT errors. 
Aug 13 01:44:44.260251 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 01:44:44.261729 kernel: sd 0:0:0:0: [sda] Attached SCSI disk Aug 13 01:44:44.307987 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Aug 13 01:44:44.317358 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Aug 13 01:44:44.332769 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Aug 13 01:44:44.334273 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Aug 13 01:44:44.341725 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Aug 13 01:44:44.342339 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - QEMU_HARDDISK USR-A. Aug 13 01:44:44.344548 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 01:44:44.345157 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 01:44:44.346420 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 01:44:44.348322 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Aug 13 01:44:44.351089 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Aug 13 01:44:44.363079 disk-uuid[637]: Primary Header is updated. Aug 13 01:44:44.363079 disk-uuid[637]: Secondary Entries is updated. Aug 13 01:44:44.363079 disk-uuid[637]: Secondary Header is updated. Aug 13 01:44:44.368445 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Aug 13 01:44:44.372986 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 01:44:44.383972 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 01:44:45.387972 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Aug 13 01:44:45.388342 disk-uuid[640]: The operation has completed successfully. Aug 13 01:44:45.440014 systemd[1]: disk-uuid.service: Deactivated successfully. Aug 13 01:44:45.440132 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Aug 13 01:44:45.461376 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr... Aug 13 01:44:45.477032 sh[659]: Success Aug 13 01:44:45.494684 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. Aug 13 01:44:45.494718 kernel: device-mapper: uevent: version 1.0.3 Aug 13 01:44:45.495317 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Aug 13 01:44:45.505965 kernel: device-mapper: verity: sha256 using shash "sha256-ni" Aug 13 01:44:45.546386 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr. Aug 13 01:44:45.550011 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr... Aug 13 01:44:45.561897 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr. 
Aug 13 01:44:45.572063 kernel: BTRFS info: 'norecovery' is for compatibility only, recommended to use 'rescue=nologreplay' Aug 13 01:44:45.572088 kernel: BTRFS: device fsid 0c0338fb-9434-41c1-99a2-737cbe2351c4 devid 1 transid 44 /dev/mapper/usr (254:0) scanned by mount (671) Aug 13 01:44:45.574966 kernel: BTRFS info (device dm-0): first mount of filesystem 0c0338fb-9434-41c1-99a2-737cbe2351c4 Aug 13 01:44:45.577404 kernel: BTRFS info (device dm-0): using crc32c (crc32c-intel) checksum algorithm Aug 13 01:44:45.579110 kernel: BTRFS info (device dm-0): using free-space-tree Aug 13 01:44:45.586864 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr. Aug 13 01:44:45.587775 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Aug 13 01:44:45.588630 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Aug 13 01:44:45.590037 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Aug 13 01:44:45.592473 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Aug 13 01:44:45.621967 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (706) Aug 13 01:44:45.625550 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 01:44:45.625598 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 01:44:45.627702 kernel: BTRFS info (device sda6): using free-space-tree Aug 13 01:44:45.635977 kernel: BTRFS info (device sda6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 01:44:45.637341 systemd[1]: Finished ignition-setup.service - Ignition (setup). Aug 13 01:44:45.640054 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Aug 13 01:44:45.700412 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Aug 13 01:44:45.704084 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 01:44:45.744115 ignition[769]: Ignition 2.21.0 Aug 13 01:44:45.744822 ignition[769]: Stage: fetch-offline Aug 13 01:44:45.744855 ignition[769]: no configs at "/usr/lib/ignition/base.d" Aug 13 01:44:45.744865 ignition[769]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:44:45.744936 ignition[769]: parsed url from cmdline: "" Aug 13 01:44:45.744961 ignition[769]: no config URL provided Aug 13 01:44:45.744967 ignition[769]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 01:44:45.744975 ignition[769]: no config at "/usr/lib/ignition/user.ign" Aug 13 01:44:45.744980 ignition[769]: failed to fetch config: resource requires networking Aug 13 01:44:45.745129 ignition[769]: Ignition finished successfully Aug 13 01:44:45.749176 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 01:44:45.749239 systemd-networkd[841]: lo: Link UP Aug 13 01:44:45.749243 systemd-networkd[841]: lo: Gained carrier Aug 13 01:44:45.750681 systemd-networkd[841]: Enumeration completed Aug 13 01:44:45.751150 systemd-networkd[841]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 01:44:45.751155 systemd-networkd[841]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 01:44:45.751198 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Aug 13 01:44:45.752612 systemd[1]: Reached target network.target - Network. Aug 13 01:44:45.753158 systemd-networkd[841]: eth0: Link UP Aug 13 01:44:45.753598 systemd-networkd[841]: eth0: Gained carrier Aug 13 01:44:45.753607 systemd-networkd[841]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 01:44:45.756704 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Aug 13 01:44:45.774739 ignition[849]: Ignition 2.21.0 Aug 13 01:44:45.775370 ignition[849]: Stage: fetch Aug 13 01:44:45.775471 ignition[849]: no configs at "/usr/lib/ignition/base.d" Aug 13 01:44:45.775481 ignition[849]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:44:45.775540 ignition[849]: parsed url from cmdline: "" Aug 13 01:44:45.775543 ignition[849]: no config URL provided Aug 13 01:44:45.775547 ignition[849]: reading system config file "/usr/lib/ignition/user.ign" Aug 13 01:44:45.775555 ignition[849]: no config at "/usr/lib/ignition/user.ign" Aug 13 01:44:45.775583 ignition[849]: PUT http://169.254.169.254/v1/token: attempt #1 Aug 13 01:44:45.776265 ignition[849]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Aug 13 01:44:45.977193 ignition[849]: PUT http://169.254.169.254/v1/token: attempt #2 Aug 13 01:44:45.977336 ignition[849]: PUT error: Put "http://169.254.169.254/v1/token": dial tcp 169.254.169.254:80: connect: network is unreachable Aug 13 01:44:46.252012 systemd-networkd[841]: eth0: DHCPv4 address 172.232.7.133/24, gateway 172.232.7.1 acquired from 23.205.167.118 Aug 13 01:44:46.377566 ignition[849]: PUT http://169.254.169.254/v1/token: attempt #3 Aug 13 01:44:46.481501 ignition[849]: PUT result: OK Aug 13 01:44:46.481563 ignition[849]: GET http://169.254.169.254/v1/user-data: attempt #1 Aug 13 01:44:46.608395 ignition[849]: GET result: OK Aug 13 01:44:46.608506 ignition[849]: parsing config with SHA512: 95ee0d351e3b5ffa0b60b9ff3ede7e51cfe2e16010c09b9e70c7cbcc75d85f371a033ef2ead028e7e9cd54307eaab2e0e8991cedb71f9c015772b480223e42c2 Aug 13 01:44:46.611286 unknown[849]: fetched base config from "system" Aug 13 01:44:46.612016 unknown[849]: fetched base config from "system" Aug 13 01:44:46.612239 ignition[849]: fetch: fetch complete Aug 13 01:44:46.612026 unknown[849]: fetched user config from "akamai" Aug 13 01:44:46.612244 ignition[849]: fetch: fetch passed Aug 13 01:44:46.612283 ignition[849]: Ignition finished successfully Aug 13 01:44:46.614933 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Aug 13 01:44:46.637878 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Aug 13 01:44:46.661454 ignition[856]: Ignition 2.21.0 Aug 13 01:44:46.661468 ignition[856]: Stage: kargs Aug 13 01:44:46.661598 ignition[856]: no configs at "/usr/lib/ignition/base.d" Aug 13 01:44:46.661609 ignition[856]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:44:46.662460 ignition[856]: kargs: kargs passed Aug 13 01:44:46.663979 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Aug 13 01:44:46.662501 ignition[856]: Ignition finished successfully Aug 13 01:44:46.666114 systemd[1]: Starting ignition-disks.service - Ignition (disks)... 
Aug 13 01:44:46.691997 ignition[862]: Ignition 2.21.0 Aug 13 01:44:46.692007 ignition[862]: Stage: disks Aug 13 01:44:46.692132 ignition[862]: no configs at "/usr/lib/ignition/base.d" Aug 13 01:44:46.692142 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:44:46.694485 systemd[1]: Finished ignition-disks.service - Ignition (disks). Aug 13 01:44:46.693097 ignition[862]: disks: disks passed Aug 13 01:44:46.695541 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Aug 13 01:44:46.693133 ignition[862]: Ignition finished successfully Aug 13 01:44:46.696510 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Aug 13 01:44:46.697553 systemd[1]: Reached target local-fs.target - Local File Systems. Aug 13 01:44:46.698479 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 01:44:46.699636 systemd[1]: Reached target basic.target - Basic System. Aug 13 01:44:46.701375 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Aug 13 01:44:46.729434 systemd-fsck[870]: ROOT: clean, 15/553520 files, 52789/553472 blocks Aug 13 01:44:46.732663 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Aug 13 01:44:46.734578 systemd[1]: Mounting sysroot.mount - /sysroot... Aug 13 01:44:46.834981 kernel: EXT4-fs (sda9): mounted filesystem 069caac6-7833-4acd-8940-01a7ff7d1281 r/w with ordered data mode. Quota mode: none. Aug 13 01:44:46.835892 systemd[1]: Mounted sysroot.mount - /sysroot. Aug 13 01:44:46.836959 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Aug 13 01:44:46.839280 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 01:44:46.842004 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Aug 13 01:44:46.844758 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Aug 13 01:44:46.845742 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Aug 13 01:44:46.846769 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 01:44:46.851827 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Aug 13 01:44:46.853984 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Aug 13 01:44:46.863406 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (878) Aug 13 01:44:46.863433 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 01:44:46.867016 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 01:44:46.867042 kernel: BTRFS info (device sda6): using free-space-tree Aug 13 01:44:46.872599 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 01:44:46.904223 initrd-setup-root[902]: cut: /sysroot/etc/passwd: No such file or directory Aug 13 01:44:46.908432 initrd-setup-root[909]: cut: /sysroot/etc/group: No such file or directory Aug 13 01:44:46.912715 initrd-setup-root[916]: cut: /sysroot/etc/shadow: No such file or directory Aug 13 01:44:46.915358 systemd-networkd[841]: eth0: Gained IPv6LL Aug 13 01:44:46.917698 initrd-setup-root[923]: cut: /sysroot/etc/gshadow: No such file or directory Aug 13 01:44:46.996825 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. 
Aug 13 01:44:46.998542 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Aug 13 01:44:47.000253 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Aug 13 01:44:47.011683 systemd[1]: sysroot-oem.mount: Deactivated successfully. Aug 13 01:44:47.014493 kernel: BTRFS info (device sda6): last unmount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 01:44:47.028703 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Aug 13 01:44:47.036575 ignition[992]: INFO : Ignition 2.21.0 Aug 13 01:44:47.037967 ignition[992]: INFO : Stage: mount Aug 13 01:44:47.037967 ignition[992]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 01:44:47.037967 ignition[992]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:44:47.041027 ignition[992]: INFO : mount: mount passed Aug 13 01:44:47.041027 ignition[992]: INFO : Ignition finished successfully Aug 13 01:44:47.040328 systemd[1]: Finished ignition-mount.service - Ignition (mount). Aug 13 01:44:47.042706 systemd[1]: Starting ignition-files.service - Ignition (files)... Aug 13 01:44:47.838121 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Aug 13 01:44:47.860961 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1003) Aug 13 01:44:47.860991 kernel: BTRFS info (device sda6): first mount of filesystem 900bf3f4-cc50-4925-b275-d85854bb916f Aug 13 01:44:47.865506 kernel: BTRFS info (device sda6): using crc32c (crc32c-intel) checksum algorithm Aug 13 01:44:47.865529 kernel: BTRFS info (device sda6): using free-space-tree Aug 13 01:44:47.871618 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Aug 13 01:44:47.899668 ignition[1020]: INFO : Ignition 2.21.0 Aug 13 01:44:47.900484 ignition[1020]: INFO : Stage: files Aug 13 01:44:47.900484 ignition[1020]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 01:44:47.900484 ignition[1020]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:44:47.902474 ignition[1020]: DEBUG : files: compiled without relabeling support, skipping Aug 13 01:44:47.903589 ignition[1020]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Aug 13 01:44:47.904442 ignition[1020]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Aug 13 01:44:47.906676 ignition[1020]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Aug 13 01:44:47.907755 ignition[1020]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Aug 13 01:44:47.907755 ignition[1020]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Aug 13 01:44:47.907221 unknown[1020]: wrote ssh authorized keys file for user: core Aug 13 01:44:47.909971 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Aug 13 01:44:47.909971 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-amd64.tar.gz: attempt #1 Aug 13 01:44:48.307968 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Aug 13 01:44:50.775115 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-amd64.tar.gz" Aug 13 01:44:50.777131 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Aug 13 01:44:50.777131 ignition[1020]: INFO : 
files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Aug 13 01:44:50.777131 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Aug 13 01:44:50.777131 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Aug 13 01:44:50.777131 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 01:44:50.777131 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Aug 13 01:44:50.777131 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 01:44:50.777131 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Aug 13 01:44:50.784694 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 01:44:50.784694 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Aug 13 01:44:50.784694 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 01:44:50.784694 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 01:44:50.784694 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 01:44:50.784694 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-x86-64.raw: attempt #1 Aug 13 01:44:51.308869 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Aug 13 01:44:51.717494 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw" Aug 13 01:44:51.718901 ignition[1020]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Aug 13 01:44:51.718901 ignition[1020]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 01:44:51.720877 ignition[1020]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Aug 13 01:44:51.720877 ignition[1020]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Aug 13 01:44:51.720877 ignition[1020]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Aug 13 01:44:51.720877 ignition[1020]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Aug 13 01:44:51.720877 ignition[1020]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Aug 13 
01:44:51.720877 ignition[1020]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Aug 13 01:44:51.720877 ignition[1020]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Aug 13 01:44:51.720877 ignition[1020]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Aug 13 01:44:51.720877 ignition[1020]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Aug 13 01:44:51.732484 ignition[1020]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Aug 13 01:44:51.732484 ignition[1020]: INFO : files: files passed Aug 13 01:44:51.732484 ignition[1020]: INFO : Ignition finished successfully Aug 13 01:44:51.724121 systemd[1]: Finished ignition-files.service - Ignition (files). Aug 13 01:44:51.730054 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Aug 13 01:44:51.732535 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Aug 13 01:44:51.743468 systemd[1]: ignition-quench.service: Deactivated successfully. Aug 13 01:44:51.743590 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Aug 13 01:44:51.751622 initrd-setup-root-after-ignition[1050]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 01:44:51.751622 initrd-setup-root-after-ignition[1050]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Aug 13 01:44:51.753885 initrd-setup-root-after-ignition[1054]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Aug 13 01:44:51.754500 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 01:44:51.756204 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Aug 13 01:44:51.757554 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Aug 13 01:44:51.809099 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Aug 13 01:44:51.809231 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Aug 13 01:44:51.810522 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Aug 13 01:44:51.811549 systemd[1]: Reached target initrd.target - Initrd Default Target. Aug 13 01:44:51.812784 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Aug 13 01:44:51.817041 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Aug 13 01:44:51.853165 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 01:44:51.855528 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Aug 13 01:44:51.876639 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Aug 13 01:44:51.877957 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 01:44:51.879185 systemd[1]: Stopped target timers.target - Timer Units. Aug 13 01:44:51.880186 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Aug 13 01:44:51.880333 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Aug 13 01:44:51.881540 systemd[1]: Stopped target initrd.target - Initrd Default Target. Aug 13 01:44:51.882320 systemd[1]: Stopped target basic.target - Basic System. 
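The files stage above reports writing the Helm tarball, several manifests under /home/core, /etc/flatcar/update.conf, the Kubernetes sysext image, and the /etc/extensions/kubernetes.raw symlink, then enabling prepare-helm.service. A small, purely illustrative post-boot check that those artifacts landed where the log says; paths are copied from the log with the /sysroot prefix dropped, since after switch-root they live under /:

```python
#!/usr/bin/env python3
"""Illustrative check of the artifacts the Ignition "files" stage reports
writing. Paths come from the log; this is not part of Ignition itself."""
import os

expected_files = [
    "/opt/helm-v3.17.0-linux-amd64.tar.gz",
    "/home/core/install.sh",
    "/home/core/nginx.yaml",
    "/home/core/nfs-pod.yaml",
    "/home/core/nfs-pvc.yaml",
    "/etc/flatcar/update.conf",
    "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw",
]
expected_link = ("/etc/extensions/kubernetes.raw",
                 "/opt/extensions/kubernetes/kubernetes-v1.32.4-x86-64.raw")

for path in expected_files:
    print(f"{path}: {'ok' if os.path.isfile(path) else 'MISSING'}")

link, target = expected_link
actual = os.readlink(link) if os.path.islink(link) else None
print(f"{link} -> {actual} ({'ok' if actual == target else 'unexpected'})")
```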
Aug 13 01:44:51.883700 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Aug 13 01:44:51.884967 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Aug 13 01:44:51.886026 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Aug 13 01:44:51.887235 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Aug 13 01:44:51.888445 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Aug 13 01:44:51.889888 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Aug 13 01:44:51.891150 systemd[1]: Stopped target sysinit.target - System Initialization. Aug 13 01:44:51.892560 systemd[1]: Stopped target local-fs.target - Local File Systems. Aug 13 01:44:51.893756 systemd[1]: Stopped target swap.target - Swaps. Aug 13 01:44:51.895119 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Aug 13 01:44:51.895252 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Aug 13 01:44:51.896499 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Aug 13 01:44:51.897291 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 01:44:51.898360 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Aug 13 01:44:51.898674 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 01:44:51.899579 systemd[1]: dracut-initqueue.service: Deactivated successfully. Aug 13 01:44:51.899672 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Aug 13 01:44:51.901483 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Aug 13 01:44:51.901628 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Aug 13 01:44:51.902850 systemd[1]: ignition-files.service: Deactivated successfully. Aug 13 01:44:51.902998 systemd[1]: Stopped ignition-files.service - Ignition (files). Aug 13 01:44:51.906020 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Aug 13 01:44:51.909071 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Aug 13 01:44:51.910526 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Aug 13 01:44:51.911020 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 01:44:51.911858 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Aug 13 01:44:51.912016 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Aug 13 01:44:51.918788 systemd[1]: initrd-cleanup.service: Deactivated successfully. Aug 13 01:44:51.918889 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Aug 13 01:44:51.934728 ignition[1074]: INFO : Ignition 2.21.0 Aug 13 01:44:51.934728 ignition[1074]: INFO : Stage: umount Aug 13 01:44:51.937520 ignition[1074]: INFO : no configs at "/usr/lib/ignition/base.d" Aug 13 01:44:51.937520 ignition[1074]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/akamai" Aug 13 01:44:51.940978 ignition[1074]: INFO : umount: umount passed Aug 13 01:44:51.940978 ignition[1074]: INFO : Ignition finished successfully Aug 13 01:44:51.942172 systemd[1]: ignition-mount.service: Deactivated successfully. Aug 13 01:44:51.942313 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Aug 13 01:44:51.966274 systemd[1]: sysroot-boot.mount: Deactivated successfully. Aug 13 01:44:51.966830 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Aug 13 01:44:51.966937 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Aug 13 01:44:51.968786 systemd[1]: ignition-disks.service: Deactivated successfully. Aug 13 01:44:51.968861 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Aug 13 01:44:51.970163 systemd[1]: ignition-kargs.service: Deactivated successfully. Aug 13 01:44:51.970211 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Aug 13 01:44:51.971430 systemd[1]: ignition-fetch.service: Deactivated successfully. Aug 13 01:44:51.971475 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Aug 13 01:44:51.972663 systemd[1]: Stopped target network.target - Network. Aug 13 01:44:51.974012 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Aug 13 01:44:51.974063 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Aug 13 01:44:51.975272 systemd[1]: Stopped target paths.target - Path Units. Aug 13 01:44:51.976594 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Aug 13 01:44:51.979992 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 01:44:51.980587 systemd[1]: Stopped target slices.target - Slice Units. Aug 13 01:44:51.981992 systemd[1]: Stopped target sockets.target - Socket Units. Aug 13 01:44:51.983055 systemd[1]: iscsid.socket: Deactivated successfully. Aug 13 01:44:51.983098 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Aug 13 01:44:51.984258 systemd[1]: iscsiuio.socket: Deactivated successfully. Aug 13 01:44:51.984300 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Aug 13 01:44:51.985279 systemd[1]: ignition-setup.service: Deactivated successfully. Aug 13 01:44:51.985493 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Aug 13 01:44:51.986624 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Aug 13 01:44:51.986668 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Aug 13 01:44:51.987626 systemd[1]: initrd-setup-root.service: Deactivated successfully. Aug 13 01:44:51.987673 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Aug 13 01:44:51.988765 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Aug 13 01:44:51.990173 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Aug 13 01:44:51.995593 systemd[1]: systemd-resolved.service: Deactivated successfully. Aug 13 01:44:51.995712 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Aug 13 01:44:52.001028 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully. Aug 13 01:44:52.001319 systemd[1]: systemd-networkd.service: Deactivated successfully. Aug 13 01:44:52.001464 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Aug 13 01:44:52.004074 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully. Aug 13 01:44:52.005063 systemd[1]: Stopped target network-pre.target - Preparation for Network. Aug 13 01:44:52.005828 systemd[1]: systemd-networkd.socket: Deactivated successfully. Aug 13 01:44:52.005881 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Aug 13 01:44:52.008010 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Aug 13 01:44:52.009257 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Aug 13 01:44:52.009312 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. 
Aug 13 01:44:52.010882 systemd[1]: systemd-sysctl.service: Deactivated successfully. Aug 13 01:44:52.010929 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Aug 13 01:44:52.013364 systemd[1]: systemd-modules-load.service: Deactivated successfully. Aug 13 01:44:52.013418 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Aug 13 01:44:52.014919 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Aug 13 01:44:52.014981 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 01:44:52.016054 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 01:44:52.021126 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully. Aug 13 01:44:52.021193 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully. Aug 13 01:44:52.023203 systemd[1]: systemd-udevd.service: Deactivated successfully. Aug 13 01:44:52.023360 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 01:44:52.024701 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Aug 13 01:44:52.024767 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Aug 13 01:44:52.027456 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Aug 13 01:44:52.027497 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 01:44:52.028035 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Aug 13 01:44:52.028081 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Aug 13 01:44:52.029787 systemd[1]: dracut-cmdline.service: Deactivated successfully. Aug 13 01:44:52.029832 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Aug 13 01:44:52.031024 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Aug 13 01:44:52.031075 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Aug 13 01:44:52.035097 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Aug 13 01:44:52.035714 systemd[1]: systemd-network-generator.service: Deactivated successfully. Aug 13 01:44:52.035766 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 01:44:52.038076 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Aug 13 01:44:52.038128 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 01:44:52.039705 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Aug 13 01:44:52.039749 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Aug 13 01:44:52.041024 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Aug 13 01:44:52.041070 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 01:44:52.042179 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Aug 13 01:44:52.042224 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 01:44:52.047038 systemd[1]: run-credentials-systemd\x2dnetwork\x2dgenerator.service.mount: Deactivated successfully. Aug 13 01:44:52.047096 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev\x2dearly.service.mount: Deactivated successfully. 
Aug 13 01:44:52.047138 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully. Aug 13 01:44:52.047181 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully. Aug 13 01:44:52.047570 systemd[1]: network-cleanup.service: Deactivated successfully. Aug 13 01:44:52.047688 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Aug 13 01:44:52.052909 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Aug 13 01:44:52.053056 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Aug 13 01:44:52.054414 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Aug 13 01:44:52.055921 systemd[1]: Starting initrd-switch-root.service - Switch Root... Aug 13 01:44:52.081804 systemd[1]: Switching root. Aug 13 01:44:52.110445 systemd-journald[206]: Journal stopped Aug 13 01:44:53.180499 systemd-journald[206]: Received SIGTERM from PID 1 (systemd). Aug 13 01:44:53.180527 kernel: SELinux: policy capability network_peer_controls=1 Aug 13 01:44:53.180539 kernel: SELinux: policy capability open_perms=1 Aug 13 01:44:53.180551 kernel: SELinux: policy capability extended_socket_class=1 Aug 13 01:44:53.180560 kernel: SELinux: policy capability always_check_network=0 Aug 13 01:44:53.180569 kernel: SELinux: policy capability cgroup_seclabel=1 Aug 13 01:44:53.180578 kernel: SELinux: policy capability nnp_nosuid_transition=1 Aug 13 01:44:53.180588 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Aug 13 01:44:53.180596 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Aug 13 01:44:53.180605 kernel: SELinux: policy capability userspace_initial_context=0 Aug 13 01:44:53.180616 kernel: audit: type=1403 audit(1755049492.274:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Aug 13 01:44:53.180625 systemd[1]: Successfully loaded SELinux policy in 72.780ms. Aug 13 01:44:53.180636 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 11.211ms. Aug 13 01:44:53.180646 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Aug 13 01:44:53.180656 systemd[1]: Detected virtualization kvm. Aug 13 01:44:53.180668 systemd[1]: Detected architecture x86-64. Aug 13 01:44:53.180677 systemd[1]: Detected first boot. Aug 13 01:44:53.180687 systemd[1]: Initializing machine ID from random generator. Aug 13 01:44:53.180696 zram_generator::config[1125]: No configuration found. Aug 13 01:44:53.180706 kernel: Guest personality initialized and is inactive Aug 13 01:44:53.180715 kernel: VMCI host device registered (name=vmci, major=10, minor=125) Aug 13 01:44:53.180725 kernel: Initialized host personality Aug 13 01:44:53.180736 kernel: NET: Registered PF_VSOCK protocol family Aug 13 01:44:53.180745 systemd[1]: Populated /etc with preset unit settings. Aug 13 01:44:53.180755 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully. Aug 13 01:44:53.180765 systemd[1]: initrd-switch-root.service: Deactivated successfully. Aug 13 01:44:53.180774 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Aug 13 01:44:53.180784 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. 
Aug 13 01:44:53.180794 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Aug 13 01:44:53.180805 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Aug 13 01:44:53.180815 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Aug 13 01:44:53.180825 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Aug 13 01:44:53.180834 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Aug 13 01:44:53.180844 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Aug 13 01:44:53.180880 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Aug 13 01:44:53.180890 systemd[1]: Created slice user.slice - User and Session Slice. Aug 13 01:44:53.180903 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Aug 13 01:44:53.180912 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Aug 13 01:44:53.180922 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Aug 13 01:44:53.180932 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Aug 13 01:44:53.180981 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Aug 13 01:44:53.181007 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Aug 13 01:44:53.181019 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Aug 13 01:44:53.181029 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Aug 13 01:44:53.181041 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Aug 13 01:44:53.181051 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Aug 13 01:44:53.181061 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Aug 13 01:44:53.181070 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Aug 13 01:44:53.181083 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Aug 13 01:44:53.181093 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Aug 13 01:44:53.181103 systemd[1]: Reached target remote-fs.target - Remote File Systems. Aug 13 01:44:53.181113 systemd[1]: Reached target slices.target - Slice Units. Aug 13 01:44:53.181125 systemd[1]: Reached target swap.target - Swaps. Aug 13 01:44:53.181135 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Aug 13 01:44:53.181145 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Aug 13 01:44:53.181155 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Aug 13 01:44:53.181166 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Aug 13 01:44:53.181178 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Aug 13 01:44:53.181188 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Aug 13 01:44:53.181198 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Aug 13 01:44:53.181208 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Aug 13 01:44:53.181217 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... 
Aug 13 01:44:53.181227 systemd[1]: Mounting media.mount - External Media Directory... Aug 13 01:44:53.181237 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:44:53.181247 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Aug 13 01:44:53.181259 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Aug 13 01:44:53.181269 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Aug 13 01:44:53.181279 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Aug 13 01:44:53.181289 systemd[1]: Reached target machines.target - Containers. Aug 13 01:44:53.181299 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Aug 13 01:44:53.181309 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 01:44:53.181319 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Aug 13 01:44:53.181329 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Aug 13 01:44:53.181341 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 01:44:53.181351 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 01:44:53.181361 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 01:44:53.181371 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Aug 13 01:44:53.181380 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 01:44:53.181390 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Aug 13 01:44:53.181401 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Aug 13 01:44:53.181411 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Aug 13 01:44:53.181421 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Aug 13 01:44:53.181433 systemd[1]: Stopped systemd-fsck-usr.service. Aug 13 01:44:53.181443 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 01:44:53.181453 systemd[1]: Starting systemd-journald.service - Journal Service... Aug 13 01:44:53.181462 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Aug 13 01:44:53.181620 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Aug 13 01:44:53.181630 kernel: loop: module loaded Aug 13 01:44:53.181639 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Aug 13 01:44:53.181649 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Aug 13 01:44:53.181662 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Aug 13 01:44:53.181672 systemd[1]: verity-setup.service: Deactivated successfully. Aug 13 01:44:53.181681 systemd[1]: Stopped verity-setup.service. 
Aug 13 01:44:53.181691 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:44:53.181701 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Aug 13 01:44:53.181711 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Aug 13 01:44:53.181721 systemd[1]: Mounted media.mount - External Media Directory. Aug 13 01:44:53.181730 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Aug 13 01:44:53.181742 kernel: fuse: init (API version 7.41) Aug 13 01:44:53.181751 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Aug 13 01:44:53.181784 systemd-journald[1212]: Collecting audit messages is disabled. Aug 13 01:44:53.181804 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Aug 13 01:44:53.181817 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Aug 13 01:44:53.181827 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Aug 13 01:44:53.181837 systemd[1]: modprobe@configfs.service: Deactivated successfully. Aug 13 01:44:53.181847 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Aug 13 01:44:53.181857 systemd-journald[1212]: Journal started Aug 13 01:44:53.181876 systemd-journald[1212]: Runtime Journal (/run/log/journal/c592b2dfed5e49bea03771998f39c43a) is 8M, max 78.5M, 70.5M free. Aug 13 01:44:53.185682 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 01:44:53.185707 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 01:44:52.843277 systemd[1]: Queued start job for default target multi-user.target. Aug 13 01:44:52.869888 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Aug 13 01:44:52.870325 systemd[1]: systemd-journald.service: Deactivated successfully. Aug 13 01:44:53.191178 systemd[1]: Started systemd-journald.service - Journal Service. Aug 13 01:44:53.189070 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 01:44:53.189265 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 01:44:53.193648 systemd[1]: modprobe@fuse.service: Deactivated successfully. Aug 13 01:44:53.193866 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Aug 13 01:44:53.194664 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 01:44:53.195034 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 01:44:53.196055 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Aug 13 01:44:53.197021 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Aug 13 01:44:53.197875 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Aug 13 01:44:53.215734 systemd[1]: Reached target network-pre.target - Preparation for Network. Aug 13 01:44:53.222254 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Aug 13 01:44:53.231964 kernel: ACPI: bus type drm_connector registered Aug 13 01:44:53.233562 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Aug 13 01:44:53.234304 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Aug 13 01:44:53.234396 systemd[1]: Reached target local-fs.target - Local File Systems. 
Aug 13 01:44:53.236155 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Aug 13 01:44:53.240056 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Aug 13 01:44:53.240818 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 01:44:53.243130 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Aug 13 01:44:53.245432 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Aug 13 01:44:53.246092 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 01:44:53.252668 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Aug 13 01:44:53.253377 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 01:44:53.254852 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Aug 13 01:44:53.259100 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Aug 13 01:44:53.261760 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Aug 13 01:44:53.266234 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 01:44:53.266513 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 01:44:53.268139 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Aug 13 01:44:53.269360 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Aug 13 01:44:53.275217 systemd-journald[1212]: Time spent on flushing to /var/log/journal/c592b2dfed5e49bea03771998f39c43a is 28.110ms for 1000 entries. Aug 13 01:44:53.275217 systemd-journald[1212]: System Journal (/var/log/journal/c592b2dfed5e49bea03771998f39c43a) is 8M, max 195.6M, 187.6M free. Aug 13 01:44:53.336246 systemd-journald[1212]: Received client request to flush runtime journal. Aug 13 01:44:53.270713 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Aug 13 01:44:53.313019 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Aug 13 01:44:53.342996 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Aug 13 01:44:53.350843 kernel: loop0: detected capacity change from 0 to 113872 Aug 13 01:44:53.348169 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Aug 13 01:44:53.349040 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Aug 13 01:44:53.351888 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Aug 13 01:44:53.375050 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Aug 13 01:44:53.382968 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Aug 13 01:44:53.397890 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Aug 13 01:44:53.401885 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. Aug 13 01:44:53.402517 systemd-tmpfiles[1246]: ACLs are not supported, ignoring. Aug 13 01:44:53.409973 kernel: loop1: detected capacity change from 0 to 146240 Aug 13 01:44:53.415286 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. 
Aug 13 01:44:53.420181 systemd[1]: Starting systemd-sysusers.service - Create System Users... Aug 13 01:44:53.454978 kernel: loop2: detected capacity change from 0 to 224512 Aug 13 01:44:53.488070 systemd[1]: Finished systemd-sysusers.service - Create System Users. Aug 13 01:44:53.490652 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Aug 13 01:44:53.496974 kernel: loop3: detected capacity change from 0 to 8 Aug 13 01:44:53.518654 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Aug 13 01:44:53.518907 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Aug 13 01:44:53.524987 kernel: loop4: detected capacity change from 0 to 113872 Aug 13 01:44:53.528085 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Aug 13 01:44:53.543971 kernel: loop5: detected capacity change from 0 to 146240 Aug 13 01:44:53.563003 kernel: loop6: detected capacity change from 0 to 224512 Aug 13 01:44:53.593968 kernel: loop7: detected capacity change from 0 to 8 Aug 13 01:44:53.598200 (sd-merge)[1272]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai'. Aug 13 01:44:53.599140 (sd-merge)[1272]: Merged extensions into '/usr'. Aug 13 01:44:53.607587 systemd[1]: Reload requested from client PID 1245 ('systemd-sysext') (unit systemd-sysext.service)... Aug 13 01:44:53.607693 systemd[1]: Reloading... Aug 13 01:44:53.712013 zram_generator::config[1295]: No configuration found. Aug 13 01:44:53.830028 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:44:53.844457 ldconfig[1240]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Aug 13 01:44:53.902825 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Aug 13 01:44:53.903439 systemd[1]: Reloading finished in 294 ms. Aug 13 01:44:53.919754 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Aug 13 01:44:53.920976 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Aug 13 01:44:53.931668 systemd[1]: Starting ensure-sysext.service... Aug 13 01:44:53.935510 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Aug 13 01:44:53.950859 systemd[1]: Reload requested from client PID 1342 ('systemctl') (unit ensure-sysext.service)... Aug 13 01:44:53.950872 systemd[1]: Reloading... Aug 13 01:44:53.993116 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Aug 13 01:44:53.993161 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Aug 13 01:44:53.993483 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Aug 13 01:44:53.993725 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Aug 13 01:44:53.997604 systemd-tmpfiles[1343]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Aug 13 01:44:53.997846 systemd-tmpfiles[1343]: ACLs are not supported, ignoring. Aug 13 01:44:53.997918 systemd-tmpfiles[1343]: ACLs are not supported, ignoring. Aug 13 01:44:54.014964 zram_generator::config[1365]: No configuration found. 
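The "(sd-merge)" line above lists the system extensions layered into /usr ('containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai') before systemd reloads. When grepping a saved boot log, a few lines of Python are enough to pull that list out; this is only a convenience sketch, not anything systemd-sysext provides:

```python
#!/usr/bin/env python3
"""Extract the merged sysext names from the "(sd-merge) ... Using extensions"
journal line quoted above. Purely a log-grepping helper."""
import re

line = ("(sd-merge)[1272]: Using extensions 'containerd-flatcar', "
        "'docker-flatcar', 'kubernetes', 'oem-akamai'.")

match = re.search(r"Using extensions (.+)\.$", line)
extensions = re.findall(r"'([^']+)'", match.group(1)) if match else []
print(extensions)  # ['containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-akamai']
```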
Aug 13 01:44:54.020814 systemd-tmpfiles[1343]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 01:44:54.021966 systemd-tmpfiles[1343]: Skipping /boot Aug 13 01:44:54.043116 systemd-tmpfiles[1343]: Detected autofs mount point /boot during canonicalization of boot. Aug 13 01:44:54.043127 systemd-tmpfiles[1343]: Skipping /boot Aug 13 01:44:54.137625 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:44:54.208808 systemd[1]: Reloading finished in 257 ms. Aug 13 01:44:54.224811 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Aug 13 01:44:54.241175 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Aug 13 01:44:54.249501 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 01:44:54.253006 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Aug 13 01:44:54.259454 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Aug 13 01:44:54.265219 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Aug 13 01:44:54.268203 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Aug 13 01:44:54.277787 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Aug 13 01:44:54.282570 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:44:54.282735 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 01:44:54.285735 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 01:44:54.289318 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 01:44:54.297606 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 01:44:54.298265 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 01:44:54.298358 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 01:44:54.304169 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Aug 13 01:44:54.304700 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:44:54.305897 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 01:44:54.307146 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 01:44:54.308262 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 01:44:54.308460 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 01:44:54.322106 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:44:54.322303 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Aug 13 01:44:54.325744 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Aug 13 01:44:54.330559 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Aug 13 01:44:54.332059 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 01:44:54.332202 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 01:44:54.332329 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:44:54.334496 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Aug 13 01:44:54.335658 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 01:44:54.336919 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Aug 13 01:44:54.346907 systemd[1]: Starting systemd-update-done.service - Update is Completed... Aug 13 01:44:54.348714 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Aug 13 01:44:54.358775 systemd-udevd[1419]: Using default interface naming scheme 'v255'. Aug 13 01:44:54.364126 systemd[1]: proc-xen.mount - /proc/xen was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:44:54.364329 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Aug 13 01:44:54.368464 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Aug 13 01:44:54.377050 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Aug 13 01:44:54.377679 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Aug 13 01:44:54.377773 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Aug 13 01:44:54.377887 systemd[1]: xenserver-pv-version.service - Set fake PV driver version for XenServer was skipped because of an unmet condition check (ConditionVirtualization=xen). Aug 13 01:44:54.380131 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Aug 13 01:44:54.380906 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Aug 13 01:44:54.386084 systemd[1]: Finished ensure-sysext.service. Aug 13 01:44:54.389103 systemd[1]: modprobe@drm.service: Deactivated successfully. Aug 13 01:44:54.393674 augenrules[1454]: No rules Aug 13 01:44:54.394580 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Aug 13 01:44:54.396581 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 01:44:54.398205 systemd[1]: Finished audit-rules.service - Load Audit Rules. Aug 13 01:44:54.403903 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Aug 13 01:44:54.405443 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Aug 13 01:44:54.406426 systemd[1]: modprobe@loop.service: Deactivated successfully. Aug 13 01:44:54.406989 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Aug 13 01:44:54.413786 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Aug 13 01:44:54.413865 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Aug 13 01:44:54.419182 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Aug 13 01:44:54.420762 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Aug 13 01:44:54.421903 systemd[1]: Finished systemd-update-done.service - Update is Completed. Aug 13 01:44:54.425353 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Aug 13 01:44:54.430468 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Aug 13 01:44:54.436283 systemd[1]: Starting systemd-networkd.service - Network Configuration... Aug 13 01:44:54.459559 systemd[1]: Started systemd-userdbd.service - User Database Manager. Aug 13 01:44:54.547186 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Aug 13 01:44:54.595970 kernel: mousedev: PS/2 mouse device common for all mice Aug 13 01:44:54.612968 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input3 Aug 13 01:44:54.625975 kernel: ACPI: button: Power Button [PWRF] Aug 13 01:44:54.641446 kernel: i801_smbus 0000:00:1f.3: SMBus using PCI interrupt Aug 13 01:44:54.641918 kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD Aug 13 01:44:54.728757 systemd-networkd[1470]: lo: Link UP Aug 13 01:44:54.728770 systemd-networkd[1470]: lo: Gained carrier Aug 13 01:44:54.730554 systemd-networkd[1470]: Enumeration completed Aug 13 01:44:54.730664 systemd[1]: Started systemd-networkd.service - Network Configuration. Aug 13 01:44:54.731272 systemd-networkd[1470]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 01:44:54.731277 systemd-networkd[1470]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Aug 13 01:44:54.731820 systemd-networkd[1470]: eth0: Link UP Aug 13 01:44:54.732095 systemd-networkd[1470]: eth0: Gained carrier Aug 13 01:44:54.732107 systemd-networkd[1470]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Aug 13 01:44:54.733167 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Aug 13 01:44:54.737538 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Aug 13 01:44:54.746872 systemd-resolved[1417]: Positive Trust Anchors: Aug 13 01:44:54.747977 systemd-resolved[1417]: . 
IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Aug 13 01:44:54.748061 systemd-resolved[1417]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Aug 13 01:44:54.753914 systemd-resolved[1417]: Defaulting to hostname 'linux'. Aug 13 01:44:54.759808 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Aug 13 01:44:54.768569 systemd[1]: Reached target network.target - Network. Aug 13 01:44:54.769939 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Aug 13 01:44:54.783589 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Aug 13 01:44:54.789690 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Aug 13 01:44:54.791041 systemd[1]: Reached target sysinit.target - System Initialization. Aug 13 01:44:54.791636 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Aug 13 01:44:54.792250 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Aug 13 01:44:54.792809 systemd[1]: Started google-oslogin-cache.timer - NSS cache refresh timer. Aug 13 01:44:54.794304 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Aug 13 01:44:54.794877 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Aug 13 01:44:54.794901 systemd[1]: Reached target paths.target - Path Units. Aug 13 01:44:54.795537 systemd[1]: Reached target time-set.target - System Time Set. Aug 13 01:44:54.797227 systemd[1]: Started logrotate.timer - Daily rotation of log files. Aug 13 01:44:54.797904 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Aug 13 01:44:54.798625 systemd[1]: Reached target timers.target - Timer Units. Aug 13 01:44:54.828368 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Aug 13 01:44:54.832253 systemd[1]: Starting docker.socket - Docker Socket for the API... Aug 13 01:44:54.836250 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Aug 13 01:44:54.838656 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Aug 13 01:44:54.840007 systemd[1]: Reached target ssh-access.target - SSH Access Available. Aug 13 01:44:54.850074 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Aug 13 01:44:54.850969 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Aug 13 01:44:54.852065 systemd[1]: Listening on docker.socket - Docker Socket for the API. Aug 13 01:44:54.853352 systemd[1]: Reached target sockets.target - Socket Units. Aug 13 01:44:54.854995 systemd[1]: Reached target basic.target - Basic System. Aug 13 01:44:54.855529 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. 
Aug 13 01:44:54.855553 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Aug 13 01:44:54.856615 systemd[1]: Starting containerd.service - containerd container runtime... Aug 13 01:44:54.862067 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Aug 13 01:44:54.865135 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Aug 13 01:44:54.868120 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Aug 13 01:44:54.871923 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Aug 13 01:44:54.875301 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Aug 13 01:44:54.876998 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Aug 13 01:44:54.878764 systemd[1]: Starting google-oslogin-cache.service - NSS cache refresh... Aug 13 01:44:54.882039 kernel: EDAC MC: Ver: 3.0.0 Aug 13 01:44:54.884076 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Aug 13 01:44:54.886128 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Aug 13 01:44:54.891149 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Aug 13 01:44:54.896084 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Aug 13 01:44:54.905692 jq[1530]: false Aug 13 01:44:54.908744 systemd[1]: Starting systemd-logind.service - User Login Management... Aug 13 01:44:54.912171 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Aug 13 01:44:54.912602 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Aug 13 01:44:54.917360 systemd[1]: Starting update-engine.service - Update Engine... Aug 13 01:44:54.934054 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Aug 13 01:44:54.939269 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Aug 13 01:44:54.941311 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Aug 13 01:44:54.942018 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Aug 13 01:44:54.973973 google_oslogin_nss_cache[1532]: oslogin_cache_refresh[1532]: Refreshing passwd entry cache Aug 13 01:44:54.983273 oslogin_cache_refresh[1532]: Refreshing passwd entry cache Aug 13 01:44:54.992205 extend-filesystems[1531]: Found /dev/sda6 Aug 13 01:44:54.996083 jq[1543]: true Aug 13 01:44:55.013068 extend-filesystems[1531]: Found /dev/sda9 Aug 13 01:44:55.013631 jq[1562]: true Aug 13 01:44:55.020810 oslogin_cache_refresh[1532]: Failure getting users, quitting Aug 13 01:44:55.016706 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Aug 13 01:44:55.023182 google_oslogin_nss_cache[1532]: oslogin_cache_refresh[1532]: Failure getting users, quitting Aug 13 01:44:55.023182 google_oslogin_nss_cache[1532]: oslogin_cache_refresh[1532]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. 
Aug 13 01:44:55.023182 google_oslogin_nss_cache[1532]: oslogin_cache_refresh[1532]: Refreshing group entry cache Aug 13 01:44:55.020826 oslogin_cache_refresh[1532]: Produced empty passwd cache file, removing /etc/oslogin_passwd.cache.bak. Aug 13 01:44:55.017011 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Aug 13 01:44:55.020863 oslogin_cache_refresh[1532]: Refreshing group entry cache Aug 13 01:44:55.031631 extend-filesystems[1531]: Checking size of /dev/sda9 Aug 13 01:44:55.027701 systemd[1]: google-oslogin-cache.service: Deactivated successfully. Aug 13 01:44:55.026119 oslogin_cache_refresh[1532]: Failure getting groups, quitting Aug 13 01:44:55.032342 google_oslogin_nss_cache[1532]: oslogin_cache_refresh[1532]: Failure getting groups, quitting Aug 13 01:44:55.032342 google_oslogin_nss_cache[1532]: oslogin_cache_refresh[1532]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 13 01:44:55.029962 systemd[1]: Finished google-oslogin-cache.service - NSS cache refresh. Aug 13 01:44:55.026128 oslogin_cache_refresh[1532]: Produced empty group cache file, removing /etc/oslogin_group.cache.bak. Aug 13 01:44:55.039279 (ntainerd)[1559]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Aug 13 01:44:55.049137 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Aug 13 01:44:55.050573 update_engine[1542]: I20250813 01:44:55.049700 1542 main.cc:92] Flatcar Update Engine starting Aug 13 01:44:55.061863 tar[1546]: linux-amd64/LICENSE Aug 13 01:44:55.061863 tar[1546]: linux-amd64/helm Aug 13 01:44:55.062870 dbus-daemon[1528]: [system] SELinux support is enabled Aug 13 01:44:55.063025 systemd[1]: Started dbus.service - D-Bus System Message Bus. Aug 13 01:44:55.067144 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Aug 13 01:44:55.067260 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Aug 13 01:44:55.068692 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Aug 13 01:44:55.068799 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Aug 13 01:44:55.086236 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Aug 13 01:44:55.090567 extend-filesystems[1531]: Resized partition /dev/sda9 Aug 13 01:44:55.090621 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Aug 13 01:44:55.102777 extend-filesystems[1598]: resize2fs 1.47.2 (1-Jan-2025) Aug 13 01:44:55.114664 coreos-metadata[1527]: Aug 13 01:44:55.114 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Aug 13 01:44:55.117199 systemd[1]: Started update-engine.service - Update Engine. Aug 13 01:44:55.121039 update_engine[1542]: I20250813 01:44:55.120342 1542 update_check_scheduler.cc:74] Next update check in 5m5s Aug 13 01:44:55.127747 kernel: EXT4-fs (sda9): resizing filesystem from 553472 to 555003 blocks Aug 13 01:44:55.122857 systemd[1]: Started locksmithd.service - Cluster reboot manager. Aug 13 01:44:55.126540 systemd[1]: motdgen.service: Deactivated successfully. 
Aug 13 01:44:55.126814 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Aug 13 01:44:55.131288 kernel: EXT4-fs (sda9): resized filesystem to 555003 Aug 13 01:44:55.133170 bash[1596]: Updated "/home/core/.ssh/authorized_keys" Aug 13 01:44:55.133892 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Aug 13 01:44:55.141927 systemd[1]: Starting sshkeys.service... Aug 13 01:44:55.145383 extend-filesystems[1598]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Aug 13 01:44:55.145383 extend-filesystems[1598]: old_desc_blocks = 1, new_desc_blocks = 1 Aug 13 01:44:55.145383 extend-filesystems[1598]: The filesystem on /dev/sda9 is now 555003 (4k) blocks long. Aug 13 01:44:55.162107 extend-filesystems[1531]: Resized filesystem in /dev/sda9 Aug 13 01:44:55.148676 systemd[1]: extend-filesystems.service: Deactivated successfully. Aug 13 01:44:55.150002 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Aug 13 01:44:55.186748 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Aug 13 01:44:55.199500 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Aug 13 01:44:55.203899 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Aug 13 01:44:55.251185 systemd-networkd[1470]: eth0: DHCPv4 address 172.232.7.133/24, gateway 172.232.7.1 acquired from 23.205.167.118 Aug 13 01:44:55.251331 dbus-daemon[1528]: [system] Activating via systemd: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1470 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Aug 13 01:44:55.257338 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Aug 13 01:44:55.262032 systemd-timesyncd[1464]: Network configuration changed, trying to establish connection. Aug 13 01:44:55.385514 coreos-metadata[1616]: Aug 13 01:44:55.384 INFO Putting http://169.254.169.254/v1/token: Attempt #1 Aug 13 01:44:55.462005 containerd[1559]: time="2025-08-13T01:44:55Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Aug 13 01:44:55.462005 containerd[1559]: time="2025-08-13T01:44:55.458857285Z" level=info msg="starting containerd" revision=06b99ca80cdbfbc6cc8bd567021738c9af2b36ce version=v2.0.4 Aug 13 01:44:55.504114 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Aug 13 01:44:55.504906 systemd-logind[1539]: Watching system buttons on /dev/input/event2 (Power Button) Aug 13 01:44:55.504927 systemd-logind[1539]: Watching system buttons on /dev/input/event0 (AT Translated Set 2 keyboard) Aug 13 01:44:55.505883 systemd-logind[1539]: New seat seat0. Aug 13 01:44:55.508027 systemd[1]: Started systemd-logind.service - User Login Management. 
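[Editor's note] The extend-filesystems entries above show the root ext4 filesystem on /dev/sda9 being grown in place, while mounted, from 553472 to 555003 blocks (resize2fs 1.47.2). As a rough illustration of that single step only, and not of the actual extend-filesystems implementation, a sketch that shells out to resize2fs could look like the following; the device path is taken from the log, everything else is assumed.

package main

import (
	"log"
	"os/exec"
)

// Minimal sketch of the on-line grow step logged above: ext4 can be enlarged
// while mounted, so growing the root filesystem only needs resize2fs pointed
// at the partition. Error handling is reduced to a fatal log.
func main() {
	out, err := exec.Command("resize2fs", "/dev/sda9").CombinedOutput()
	if err != nil {
		log.Fatalf("resize2fs failed: %v\n%s", err, out)
	}
	log.Printf("resize2fs output:\n%s", out)
}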
Aug 13 01:44:55.524971 coreos-metadata[1616]: Aug 13 01:44:55.524 INFO Fetching http://169.254.169.254/v1/ssh-keys: Attempt #1 Aug 13 01:44:55.525017 containerd[1559]: time="2025-08-13T01:44:55.524647947Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="9.17µs" Aug 13 01:44:55.525017 containerd[1559]: time="2025-08-13T01:44:55.524674697Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Aug 13 01:44:55.525017 containerd[1559]: time="2025-08-13T01:44:55.524691997Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Aug 13 01:44:55.525017 containerd[1559]: time="2025-08-13T01:44:55.524850718Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Aug 13 01:44:55.525017 containerd[1559]: time="2025-08-13T01:44:55.524864448Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Aug 13 01:44:55.525017 containerd[1559]: time="2025-08-13T01:44:55.524887058Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 13 01:44:55.529982 containerd[1559]: time="2025-08-13T01:44:55.527984919Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Aug 13 01:44:55.529982 containerd[1559]: time="2025-08-13T01:44:55.528006399Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 13 01:44:55.529982 containerd[1559]: time="2025-08-13T01:44:55.528217169Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Aug 13 01:44:55.529982 containerd[1559]: time="2025-08-13T01:44:55.528230839Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 13 01:44:55.529982 containerd[1559]: time="2025-08-13T01:44:55.528241259Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Aug 13 01:44:55.529982 containerd[1559]: time="2025-08-13T01:44:55.528248289Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Aug 13 01:44:55.529982 containerd[1559]: time="2025-08-13T01:44:55.528332949Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Aug 13 01:44:55.529982 containerd[1559]: time="2025-08-13T01:44:55.528534019Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 13 01:44:55.529982 containerd[1559]: time="2025-08-13T01:44:55.528563549Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Aug 13 01:44:55.529982 containerd[1559]: time="2025-08-13T01:44:55.528572689Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Aug 13 01:44:55.532379 containerd[1559]: time="2025-08-13T01:44:55.531972641Z" level=info msg="loading 
plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Aug 13 01:44:55.532379 containerd[1559]: time="2025-08-13T01:44:55.532187091Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Aug 13 01:44:55.532379 containerd[1559]: time="2025-08-13T01:44:55.532252781Z" level=info msg="metadata content store policy set" policy=shared Aug 13 01:44:55.540962 containerd[1559]: time="2025-08-13T01:44:55.540327735Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Aug 13 01:44:55.540962 containerd[1559]: time="2025-08-13T01:44:55.540367975Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Aug 13 01:44:55.540962 containerd[1559]: time="2025-08-13T01:44:55.540440475Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Aug 13 01:44:55.540962 containerd[1559]: time="2025-08-13T01:44:55.540458705Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Aug 13 01:44:55.540962 containerd[1559]: time="2025-08-13T01:44:55.540470205Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Aug 13 01:44:55.540962 containerd[1559]: time="2025-08-13T01:44:55.540479425Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Aug 13 01:44:55.540962 containerd[1559]: time="2025-08-13T01:44:55.540500685Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Aug 13 01:44:55.540962 containerd[1559]: time="2025-08-13T01:44:55.540511985Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Aug 13 01:44:55.540962 containerd[1559]: time="2025-08-13T01:44:55.540522215Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Aug 13 01:44:55.540962 containerd[1559]: time="2025-08-13T01:44:55.540531565Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Aug 13 01:44:55.540962 containerd[1559]: time="2025-08-13T01:44:55.540539615Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Aug 13 01:44:55.540962 containerd[1559]: time="2025-08-13T01:44:55.540551705Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Aug 13 01:44:55.540962 containerd[1559]: time="2025-08-13T01:44:55.540651205Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Aug 13 01:44:55.540962 containerd[1559]: time="2025-08-13T01:44:55.540669405Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Aug 13 01:44:55.541202 containerd[1559]: time="2025-08-13T01:44:55.540684345Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Aug 13 01:44:55.541202 containerd[1559]: time="2025-08-13T01:44:55.540694175Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Aug 13 01:44:55.541202 containerd[1559]: time="2025-08-13T01:44:55.540703665Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Aug 13 01:44:55.541202 containerd[1559]: time="2025-08-13T01:44:55.540712965Z" level=info msg="loading plugin" 
id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Aug 13 01:44:55.541202 containerd[1559]: time="2025-08-13T01:44:55.540723195Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Aug 13 01:44:55.541202 containerd[1559]: time="2025-08-13T01:44:55.540733155Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Aug 13 01:44:55.541202 containerd[1559]: time="2025-08-13T01:44:55.540743515Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Aug 13 01:44:55.541202 containerd[1559]: time="2025-08-13T01:44:55.540752785Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Aug 13 01:44:55.541202 containerd[1559]: time="2025-08-13T01:44:55.540761636Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Aug 13 01:44:55.541202 containerd[1559]: time="2025-08-13T01:44:55.540811676Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Aug 13 01:44:55.541202 containerd[1559]: time="2025-08-13T01:44:55.540822396Z" level=info msg="Start snapshots syncer" Aug 13 01:44:55.541202 containerd[1559]: time="2025-08-13T01:44:55.540841156Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Aug 13 01:44:55.545643 containerd[1559]: time="2025-08-13T01:44:55.545018168Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Aug 13 01:44:55.545643 containerd[1559]: time="2025-08-13T01:44:55.545087318Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Aug 13 01:44:55.548754 containerd[1559]: time="2025-08-13T01:44:55.547926139Z" level=info 
msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Aug 13 01:44:55.548754 containerd[1559]: time="2025-08-13T01:44:55.548058009Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Aug 13 01:44:55.548754 containerd[1559]: time="2025-08-13T01:44:55.548078469Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Aug 13 01:44:55.548754 containerd[1559]: time="2025-08-13T01:44:55.548104289Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Aug 13 01:44:55.548754 containerd[1559]: time="2025-08-13T01:44:55.548114129Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Aug 13 01:44:55.548754 containerd[1559]: time="2025-08-13T01:44:55.548124149Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Aug 13 01:44:55.548754 containerd[1559]: time="2025-08-13T01:44:55.548133229Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Aug 13 01:44:55.548754 containerd[1559]: time="2025-08-13T01:44:55.548142409Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Aug 13 01:44:55.548754 containerd[1559]: time="2025-08-13T01:44:55.548166089Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Aug 13 01:44:55.548754 containerd[1559]: time="2025-08-13T01:44:55.548177259Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Aug 13 01:44:55.548754 containerd[1559]: time="2025-08-13T01:44:55.548191219Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Aug 13 01:44:55.548754 containerd[1559]: time="2025-08-13T01:44:55.548221369Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 13 01:44:55.548754 containerd[1559]: time="2025-08-13T01:44:55.548232349Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Aug 13 01:44:55.548754 containerd[1559]: time="2025-08-13T01:44:55.548239459Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 13 01:44:55.549054 containerd[1559]: time="2025-08-13T01:44:55.548247519Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Aug 13 01:44:55.549054 containerd[1559]: time="2025-08-13T01:44:55.548254459Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Aug 13 01:44:55.549054 containerd[1559]: time="2025-08-13T01:44:55.548263179Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Aug 13 01:44:55.549054 containerd[1559]: time="2025-08-13T01:44:55.548271549Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Aug 13 01:44:55.549054 containerd[1559]: time="2025-08-13T01:44:55.548286139Z" level=info msg="runtime interface created" Aug 13 01:44:55.549054 containerd[1559]: time="2025-08-13T01:44:55.548290769Z" level=info msg="created NRI interface" Aug 13 01:44:55.549054 containerd[1559]: 
time="2025-08-13T01:44:55.548299139Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Aug 13 01:44:55.549054 containerd[1559]: time="2025-08-13T01:44:55.548308319Z" level=info msg="Connect containerd service" Aug 13 01:44:55.549054 containerd[1559]: time="2025-08-13T01:44:55.548330619Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Aug 13 01:44:55.553748 containerd[1559]: time="2025-08-13T01:44:55.553035772Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 01:44:55.602291 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Aug 13 01:44:55.605514 dbus-daemon[1528]: [system] Successfully activated service 'org.freedesktop.hostname1' Aug 13 01:44:55.611372 dbus-daemon[1528]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.6' (uid=0 pid=1620 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Aug 13 01:44:55.620218 systemd[1]: Starting polkit.service - Authorization Manager... Aug 13 01:44:56.749599 systemd-timesyncd[1464]: Contacted time server 66.118.231.14:123 (0.flatcar.pool.ntp.org). Aug 13 01:44:56.749790 systemd-timesyncd[1464]: Initial clock synchronization to Wed 2025-08-13 01:44:56.748899 UTC. Aug 13 01:44:56.749839 systemd-resolved[1417]: Clock change detected. Flushing caches. Aug 13 01:44:56.763917 coreos-metadata[1616]: Aug 13 01:44:56.763 INFO Fetch successful Aug 13 01:44:56.787948 locksmithd[1600]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Aug 13 01:44:56.809907 update-ssh-keys[1645]: Updated "/home/core/.ssh/authorized_keys" Aug 13 01:44:56.810539 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Aug 13 01:44:56.818820 systemd[1]: Finished sshkeys.service. Aug 13 01:44:56.830785 containerd[1559]: time="2025-08-13T01:44:56.830315866Z" level=info msg="Start subscribing containerd event" Aug 13 01:44:56.830785 containerd[1559]: time="2025-08-13T01:44:56.830357796Z" level=info msg="Start recovering state" Aug 13 01:44:56.830785 containerd[1559]: time="2025-08-13T01:44:56.830461406Z" level=info msg="Start event monitor" Aug 13 01:44:56.830785 containerd[1559]: time="2025-08-13T01:44:56.830474706Z" level=info msg="Start cni network conf syncer for default" Aug 13 01:44:56.830785 containerd[1559]: time="2025-08-13T01:44:56.830480806Z" level=info msg="Start streaming server" Aug 13 01:44:56.830785 containerd[1559]: time="2025-08-13T01:44:56.830493696Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Aug 13 01:44:56.830785 containerd[1559]: time="2025-08-13T01:44:56.830500656Z" level=info msg="runtime interface starting up..." Aug 13 01:44:56.830785 containerd[1559]: time="2025-08-13T01:44:56.830505716Z" level=info msg="starting plugins..." Aug 13 01:44:56.830785 containerd[1559]: time="2025-08-13T01:44:56.830517806Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Aug 13 01:44:56.831456 containerd[1559]: time="2025-08-13T01:44:56.831400807Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Aug 13 01:44:56.832022 containerd[1559]: time="2025-08-13T01:44:56.831962047Z" level=info msg=serving... 
address=/run/containerd/containerd.sock Aug 13 01:44:56.832208 systemd[1]: Started containerd.service - containerd container runtime. Aug 13 01:44:56.838882 containerd[1559]: time="2025-08-13T01:44:56.837984850Z" level=info msg="containerd successfully booted in 0.311017s" Aug 13 01:44:56.854511 polkitd[1636]: Started polkitd version 126 Aug 13 01:44:56.863811 polkitd[1636]: Loading rules from directory /etc/polkit-1/rules.d Aug 13 01:44:56.864321 polkitd[1636]: Loading rules from directory /run/polkit-1/rules.d Aug 13 01:44:56.864926 polkitd[1636]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Aug 13 01:44:56.865186 polkitd[1636]: Loading rules from directory /usr/local/share/polkit-1/rules.d Aug 13 01:44:56.865258 polkitd[1636]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Aug 13 01:44:56.866914 polkitd[1636]: Loading rules from directory /usr/share/polkit-1/rules.d Aug 13 01:44:56.867506 polkitd[1636]: Finished loading, compiling and executing 2 rules Aug 13 01:44:56.867730 systemd[1]: Started polkit.service - Authorization Manager. Aug 13 01:44:56.869450 dbus-daemon[1528]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Aug 13 01:44:56.869883 polkitd[1636]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Aug 13 01:44:56.884633 systemd-resolved[1417]: System hostname changed to '172-232-7-133'. Aug 13 01:44:56.884722 systemd-hostnamed[1620]: Hostname set to <172-232-7-133> (transient) Aug 13 01:44:57.067270 tar[1546]: linux-amd64/README.md Aug 13 01:44:57.083541 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Aug 13 01:44:57.100521 sshd_keygen[1585]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Aug 13 01:44:57.121391 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Aug 13 01:44:57.123935 systemd[1]: Starting issuegen.service - Generate /run/issue... Aug 13 01:44:57.143072 systemd[1]: issuegen.service: Deactivated successfully. Aug 13 01:44:57.143310 systemd[1]: Finished issuegen.service - Generate /run/issue. Aug 13 01:44:57.146554 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Aug 13 01:44:57.162498 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Aug 13 01:44:57.165007 systemd[1]: Started getty@tty1.service - Getty on tty1. Aug 13 01:44:57.169060 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Aug 13 01:44:57.169718 systemd[1]: Reached target getty.target - Login Prompts. Aug 13 01:44:57.201745 coreos-metadata[1527]: Aug 13 01:44:57.201 INFO Putting http://169.254.169.254/v1/token: Attempt #2 Aug 13 01:44:57.308938 coreos-metadata[1527]: Aug 13 01:44:57.308 INFO Fetching http://169.254.169.254/v1/instance: Attempt #1 Aug 13 01:44:57.522306 coreos-metadata[1527]: Aug 13 01:44:57.522 INFO Fetch successful Aug 13 01:44:57.522306 coreos-metadata[1527]: Aug 13 01:44:57.522 INFO Fetching http://169.254.169.254/v1/network: Attempt #1 Aug 13 01:44:57.527082 systemd-networkd[1470]: eth0: Gained IPv6LL Aug 13 01:44:57.530077 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Aug 13 01:44:57.531156 systemd[1]: Reached target network-online.target - Network is Online. Aug 13 01:44:57.533737 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
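[Editor's note] The containerd error shortly before this point ("no network config found in /etc/cni/net.d: cni plugin not initialized") is expected at this stage: the CRI plugin looks for a CNI network configuration under /etc/cni/net.d, and nothing exists there until a network add-on installs one. Purely as an illustration of the kind of file it expects, here is a sketch that writes a minimal bridge conflist; the file name, network name, bridge name, and subnet are example values, not anything this system will actually use.

package main

import (
	"log"
	"os"
)

// Illustrative only: write a minimal CNI config of the kind containerd's CRI
// plugin scans for under /etc/cni/net.d. The plugin list (bridge + host-local
// IPAM + portmap) and the 10.88.0.0/16 subnet are invented for the example.
const conflist = `{
  "cniVersion": "1.0.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[{ "subnet": "10.88.0.0/16" }]],
        "routes": [{ "dst": "0.0.0.0/0" }]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
`

func main() {
	if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/etc/cni/net.d/10-example.conflist", []byte(conflist), 0o644); err != nil {
		log.Fatal(err)
	}
	log.Println("wrote /etc/cni/net.d/10-example.conflist")
}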
Aug 13 01:44:57.536022 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Aug 13 01:44:57.568330 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Aug 13 01:44:57.820277 coreos-metadata[1527]: Aug 13 01:44:57.820 INFO Fetch successful Aug 13 01:44:57.927343 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Aug 13 01:44:57.928319 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Aug 13 01:44:58.142065 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Aug 13 01:44:58.145610 systemd[1]: Started sshd@0-172.232.7.133:22-147.75.109.163:56748.service - OpenSSH per-connection server daemon (147.75.109.163:56748). Aug 13 01:44:58.442624 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:44:58.443557 systemd[1]: Reached target multi-user.target - Multi-User System. Aug 13 01:44:58.445036 systemd[1]: Startup finished in 2.866s (kernel) + 9.604s (initrd) + 5.165s (userspace) = 17.637s. Aug 13 01:44:58.450380 (kubelet)[1720]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 01:44:58.501727 sshd[1713]: Accepted publickey for core from 147.75.109.163 port 56748 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:44:58.503407 sshd-session[1713]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:44:58.510440 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Aug 13 01:44:58.514038 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Aug 13 01:44:58.523051 systemd-logind[1539]: New session 1 of user core. Aug 13 01:44:58.535729 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Aug 13 01:44:58.541169 systemd[1]: Starting user@500.service - User Manager for UID 500... Aug 13 01:44:58.556159 (systemd)[1727]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Aug 13 01:44:58.561014 systemd-logind[1539]: New session c1 of user core. Aug 13 01:44:58.692700 systemd[1727]: Queued start job for default target default.target. Aug 13 01:44:58.701935 systemd[1727]: Created slice app.slice - User Application Slice. Aug 13 01:44:58.701963 systemd[1727]: Reached target paths.target - Paths. Aug 13 01:44:58.702090 systemd[1727]: Reached target timers.target - Timers. Aug 13 01:44:58.704298 systemd[1727]: Starting dbus.socket - D-Bus User Message Bus Socket... Aug 13 01:44:58.725775 systemd[1727]: Listening on dbus.socket - D-Bus User Message Bus Socket. Aug 13 01:44:58.726673 systemd[1727]: Reached target sockets.target - Sockets. Aug 13 01:44:58.726725 systemd[1727]: Reached target basic.target - Basic System. Aug 13 01:44:58.726768 systemd[1727]: Reached target default.target - Main User Target. Aug 13 01:44:58.726798 systemd[1727]: Startup finished in 158ms. Aug 13 01:44:58.726946 systemd[1]: Started user@500.service - User Manager for UID 500. Aug 13 01:44:58.730986 systemd[1]: Started session-1.scope - Session 1 of User core. 
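[Editor's note] The coreos-metadata entries above follow a token-then-fetch pattern against the link-local metadata endpoint: a PUT to http://169.254.169.254/v1/token, then GETs for /v1/instance and /v1/network. The sketch below mirrors that flow; the URLs come from the log, but the two header names (Metadata-Token-Expiry-Seconds, X-Metadata-Token) are assumptions about the metadata service and are not shown anywhere in this log.

package main

import (
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

// Sketch of the token-then-fetch pattern described by the coreos-metadata
// entries. The header names are assumed, not taken from the log.
func main() {
	client := &http.Client{Timeout: 5 * time.Second}

	// Step 1: obtain a short-lived token with a PUT, matching the
	// "Putting http://169.254.169.254/v1/token" entries.
	req, _ := http.NewRequest(http.MethodPut, "http://169.254.169.254/v1/token", nil)
	req.Header.Set("Metadata-Token-Expiry-Seconds", "3600") // assumed header
	resp, err := client.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	token, _ := io.ReadAll(resp.Body)
	resp.Body.Close()

	// Step 2: fetch instance metadata with the token attached.
	req, _ = http.NewRequest(http.MethodGet, "http://169.254.169.254/v1/instance", nil)
	req.Header.Set("X-Metadata-Token", string(token)) // assumed header
	resp, err = client.Do(req)
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("instance metadata: %s\n", body)
}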
Aug 13 01:44:58.958939 kubelet[1720]: E0813 01:44:58.958821 1720 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:44:58.962582 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:44:58.962760 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 01:44:58.963172 systemd[1]: kubelet.service: Consumed 842ms CPU time, 263.2M memory peak. Aug 13 01:44:58.993046 systemd[1]: Started sshd@1-172.232.7.133:22-147.75.109.163:56754.service - OpenSSH per-connection server daemon (147.75.109.163:56754). Aug 13 01:44:59.322826 sshd[1744]: Accepted publickey for core from 147.75.109.163 port 56754 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:44:59.324365 sshd-session[1744]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:44:59.330502 systemd-logind[1539]: New session 2 of user core. Aug 13 01:44:59.335975 systemd[1]: Started session-2.scope - Session 2 of User core. Aug 13 01:44:59.565998 sshd[1746]: Connection closed by 147.75.109.163 port 56754 Aug 13 01:44:59.566558 sshd-session[1744]: pam_unix(sshd:session): session closed for user core Aug 13 01:44:59.571352 systemd[1]: sshd@1-172.232.7.133:22-147.75.109.163:56754.service: Deactivated successfully. Aug 13 01:44:59.573823 systemd[1]: session-2.scope: Deactivated successfully. Aug 13 01:44:59.575045 systemd-logind[1539]: Session 2 logged out. Waiting for processes to exit. Aug 13 01:44:59.576771 systemd-logind[1539]: Removed session 2. Aug 13 01:44:59.634066 systemd[1]: Started sshd@2-172.232.7.133:22-147.75.109.163:56768.service - OpenSSH per-connection server daemon (147.75.109.163:56768). Aug 13 01:44:59.990640 sshd[1752]: Accepted publickey for core from 147.75.109.163 port 56768 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:44:59.992716 sshd-session[1752]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:44:59.999327 systemd-logind[1539]: New session 3 of user core. Aug 13 01:45:00.007990 systemd[1]: Started session-3.scope - Session 3 of User core. Aug 13 01:45:00.237730 sshd[1754]: Connection closed by 147.75.109.163 port 56768 Aug 13 01:45:00.238299 sshd-session[1752]: pam_unix(sshd:session): session closed for user core Aug 13 01:45:00.243096 systemd[1]: sshd@2-172.232.7.133:22-147.75.109.163:56768.service: Deactivated successfully. Aug 13 01:45:00.245206 systemd[1]: session-3.scope: Deactivated successfully. Aug 13 01:45:00.245945 systemd-logind[1539]: Session 3 logged out. Waiting for processes to exit. Aug 13 01:45:00.247580 systemd-logind[1539]: Removed session 3. Aug 13 01:45:00.295960 systemd[1]: Started sshd@3-172.232.7.133:22-147.75.109.163:56784.service - OpenSSH per-connection server daemon (147.75.109.163:56784). Aug 13 01:45:00.639340 sshd[1760]: Accepted publickey for core from 147.75.109.163 port 56784 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:45:00.641560 sshd-session[1760]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:45:00.647179 systemd-logind[1539]: New session 4 of user core. Aug 13 01:45:00.655024 systemd[1]: Started session-4.scope - Session 4 of User core. 
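[Editor's note] The kubelet failure at the top of this block is the expected first-boot state: /var/lib/kubelet/config.yaml does not exist yet, so the unit exits and systemd schedules a restart (the same error repeats at 01:45:09 below). On a node like this the file is normally produced by the bootstrap tooling; purely to show the shape of what the kubelet is looking for, here is a hedged sketch that writes a minimal KubeletConfiguration. The field values are illustrative, not the configuration this machine will actually receive.

package main

import (
	"log"
	"os"
)

// Illustrative only: the kubelet above exits because /var/lib/kubelet/config.yaml
// is missing. The YAML below is a minimal example KubeletConfiguration; a real
// cluster join would generate a fuller file.
const kubeletConfig = `apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
cgroupDriver: systemd
containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
authentication:
  anonymous:
    enabled: false
`

func main() {
	if err := os.MkdirAll("/var/lib/kubelet", 0o755); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("/var/lib/kubelet/config.yaml", []byte(kubeletConfig), 0o644); err != nil {
		log.Fatal(err)
	}
	log.Println("wrote /var/lib/kubelet/config.yaml")
}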
Aug 13 01:45:00.883557 sshd[1762]: Connection closed by 147.75.109.163 port 56784 Aug 13 01:45:00.884284 sshd-session[1760]: pam_unix(sshd:session): session closed for user core Aug 13 01:45:00.888734 systemd-logind[1539]: Session 4 logged out. Waiting for processes to exit. Aug 13 01:45:00.892927 systemd[1]: sshd@3-172.232.7.133:22-147.75.109.163:56784.service: Deactivated successfully. Aug 13 01:45:00.895488 systemd[1]: session-4.scope: Deactivated successfully. Aug 13 01:45:00.897528 systemd-logind[1539]: Removed session 4. Aug 13 01:45:00.957949 systemd[1]: Started sshd@4-172.232.7.133:22-147.75.109.163:56800.service - OpenSSH per-connection server daemon (147.75.109.163:56800). Aug 13 01:45:01.300281 sshd[1768]: Accepted publickey for core from 147.75.109.163 port 56800 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:45:01.302106 sshd-session[1768]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:45:01.307230 systemd-logind[1539]: New session 5 of user core. Aug 13 01:45:01.314984 systemd[1]: Started session-5.scope - Session 5 of User core. Aug 13 01:45:01.505438 sudo[1771]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Aug 13 01:45:01.505726 sudo[1771]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 01:45:01.520214 sudo[1771]: pam_unix(sudo:session): session closed for user root Aug 13 01:45:01.570969 sshd[1770]: Connection closed by 147.75.109.163 port 56800 Aug 13 01:45:01.571476 sshd-session[1768]: pam_unix(sshd:session): session closed for user core Aug 13 01:45:01.575276 systemd[1]: sshd@4-172.232.7.133:22-147.75.109.163:56800.service: Deactivated successfully. Aug 13 01:45:01.577179 systemd[1]: session-5.scope: Deactivated successfully. Aug 13 01:45:01.578348 systemd-logind[1539]: Session 5 logged out. Waiting for processes to exit. Aug 13 01:45:01.580361 systemd-logind[1539]: Removed session 5. Aug 13 01:45:01.634429 systemd[1]: Started sshd@5-172.232.7.133:22-147.75.109.163:56812.service - OpenSSH per-connection server daemon (147.75.109.163:56812). Aug 13 01:45:01.981070 sshd[1777]: Accepted publickey for core from 147.75.109.163 port 56812 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:45:01.982912 sshd-session[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:45:01.989385 systemd-logind[1539]: New session 6 of user core. Aug 13 01:45:01.996005 systemd[1]: Started session-6.scope - Session 6 of User core. Aug 13 01:45:02.181786 sudo[1781]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Aug 13 01:45:02.182158 sudo[1781]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 01:45:02.187421 sudo[1781]: pam_unix(sudo:session): session closed for user root Aug 13 01:45:02.194220 sudo[1780]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Aug 13 01:45:02.194529 sudo[1780]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 01:45:02.205064 systemd[1]: Starting audit-rules.service - Load Audit Rules... Aug 13 01:45:02.250313 augenrules[1803]: No rules Aug 13 01:45:02.252366 systemd[1]: audit-rules.service: Deactivated successfully. Aug 13 01:45:02.252657 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Aug 13 01:45:02.254440 sudo[1780]: pam_unix(sudo:session): session closed for user root Aug 13 01:45:02.306175 sshd[1779]: Connection closed by 147.75.109.163 port 56812 Aug 13 01:45:02.306812 sshd-session[1777]: pam_unix(sshd:session): session closed for user core Aug 13 01:45:02.312364 systemd[1]: sshd@5-172.232.7.133:22-147.75.109.163:56812.service: Deactivated successfully. Aug 13 01:45:02.314507 systemd[1]: session-6.scope: Deactivated successfully. Aug 13 01:45:02.315330 systemd-logind[1539]: Session 6 logged out. Waiting for processes to exit. Aug 13 01:45:02.316697 systemd-logind[1539]: Removed session 6. Aug 13 01:45:02.363804 systemd[1]: Started sshd@6-172.232.7.133:22-147.75.109.163:56826.service - OpenSSH per-connection server daemon (147.75.109.163:56826). Aug 13 01:45:02.716248 sshd[1812]: Accepted publickey for core from 147.75.109.163 port 56826 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:45:02.718440 sshd-session[1812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:45:02.727112 systemd-logind[1539]: New session 7 of user core. Aug 13 01:45:02.732974 systemd[1]: Started session-7.scope - Session 7 of User core. Aug 13 01:45:02.914164 sudo[1815]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Aug 13 01:45:02.914492 sudo[1815]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Aug 13 01:45:03.176383 systemd[1]: Starting docker.service - Docker Application Container Engine... Aug 13 01:45:03.198160 (dockerd)[1833]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Aug 13 01:45:03.375365 dockerd[1833]: time="2025-08-13T01:45:03.375319876Z" level=info msg="Starting up" Aug 13 01:45:03.376883 dockerd[1833]: time="2025-08-13T01:45:03.376661187Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Aug 13 01:45:03.398398 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport2333900439-merged.mount: Deactivated successfully. Aug 13 01:45:03.425568 dockerd[1833]: time="2025-08-13T01:45:03.425538502Z" level=info msg="Loading containers: start." Aug 13 01:45:03.435873 kernel: Initializing XFRM netlink socket Aug 13 01:45:03.647090 systemd-networkd[1470]: docker0: Link UP Aug 13 01:45:03.650129 dockerd[1833]: time="2025-08-13T01:45:03.650098414Z" level=info msg="Loading containers: done." Aug 13 01:45:03.661438 dockerd[1833]: time="2025-08-13T01:45:03.661364139Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Aug 13 01:45:03.661882 dockerd[1833]: time="2025-08-13T01:45:03.661599570Z" level=info msg="Docker daemon" commit=bbd0a17ccc67e48d4a69393287b7fcc4f0578683 containerd-snapshotter=false storage-driver=overlay2 version=28.0.1 Aug 13 01:45:03.661882 dockerd[1833]: time="2025-08-13T01:45:03.661702440Z" level=info msg="Initializing buildkit" Aug 13 01:45:03.663515 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck1451043621-merged.mount: Deactivated successfully. 
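[Editor's note] The dockerd startup above ends with the daemon serving its API on /run/docker.sock (a later entry also notes systemd rewriting the legacy /var/run/docker.sock path to /run/docker.sock). As a minimal client-side sketch of talking to that socket, assuming the Docker Go SDK (github.com/docker/docker/client) is available, one could do the following; it is a reachability check, not part of anything this host runs.

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/docker/docker/client"
)

// Minimal check against the daemon started above. FromEnv falls back to the
// default unix socket when DOCKER_HOST is unset.
func main() {
	cli, err := client.NewClientWithOpts(client.FromEnv, client.WithAPIVersionNegotiation())
	if err != nil {
		log.Fatal(err)
	}
	defer cli.Close()

	ver, err := cli.ServerVersion(context.Background())
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("docker %s (API %s)\n", ver.Version, ver.APIVersion)
}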
Aug 13 01:45:03.681025 dockerd[1833]: time="2025-08-13T01:45:03.680998269Z" level=info msg="Completed buildkit initialization" Aug 13 01:45:03.688455 dockerd[1833]: time="2025-08-13T01:45:03.688422363Z" level=info msg="Daemon has completed initialization" Aug 13 01:45:03.688631 systemd[1]: Started docker.service - Docker Application Container Engine. Aug 13 01:45:03.689038 dockerd[1833]: time="2025-08-13T01:45:03.688944603Z" level=info msg="API listen on /run/docker.sock" Aug 13 01:45:04.388497 containerd[1559]: time="2025-08-13T01:45:04.388448723Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\"" Aug 13 01:45:05.237872 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1153981267.mount: Deactivated successfully. Aug 13 01:45:06.249735 containerd[1559]: time="2025-08-13T01:45:06.249485743Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:06.251130 containerd[1559]: time="2025-08-13T01:45:06.250606583Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.7: active requests=0, bytes read=28799994" Aug 13 01:45:06.253129 containerd[1559]: time="2025-08-13T01:45:06.253097984Z" level=info msg="ImageCreate event name:\"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:06.255981 containerd[1559]: time="2025-08-13T01:45:06.255751726Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:06.256699 containerd[1559]: time="2025-08-13T01:45:06.256665066Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.7\" with image id \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:e04f6223d52f8041c46ef4545ccaf07894b1ca5851506a9142706d4206911f64\", size \"28796794\" in 1.868169643s" Aug 13 01:45:06.256743 containerd[1559]: time="2025-08-13T01:45:06.256702696Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.7\" returns image reference \"sha256:761ae2258f1825c2079bd41bcc1da2c9bda8b5e902aa147c14896491dfca0f16\"" Aug 13 01:45:06.257940 containerd[1559]: time="2025-08-13T01:45:06.257904267Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\"" Aug 13 01:45:07.720600 containerd[1559]: time="2025-08-13T01:45:07.720521818Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:07.721513 containerd[1559]: time="2025-08-13T01:45:07.721396858Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.7: active requests=0, bytes read=24783636" Aug 13 01:45:07.723520 containerd[1559]: time="2025-08-13T01:45:07.722369499Z" level=info msg="ImageCreate event name:\"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:07.724146 containerd[1559]: time="2025-08-13T01:45:07.724119889Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 
01:45:07.725215 containerd[1559]: time="2025-08-13T01:45:07.725190550Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.7\" with image id \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:6c7f288ab0181e496606a43dbade954819af2b1e1c0552becf6903436e16ea75\", size \"26385470\" in 1.467153473s" Aug 13 01:45:07.725299 containerd[1559]: time="2025-08-13T01:45:07.725284680Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.7\" returns image reference \"sha256:87f922d0bde0db7ffcb2174ba37bdab8fdd169a41e1882fe5aa308bb57e44fda\"" Aug 13 01:45:07.726136 containerd[1559]: time="2025-08-13T01:45:07.726108520Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\"" Aug 13 01:45:08.963944 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Aug 13 01:45:08.967363 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:45:09.122882 containerd[1559]: time="2025-08-13T01:45:09.121792408Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:09.124010 containerd[1559]: time="2025-08-13T01:45:09.123970519Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.7: active requests=0, bytes read=19176921" Aug 13 01:45:09.124412 containerd[1559]: time="2025-08-13T01:45:09.124392509Z" level=info msg="ImageCreate event name:\"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:09.127360 containerd[1559]: time="2025-08-13T01:45:09.127312850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:09.129175 containerd[1559]: time="2025-08-13T01:45:09.129068951Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.7\" with image id \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:1c35a970b4450b4285531495be82cda1f6549952f70d6e3de8db57c20a3da4ce\", size \"20778773\" in 1.402936131s" Aug 13 01:45:09.129175 containerd[1559]: time="2025-08-13T01:45:09.129095971Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.7\" returns image reference \"sha256:36cc9c80994ebf29b8e1a366d7e736b273a6c6a60bacb5446944cc0953416245\"" Aug 13 01:45:09.129841 containerd[1559]: time="2025-08-13T01:45:09.129820832Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\"" Aug 13 01:45:09.183536 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 13 01:45:09.193173 (kubelet)[2104]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Aug 13 01:45:09.234438 kubelet[2104]: E0813 01:45:09.234366 2104 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Aug 13 01:45:09.239428 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Aug 13 01:45:09.239883 systemd[1]: kubelet.service: Failed with result 'exit-code'. Aug 13 01:45:09.240258 systemd[1]: kubelet.service: Consumed 210ms CPU time, 109.5M memory peak. Aug 13 01:45:10.305238 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2778311803.mount: Deactivated successfully. Aug 13 01:45:10.650700 containerd[1559]: time="2025-08-13T01:45:10.649996811Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:10.650700 containerd[1559]: time="2025-08-13T01:45:10.650552732Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.7: active requests=0, bytes read=30895380" Aug 13 01:45:10.651197 containerd[1559]: time="2025-08-13T01:45:10.651157842Z" level=info msg="ImageCreate event name:\"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:10.652845 containerd[1559]: time="2025-08-13T01:45:10.652341742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:10.652845 containerd[1559]: time="2025-08-13T01:45:10.652699133Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.7\" with image id \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\", repo tag \"registry.k8s.io/kube-proxy:v1.32.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:8d589a18b5424f77a784ef2f00feffac0ef210414100822f1c120f0d7221def3\", size \"30894399\" in 1.522845471s" Aug 13 01:45:10.652845 containerd[1559]: time="2025-08-13T01:45:10.652726923Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.7\" returns image reference \"sha256:d5bc66d8682fdab0735e869a3f77730df378af7fd2505c1f4d6374ad3dbd181c\"" Aug 13 01:45:10.653399 containerd[1559]: time="2025-08-13T01:45:10.653376213Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 01:45:11.405209 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount423462955.mount: Deactivated successfully. 
Aug 13 01:45:12.141284 containerd[1559]: time="2025-08-13T01:45:12.141233906Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:12.142176 containerd[1559]: time="2025-08-13T01:45:12.142132997Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Aug 13 01:45:12.142705 containerd[1559]: time="2025-08-13T01:45:12.142677607Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:12.144837 containerd[1559]: time="2025-08-13T01:45:12.144813188Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:12.145753 containerd[1559]: time="2025-08-13T01:45:12.145728939Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.492323896s" Aug 13 01:45:12.145832 containerd[1559]: time="2025-08-13T01:45:12.145817379Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 01:45:12.146713 containerd[1559]: time="2025-08-13T01:45:12.146693229Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Aug 13 01:45:12.833009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1631157374.mount: Deactivated successfully. 
Aug 13 01:45:12.836998 containerd[1559]: time="2025-08-13T01:45:12.836377894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 01:45:12.836998 containerd[1559]: time="2025-08-13T01:45:12.836953434Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=321138" Aug 13 01:45:12.837230 containerd[1559]: time="2025-08-13T01:45:12.837203734Z" level=info msg="ImageCreate event name:\"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 01:45:12.838584 containerd[1559]: time="2025-08-13T01:45:12.838563855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Aug 13 01:45:12.840415 containerd[1559]: time="2025-08-13T01:45:12.840386936Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"320368\" in 693.668797ms" Aug 13 01:45:12.840451 containerd[1559]: time="2025-08-13T01:45:12.840417676Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136\"" Aug 13 01:45:12.841328 containerd[1559]: time="2025-08-13T01:45:12.841304456Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Aug 13 01:45:13.558212 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4201746351.mount: Deactivated successfully. 
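[Editor's note] The PullImage/Pulled pairs above and below are the CRI plugin fetching the control-plane images over the socket containerd reported serving on earlier (/run/containerd/containerd.sock), in the "k8s.io" namespace it registered. For reference, the same pull can be driven directly through containerd's Go client; a sketch using the v1 module path is below (containerd 2.x, which this host runs, moved the client to github.com/containerd/containerd/v2/client, but the calls are analogous).

package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

// Pull one of the images seen in the log through containerd's Go client,
// in the "k8s.io" namespace used by the CRI plugin.
func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	img, err := client.Pull(ctx, "registry.k8s.io/pause:3.10", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("pulled", img.Name())
}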
Aug 13 01:45:14.826326 containerd[1559]: time="2025-08-13T01:45:14.826242628Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:14.827469 containerd[1559]: time="2025-08-13T01:45:14.827324149Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=57551360" Aug 13 01:45:14.828112 containerd[1559]: time="2025-08-13T01:45:14.828083809Z" level=info msg="ImageCreate event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:14.830671 containerd[1559]: time="2025-08-13T01:45:14.830641610Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:14.831658 containerd[1559]: time="2025-08-13T01:45:14.831626141Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"57680541\" in 1.990294825s" Aug 13 01:45:14.831735 containerd[1559]: time="2025-08-13T01:45:14.831720361Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Aug 13 01:45:17.014870 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:45:17.015015 systemd[1]: kubelet.service: Consumed 210ms CPU time, 109.5M memory peak. Aug 13 01:45:17.018234 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:45:17.044626 systemd[1]: Reload requested from client PID 2257 ('systemctl') (unit session-7.scope)... Aug 13 01:45:17.044640 systemd[1]: Reloading... Aug 13 01:45:17.198902 zram_generator::config[2304]: No configuration found. Aug 13 01:45:17.297634 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:45:17.411347 systemd[1]: Reloading finished in 366 ms. Aug 13 01:45:17.482498 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Aug 13 01:45:17.482595 systemd[1]: kubelet.service: Failed with result 'signal'. Aug 13 01:45:17.483142 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:45:17.483183 systemd[1]: kubelet.service: Consumed 143ms CPU time, 98.3M memory peak. Aug 13 01:45:17.484892 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:45:17.671889 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:45:17.683286 (kubelet)[2355]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 01:45:17.724487 kubelet[2355]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:45:17.724487 kubelet[2355]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. 
Image garbage collector will get sandbox image information from CRI. Aug 13 01:45:17.724487 kubelet[2355]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:45:17.724765 kubelet[2355]: I0813 01:45:17.724549 2355 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 01:45:17.896877 kubelet[2355]: I0813 01:45:17.896068 2355 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 13 01:45:17.896877 kubelet[2355]: I0813 01:45:17.896098 2355 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 01:45:17.896877 kubelet[2355]: I0813 01:45:17.896474 2355 server.go:954] "Client rotation is on, will bootstrap in background" Aug 13 01:45:17.925921 kubelet[2355]: I0813 01:45:17.925801 2355 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 01:45:17.927016 kubelet[2355]: E0813 01:45:17.926685 2355 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.232.7.133:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.232.7.133:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:45:17.935020 kubelet[2355]: I0813 01:45:17.934991 2355 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 13 01:45:17.938806 kubelet[2355]: I0813 01:45:17.938782 2355 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 01:45:17.939841 kubelet[2355]: I0813 01:45:17.939796 2355 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 01:45:17.940154 kubelet[2355]: I0813 01:45:17.939827 2355 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-232-7-133","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 01:45:17.940154 kubelet[2355]: I0813 01:45:17.940152 2355 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 01:45:17.940285 kubelet[2355]: I0813 01:45:17.940161 2355 container_manager_linux.go:304] "Creating device plugin manager" Aug 13 01:45:17.940285 kubelet[2355]: I0813 01:45:17.940266 2355 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:45:17.943339 kubelet[2355]: I0813 01:45:17.943307 2355 kubelet.go:446] "Attempting to sync node with API server" Aug 13 01:45:17.943339 kubelet[2355]: I0813 01:45:17.943333 2355 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 01:45:17.943435 kubelet[2355]: I0813 01:45:17.943358 2355 kubelet.go:352] "Adding apiserver pod source" Aug 13 01:45:17.943435 kubelet[2355]: I0813 01:45:17.943368 2355 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 01:45:17.948090 kubelet[2355]: I0813 01:45:17.948051 2355 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Aug 13 01:45:17.948374 kubelet[2355]: I0813 01:45:17.948350 2355 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 01:45:17.948998 kubelet[2355]: W0813 01:45:17.948973 2355 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
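The certificate-manager error above and the reflector errors that follow all fail the same way, "dial tcp 172.232.7.133:6443: connect: connection refused": nothing is listening on the apiserver endpoint yet because the static kube-apiserver pod has not been started. A minimal sketch that reproduces just that reachability check, with the endpoint copied from the log:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Endpoint taken from the kubelet errors in this log.
	const apiserver = "172.232.7.133:6443"

	conn, err := net.DialTimeout("tcp", apiserver, 2*time.Second)
	if err != nil {
		// Until the static kube-apiserver pod is running this prints the same
		// "connect: connection refused" seen above.
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver is accepting TCP connections")
}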
Aug 13 01:45:17.950656 kubelet[2355]: I0813 01:45:17.950633 2355 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 01:45:17.950702 kubelet[2355]: I0813 01:45:17.950662 2355 server.go:1287] "Started kubelet" Aug 13 01:45:17.950808 kubelet[2355]: W0813 01:45:17.950769 2355 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.232.7.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.232.7.133:6443: connect: connection refused Aug 13 01:45:17.950872 kubelet[2355]: E0813 01:45:17.950818 2355 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.232.7.133:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.232.7.133:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:45:17.955746 kubelet[2355]: W0813 01:45:17.955708 2355 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.232.7.133:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-232-7-133&limit=500&resourceVersion=0": dial tcp 172.232.7.133:6443: connect: connection refused Aug 13 01:45:17.955836 kubelet[2355]: E0813 01:45:17.955819 2355 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.232.7.133:6443/api/v1/nodes?fieldSelector=metadata.name%3D172-232-7-133&limit=500&resourceVersion=0\": dial tcp 172.232.7.133:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:45:17.959672 kubelet[2355]: I0813 01:45:17.959010 2355 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 01:45:17.959672 kubelet[2355]: E0813 01:45:17.958375 2355 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.232.7.133:6443/api/v1/namespaces/default/events\": dial tcp 172.232.7.133:6443: connect: connection refused" event="&Event{ObjectMeta:{172-232-7-133.185b303ce8ab43e7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:172-232-7-133,UID:172-232-7-133,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:172-232-7-133,},FirstTimestamp:2025-08-13 01:45:17.950649319 +0000 UTC m=+0.262152322,LastTimestamp:2025-08-13 01:45:17.950649319 +0000 UTC m=+0.262152322,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:172-232-7-133,}" Aug 13 01:45:17.960007 kubelet[2355]: I0813 01:45:17.959963 2355 server.go:479] "Adding debug handlers to kubelet server" Aug 13 01:45:17.962768 kubelet[2355]: I0813 01:45:17.962709 2355 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 01:45:17.963012 kubelet[2355]: I0813 01:45:17.962980 2355 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 01:45:17.964481 kubelet[2355]: E0813 01:45:17.964438 2355 kubelet.go:1555] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 01:45:17.965044 kubelet[2355]: I0813 01:45:17.965028 2355 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Aug 13 01:45:17.965293 kubelet[2355]: I0813 01:45:17.965269 2355 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 01:45:17.967291 kubelet[2355]: E0813 01:45:17.967273 2355 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"172-232-7-133\" not found" Aug 13 01:45:17.968176 kubelet[2355]: I0813 01:45:17.967467 2355 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 01:45:17.968176 kubelet[2355]: I0813 01:45:17.967627 2355 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 01:45:17.968176 kubelet[2355]: I0813 01:45:17.967686 2355 reconciler.go:26] "Reconciler: start to sync state" Aug 13 01:45:17.968613 kubelet[2355]: I0813 01:45:17.968599 2355 factory.go:221] Registration of the systemd container factory successfully Aug 13 01:45:17.968814 kubelet[2355]: I0813 01:45:17.968797 2355 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 01:45:17.969315 kubelet[2355]: W0813 01:45:17.969288 2355 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.232.7.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.232.7.133:6443: connect: connection refused Aug 13 01:45:17.969404 kubelet[2355]: E0813 01:45:17.969380 2355 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.232.7.133:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.232.7.133:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:45:17.970672 kubelet[2355]: E0813 01:45:17.970628 2355 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.7.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-7-133?timeout=10s\": dial tcp 172.232.7.133:6443: connect: connection refused" interval="200ms" Aug 13 01:45:17.970934 kubelet[2355]: I0813 01:45:17.970920 2355 factory.go:221] Registration of the containerd container factory successfully Aug 13 01:45:17.983955 kubelet[2355]: I0813 01:45:17.983916 2355 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 01:45:17.985085 kubelet[2355]: I0813 01:45:17.985068 2355 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 01:45:17.985143 kubelet[2355]: I0813 01:45:17.985134 2355 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 13 01:45:17.985207 kubelet[2355]: I0813 01:45:17.985197 2355 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
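The failed reflector watches above are ordinary LIST requests, for example /api/v1/nodes filtered by metadata.name=172-232-7-133. A hedged client-go sketch that issues the same request once the apiserver is reachable; the kubeconfig path is illustrative and not taken from the log:

package main

import (
	"context"
	"fmt"
	"log"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumption: a kubeconfig with credentials valid for this cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/kubelet.conf")
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Same shape as the reflector's LIST: nodes filtered by metadata.name.
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{
		FieldSelector: "metadata.name=172-232-7-133",
	})
	if err != nil {
		log.Fatal(err)
	}
	for _, n := range nodes.Items {
		fmt.Println(n.Name, n.Status.NodeInfo.KubeletVersion)
	}
}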
Aug 13 01:45:17.985256 kubelet[2355]: I0813 01:45:17.985246 2355 kubelet.go:2382] "Starting kubelet main sync loop" Aug 13 01:45:17.985344 kubelet[2355]: E0813 01:45:17.985328 2355 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 01:45:17.993068 kubelet[2355]: W0813 01:45:17.993023 2355 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.232.7.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.232.7.133:6443: connect: connection refused Aug 13 01:45:17.993192 kubelet[2355]: E0813 01:45:17.993154 2355 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.232.7.133:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.232.7.133:6443: connect: connection refused" logger="UnhandledError" Aug 13 01:45:17.998818 kubelet[2355]: I0813 01:45:17.998785 2355 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 01:45:17.998818 kubelet[2355]: I0813 01:45:17.998800 2355 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 01:45:17.998926 kubelet[2355]: I0813 01:45:17.998840 2355 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:45:18.000448 kubelet[2355]: I0813 01:45:18.000419 2355 policy_none.go:49] "None policy: Start" Aug 13 01:45:18.000448 kubelet[2355]: I0813 01:45:18.000441 2355 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 01:45:18.000509 kubelet[2355]: I0813 01:45:18.000459 2355 state_mem.go:35] "Initializing new in-memory state store" Aug 13 01:45:18.005793 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Aug 13 01:45:18.020338 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Aug 13 01:45:18.023779 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Aug 13 01:45:18.033781 kubelet[2355]: I0813 01:45:18.033706 2355 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 01:45:18.034094 kubelet[2355]: I0813 01:45:18.034079 2355 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 01:45:18.034391 kubelet[2355]: I0813 01:45:18.034281 2355 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 01:45:18.036030 kubelet[2355]: I0813 01:45:18.035749 2355 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 01:45:18.037462 kubelet[2355]: E0813 01:45:18.037405 2355 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Aug 13 01:45:18.037948 kubelet[2355]: E0813 01:45:18.037530 2355 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"172-232-7-133\" not found" Aug 13 01:45:18.096910 systemd[1]: Created slice kubepods-burstable-podcf9151aed87d63c6b29b86efe4d2bcc0.slice - libcontainer container kubepods-burstable-podcf9151aed87d63c6b29b86efe4d2bcc0.slice. 
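The kubepods-burstable-pod<UID>.slice units systemd creates above follow the kubelet's systemd cgroup driver naming: "kubepods", the pod's QoS class, and the pod UID with any dashes escaped to underscores (compare kubepods-besteffort-pod7966db9b_8cb6_4de8_8a77_84490ff33845.slice further down). A small sketch of that convention, as an illustration rather than kubelet code:

package main

import (
	"fmt"
	"strings"
)

// podSliceName mirrors the systemd cgroup driver's pod slice naming:
// kubepods[-<qos>]-pod<uid>.slice, with '-' in the UID escaped to '_'.
// Guaranteed pods sit directly under kubepods.slice, so they get no QoS infix.
func podSliceName(qos, uid string) string {
	escaped := strings.ReplaceAll(uid, "-", "_")
	if qos == "" || qos == "guaranteed" {
		return fmt.Sprintf("kubepods-pod%s.slice", escaped)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, escaped)
}

func main() {
	fmt.Println(podSliceName("burstable", "cf9151aed87d63c6b29b86efe4d2bcc0"))
	fmt.Println(podSliceName("besteffort", "7966db9b-8cb6-4de8-8a77-84490ff33845"))
}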
Aug 13 01:45:18.105174 kubelet[2355]: E0813 01:45:18.104928 2355 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-7-133\" not found" node="172-232-7-133" Aug 13 01:45:18.108603 systemd[1]: Created slice kubepods-burstable-pod5f9ea1c169ca17f70f2b596c13773f1c.slice - libcontainer container kubepods-burstable-pod5f9ea1c169ca17f70f2b596c13773f1c.slice. Aug 13 01:45:18.119357 kubelet[2355]: E0813 01:45:18.119331 2355 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-7-133\" not found" node="172-232-7-133" Aug 13 01:45:18.122173 systemd[1]: Created slice kubepods-burstable-pod60eac030dc4fd4fe6fb9e0753aaec45f.slice - libcontainer container kubepods-burstable-pod60eac030dc4fd4fe6fb9e0753aaec45f.slice. Aug 13 01:45:18.124160 kubelet[2355]: E0813 01:45:18.124136 2355 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-7-133\" not found" node="172-232-7-133" Aug 13 01:45:18.136772 kubelet[2355]: I0813 01:45:18.136752 2355 kubelet_node_status.go:75] "Attempting to register node" node="172-232-7-133" Aug 13 01:45:18.137181 kubelet[2355]: E0813 01:45:18.137157 2355 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.232.7.133:6443/api/v1/nodes\": dial tcp 172.232.7.133:6443: connect: connection refused" node="172-232-7-133" Aug 13 01:45:18.171989 kubelet[2355]: E0813 01:45:18.171943 2355 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.7.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-7-133?timeout=10s\": dial tcp 172.232.7.133:6443: connect: connection refused" interval="400ms" Aug 13 01:45:18.269113 kubelet[2355]: I0813 01:45:18.269082 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5f9ea1c169ca17f70f2b596c13773f1c-k8s-certs\") pod \"kube-controller-manager-172-232-7-133\" (UID: \"5f9ea1c169ca17f70f2b596c13773f1c\") " pod="kube-system/kube-controller-manager-172-232-7-133" Aug 13 01:45:18.269113 kubelet[2355]: I0813 01:45:18.269114 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5f9ea1c169ca17f70f2b596c13773f1c-kubeconfig\") pod \"kube-controller-manager-172-232-7-133\" (UID: \"5f9ea1c169ca17f70f2b596c13773f1c\") " pod="kube-system/kube-controller-manager-172-232-7-133" Aug 13 01:45:18.269353 kubelet[2355]: I0813 01:45:18.269131 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60eac030dc4fd4fe6fb9e0753aaec45f-kubeconfig\") pod \"kube-scheduler-172-232-7-133\" (UID: \"60eac030dc4fd4fe6fb9e0753aaec45f\") " pod="kube-system/kube-scheduler-172-232-7-133" Aug 13 01:45:18.269353 kubelet[2355]: I0813 01:45:18.269145 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cf9151aed87d63c6b29b86efe4d2bcc0-ca-certs\") pod \"kube-apiserver-172-232-7-133\" (UID: \"cf9151aed87d63c6b29b86efe4d2bcc0\") " pod="kube-system/kube-apiserver-172-232-7-133" Aug 13 01:45:18.269353 kubelet[2355]: I0813 01:45:18.269161 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cf9151aed87d63c6b29b86efe4d2bcc0-usr-share-ca-certificates\") pod \"kube-apiserver-172-232-7-133\" (UID: \"cf9151aed87d63c6b29b86efe4d2bcc0\") " pod="kube-system/kube-apiserver-172-232-7-133" Aug 13 01:45:18.269353 kubelet[2355]: I0813 01:45:18.269174 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5f9ea1c169ca17f70f2b596c13773f1c-ca-certs\") pod \"kube-controller-manager-172-232-7-133\" (UID: \"5f9ea1c169ca17f70f2b596c13773f1c\") " pod="kube-system/kube-controller-manager-172-232-7-133" Aug 13 01:45:18.269353 kubelet[2355]: I0813 01:45:18.269187 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5f9ea1c169ca17f70f2b596c13773f1c-flexvolume-dir\") pod \"kube-controller-manager-172-232-7-133\" (UID: \"5f9ea1c169ca17f70f2b596c13773f1c\") " pod="kube-system/kube-controller-manager-172-232-7-133" Aug 13 01:45:18.269526 kubelet[2355]: I0813 01:45:18.269255 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5f9ea1c169ca17f70f2b596c13773f1c-usr-share-ca-certificates\") pod \"kube-controller-manager-172-232-7-133\" (UID: \"5f9ea1c169ca17f70f2b596c13773f1c\") " pod="kube-system/kube-controller-manager-172-232-7-133" Aug 13 01:45:18.269526 kubelet[2355]: I0813 01:45:18.269275 2355 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cf9151aed87d63c6b29b86efe4d2bcc0-k8s-certs\") pod \"kube-apiserver-172-232-7-133\" (UID: \"cf9151aed87d63c6b29b86efe4d2bcc0\") " pod="kube-system/kube-apiserver-172-232-7-133" Aug 13 01:45:18.339245 kubelet[2355]: I0813 01:45:18.339225 2355 kubelet_node_status.go:75] "Attempting to register node" node="172-232-7-133" Aug 13 01:45:18.339722 kubelet[2355]: E0813 01:45:18.339684 2355 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.232.7.133:6443/api/v1/nodes\": dial tcp 172.232.7.133:6443: connect: connection refused" node="172-232-7-133" Aug 13 01:45:18.405689 kubelet[2355]: E0813 01:45:18.405657 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:45:18.406305 containerd[1559]: time="2025-08-13T01:45:18.406246097Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-232-7-133,Uid:cf9151aed87d63c6b29b86efe4d2bcc0,Namespace:kube-system,Attempt:0,}" Aug 13 01:45:18.424012 kubelet[2355]: E0813 01:45:18.423297 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:45:18.424108 containerd[1559]: time="2025-08-13T01:45:18.423577385Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-232-7-133,Uid:5f9ea1c169ca17f70f2b596c13773f1c,Namespace:kube-system,Attempt:0,}" Aug 13 01:45:18.425886 kubelet[2355]: E0813 01:45:18.425393 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 
172.232.0.22 172.232.0.9" Aug 13 01:45:18.431120 containerd[1559]: time="2025-08-13T01:45:18.431077709Z" level=info msg="connecting to shim 867993db78ab5d0bf62647b61d34d9ed3d0f19de24f8cc71aa517bf63c71303f" address="unix:///run/containerd/s/522bef3a05ac26fe9534ee0564a4577527b3209a83f5e361f02ab8f13ee14733" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:45:18.432627 containerd[1559]: time="2025-08-13T01:45:18.432602270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-232-7-133,Uid:60eac030dc4fd4fe6fb9e0753aaec45f,Namespace:kube-system,Attempt:0,}" Aug 13 01:45:18.450943 containerd[1559]: time="2025-08-13T01:45:18.450810489Z" level=info msg="connecting to shim be4ea84125570b7f0bb4a4f1ad20524d1eb45d5fea7e7deb2dcf0568ac2cd0da" address="unix:///run/containerd/s/c178975aabbe13bd80e84d98fcaafc5c39c09f584c69623a1d257263eb761a60" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:45:18.473811 containerd[1559]: time="2025-08-13T01:45:18.473773630Z" level=info msg="connecting to shim 77300aca2d8184fb9e0635eb45faa4d319ff1a976e375c515c55461520c409cc" address="unix:///run/containerd/s/385a6ed53c8de413ad5356e4869a38641902f19aadfceff7706fdfd079ef8b06" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:45:18.488103 systemd[1]: Started cri-containerd-867993db78ab5d0bf62647b61d34d9ed3d0f19de24f8cc71aa517bf63c71303f.scope - libcontainer container 867993db78ab5d0bf62647b61d34d9ed3d0f19de24f8cc71aa517bf63c71303f. Aug 13 01:45:18.504101 systemd[1]: Started cri-containerd-be4ea84125570b7f0bb4a4f1ad20524d1eb45d5fea7e7deb2dcf0568ac2cd0da.scope - libcontainer container be4ea84125570b7f0bb4a4f1ad20524d1eb45d5fea7e7deb2dcf0568ac2cd0da. Aug 13 01:45:18.531031 systemd[1]: Started cri-containerd-77300aca2d8184fb9e0635eb45faa4d319ff1a976e375c515c55461520c409cc.scope - libcontainer container 77300aca2d8184fb9e0635eb45faa4d319ff1a976e375c515c55461520c409cc. 
Aug 13 01:45:18.564069 containerd[1559]: time="2025-08-13T01:45:18.563942416Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-172-232-7-133,Uid:cf9151aed87d63c6b29b86efe4d2bcc0,Namespace:kube-system,Attempt:0,} returns sandbox id \"867993db78ab5d0bf62647b61d34d9ed3d0f19de24f8cc71aa517bf63c71303f\"" Aug 13 01:45:18.566050 kubelet[2355]: E0813 01:45:18.566004 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:45:18.569254 containerd[1559]: time="2025-08-13T01:45:18.569062938Z" level=info msg="CreateContainer within sandbox \"867993db78ab5d0bf62647b61d34d9ed3d0f19de24f8cc71aa517bf63c71303f\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Aug 13 01:45:18.574151 kubelet[2355]: E0813 01:45:18.574085 2355 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.232.7.133:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/172-232-7-133?timeout=10s\": dial tcp 172.232.7.133:6443: connect: connection refused" interval="800ms" Aug 13 01:45:18.585004 containerd[1559]: time="2025-08-13T01:45:18.584985566Z" level=info msg="Container 54f7b95c3df4933177d9333b747dad3f6096e144d14babc0ce2132e2c29c42df: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:45:18.590602 containerd[1559]: time="2025-08-13T01:45:18.590566769Z" level=info msg="CreateContainer within sandbox \"867993db78ab5d0bf62647b61d34d9ed3d0f19de24f8cc71aa517bf63c71303f\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"54f7b95c3df4933177d9333b747dad3f6096e144d14babc0ce2132e2c29c42df\"" Aug 13 01:45:18.592385 containerd[1559]: time="2025-08-13T01:45:18.592364850Z" level=info msg="StartContainer for \"54f7b95c3df4933177d9333b747dad3f6096e144d14babc0ce2132e2c29c42df\"" Aug 13 01:45:18.595077 containerd[1559]: time="2025-08-13T01:45:18.595055981Z" level=info msg="connecting to shim 54f7b95c3df4933177d9333b747dad3f6096e144d14babc0ce2132e2c29c42df" address="unix:///run/containerd/s/522bef3a05ac26fe9534ee0564a4577527b3209a83f5e361f02ab8f13ee14733" protocol=ttrpc version=3 Aug 13 01:45:18.606011 containerd[1559]: time="2025-08-13T01:45:18.605990557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-172-232-7-133,Uid:5f9ea1c169ca17f70f2b596c13773f1c,Namespace:kube-system,Attempt:0,} returns sandbox id \"be4ea84125570b7f0bb4a4f1ad20524d1eb45d5fea7e7deb2dcf0568ac2cd0da\"" Aug 13 01:45:18.609006 kubelet[2355]: E0813 01:45:18.608967 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:45:18.611453 containerd[1559]: time="2025-08-13T01:45:18.611288829Z" level=info msg="CreateContainer within sandbox \"be4ea84125570b7f0bb4a4f1ad20524d1eb45d5fea7e7deb2dcf0568ac2cd0da\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Aug 13 01:45:18.627719 containerd[1559]: time="2025-08-13T01:45:18.627661187Z" level=info msg="Container d81069ab95c9b72874dd3230d9d6fe82c86726f1d6cd87ae5923cd102861244e: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:45:18.635125 containerd[1559]: time="2025-08-13T01:45:18.635046191Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-172-232-7-133,Uid:60eac030dc4fd4fe6fb9e0753aaec45f,Namespace:kube-system,Attempt:0,} returns sandbox id 
\"77300aca2d8184fb9e0635eb45faa4d319ff1a976e375c515c55461520c409cc\"" Aug 13 01:45:18.635988 kubelet[2355]: E0813 01:45:18.635962 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:45:18.637162 containerd[1559]: time="2025-08-13T01:45:18.637127112Z" level=info msg="CreateContainer within sandbox \"be4ea84125570b7f0bb4a4f1ad20524d1eb45d5fea7e7deb2dcf0568ac2cd0da\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"d81069ab95c9b72874dd3230d9d6fe82c86726f1d6cd87ae5923cd102861244e\"" Aug 13 01:45:18.638678 containerd[1559]: time="2025-08-13T01:45:18.637472952Z" level=info msg="StartContainer for \"d81069ab95c9b72874dd3230d9d6fe82c86726f1d6cd87ae5923cd102861244e\"" Aug 13 01:45:18.638678 containerd[1559]: time="2025-08-13T01:45:18.637994903Z" level=info msg="CreateContainer within sandbox \"77300aca2d8184fb9e0635eb45faa4d319ff1a976e375c515c55461520c409cc\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Aug 13 01:45:18.638678 containerd[1559]: time="2025-08-13T01:45:18.638287103Z" level=info msg="connecting to shim d81069ab95c9b72874dd3230d9d6fe82c86726f1d6cd87ae5923cd102861244e" address="unix:///run/containerd/s/c178975aabbe13bd80e84d98fcaafc5c39c09f584c69623a1d257263eb761a60" protocol=ttrpc version=3 Aug 13 01:45:18.640184 systemd[1]: Started cri-containerd-54f7b95c3df4933177d9333b747dad3f6096e144d14babc0ce2132e2c29c42df.scope - libcontainer container 54f7b95c3df4933177d9333b747dad3f6096e144d14babc0ce2132e2c29c42df. Aug 13 01:45:18.648331 containerd[1559]: time="2025-08-13T01:45:18.648281528Z" level=info msg="Container 072f7d2d45b3804aec0ce1bfc6db7bb8c0cf5ff6a3a4eb92f172d274c987b1c4: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:45:18.671049 containerd[1559]: time="2025-08-13T01:45:18.670510139Z" level=info msg="CreateContainer within sandbox \"77300aca2d8184fb9e0635eb45faa4d319ff1a976e375c515c55461520c409cc\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"072f7d2d45b3804aec0ce1bfc6db7bb8c0cf5ff6a3a4eb92f172d274c987b1c4\"" Aug 13 01:45:18.672019 containerd[1559]: time="2025-08-13T01:45:18.671969590Z" level=info msg="StartContainer for \"072f7d2d45b3804aec0ce1bfc6db7bb8c0cf5ff6a3a4eb92f172d274c987b1c4\"" Aug 13 01:45:18.673194 containerd[1559]: time="2025-08-13T01:45:18.673163090Z" level=info msg="connecting to shim 072f7d2d45b3804aec0ce1bfc6db7bb8c0cf5ff6a3a4eb92f172d274c987b1c4" address="unix:///run/containerd/s/385a6ed53c8de413ad5356e4869a38641902f19aadfceff7706fdfd079ef8b06" protocol=ttrpc version=3 Aug 13 01:45:18.675151 systemd[1]: Started cri-containerd-d81069ab95c9b72874dd3230d9d6fe82c86726f1d6cd87ae5923cd102861244e.scope - libcontainer container d81069ab95c9b72874dd3230d9d6fe82c86726f1d6cd87ae5923cd102861244e. Aug 13 01:45:18.702086 systemd[1]: Started cri-containerd-072f7d2d45b3804aec0ce1bfc6db7bb8c0cf5ff6a3a4eb92f172d274c987b1c4.scope - libcontainer container 072f7d2d45b3804aec0ce1bfc6db7bb8c0cf5ff6a3a4eb92f172d274c987b1c4. 
Aug 13 01:45:18.726745 containerd[1559]: time="2025-08-13T01:45:18.726701057Z" level=info msg="StartContainer for \"54f7b95c3df4933177d9333b747dad3f6096e144d14babc0ce2132e2c29c42df\" returns successfully" Aug 13 01:45:18.743250 kubelet[2355]: I0813 01:45:18.743033 2355 kubelet_node_status.go:75] "Attempting to register node" node="172-232-7-133" Aug 13 01:45:18.744954 kubelet[2355]: E0813 01:45:18.744914 2355 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.232.7.133:6443/api/v1/nodes\": dial tcp 172.232.7.133:6443: connect: connection refused" node="172-232-7-133" Aug 13 01:45:18.784714 containerd[1559]: time="2025-08-13T01:45:18.783505955Z" level=info msg="StartContainer for \"d81069ab95c9b72874dd3230d9d6fe82c86726f1d6cd87ae5923cd102861244e\" returns successfully" Aug 13 01:45:18.821354 containerd[1559]: time="2025-08-13T01:45:18.821314204Z" level=info msg="StartContainer for \"072f7d2d45b3804aec0ce1bfc6db7bb8c0cf5ff6a3a4eb92f172d274c987b1c4\" returns successfully" Aug 13 01:45:19.004709 kubelet[2355]: E0813 01:45:19.004362 2355 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-7-133\" not found" node="172-232-7-133" Aug 13 01:45:19.005404 kubelet[2355]: E0813 01:45:19.005390 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:45:19.007241 kubelet[2355]: E0813 01:45:19.007057 2355 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-7-133\" not found" node="172-232-7-133" Aug 13 01:45:19.007241 kubelet[2355]: E0813 01:45:19.007142 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:45:19.010563 kubelet[2355]: E0813 01:45:19.010549 2355 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-7-133\" not found" node="172-232-7-133" Aug 13 01:45:19.010825 kubelet[2355]: E0813 01:45:19.010812 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:45:19.547373 kubelet[2355]: I0813 01:45:19.547194 2355 kubelet_node_status.go:75] "Attempting to register node" node="172-232-7-133" Aug 13 01:45:19.947480 kubelet[2355]: I0813 01:45:19.947218 2355 apiserver.go:52] "Watching apiserver" Aug 13 01:45:19.977105 kubelet[2355]: E0813 01:45:19.977078 2355 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"172-232-7-133\" not found" node="172-232-7-133" Aug 13 01:45:20.013412 kubelet[2355]: E0813 01:45:20.013380 2355 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"172-232-7-133\" not found" node="172-232-7-133" Aug 13 01:45:20.013908 kubelet[2355]: E0813 01:45:20.013485 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:45:20.013908 kubelet[2355]: E0813 01:45:20.013666 2355 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node 
\"172-232-7-133\" not found" node="172-232-7-133" Aug 13 01:45:20.013908 kubelet[2355]: E0813 01:45:20.013760 2355 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:45:20.068662 kubelet[2355]: I0813 01:45:20.068616 2355 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 01:45:20.085908 kubelet[2355]: I0813 01:45:20.085885 2355 kubelet_node_status.go:78] "Successfully registered node" node="172-232-7-133" Aug 13 01:45:20.085908 kubelet[2355]: E0813 01:45:20.085905 2355 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"172-232-7-133\": node \"172-232-7-133\" not found" Aug 13 01:45:20.171150 kubelet[2355]: I0813 01:45:20.171116 2355 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-232-7-133" Aug 13 01:45:20.175842 kubelet[2355]: E0813 01:45:20.175703 2355 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-172-232-7-133\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-172-232-7-133" Aug 13 01:45:20.175842 kubelet[2355]: I0813 01:45:20.175721 2355 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-232-7-133" Aug 13 01:45:20.177183 kubelet[2355]: E0813 01:45:20.177168 2355 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-172-232-7-133\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-172-232-7-133" Aug 13 01:45:20.177338 kubelet[2355]: I0813 01:45:20.177219 2355 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-232-7-133" Aug 13 01:45:20.178478 kubelet[2355]: E0813 01:45:20.178453 2355 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-172-232-7-133\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-172-232-7-133" Aug 13 01:45:22.120988 systemd[1]: Reload requested from client PID 2624 ('systemctl') (unit session-7.scope)... Aug 13 01:45:22.121006 systemd[1]: Reloading... Aug 13 01:45:22.220900 zram_generator::config[2668]: No configuration found. Aug 13 01:45:22.312067 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Aug 13 01:45:22.431908 systemd[1]: Reloading finished in 310 ms. Aug 13 01:45:22.452198 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:45:22.471819 systemd[1]: kubelet.service: Deactivated successfully. Aug 13 01:45:22.472122 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Aug 13 01:45:22.472176 systemd[1]: kubelet.service: Consumed 639ms CPU time, 132M memory peak. Aug 13 01:45:22.473998 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Aug 13 01:45:22.654079 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Aug 13 01:45:22.664297 (kubelet)[2718]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Aug 13 01:45:22.726924 kubelet[2718]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:45:22.727245 kubelet[2718]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Aug 13 01:45:22.727300 kubelet[2718]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Aug 13 01:45:22.727428 kubelet[2718]: I0813 01:45:22.727397 2718 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Aug 13 01:45:22.734043 kubelet[2718]: I0813 01:45:22.734021 2718 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Aug 13 01:45:22.734102 kubelet[2718]: I0813 01:45:22.734093 2718 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Aug 13 01:45:22.734367 kubelet[2718]: I0813 01:45:22.734354 2718 server.go:954] "Client rotation is on, will bootstrap in background" Aug 13 01:45:22.736106 kubelet[2718]: I0813 01:45:22.736092 2718 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Aug 13 01:45:22.742121 kubelet[2718]: I0813 01:45:22.738903 2718 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Aug 13 01:45:22.748190 kubelet[2718]: I0813 01:45:22.748103 2718 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Aug 13 01:45:22.752110 kubelet[2718]: I0813 01:45:22.752087 2718 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Aug 13 01:45:22.752349 kubelet[2718]: I0813 01:45:22.752311 2718 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Aug 13 01:45:22.752504 kubelet[2718]: I0813 01:45:22.752337 2718 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"172-232-7-133","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Aug 13 01:45:22.752504 kubelet[2718]: I0813 01:45:22.752497 2718 topology_manager.go:138] "Creating topology manager with none policy" Aug 13 01:45:22.752504 kubelet[2718]: I0813 01:45:22.752506 2718 container_manager_linux.go:304] "Creating device plugin manager" Aug 13 01:45:22.752665 kubelet[2718]: I0813 01:45:22.752555 2718 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:45:22.752729 kubelet[2718]: I0813 01:45:22.752708 2718 kubelet.go:446] "Attempting to sync node with API server" Aug 13 01:45:22.752761 kubelet[2718]: I0813 01:45:22.752734 2718 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Aug 13 01:45:22.755887 kubelet[2718]: I0813 01:45:22.754906 2718 kubelet.go:352] "Adding apiserver pod source" Aug 13 01:45:22.755887 kubelet[2718]: I0813 01:45:22.754938 2718 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Aug 13 01:45:22.759983 kubelet[2718]: I0813 01:45:22.759945 2718 apiserver.go:52] "Watching apiserver" Aug 13 01:45:22.760098 kubelet[2718]: I0813 01:45:22.760084 2718 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.0.4" apiVersion="v1" Aug 13 01:45:22.760539 kubelet[2718]: I0813 01:45:22.760508 2718 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Aug 13 01:45:22.761076 kubelet[2718]: I0813 01:45:22.761065 2718 watchdog_linux.go:99] "Systemd watchdog is not enabled" Aug 13 01:45:22.761201 kubelet[2718]: I0813 01:45:22.761170 2718 server.go:1287] "Started kubelet" Aug 13 01:45:22.763740 kubelet[2718]: I0813 01:45:22.763729 2718 fs_resource_analyzer.go:67] "Starting FS 
ResourceAnalyzer" Aug 13 01:45:22.767932 kubelet[2718]: E0813 01:45:22.767623 2718 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Aug 13 01:45:22.771151 kubelet[2718]: I0813 01:45:22.771138 2718 volume_manager.go:297] "Starting Kubelet Volume Manager" Aug 13 01:45:22.771415 kubelet[2718]: I0813 01:45:22.771382 2718 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Aug 13 01:45:22.773399 kubelet[2718]: I0813 01:45:22.773372 2718 server.go:479] "Adding debug handlers to kubelet server" Aug 13 01:45:22.773496 kubelet[2718]: I0813 01:45:22.773483 2718 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Aug 13 01:45:22.773631 kubelet[2718]: I0813 01:45:22.773620 2718 reconciler.go:26] "Reconciler: start to sync state" Aug 13 01:45:22.774265 kubelet[2718]: I0813 01:45:22.774222 2718 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Aug 13 01:45:22.774422 kubelet[2718]: I0813 01:45:22.774400 2718 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Aug 13 01:45:22.774573 kubelet[2718]: I0813 01:45:22.774550 2718 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Aug 13 01:45:22.775665 kubelet[2718]: I0813 01:45:22.775646 2718 factory.go:221] Registration of the systemd container factory successfully Aug 13 01:45:22.775747 kubelet[2718]: I0813 01:45:22.775720 2718 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Aug 13 01:45:22.776388 kubelet[2718]: I0813 01:45:22.776342 2718 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Aug 13 01:45:22.778337 kubelet[2718]: I0813 01:45:22.778322 2718 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Aug 13 01:45:22.778463 kubelet[2718]: I0813 01:45:22.778445 2718 status_manager.go:227] "Starting to sync pod status with apiserver" Aug 13 01:45:22.778676 kubelet[2718]: I0813 01:45:22.778564 2718 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
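"Starting to serve the podresources API" above refers to the kubelet's gRPC endpoint at unix:/var/lib/kubelet/pod-resources/kubelet.sock. A hedged sketch that queries it with the published stubs from the k8s.io/kubelet module to see which pods and containers the kubelet is tracking:

package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	podresourcesapi "k8s.io/kubelet/pkg/apis/podresources/v1"
)

func main() {
	// Socket path taken from the kubelet log line above.
	conn, err := grpc.Dial("unix:///var/lib/kubelet/pod-resources/kubelet.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	client := podresourcesapi.NewPodResourcesListerClient(conn)
	resp, err := client.List(ctx, &podresourcesapi.ListPodResourcesRequest{})
	if err != nil {
		log.Fatal(err)
	}
	for _, p := range resp.PodResources {
		fmt.Printf("%s/%s containers=%d\n", p.Namespace, p.Name, len(p.Containers))
	}
}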
Aug 13 01:45:22.778676 kubelet[2718]: I0813 01:45:22.778603 2718 kubelet.go:2382] "Starting kubelet main sync loop" Aug 13 01:45:22.778753 kubelet[2718]: I0813 01:45:22.778744 2718 factory.go:221] Registration of the containerd container factory successfully Aug 13 01:45:22.779156 kubelet[2718]: E0813 01:45:22.778784 2718 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Aug 13 01:45:22.828074 kubelet[2718]: I0813 01:45:22.828031 2718 cpu_manager.go:221] "Starting CPU manager" policy="none" Aug 13 01:45:22.828074 kubelet[2718]: I0813 01:45:22.828047 2718 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Aug 13 01:45:22.828074 kubelet[2718]: I0813 01:45:22.828063 2718 state_mem.go:36] "Initialized new in-memory state store" Aug 13 01:45:22.828215 kubelet[2718]: I0813 01:45:22.828178 2718 state_mem.go:88] "Updated default CPUSet" cpuSet="" Aug 13 01:45:22.828283 kubelet[2718]: I0813 01:45:22.828193 2718 state_mem.go:96] "Updated CPUSet assignments" assignments={} Aug 13 01:45:22.828283 kubelet[2718]: I0813 01:45:22.828283 2718 policy_none.go:49] "None policy: Start" Aug 13 01:45:22.828350 kubelet[2718]: I0813 01:45:22.828339 2718 memory_manager.go:186] "Starting memorymanager" policy="None" Aug 13 01:45:22.828350 kubelet[2718]: I0813 01:45:22.828350 2718 state_mem.go:35] "Initializing new in-memory state store" Aug 13 01:45:22.828457 kubelet[2718]: I0813 01:45:22.828440 2718 state_mem.go:75] "Updated machine memory state" Aug 13 01:45:22.833138 kubelet[2718]: I0813 01:45:22.833113 2718 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Aug 13 01:45:22.833275 kubelet[2718]: I0813 01:45:22.833252 2718 eviction_manager.go:189] "Eviction manager: starting control loop" Aug 13 01:45:22.833302 kubelet[2718]: I0813 01:45:22.833269 2718 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Aug 13 01:45:22.836452 kubelet[2718]: I0813 01:45:22.835996 2718 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Aug 13 01:45:22.837812 kubelet[2718]: E0813 01:45:22.837788 2718 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Aug 13 01:45:22.880345 kubelet[2718]: I0813 01:45:22.880311 2718 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-172-232-7-133" Aug 13 01:45:22.880466 kubelet[2718]: I0813 01:45:22.880446 2718 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-172-232-7-133" Aug 13 01:45:22.880690 kubelet[2718]: I0813 01:45:22.880629 2718 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-172-232-7-133" Aug 13 01:45:22.943752 kubelet[2718]: I0813 01:45:22.943724 2718 kubelet_node_status.go:75] "Attempting to register node" node="172-232-7-133" Aug 13 01:45:22.951489 kubelet[2718]: I0813 01:45:22.951460 2718 kubelet_node_status.go:124] "Node was previously registered" node="172-232-7-133" Aug 13 01:45:22.951794 kubelet[2718]: I0813 01:45:22.951515 2718 kubelet_node_status.go:78] "Successfully registered node" node="172-232-7-133" Aug 13 01:45:22.974506 kubelet[2718]: I0813 01:45:22.974483 2718 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Aug 13 01:45:22.975539 kubelet[2718]: I0813 01:45:22.975516 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/5f9ea1c169ca17f70f2b596c13773f1c-k8s-certs\") pod \"kube-controller-manager-172-232-7-133\" (UID: \"5f9ea1c169ca17f70f2b596c13773f1c\") " pod="kube-system/kube-controller-manager-172-232-7-133" Aug 13 01:45:22.975634 kubelet[2718]: I0813 01:45:22.975543 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/5f9ea1c169ca17f70f2b596c13773f1c-kubeconfig\") pod \"kube-controller-manager-172-232-7-133\" (UID: \"5f9ea1c169ca17f70f2b596c13773f1c\") " pod="kube-system/kube-controller-manager-172-232-7-133" Aug 13 01:45:22.975634 kubelet[2718]: I0813 01:45:22.975562 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/cf9151aed87d63c6b29b86efe4d2bcc0-ca-certs\") pod \"kube-apiserver-172-232-7-133\" (UID: \"cf9151aed87d63c6b29b86efe4d2bcc0\") " pod="kube-system/kube-apiserver-172-232-7-133" Aug 13 01:45:22.975634 kubelet[2718]: I0813 01:45:22.975577 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/cf9151aed87d63c6b29b86efe4d2bcc0-usr-share-ca-certificates\") pod \"kube-apiserver-172-232-7-133\" (UID: \"cf9151aed87d63c6b29b86efe4d2bcc0\") " pod="kube-system/kube-apiserver-172-232-7-133" Aug 13 01:45:22.975634 kubelet[2718]: I0813 01:45:22.975597 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/5f9ea1c169ca17f70f2b596c13773f1c-ca-certs\") pod \"kube-controller-manager-172-232-7-133\" (UID: \"5f9ea1c169ca17f70f2b596c13773f1c\") " pod="kube-system/kube-controller-manager-172-232-7-133" Aug 13 01:45:22.975634 kubelet[2718]: I0813 01:45:22.975611 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/5f9ea1c169ca17f70f2b596c13773f1c-usr-share-ca-certificates\") pod \"kube-controller-manager-172-232-7-133\" (UID: 
\"5f9ea1c169ca17f70f2b596c13773f1c\") " pod="kube-system/kube-controller-manager-172-232-7-133" Aug 13 01:45:22.975841 kubelet[2718]: I0813 01:45:22.975626 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/60eac030dc4fd4fe6fb9e0753aaec45f-kubeconfig\") pod \"kube-scheduler-172-232-7-133\" (UID: \"60eac030dc4fd4fe6fb9e0753aaec45f\") " pod="kube-system/kube-scheduler-172-232-7-133" Aug 13 01:45:22.975841 kubelet[2718]: I0813 01:45:22.975640 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/cf9151aed87d63c6b29b86efe4d2bcc0-k8s-certs\") pod \"kube-apiserver-172-232-7-133\" (UID: \"cf9151aed87d63c6b29b86efe4d2bcc0\") " pod="kube-system/kube-apiserver-172-232-7-133" Aug 13 01:45:22.975841 kubelet[2718]: I0813 01:45:22.975659 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/5f9ea1c169ca17f70f2b596c13773f1c-flexvolume-dir\") pod \"kube-controller-manager-172-232-7-133\" (UID: \"5f9ea1c169ca17f70f2b596c13773f1c\") " pod="kube-system/kube-controller-manager-172-232-7-133" Aug 13 01:45:23.190217 kubelet[2718]: E0813 01:45:23.188697 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:45:23.190217 kubelet[2718]: E0813 01:45:23.188779 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:45:23.190217 kubelet[2718]: E0813 01:45:23.188921 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:45:23.268477 kubelet[2718]: I0813 01:45:23.268431 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-172-232-7-133" podStartSLOduration=1.268416046 podStartE2EDuration="1.268416046s" podCreationTimestamp="2025-08-13 01:45:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:45:23.257127901 +0000 UTC m=+0.587869675" watchObservedRunningTime="2025-08-13 01:45:23.268416046 +0000 UTC m=+0.599157810" Aug 13 01:45:23.277228 kubelet[2718]: I0813 01:45:23.277161 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-172-232-7-133" podStartSLOduration=1.277142441 podStartE2EDuration="1.277142441s" podCreationTimestamp="2025-08-13 01:45:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:45:23.269784447 +0000 UTC m=+0.600526211" watchObservedRunningTime="2025-08-13 01:45:23.277142441 +0000 UTC m=+0.607884205" Aug 13 01:45:23.278360 kubelet[2718]: I0813 01:45:23.278313 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-172-232-7-133" podStartSLOduration=1.2783067209999999 podStartE2EDuration="1.278306721s" podCreationTimestamp="2025-08-13 01:45:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:45:23.278220651 +0000 UTC m=+0.608962415" watchObservedRunningTime="2025-08-13 01:45:23.278306721 +0000 UTC m=+0.609048485" Aug 13 01:45:23.817877 kubelet[2718]: E0813 01:45:23.817137 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:45:23.819319 kubelet[2718]: E0813 01:45:23.819029 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:45:23.819319 kubelet[2718]: E0813 01:45:23.819281 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:45:24.818556 kubelet[2718]: E0813 01:45:24.818206 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:45:24.818556 kubelet[2718]: E0813 01:45:24.818414 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:45:26.916982 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Aug 13 01:45:27.178099 kubelet[2718]: I0813 01:45:27.178004 2718 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Aug 13 01:45:27.178749 containerd[1559]: time="2025-08-13T01:45:27.178720850Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Aug 13 01:45:27.179125 kubelet[2718]: I0813 01:45:27.178924 2718 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Aug 13 01:45:27.305478 kubelet[2718]: E0813 01:45:27.305454 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:45:27.858373 systemd[1]: Created slice kubepods-besteffort-pod7966db9b_8cb6_4de8_8a77_84490ff33845.slice - libcontainer container kubepods-besteffort-pod7966db9b_8cb6_4de8_8a77_84490ff33845.slice. 
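The recurring "Nameserver limits exceeded" warnings come from the kubelet capping resolver configuration at three nameservers (the glibc limit); this node's resolv.conf evidently lists more than three, so only 172.232.0.13, 172.232.0.22 and 172.232.0.9 are applied and the rest are dropped. A small sketch that reproduces the check against a resolv.conf file:

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	// Assumption: inspecting the node's own resolver config; the kubelet
	// applies the same three-nameserver cap when building pod resolv.conf.
	f, err := os.Open("/etc/resolv.conf")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}

	const limit = 3 // glibc MAXNS, the limit the kubelet warning refers to
	if len(servers) > limit {
		fmt.Printf("limit exceeded: %d nameservers, only %v would be applied\n",
			len(servers), servers[:limit])
	} else {
		fmt.Println("within limit:", servers)
	}
}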
Aug 13 01:45:27.911257 kubelet[2718]: I0813 01:45:27.911228 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/7966db9b-8cb6-4de8-8a77-84490ff33845-kube-proxy\") pod \"kube-proxy-fw2dv\" (UID: \"7966db9b-8cb6-4de8-8a77-84490ff33845\") " pod="kube-system/kube-proxy-fw2dv" Aug 13 01:45:27.911257 kubelet[2718]: I0813 01:45:27.911260 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7966db9b-8cb6-4de8-8a77-84490ff33845-xtables-lock\") pod \"kube-proxy-fw2dv\" (UID: \"7966db9b-8cb6-4de8-8a77-84490ff33845\") " pod="kube-system/kube-proxy-fw2dv" Aug 13 01:45:27.911424 kubelet[2718]: I0813 01:45:27.911279 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7966db9b-8cb6-4de8-8a77-84490ff33845-lib-modules\") pod \"kube-proxy-fw2dv\" (UID: \"7966db9b-8cb6-4de8-8a77-84490ff33845\") " pod="kube-system/kube-proxy-fw2dv" Aug 13 01:45:27.911424 kubelet[2718]: I0813 01:45:27.911293 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7c9jv\" (UniqueName: \"kubernetes.io/projected/7966db9b-8cb6-4de8-8a77-84490ff33845-kube-api-access-7c9jv\") pod \"kube-proxy-fw2dv\" (UID: \"7966db9b-8cb6-4de8-8a77-84490ff33845\") " pod="kube-system/kube-proxy-fw2dv" Aug 13 01:45:28.166527 kubelet[2718]: E0813 01:45:28.166046 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:45:28.166942 containerd[1559]: time="2025-08-13T01:45:28.166915904Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fw2dv,Uid:7966db9b-8cb6-4de8-8a77-84490ff33845,Namespace:kube-system,Attempt:0,}" Aug 13 01:45:28.191234 containerd[1559]: time="2025-08-13T01:45:28.191197976Z" level=info msg="connecting to shim 8d12ca4d3be570d9809489d13968c51eab5c04f97c3369b3f24b62fdfc6952bc" address="unix:///run/containerd/s/e2dcd37cf9ca43898666019b706d261c23108b7d5da0ac565f10bb0ca329c9cb" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:45:28.219976 systemd[1]: Started cri-containerd-8d12ca4d3be570d9809489d13968c51eab5c04f97c3369b3f24b62fdfc6952bc.scope - libcontainer container 8d12ca4d3be570d9809489d13968c51eab5c04f97c3369b3f24b62fdfc6952bc. 
Aug 13 01:45:28.246726 containerd[1559]: time="2025-08-13T01:45:28.246693164Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-fw2dv,Uid:7966db9b-8cb6-4de8-8a77-84490ff33845,Namespace:kube-system,Attempt:0,} returns sandbox id \"8d12ca4d3be570d9809489d13968c51eab5c04f97c3369b3f24b62fdfc6952bc\"" Aug 13 01:45:28.247516 kubelet[2718]: E0813 01:45:28.247486 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:45:28.250512 containerd[1559]: time="2025-08-13T01:45:28.250469155Z" level=info msg="CreateContainer within sandbox \"8d12ca4d3be570d9809489d13968c51eab5c04f97c3369b3f24b62fdfc6952bc\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Aug 13 01:45:28.260025 containerd[1559]: time="2025-08-13T01:45:28.259984150Z" level=info msg="Container df4c05f1cdb65e4117865a2215be1ab997ca1e8c0869e786ed771a63ca4b7cdf: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:45:28.265333 containerd[1559]: time="2025-08-13T01:45:28.265270823Z" level=info msg="CreateContainer within sandbox \"8d12ca4d3be570d9809489d13968c51eab5c04f97c3369b3f24b62fdfc6952bc\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"df4c05f1cdb65e4117865a2215be1ab997ca1e8c0869e786ed771a63ca4b7cdf\"" Aug 13 01:45:28.265914 containerd[1559]: time="2025-08-13T01:45:28.265741713Z" level=info msg="StartContainer for \"df4c05f1cdb65e4117865a2215be1ab997ca1e8c0869e786ed771a63ca4b7cdf\"" Aug 13 01:45:28.267246 containerd[1559]: time="2025-08-13T01:45:28.267213244Z" level=info msg="connecting to shim df4c05f1cdb65e4117865a2215be1ab997ca1e8c0869e786ed771a63ca4b7cdf" address="unix:///run/containerd/s/e2dcd37cf9ca43898666019b706d261c23108b7d5da0ac565f10bb0ca329c9cb" protocol=ttrpc version=3 Aug 13 01:45:28.282975 systemd[1]: Started cri-containerd-df4c05f1cdb65e4117865a2215be1ab997ca1e8c0869e786ed771a63ca4b7cdf.scope - libcontainer container df4c05f1cdb65e4117865a2215be1ab997ca1e8c0869e786ed771a63ca4b7cdf. Aug 13 01:45:28.343149 containerd[1559]: time="2025-08-13T01:45:28.343088782Z" level=info msg="StartContainer for \"df4c05f1cdb65e4117865a2215be1ab997ca1e8c0869e786ed771a63ca4b7cdf\" returns successfully" Aug 13 01:45:28.348221 systemd[1]: Created slice kubepods-besteffort-podd05cedbe_da49_4dae_84df_e86dd09b0e02.slice - libcontainer container kubepods-besteffort-podd05cedbe_da49_4dae_84df_e86dd09b0e02.slice. 
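The containerd lines above trace the usual CRI sequence for bringing up kube-proxy: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox, StartContainer runs the result, and containerd wires each step to a shim over a ttrpc unix socket. A rough Go sketch of the same three calls against a CRI endpoint, assuming the k8s.io/cri-api v1 client, the conventional /run/containerd/containerd.sock socket, and illustrative pod/image names that are not taken from this node:

package main

import (
	"context"
	"fmt"
	"log"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// Dial the CRI runtime socket (conventional containerd default path).
	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()
	rt := runtimeapi.NewRuntimeServiceClient(conn)
	ctx := context.Background()

	// Illustrative sandbox metadata; real requests carry much more config.
	sandboxCfg := &runtimeapi.PodSandboxConfig{
		Metadata: &runtimeapi.PodSandboxMetadata{
			Name: "kube-proxy-example", Namespace: "kube-system", Uid: "demo-uid",
		},
	}

	// 1. RunPodSandbox -> sandbox id (the 8d12ca4d... value in the log).
	sb, err := rt.RunPodSandbox(ctx, &runtimeapi.RunPodSandboxRequest{Config: sandboxCfg})
	if err != nil {
		log.Fatal(err)
	}

	// 2. CreateContainer within that sandbox.
	cc, err := rt.CreateContainer(ctx, &runtimeapi.CreateContainerRequest{
		PodSandboxId: sb.PodSandboxId,
		Config: &runtimeapi.ContainerConfig{
			Metadata: &runtimeapi.ContainerMetadata{Name: "kube-proxy"},
			Image:    &runtimeapi.ImageSpec{Image: "registry.k8s.io/kube-proxy:v1.33.0"}, // illustrative tag
		},
		SandboxConfig: sandboxCfg,
	})
	if err != nil {
		log.Fatal(err)
	}

	// 3. StartContainer, mirroring "StartContainer ... returns successfully".
	if _, err := rt.StartContainer(ctx, &runtimeapi.StartContainerRequest{ContainerId: cc.ContainerId}); err != nil {
		log.Fatal(err)
	}
	fmt.Println("started", cc.ContainerId, "in sandbox", sb.PodSandboxId)
}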
Aug 13 01:45:28.414722 kubelet[2718]: I0813 01:45:28.414683 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d05cedbe-da49-4dae-84df-e86dd09b0e02-var-lib-calico\") pod \"tigera-operator-747864d56d-xmx78\" (UID: \"d05cedbe-da49-4dae-84df-e86dd09b0e02\") " pod="tigera-operator/tigera-operator-747864d56d-xmx78" Aug 13 01:45:28.414845 kubelet[2718]: I0813 01:45:28.414725 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v7zvm\" (UniqueName: \"kubernetes.io/projected/d05cedbe-da49-4dae-84df-e86dd09b0e02-kube-api-access-v7zvm\") pod \"tigera-operator-747864d56d-xmx78\" (UID: \"d05cedbe-da49-4dae-84df-e86dd09b0e02\") " pod="tigera-operator/tigera-operator-747864d56d-xmx78" Aug 13 01:45:28.660142 containerd[1559]: time="2025-08-13T01:45:28.660107820Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-xmx78,Uid:d05cedbe-da49-4dae-84df-e86dd09b0e02,Namespace:tigera-operator,Attempt:0,}" Aug 13 01:45:28.675716 containerd[1559]: time="2025-08-13T01:45:28.675659378Z" level=info msg="connecting to shim 2ca4c96c3f2bc5d7d11a98330829be80628ad125b77438e59f926cbe512d5b5a" address="unix:///run/containerd/s/c225142e8a9b155b063807414c57c60dc7ef7c6f7c54ff09ad61b0bf1c2d97db" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:45:28.699011 systemd[1]: Started cri-containerd-2ca4c96c3f2bc5d7d11a98330829be80628ad125b77438e59f926cbe512d5b5a.scope - libcontainer container 2ca4c96c3f2bc5d7d11a98330829be80628ad125b77438e59f926cbe512d5b5a. Aug 13 01:45:28.751567 containerd[1559]: time="2025-08-13T01:45:28.751532816Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-747864d56d-xmx78,Uid:d05cedbe-da49-4dae-84df-e86dd09b0e02,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"2ca4c96c3f2bc5d7d11a98330829be80628ad125b77438e59f926cbe512d5b5a\"" Aug 13 01:45:28.754174 containerd[1559]: time="2025-08-13T01:45:28.754129987Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\"" Aug 13 01:45:28.827184 kubelet[2718]: E0813 01:45:28.827116 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:45:29.456570 kubelet[2718]: E0813 01:45:29.456382 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:45:29.473947 kubelet[2718]: I0813 01:45:29.473629 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-fw2dv" podStartSLOduration=2.473404467 podStartE2EDuration="2.473404467s" podCreationTimestamp="2025-08-13 01:45:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-08-13 01:45:28.836353358 +0000 UTC m=+6.167095132" watchObservedRunningTime="2025-08-13 01:45:29.473404467 +0000 UTC m=+6.804146231" Aug 13 01:45:29.519930 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3921434802.mount: Deactivated successfully. 
Aug 13 01:45:29.829613 kubelet[2718]: E0813 01:45:29.829451 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:45:29.944094 containerd[1559]: time="2025-08-13T01:45:29.944053352Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:29.944939 containerd[1559]: time="2025-08-13T01:45:29.944713802Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.3: active requests=0, bytes read=25056543" Aug 13 01:45:29.945821 containerd[1559]: time="2025-08-13T01:45:29.945616302Z" level=info msg="ImageCreate event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:29.947371 containerd[1559]: time="2025-08-13T01:45:29.947351223Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:29.947994 containerd[1559]: time="2025-08-13T01:45:29.947968414Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.3\" with image id \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\", repo tag \"quay.io/tigera/operator:v1.38.3\", repo digest \"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\", size \"25052538\" in 1.193808607s" Aug 13 01:45:29.948035 containerd[1559]: time="2025-08-13T01:45:29.947997284Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.3\" returns image reference \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 01:45:29.950488 containerd[1559]: time="2025-08-13T01:45:29.950017755Z" level=info msg="CreateContainer within sandbox \"2ca4c96c3f2bc5d7d11a98330829be80628ad125b77438e59f926cbe512d5b5a\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Aug 13 01:45:29.958393 containerd[1559]: time="2025-08-13T01:45:29.958366629Z" level=info msg="Container 791f4e0b9fb5d166424e71cf19fe156e098183010c642bf34a609399458696dc: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:45:29.966579 containerd[1559]: time="2025-08-13T01:45:29.966552283Z" level=info msg="CreateContainer within sandbox \"2ca4c96c3f2bc5d7d11a98330829be80628ad125b77438e59f926cbe512d5b5a\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"791f4e0b9fb5d166424e71cf19fe156e098183010c642bf34a609399458696dc\"" Aug 13 01:45:29.967243 containerd[1559]: time="2025-08-13T01:45:29.967199953Z" level=info msg="StartContainer for \"791f4e0b9fb5d166424e71cf19fe156e098183010c642bf34a609399458696dc\"" Aug 13 01:45:29.968748 containerd[1559]: time="2025-08-13T01:45:29.968724464Z" level=info msg="connecting to shim 791f4e0b9fb5d166424e71cf19fe156e098183010c642bf34a609399458696dc" address="unix:///run/containerd/s/c225142e8a9b155b063807414c57c60dc7ef7c6f7c54ff09ad61b0bf1c2d97db" protocol=ttrpc version=3 Aug 13 01:45:29.991975 systemd[1]: Started cri-containerd-791f4e0b9fb5d166424e71cf19fe156e098183010c642bf34a609399458696dc.scope - libcontainer container 791f4e0b9fb5d166424e71cf19fe156e098183010c642bf34a609399458696dc. 
Aug 13 01:45:30.027744 containerd[1559]: time="2025-08-13T01:45:30.027691525Z" level=info msg="StartContainer for \"791f4e0b9fb5d166424e71cf19fe156e098183010c642bf34a609399458696dc\" returns successfully" Aug 13 01:45:30.844503 kubelet[2718]: I0813 01:45:30.844446 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-747864d56d-xmx78" podStartSLOduration=1.648159452 podStartE2EDuration="2.84441036s" podCreationTimestamp="2025-08-13 01:45:28 +0000 UTC" firstStartedPulling="2025-08-13 01:45:28.752663126 +0000 UTC m=+6.083404890" lastFinishedPulling="2025-08-13 01:45:29.948914034 +0000 UTC m=+7.279655798" observedRunningTime="2025-08-13 01:45:30.844199704 +0000 UTC m=+8.174941478" watchObservedRunningTime="2025-08-13 01:45:30.84441036 +0000 UTC m=+8.175152134" Aug 13 01:45:32.173497 kubelet[2718]: E0813 01:45:32.173079 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:45:32.838418 kubelet[2718]: E0813 01:45:32.838392 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:45:33.839163 kubelet[2718]: E0813 01:45:33.839119 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:45:35.363638 sudo[1815]: pam_unix(sudo:session): session closed for user root Aug 13 01:45:35.414314 sshd[1814]: Connection closed by 147.75.109.163 port 56826 Aug 13 01:45:35.416080 sshd-session[1812]: pam_unix(sshd:session): session closed for user core Aug 13 01:45:35.419191 systemd[1]: sshd@6-172.232.7.133:22-147.75.109.163:56826.service: Deactivated successfully. Aug 13 01:45:35.423155 systemd[1]: session-7.scope: Deactivated successfully. Aug 13 01:45:35.424631 systemd[1]: session-7.scope: Consumed 4.010s CPU time, 225.9M memory peak. Aug 13 01:45:35.426658 systemd-logind[1539]: Session 7 logged out. Waiting for processes to exit. Aug 13 01:45:35.429613 systemd-logind[1539]: Removed session 7. Aug 13 01:45:37.317880 kubelet[2718]: E0813 01:45:37.317759 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:45:38.420452 systemd[1]: Created slice kubepods-besteffort-pod9f2cd7cd_8c0e_44a8_92f9_85cacf6b8773.slice - libcontainer container kubepods-besteffort-pod9f2cd7cd_8c0e_44a8_92f9_85cacf6b8773.slice. 
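The pod_startup_latency_tracker entries above fit a simple relationship: podStartE2EDuration is watchObservedRunningTime minus podCreationTimestamp, and podStartSLOduration subtracts the image-pull window (lastFinishedPulling minus firstStartedPulling) from that. This is why tigera-operator reports 2.844s end-to-end but 1.648s against the SLO, while the static control-plane pods, which pull nothing, report identical values. A small Go check of that arithmetic using the timestamps copied from the tigera-operator entry:

// Verify: E2E = watchObservedRunningTime - podCreationTimestamp,
//         SLO = E2E - (lastFinishedPulling - firstStartedPulling).
package main

import (
	"fmt"
	"time"
)

func mustParse(s string) time.Time {
	t, err := time.Parse("2006-01-02 15:04:05.999999999 -0700 MST", s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	created := mustParse("2025-08-13 01:45:28 +0000 UTC")
	firstPull := mustParse("2025-08-13 01:45:28.752663126 +0000 UTC")
	lastPull := mustParse("2025-08-13 01:45:29.948914034 +0000 UTC")
	watchRunning := mustParse("2025-08-13 01:45:30.84441036 +0000 UTC")

	e2e := watchRunning.Sub(created)     // 2.84441036s, matching podStartE2EDuration
	slo := e2e - lastPull.Sub(firstPull) // 1.648159452s, matching podStartSLOduration
	fmt.Println("E2E:", e2e, "SLO:", slo)
}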
Aug 13 01:45:38.478487 kubelet[2718]: I0813 01:45:38.478416 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/9f2cd7cd-8c0e-44a8-92f9-85cacf6b8773-typha-certs\") pod \"calico-typha-b5b9867b4-p6jwz\" (UID: \"9f2cd7cd-8c0e-44a8-92f9-85cacf6b8773\") " pod="calico-system/calico-typha-b5b9867b4-p6jwz" Aug 13 01:45:38.479241 kubelet[2718]: I0813 01:45:38.478903 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z9m7l\" (UniqueName: \"kubernetes.io/projected/9f2cd7cd-8c0e-44a8-92f9-85cacf6b8773-kube-api-access-z9m7l\") pod \"calico-typha-b5b9867b4-p6jwz\" (UID: \"9f2cd7cd-8c0e-44a8-92f9-85cacf6b8773\") " pod="calico-system/calico-typha-b5b9867b4-p6jwz" Aug 13 01:45:38.479241 kubelet[2718]: I0813 01:45:38.478928 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9f2cd7cd-8c0e-44a8-92f9-85cacf6b8773-tigera-ca-bundle\") pod \"calico-typha-b5b9867b4-p6jwz\" (UID: \"9f2cd7cd-8c0e-44a8-92f9-85cacf6b8773\") " pod="calico-system/calico-typha-b5b9867b4-p6jwz" Aug 13 01:45:38.728888 kubelet[2718]: E0813 01:45:38.728632 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:45:38.731369 containerd[1559]: time="2025-08-13T01:45:38.731083094Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b5b9867b4-p6jwz,Uid:9f2cd7cd-8c0e-44a8-92f9-85cacf6b8773,Namespace:calico-system,Attempt:0,}" Aug 13 01:45:38.769148 containerd[1559]: time="2025-08-13T01:45:38.768981951Z" level=info msg="connecting to shim cfa298bfa01c63f2814313bc2ca2902a5a814af99803fdce63b44b336bdbb6f3" address="unix:///run/containerd/s/4d23ad213cf1d944cbff18718adcce8368709f163dc93a6701ece4be78e3813c" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:45:38.779833 systemd[1]: Created slice kubepods-besteffort-pod9adfb11c_9977_45e9_b78f_00f4995e46c5.slice - libcontainer container kubepods-besteffort-pod9adfb11c_9977_45e9_b78f_00f4995e46c5.slice. Aug 13 01:45:38.822598 systemd[1]: Started cri-containerd-cfa298bfa01c63f2814313bc2ca2902a5a814af99803fdce63b44b336bdbb6f3.scope - libcontainer container cfa298bfa01c63f2814313bc2ca2902a5a814af99803fdce63b44b336bdbb6f3. 
Aug 13 01:45:38.885257 containerd[1559]: time="2025-08-13T01:45:38.885204896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-b5b9867b4-p6jwz,Uid:9f2cd7cd-8c0e-44a8-92f9-85cacf6b8773,Namespace:calico-system,Attempt:0,} returns sandbox id \"cfa298bfa01c63f2814313bc2ca2902a5a814af99803fdce63b44b336bdbb6f3\"" Aug 13 01:45:38.886301 kubelet[2718]: E0813 01:45:38.886252 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:45:38.888053 containerd[1559]: time="2025-08-13T01:45:38.887758419Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\"" Aug 13 01:45:38.891349 kubelet[2718]: I0813 01:45:38.891327 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/9adfb11c-9977-45e9-b78f-00f4995e46c5-flexvol-driver-host\") pod \"calico-node-qgskr\" (UID: \"9adfb11c-9977-45e9-b78f-00f4995e46c5\") " pod="calico-system/calico-node-qgskr" Aug 13 01:45:38.891486 kubelet[2718]: I0813 01:45:38.891470 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/9adfb11c-9977-45e9-b78f-00f4995e46c5-policysync\") pod \"calico-node-qgskr\" (UID: \"9adfb11c-9977-45e9-b78f-00f4995e46c5\") " pod="calico-system/calico-node-qgskr" Aug 13 01:45:38.891977 kubelet[2718]: I0813 01:45:38.891712 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9adfb11c-9977-45e9-b78f-00f4995e46c5-tigera-ca-bundle\") pod \"calico-node-qgskr\" (UID: \"9adfb11c-9977-45e9-b78f-00f4995e46c5\") " pod="calico-system/calico-node-qgskr" Aug 13 01:45:38.891977 kubelet[2718]: I0813 01:45:38.891736 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/9adfb11c-9977-45e9-b78f-00f4995e46c5-cni-log-dir\") pod \"calico-node-qgskr\" (UID: \"9adfb11c-9977-45e9-b78f-00f4995e46c5\") " pod="calico-system/calico-node-qgskr" Aug 13 01:45:38.891977 kubelet[2718]: I0813 01:45:38.891750 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/9adfb11c-9977-45e9-b78f-00f4995e46c5-cni-net-dir\") pod \"calico-node-qgskr\" (UID: \"9adfb11c-9977-45e9-b78f-00f4995e46c5\") " pod="calico-system/calico-node-qgskr" Aug 13 01:45:38.891977 kubelet[2718]: I0813 01:45:38.891791 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9adfb11c-9977-45e9-b78f-00f4995e46c5-lib-modules\") pod \"calico-node-qgskr\" (UID: \"9adfb11c-9977-45e9-b78f-00f4995e46c5\") " pod="calico-system/calico-node-qgskr" Aug 13 01:45:38.891977 kubelet[2718]: I0813 01:45:38.891805 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/9adfb11c-9977-45e9-b78f-00f4995e46c5-var-run-calico\") pod \"calico-node-qgskr\" (UID: \"9adfb11c-9977-45e9-b78f-00f4995e46c5\") " pod="calico-system/calico-node-qgskr" Aug 13 01:45:38.892329 kubelet[2718]: I0813 01:45:38.891819 2718 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/9adfb11c-9977-45e9-b78f-00f4995e46c5-node-certs\") pod \"calico-node-qgskr\" (UID: \"9adfb11c-9977-45e9-b78f-00f4995e46c5\") " pod="calico-system/calico-node-qgskr" Aug 13 01:45:38.892329 kubelet[2718]: I0813 01:45:38.891836 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/9adfb11c-9977-45e9-b78f-00f4995e46c5-cni-bin-dir\") pod \"calico-node-qgskr\" (UID: \"9adfb11c-9977-45e9-b78f-00f4995e46c5\") " pod="calico-system/calico-node-qgskr" Aug 13 01:45:38.892329 kubelet[2718]: I0813 01:45:38.891892 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2s4z7\" (UniqueName: \"kubernetes.io/projected/9adfb11c-9977-45e9-b78f-00f4995e46c5-kube-api-access-2s4z7\") pod \"calico-node-qgskr\" (UID: \"9adfb11c-9977-45e9-b78f-00f4995e46c5\") " pod="calico-system/calico-node-qgskr" Aug 13 01:45:38.892329 kubelet[2718]: I0813 01:45:38.891989 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/9adfb11c-9977-45e9-b78f-00f4995e46c5-var-lib-calico\") pod \"calico-node-qgskr\" (UID: \"9adfb11c-9977-45e9-b78f-00f4995e46c5\") " pod="calico-system/calico-node-qgskr" Aug 13 01:45:38.892329 kubelet[2718]: I0813 01:45:38.892022 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9adfb11c-9977-45e9-b78f-00f4995e46c5-xtables-lock\") pod \"calico-node-qgskr\" (UID: \"9adfb11c-9977-45e9-b78f-00f4995e46c5\") " pod="calico-system/calico-node-qgskr" Aug 13 01:45:38.993777 kubelet[2718]: E0813 01:45:38.993324 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:38.993777 kubelet[2718]: W0813 01:45:38.993342 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:38.993777 kubelet[2718]: E0813 01:45:38.993370 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:38.994674 kubelet[2718]: E0813 01:45:38.994661 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:38.994746 kubelet[2718]: W0813 01:45:38.994735 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:38.994826 kubelet[2718]: E0813 01:45:38.994814 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:45:38.995110 kubelet[2718]: E0813 01:45:38.995083 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:38.995168 kubelet[2718]: W0813 01:45:38.995157 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:38.995248 kubelet[2718]: E0813 01:45:38.995237 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:38.995495 kubelet[2718]: E0813 01:45:38.995484 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:38.995563 kubelet[2718]: W0813 01:45:38.995553 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:38.995609 kubelet[2718]: E0813 01:45:38.995599 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:38.997530 kubelet[2718]: E0813 01:45:38.997504 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:38.997530 kubelet[2718]: W0813 01:45:38.997516 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:38.997680 kubelet[2718]: E0813 01:45:38.997583 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:38.997926 kubelet[2718]: E0813 01:45:38.997890 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:38.997926 kubelet[2718]: W0813 01:45:38.997901 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:38.998273 kubelet[2718]: E0813 01:45:38.998261 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:38.999584 kubelet[2718]: W0813 01:45:38.999551 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:38.999584 kubelet[2718]: E0813 01:45:38.999581 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:38.999836 kubelet[2718]: E0813 01:45:38.998667 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:45:38.999976 kubelet[2718]: E0813 01:45:38.999946 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:38.999976 kubelet[2718]: W0813 01:45:38.999964 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:38.999976 kubelet[2718]: E0813 01:45:38.999974 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.000187 kubelet[2718]: E0813 01:45:39.000169 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.000187 kubelet[2718]: W0813 01:45:39.000182 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.000326 kubelet[2718]: E0813 01:45:39.000235 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.000363 kubelet[2718]: E0813 01:45:39.000353 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.000363 kubelet[2718]: W0813 01:45:39.000361 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.001071 kubelet[2718]: E0813 01:45:39.000378 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.001385 kubelet[2718]: E0813 01:45:39.001211 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.001385 kubelet[2718]: W0813 01:45:39.001220 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.001385 kubelet[2718]: E0813 01:45:39.001229 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.008063 kubelet[2718]: E0813 01:45:39.008045 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.008063 kubelet[2718]: W0813 01:45:39.008059 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.008063 kubelet[2718]: E0813 01:45:39.008068 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:45:39.059880 kubelet[2718]: E0813 01:45:39.059329 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbqt2" podUID="9674627f-b072-4139-b18d-fdf07891e1e2" Aug 13 01:45:39.093914 kubelet[2718]: E0813 01:45:39.093885 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.093914 kubelet[2718]: W0813 01:45:39.093907 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.093982 kubelet[2718]: E0813 01:45:39.093921 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.094195 containerd[1559]: time="2025-08-13T01:45:39.094158386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qgskr,Uid:9adfb11c-9977-45e9-b78f-00f4995e46c5,Namespace:calico-system,Attempt:0,}" Aug 13 01:45:39.094442 kubelet[2718]: E0813 01:45:39.094417 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.094442 kubelet[2718]: W0813 01:45:39.094434 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.094581 kubelet[2718]: E0813 01:45:39.094559 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.094938 kubelet[2718]: E0813 01:45:39.094917 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.094938 kubelet[2718]: W0813 01:45:39.094932 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.095000 kubelet[2718]: E0813 01:45:39.094971 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.095478 kubelet[2718]: E0813 01:45:39.095437 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.095517 kubelet[2718]: W0813 01:45:39.095483 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.095517 kubelet[2718]: E0813 01:45:39.095494 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:45:39.096175 kubelet[2718]: E0813 01:45:39.096021 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.096175 kubelet[2718]: W0813 01:45:39.096172 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.096244 kubelet[2718]: E0813 01:45:39.096182 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.096927 kubelet[2718]: E0813 01:45:39.096904 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.096927 kubelet[2718]: W0813 01:45:39.096920 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.096927 kubelet[2718]: E0813 01:45:39.096929 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.097221 kubelet[2718]: E0813 01:45:39.097146 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.097221 kubelet[2718]: W0813 01:45:39.097157 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.097221 kubelet[2718]: E0813 01:45:39.097192 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.097484 kubelet[2718]: E0813 01:45:39.097464 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.097568 kubelet[2718]: W0813 01:45:39.097546 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.097568 kubelet[2718]: E0813 01:45:39.097563 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.098069 kubelet[2718]: E0813 01:45:39.098051 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.098069 kubelet[2718]: W0813 01:45:39.098065 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.098132 kubelet[2718]: E0813 01:45:39.098080 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:45:39.098724 kubelet[2718]: E0813 01:45:39.098625 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.098724 kubelet[2718]: W0813 01:45:39.098638 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.098724 kubelet[2718]: E0813 01:45:39.098646 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.099701 kubelet[2718]: E0813 01:45:39.099670 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.099748 kubelet[2718]: W0813 01:45:39.099708 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.099748 kubelet[2718]: E0813 01:45:39.099718 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.100804 kubelet[2718]: E0813 01:45:39.100707 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.101080 kubelet[2718]: W0813 01:45:39.100849 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.101176 kubelet[2718]: E0813 01:45:39.101162 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.101822 kubelet[2718]: E0813 01:45:39.101720 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.101822 kubelet[2718]: W0813 01:45:39.101731 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.101822 kubelet[2718]: E0813 01:45:39.101741 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.102103 kubelet[2718]: E0813 01:45:39.102045 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.102103 kubelet[2718]: W0813 01:45:39.102057 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.102103 kubelet[2718]: E0813 01:45:39.102067 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:45:39.102787 kubelet[2718]: E0813 01:45:39.102726 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.102787 kubelet[2718]: W0813 01:45:39.102737 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.102787 kubelet[2718]: E0813 01:45:39.102746 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.103359 kubelet[2718]: E0813 01:45:39.103263 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.103359 kubelet[2718]: W0813 01:45:39.103275 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.103359 kubelet[2718]: E0813 01:45:39.103296 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.104270 kubelet[2718]: E0813 01:45:39.104188 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.104270 kubelet[2718]: W0813 01:45:39.104201 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.104270 kubelet[2718]: E0813 01:45:39.104210 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.106008 kubelet[2718]: E0813 01:45:39.105979 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.106008 kubelet[2718]: W0813 01:45:39.105999 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.106077 kubelet[2718]: E0813 01:45:39.106011 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.106217 kubelet[2718]: E0813 01:45:39.106192 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.106217 kubelet[2718]: W0813 01:45:39.106207 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.106217 kubelet[2718]: E0813 01:45:39.106215 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:45:39.106390 kubelet[2718]: E0813 01:45:39.106366 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.106390 kubelet[2718]: W0813 01:45:39.106381 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.106390 kubelet[2718]: E0813 01:45:39.106389 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.106896 kubelet[2718]: E0813 01:45:39.106650 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.107155 kubelet[2718]: W0813 01:45:39.107132 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.107155 kubelet[2718]: E0813 01:45:39.107152 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.107212 kubelet[2718]: I0813 01:45:39.107186 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/9674627f-b072-4139-b18d-fdf07891e1e2-varrun\") pod \"csi-node-driver-dbqt2\" (UID: \"9674627f-b072-4139-b18d-fdf07891e1e2\") " pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:45:39.108020 kubelet[2718]: E0813 01:45:39.107992 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.108106 kubelet[2718]: W0813 01:45:39.108084 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.108214 kubelet[2718]: E0813 01:45:39.108159 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.108214 kubelet[2718]: I0813 01:45:39.108184 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m2lnb\" (UniqueName: \"kubernetes.io/projected/9674627f-b072-4139-b18d-fdf07891e1e2-kube-api-access-m2lnb\") pod \"csi-node-driver-dbqt2\" (UID: \"9674627f-b072-4139-b18d-fdf07891e1e2\") " pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:45:39.108525 kubelet[2718]: E0813 01:45:39.108502 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.108525 kubelet[2718]: W0813 01:45:39.108513 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.110044 kubelet[2718]: E0813 01:45:39.108606 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:45:39.110110 kubelet[2718]: E0813 01:45:39.110099 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.110171 kubelet[2718]: W0813 01:45:39.110155 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.110296 kubelet[2718]: E0813 01:45:39.110283 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.115952 kubelet[2718]: E0813 01:45:39.114015 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.115952 kubelet[2718]: W0813 01:45:39.114032 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.115952 kubelet[2718]: E0813 01:45:39.114119 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.115952 kubelet[2718]: I0813 01:45:39.114140 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9674627f-b072-4139-b18d-fdf07891e1e2-kubelet-dir\") pod \"csi-node-driver-dbqt2\" (UID: \"9674627f-b072-4139-b18d-fdf07891e1e2\") " pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:45:39.115952 kubelet[2718]: E0813 01:45:39.114293 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.115952 kubelet[2718]: W0813 01:45:39.114301 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.115952 kubelet[2718]: E0813 01:45:39.114322 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.115952 kubelet[2718]: E0813 01:45:39.114491 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.115952 kubelet[2718]: W0813 01:45:39.114498 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.116210 kubelet[2718]: E0813 01:45:39.114518 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:45:39.116210 kubelet[2718]: E0813 01:45:39.115044 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.116210 kubelet[2718]: W0813 01:45:39.115053 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.116210 kubelet[2718]: E0813 01:45:39.115076 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.116210 kubelet[2718]: I0813 01:45:39.115091 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/9674627f-b072-4139-b18d-fdf07891e1e2-registration-dir\") pod \"csi-node-driver-dbqt2\" (UID: \"9674627f-b072-4139-b18d-fdf07891e1e2\") " pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:45:39.116210 kubelet[2718]: E0813 01:45:39.115282 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.116210 kubelet[2718]: W0813 01:45:39.115290 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.116210 kubelet[2718]: E0813 01:45:39.115311 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.116363 kubelet[2718]: I0813 01:45:39.115325 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/9674627f-b072-4139-b18d-fdf07891e1e2-socket-dir\") pod \"csi-node-driver-dbqt2\" (UID: \"9674627f-b072-4139-b18d-fdf07891e1e2\") " pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:45:39.117242 kubelet[2718]: E0813 01:45:39.116522 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.117242 kubelet[2718]: W0813 01:45:39.116533 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.117242 kubelet[2718]: E0813 01:45:39.116613 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.117242 kubelet[2718]: E0813 01:45:39.116730 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.117242 kubelet[2718]: W0813 01:45:39.116737 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.117242 kubelet[2718]: E0813 01:45:39.116813 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:45:39.117242 kubelet[2718]: E0813 01:45:39.117108 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.117242 kubelet[2718]: W0813 01:45:39.117116 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.117242 kubelet[2718]: E0813 01:45:39.117192 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.118030 kubelet[2718]: E0813 01:45:39.117991 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.118030 kubelet[2718]: W0813 01:45:39.118007 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.118030 kubelet[2718]: E0813 01:45:39.118016 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.118316 kubelet[2718]: E0813 01:45:39.118305 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.118365 kubelet[2718]: W0813 01:45:39.118355 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.118406 kubelet[2718]: E0813 01:45:39.118397 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.120052 kubelet[2718]: E0813 01:45:39.120014 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.120052 kubelet[2718]: W0813 01:45:39.120026 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.120052 kubelet[2718]: E0813 01:45:39.120035 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.140072 containerd[1559]: time="2025-08-13T01:45:39.140014519Z" level=info msg="connecting to shim cf8808596e9944d86c6c06b036bdf15dc271c9e20b2942d501808df66f92f821" address="unix:///run/containerd/s/622e69d3c921a9a0485f0e4b2ee545d9fd40dfe3138fc5878b4d17fe298e3d44" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:45:39.183004 systemd[1]: Started cri-containerd-cf8808596e9944d86c6c06b036bdf15dc271c9e20b2942d501808df66f92f821.scope - libcontainer container cf8808596e9944d86c6c06b036bdf15dc271c9e20b2942d501808df66f92f821. 
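The driver-call.go/plugins.go storm above is kubelet probing the FlexVolume plugin directory: it tries to run /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds with an init argument, the binary is not present yet (on Calico nodes it is typically installed later by the flexvol-driver init container), so the call yields empty output and unmarshalling that output fails with "unexpected end of JSON input". A stripped-down Go sketch of that probe shape, using a minimal stand-in status struct rather than kubelet's real types:

// Minimal sketch, not kubelet's driver-call.go: invoke a FlexVolume
// driver binary with "init" and unmarshal its JSON reply. With a
// missing binary the combined output is empty, and json.Unmarshal on
// empty input returns exactly "unexpected end of JSON input".
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// driverStatus is a minimal stand-in for the FlexVolume status reply.
type driverStatus struct {
	Status  string `json:"status"`
	Message string `json:"message,omitempty"`
}

func probeDriver(path string) (*driverStatus, error) {
	out, execErr := exec.Command(path, "init").CombinedOutput()
	var st driverStatus
	if err := json.Unmarshal(out, &st); err != nil {
		// Mirrors: Failed to unmarshal output for command: init, output: ""
		return nil, fmt.Errorf("failed to unmarshal output %q: %v (exec error: %v)", out, err, execErr)
	}
	return &st, nil
}

func main() {
	// Same path kubelet was probing in the log above.
	if _, err := probeDriver("/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"); err != nil {
		fmt.Println("probe failed:", err)
	}
}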
Aug 13 01:45:39.216082 kubelet[2718]: E0813 01:45:39.216063 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.216232 kubelet[2718]: W0813 01:45:39.216218 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.216288 kubelet[2718]: E0813 01:45:39.216277 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.216510 kubelet[2718]: E0813 01:45:39.216488 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.216643 kubelet[2718]: W0813 01:45:39.216578 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.216643 kubelet[2718]: E0813 01:45:39.216596 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.217954 kubelet[2718]: E0813 01:45:39.217942 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.218093 kubelet[2718]: W0813 01:45:39.218009 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.218093 kubelet[2718]: E0813 01:45:39.218038 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.218373 kubelet[2718]: E0813 01:45:39.218303 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.218373 kubelet[2718]: W0813 01:45:39.218313 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.218490 kubelet[2718]: E0813 01:45:39.218478 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.218605 kubelet[2718]: E0813 01:45:39.218584 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.218605 kubelet[2718]: W0813 01:45:39.218593 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.218754 kubelet[2718]: E0813 01:45:39.218734 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:45:39.218941 kubelet[2718]: E0813 01:45:39.218901 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.218941 kubelet[2718]: W0813 01:45:39.218912 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.219095 kubelet[2718]: E0813 01:45:39.219071 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.219364 kubelet[2718]: E0813 01:45:39.219293 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.219364 kubelet[2718]: W0813 01:45:39.219304 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.219364 kubelet[2718]: E0813 01:45:39.219321 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.219651 kubelet[2718]: E0813 01:45:39.219640 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.219754 kubelet[2718]: W0813 01:45:39.219699 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.219754 kubelet[2718]: E0813 01:45:39.219720 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.220013 kubelet[2718]: E0813 01:45:39.219961 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.220013 kubelet[2718]: W0813 01:45:39.219971 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.220073 kubelet[2718]: E0813 01:45:39.220032 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.220333 kubelet[2718]: E0813 01:45:39.220281 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.220333 kubelet[2718]: W0813 01:45:39.220291 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.220333 kubelet[2718]: E0813 01:45:39.220319 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:45:39.220596 kubelet[2718]: E0813 01:45:39.220582 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.220809 kubelet[2718]: W0813 01:45:39.220646 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.220809 kubelet[2718]: E0813 01:45:39.220677 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.221905 kubelet[2718]: E0813 01:45:39.221147 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.221905 kubelet[2718]: W0813 01:45:39.221158 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.221905 kubelet[2718]: E0813 01:45:39.221197 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.222258 kubelet[2718]: E0813 01:45:39.222226 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.222310 kubelet[2718]: W0813 01:45:39.222238 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.222545 kubelet[2718]: E0813 01:45:39.222498 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.222669 kubelet[2718]: E0813 01:45:39.222659 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.222725 kubelet[2718]: W0813 01:45:39.222710 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.222800 kubelet[2718]: E0813 01:45:39.222789 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.223044 kubelet[2718]: E0813 01:45:39.223022 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.223044 kubelet[2718]: W0813 01:45:39.223032 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.223200 kubelet[2718]: E0813 01:45:39.223180 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:45:39.223741 kubelet[2718]: E0813 01:45:39.223717 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.223741 kubelet[2718]: W0813 01:45:39.223728 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.223985 kubelet[2718]: E0813 01:45:39.223964 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.225090 kubelet[2718]: E0813 01:45:39.225064 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.225090 kubelet[2718]: W0813 01:45:39.225075 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.225317 kubelet[2718]: E0813 01:45:39.225282 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.225425 kubelet[2718]: E0813 01:45:39.225405 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.225425 kubelet[2718]: W0813 01:45:39.225413 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.225536 kubelet[2718]: E0813 01:45:39.225507 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.225691 kubelet[2718]: E0813 01:45:39.225670 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.225691 kubelet[2718]: W0813 01:45:39.225679 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.225837 kubelet[2718]: E0813 01:45:39.225820 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.226038 kubelet[2718]: E0813 01:45:39.226017 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.226038 kubelet[2718]: W0813 01:45:39.226026 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.226166 kubelet[2718]: E0813 01:45:39.226155 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:45:39.226378 kubelet[2718]: E0813 01:45:39.226357 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.226378 kubelet[2718]: W0813 01:45:39.226367 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.226546 kubelet[2718]: E0813 01:45:39.226448 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.228007 kubelet[2718]: E0813 01:45:39.227995 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.228111 kubelet[2718]: W0813 01:45:39.228052 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.228111 kubelet[2718]: E0813 01:45:39.228065 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.228414 kubelet[2718]: E0813 01:45:39.228390 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.228414 kubelet[2718]: W0813 01:45:39.228412 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.228479 kubelet[2718]: E0813 01:45:39.228434 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.228904 kubelet[2718]: E0813 01:45:39.228688 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.228904 kubelet[2718]: W0813 01:45:39.228699 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.228904 kubelet[2718]: E0813 01:45:39.228708 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.229985 kubelet[2718]: E0813 01:45:39.229962 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.229985 kubelet[2718]: W0813 01:45:39.229980 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.230054 kubelet[2718]: E0813 01:45:39.229991 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:45:39.245463 kubelet[2718]: E0813 01:45:39.245351 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:39.246051 kubelet[2718]: W0813 01:45:39.245908 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:39.246114 kubelet[2718]: E0813 01:45:39.246102 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:39.289795 containerd[1559]: time="2025-08-13T01:45:39.289750880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-qgskr,Uid:9adfb11c-9977-45e9-b78f-00f4995e46c5,Namespace:calico-system,Attempt:0,} returns sandbox id \"cf8808596e9944d86c6c06b036bdf15dc271c9e20b2942d501808df66f92f821\"" Aug 13 01:45:39.865613 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount225966754.mount: Deactivated successfully. Aug 13 01:45:40.658936 containerd[1559]: time="2025-08-13T01:45:40.658794156Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:40.660513 containerd[1559]: time="2025-08-13T01:45:40.660474194Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.2: active requests=0, bytes read=35233364" Aug 13 01:45:40.661492 containerd[1559]: time="2025-08-13T01:45:40.661466382Z" level=info msg="ImageCreate event name:\"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:40.663068 containerd[1559]: time="2025-08-13T01:45:40.663019712Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:40.663542 containerd[1559]: time="2025-08-13T01:45:40.663505975Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.2\" with image id \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:da29d745efe5eb7d25f765d3aa439f3fe60710a458efe39c285e58b02bd961af\", size \"35233218\" in 1.77550488s" Aug 13 01:45:40.663596 containerd[1559]: time="2025-08-13T01:45:40.663541985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.2\" returns image reference \"sha256:b3baa600c7ff9cd50dc12f2529ef263aaa346dbeca13c77c6553d661fd216b54\"" Aug 13 01:45:40.664761 containerd[1559]: time="2025-08-13T01:45:40.664740979Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\"" Aug 13 01:45:40.682396 containerd[1559]: time="2025-08-13T01:45:40.682372390Z" level=info msg="CreateContainer within sandbox \"cfa298bfa01c63f2814313bc2ca2902a5a814af99803fdce63b44b336bdbb6f3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Aug 13 01:45:40.690082 containerd[1559]: time="2025-08-13T01:45:40.689408009Z" level=info msg="Container 58faca2b388dd49391b71a9a8e39a338df8505373c8086610067d0e7f31e1aa8: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:45:40.694506 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1981448810.mount: Deactivated 
successfully. Aug 13 01:45:40.696939 containerd[1559]: time="2025-08-13T01:45:40.696908891Z" level=info msg="CreateContainer within sandbox \"cfa298bfa01c63f2814313bc2ca2902a5a814af99803fdce63b44b336bdbb6f3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"58faca2b388dd49391b71a9a8e39a338df8505373c8086610067d0e7f31e1aa8\"" Aug 13 01:45:40.698386 containerd[1559]: time="2025-08-13T01:45:40.698367952Z" level=info msg="StartContainer for \"58faca2b388dd49391b71a9a8e39a338df8505373c8086610067d0e7f31e1aa8\"" Aug 13 01:45:40.700246 containerd[1559]: time="2025-08-13T01:45:40.700201499Z" level=info msg="connecting to shim 58faca2b388dd49391b71a9a8e39a338df8505373c8086610067d0e7f31e1aa8" address="unix:///run/containerd/s/4d23ad213cf1d944cbff18718adcce8368709f163dc93a6701ece4be78e3813c" protocol=ttrpc version=3 Aug 13 01:45:40.723988 systemd[1]: Started cri-containerd-58faca2b388dd49391b71a9a8e39a338df8505373c8086610067d0e7f31e1aa8.scope - libcontainer container 58faca2b388dd49391b71a9a8e39a338df8505373c8086610067d0e7f31e1aa8. Aug 13 01:45:40.776354 containerd[1559]: time="2025-08-13T01:45:40.776319801Z" level=info msg="StartContainer for \"58faca2b388dd49391b71a9a8e39a338df8505373c8086610067d0e7f31e1aa8\" returns successfully" Aug 13 01:45:40.781228 kubelet[2718]: E0813 01:45:40.781186 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbqt2" podUID="9674627f-b072-4139-b18d-fdf07891e1e2" Aug 13 01:45:40.863704 kubelet[2718]: E0813 01:45:40.863675 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:45:40.920301 kubelet[2718]: E0813 01:45:40.920200 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:40.921048 kubelet[2718]: W0813 01:45:40.921004 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:40.922919 kubelet[2718]: E0813 01:45:40.922900 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:40.923234 kubelet[2718]: E0813 01:45:40.923222 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:40.923337 kubelet[2718]: W0813 01:45:40.923283 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:40.923337 kubelet[2718]: E0813 01:45:40.923297 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:45:40.923618 kubelet[2718]: E0813 01:45:40.923562 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:40.923618 kubelet[2718]: W0813 01:45:40.923572 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:40.923618 kubelet[2718]: E0813 01:45:40.923581 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:40.923944 kubelet[2718]: E0813 01:45:40.923881 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:40.923944 kubelet[2718]: W0813 01:45:40.923892 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:40.923944 kubelet[2718]: E0813 01:45:40.923900 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:40.924225 kubelet[2718]: E0813 01:45:40.924213 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:40.924315 kubelet[2718]: W0813 01:45:40.924268 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:40.924315 kubelet[2718]: E0813 01:45:40.924281 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:40.924640 kubelet[2718]: E0813 01:45:40.924585 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:40.924640 kubelet[2718]: W0813 01:45:40.924595 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:40.924640 kubelet[2718]: E0813 01:45:40.924604 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:40.925068 kubelet[2718]: E0813 01:45:40.924906 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:40.925068 kubelet[2718]: W0813 01:45:40.925007 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:40.925068 kubelet[2718]: E0813 01:45:40.925017 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:45:40.925888 kubelet[2718]: E0813 01:45:40.925515 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:40.925888 kubelet[2718]: W0813 01:45:40.925540 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:40.925888 kubelet[2718]: E0813 01:45:40.925550 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:40.926171 kubelet[2718]: E0813 01:45:40.926119 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:40.926171 kubelet[2718]: W0813 01:45:40.926130 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:40.926171 kubelet[2718]: E0813 01:45:40.926138 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:40.926794 kubelet[2718]: E0813 01:45:40.926760 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:40.926794 kubelet[2718]: W0813 01:45:40.926791 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:40.926889 kubelet[2718]: E0813 01:45:40.926816 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:40.927193 kubelet[2718]: E0813 01:45:40.927170 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:40.927193 kubelet[2718]: W0813 01:45:40.927185 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:40.927193 kubelet[2718]: E0813 01:45:40.927193 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:40.927616 kubelet[2718]: E0813 01:45:40.927590 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:40.927616 kubelet[2718]: W0813 01:45:40.927605 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:40.927616 kubelet[2718]: E0813 01:45:40.927613 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:45:40.928021 kubelet[2718]: E0813 01:45:40.927996 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:40.928021 kubelet[2718]: W0813 01:45:40.928011 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:40.928021 kubelet[2718]: E0813 01:45:40.928019 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:40.928333 kubelet[2718]: E0813 01:45:40.928309 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:40.928333 kubelet[2718]: W0813 01:45:40.928324 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:40.928333 kubelet[2718]: E0813 01:45:40.928332 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:40.928602 kubelet[2718]: E0813 01:45:40.928579 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:40.928602 kubelet[2718]: W0813 01:45:40.928594 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:40.928602 kubelet[2718]: E0813 01:45:40.928602 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:40.934196 kubelet[2718]: E0813 01:45:40.934168 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:40.934196 kubelet[2718]: W0813 01:45:40.934186 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:40.934196 kubelet[2718]: E0813 01:45:40.934195 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:40.934466 kubelet[2718]: E0813 01:45:40.934441 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:40.934466 kubelet[2718]: W0813 01:45:40.934457 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:40.934519 kubelet[2718]: E0813 01:45:40.934478 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:45:40.934815 kubelet[2718]: E0813 01:45:40.934788 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:40.934815 kubelet[2718]: W0813 01:45:40.934803 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:40.934896 kubelet[2718]: E0813 01:45:40.934820 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:40.935948 kubelet[2718]: E0813 01:45:40.935926 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:40.935948 kubelet[2718]: W0813 01:45:40.935941 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:40.936016 kubelet[2718]: E0813 01:45:40.935964 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:40.936220 kubelet[2718]: E0813 01:45:40.936152 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:40.936220 kubelet[2718]: W0813 01:45:40.936163 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:40.936282 kubelet[2718]: E0813 01:45:40.936243 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:40.936417 kubelet[2718]: E0813 01:45:40.936350 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:40.936417 kubelet[2718]: W0813 01:45:40.936363 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:40.936582 kubelet[2718]: E0813 01:45:40.936492 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:40.936709 kubelet[2718]: E0813 01:45:40.936686 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:40.936709 kubelet[2718]: W0813 01:45:40.936701 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:40.936960 kubelet[2718]: E0813 01:45:40.936784 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:45:40.936960 kubelet[2718]: E0813 01:45:40.936934 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:40.936960 kubelet[2718]: W0813 01:45:40.936941 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:40.936960 kubelet[2718]: E0813 01:45:40.936952 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:40.937269 kubelet[2718]: E0813 01:45:40.937244 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:40.937269 kubelet[2718]: W0813 01:45:40.937258 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:40.937322 kubelet[2718]: E0813 01:45:40.937294 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:40.938271 kubelet[2718]: E0813 01:45:40.938249 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:40.938271 kubelet[2718]: W0813 01:45:40.938262 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:40.938271 kubelet[2718]: E0813 01:45:40.938275 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:40.938681 kubelet[2718]: E0813 01:45:40.938634 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:40.938681 kubelet[2718]: W0813 01:45:40.938671 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:40.938792 kubelet[2718]: E0813 01:45:40.938770 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:40.939110 kubelet[2718]: E0813 01:45:40.939087 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:40.939110 kubelet[2718]: W0813 01:45:40.939102 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:40.939204 kubelet[2718]: E0813 01:45:40.939183 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:45:40.941408 kubelet[2718]: E0813 01:45:40.941247 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:40.941408 kubelet[2718]: W0813 01:45:40.941263 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:40.941408 kubelet[2718]: E0813 01:45:40.941303 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:40.942064 kubelet[2718]: E0813 01:45:40.942050 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:40.943289 kubelet[2718]: W0813 01:45:40.943141 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:40.943411 kubelet[2718]: E0813 01:45:40.943399 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:40.943604 kubelet[2718]: W0813 01:45:40.943590 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:40.943681 kubelet[2718]: E0813 01:45:40.943669 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:40.944886 kubelet[2718]: E0813 01:45:40.944468 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:40.945829 kubelet[2718]: E0813 01:45:40.945106 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:40.945829 kubelet[2718]: W0813 01:45:40.945654 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:40.945829 kubelet[2718]: E0813 01:45:40.945667 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:40.946149 kubelet[2718]: E0813 01:45:40.946045 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:40.946149 kubelet[2718]: W0813 01:45:40.946060 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:40.946149 kubelet[2718]: E0813 01:45:40.946069 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Aug 13 01:45:40.946813 kubelet[2718]: E0813 01:45:40.946799 2718 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Aug 13 01:45:40.946926 kubelet[2718]: W0813 01:45:40.946886 2718 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Aug 13 01:45:40.946926 kubelet[2718]: E0813 01:45:40.946901 2718 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Aug 13 01:45:41.413014 containerd[1559]: time="2025-08-13T01:45:41.412956133Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:41.413561 containerd[1559]: time="2025-08-13T01:45:41.413536187Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2: active requests=0, bytes read=4446956" Aug 13 01:45:41.414182 containerd[1559]: time="2025-08-13T01:45:41.414131349Z" level=info msg="ImageCreate event name:\"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:41.415373 containerd[1559]: time="2025-08-13T01:45:41.415348294Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:41.416241 containerd[1559]: time="2025-08-13T01:45:41.415893628Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" with image id \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:972be127eaecd7d1a2d5393b8d14f1ae8f88550bee83e0519e9590c7e15eb41b\", size \"5939619\" in 750.203541ms" Aug 13 01:45:41.416241 containerd[1559]: time="2025-08-13T01:45:41.415940237Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.2\" returns image reference \"sha256:639615519fa6f7bc4b4756066ba9780068fd291eacc36c120f6c555e62f2b00e\"" Aug 13 01:45:41.418666 containerd[1559]: time="2025-08-13T01:45:41.418626584Z" level=info msg="CreateContainer within sandbox \"cf8808596e9944d86c6c06b036bdf15dc271c9e20b2942d501808df66f92f821\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Aug 13 01:45:41.425251 containerd[1559]: time="2025-08-13T01:45:41.425214644Z" level=info msg="Container d3371c98dab10353f9b24b83c5f00584e60f47e52018587e00a849bd276cfd28: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:45:41.441873 containerd[1559]: time="2025-08-13T01:45:41.441559217Z" level=info msg="CreateContainer within sandbox \"cf8808596e9944d86c6c06b036bdf15dc271c9e20b2942d501808df66f92f821\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d3371c98dab10353f9b24b83c5f00584e60f47e52018587e00a849bd276cfd28\"" Aug 13 01:45:41.443362 containerd[1559]: time="2025-08-13T01:45:41.443336894Z" level=info msg="StartContainer for \"d3371c98dab10353f9b24b83c5f00584e60f47e52018587e00a849bd276cfd28\"" Aug 13 01:45:41.444932 containerd[1559]: time="2025-08-13T01:45:41.444888606Z" level=info msg="connecting to 
shim d3371c98dab10353f9b24b83c5f00584e60f47e52018587e00a849bd276cfd28" address="unix:///run/containerd/s/622e69d3c921a9a0485f0e4b2ee545d9fd40dfe3138fc5878b4d17fe298e3d44" protocol=ttrpc version=3 Aug 13 01:45:41.467004 systemd[1]: Started cri-containerd-d3371c98dab10353f9b24b83c5f00584e60f47e52018587e00a849bd276cfd28.scope - libcontainer container d3371c98dab10353f9b24b83c5f00584e60f47e52018587e00a849bd276cfd28. Aug 13 01:45:41.516599 containerd[1559]: time="2025-08-13T01:45:41.516558126Z" level=info msg="StartContainer for \"d3371c98dab10353f9b24b83c5f00584e60f47e52018587e00a849bd276cfd28\" returns successfully" Aug 13 01:45:41.529418 systemd[1]: cri-containerd-d3371c98dab10353f9b24b83c5f00584e60f47e52018587e00a849bd276cfd28.scope: Deactivated successfully. Aug 13 01:45:41.531465 containerd[1559]: time="2025-08-13T01:45:41.531427665Z" level=info msg="received exit event container_id:\"d3371c98dab10353f9b24b83c5f00584e60f47e52018587e00a849bd276cfd28\" id:\"d3371c98dab10353f9b24b83c5f00584e60f47e52018587e00a849bd276cfd28\" pid:3396 exited_at:{seconds:1755049541 nanos:530743073}" Aug 13 01:45:41.532104 containerd[1559]: time="2025-08-13T01:45:41.532070708Z" level=info msg="TaskExit event in podsandbox handler container_id:\"d3371c98dab10353f9b24b83c5f00584e60f47e52018587e00a849bd276cfd28\" id:\"d3371c98dab10353f9b24b83c5f00584e60f47e52018587e00a849bd276cfd28\" pid:3396 exited_at:{seconds:1755049541 nanos:530743073}" Aug 13 01:45:41.808229 update_engine[1542]: I20250813 01:45:41.808156 1542 update_attempter.cc:509] Updating boot flags... Aug 13 01:45:41.884881 kubelet[2718]: I0813 01:45:41.883348 2718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 01:45:41.884881 kubelet[2718]: E0813 01:45:41.883603 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:45:41.892936 containerd[1559]: time="2025-08-13T01:45:41.892690860Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\"" Aug 13 01:45:41.923742 kubelet[2718]: I0813 01:45:41.923322 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-b5b9867b4-p6jwz" podStartSLOduration=2.1460909519999998 podStartE2EDuration="3.923310398s" podCreationTimestamp="2025-08-13 01:45:38 +0000 UTC" firstStartedPulling="2025-08-13 01:45:38.887416144 +0000 UTC m=+16.218157918" lastFinishedPulling="2025-08-13 01:45:40.6646356 +0000 UTC m=+17.995377364" observedRunningTime="2025-08-13 01:45:40.877790424 +0000 UTC m=+18.208532208" watchObservedRunningTime="2025-08-13 01:45:41.923310398 +0000 UTC m=+19.254052172" Aug 13 01:45:42.780613 kubelet[2718]: E0813 01:45:42.779470 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-dbqt2" podUID="9674627f-b072-4139-b18d-fdf07891e1e2" Aug 13 01:45:43.774143 containerd[1559]: time="2025-08-13T01:45:43.774043506Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:43.775195 containerd[1559]: time="2025-08-13T01:45:43.775016425Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.2: active requests=0, bytes read=70436221" Aug 13 01:45:43.775912 containerd[1559]: 
time="2025-08-13T01:45:43.775836137Z" level=info msg="ImageCreate event name:\"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:43.779680 containerd[1559]: time="2025-08-13T01:45:43.779652877Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:45:43.780290 containerd[1559]: time="2025-08-13T01:45:43.780217940Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.2\" with image id \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:50686775cc60acb78bd92a66fa2d84e1700b2d8e43a718fbadbf35e59baefb4d\", size \"71928924\" in 1.887496452s" Aug 13 01:45:43.780290 containerd[1559]: time="2025-08-13T01:45:43.780240350Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.2\" returns image reference \"sha256:77a357d0d33e3016e61153f7d2b7de72371579c4aaeb767fb7ef0af606fe1630\"" Aug 13 01:45:43.782452 containerd[1559]: time="2025-08-13T01:45:43.782429867Z" level=info msg="CreateContainer within sandbox \"cf8808596e9944d86c6c06b036bdf15dc271c9e20b2942d501808df66f92f821\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Aug 13 01:45:43.790515 containerd[1559]: time="2025-08-13T01:45:43.788492213Z" level=info msg="Container e1ffa803e2080b8112af836cb46b273f69b34665ba26bb9821625310bd6e7a5f: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:45:43.802835 containerd[1559]: time="2025-08-13T01:45:43.802812911Z" level=info msg="CreateContainer within sandbox \"cf8808596e9944d86c6c06b036bdf15dc271c9e20b2942d501808df66f92f821\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"e1ffa803e2080b8112af836cb46b273f69b34665ba26bb9821625310bd6e7a5f\"" Aug 13 01:45:43.803436 containerd[1559]: time="2025-08-13T01:45:43.803356805Z" level=info msg="StartContainer for \"e1ffa803e2080b8112af836cb46b273f69b34665ba26bb9821625310bd6e7a5f\"" Aug 13 01:45:43.813378 containerd[1559]: time="2025-08-13T01:45:43.813316260Z" level=info msg="connecting to shim e1ffa803e2080b8112af836cb46b273f69b34665ba26bb9821625310bd6e7a5f" address="unix:///run/containerd/s/622e69d3c921a9a0485f0e4b2ee545d9fd40dfe3138fc5878b4d17fe298e3d44" protocol=ttrpc version=3 Aug 13 01:45:43.839982 systemd[1]: Started cri-containerd-e1ffa803e2080b8112af836cb46b273f69b34665ba26bb9821625310bd6e7a5f.scope - libcontainer container e1ffa803e2080b8112af836cb46b273f69b34665ba26bb9821625310bd6e7a5f. Aug 13 01:45:43.891406 containerd[1559]: time="2025-08-13T01:45:43.891366482Z" level=info msg="StartContainer for \"e1ffa803e2080b8112af836cb46b273f69b34665ba26bb9821625310bd6e7a5f\" returns successfully" Aug 13 01:45:44.431161 containerd[1559]: time="2025-08-13T01:45:44.431111404Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Aug 13 01:45:44.433873 systemd[1]: cri-containerd-e1ffa803e2080b8112af836cb46b273f69b34665ba26bb9821625310bd6e7a5f.scope: Deactivated successfully. 
Aug 13 01:45:44.434712 systemd[1]: cri-containerd-e1ffa803e2080b8112af836cb46b273f69b34665ba26bb9821625310bd6e7a5f.scope: Consumed 550ms CPU time, 194.6M memory peak, 171.2M written to disk. Aug 13 01:45:44.436878 containerd[1559]: time="2025-08-13T01:45:44.436824138Z" level=info msg="received exit event container_id:\"e1ffa803e2080b8112af836cb46b273f69b34665ba26bb9821625310bd6e7a5f\" id:\"e1ffa803e2080b8112af836cb46b273f69b34665ba26bb9821625310bd6e7a5f\" pid:3475 exited_at:{seconds:1755049544 nanos:436580560}" Aug 13 01:45:44.437116 containerd[1559]: time="2025-08-13T01:45:44.437058435Z" level=info msg="TaskExit event in podsandbox handler container_id:\"e1ffa803e2080b8112af836cb46b273f69b34665ba26bb9821625310bd6e7a5f\" id:\"e1ffa803e2080b8112af836cb46b273f69b34665ba26bb9821625310bd6e7a5f\" pid:3475 exited_at:{seconds:1755049544 nanos:436580560}" Aug 13 01:45:44.459346 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-e1ffa803e2080b8112af836cb46b273f69b34665ba26bb9821625310bd6e7a5f-rootfs.mount: Deactivated successfully. Aug 13 01:45:44.503357 kubelet[2718]: I0813 01:45:44.503332 2718 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Aug 13 01:45:44.541721 systemd[1]: Created slice kubepods-burstable-poddafdbb28_0754_4303_98bd_08c77ee94f1a.slice - libcontainer container kubepods-burstable-poddafdbb28_0754_4303_98bd_08c77ee94f1a.slice. Aug 13 01:45:44.554748 systemd[1]: Created slice kubepods-besteffort-podaea9f7c8_25a2_45fe_b92d_6e256fd0e960.slice - libcontainer container kubepods-besteffort-podaea9f7c8_25a2_45fe_b92d_6e256fd0e960.slice. Aug 13 01:45:44.566101 systemd[1]: Created slice kubepods-besteffort-podc79e958f_d144_4c10_a249_44ba255575ec.slice - libcontainer container kubepods-besteffort-podc79e958f_d144_4c10_a249_44ba255575ec.slice. 
Aug 13 01:45:44.567795 kubelet[2718]: I0813 01:45:44.566545 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/35b780d0-9cdb-470f-8c65-ede949b6d595-tigera-ca-bundle\") pod \"calico-kube-controllers-7f9448c8f5-ck2sf\" (UID: \"35b780d0-9cdb-470f-8c65-ede949b6d595\") " pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:45:44.567795 kubelet[2718]: I0813 01:45:44.566569 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zn4z6\" (UniqueName: \"kubernetes.io/projected/dafdbb28-0754-4303-98bd-08c77ee94f1a-kube-api-access-zn4z6\") pod \"coredns-668d6bf9bc-dfjz8\" (UID: \"dafdbb28-0754-4303-98bd-08c77ee94f1a\") " pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:45:44.567795 kubelet[2718]: I0813 01:45:44.566585 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f67cd91-765b-4114-aaf7-bffde98c7853-whisker-ca-bundle\") pod \"whisker-69f479c99d-bwt8v\" (UID: \"6f67cd91-765b-4114-aaf7-bffde98c7853\") " pod="calico-system/whisker-69f479c99d-bwt8v" Aug 13 01:45:44.567795 kubelet[2718]: I0813 01:45:44.566599 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sjrsx\" (UniqueName: \"kubernetes.io/projected/c79e958f-d144-4c10-a249-44ba255575ec-kube-api-access-sjrsx\") pod \"goldmane-768f4c5c69-txf4m\" (UID: \"c79e958f-d144-4c10-a249-44ba255575ec\") " pod="calico-system/goldmane-768f4c5c69-txf4m" Aug 13 01:45:44.567795 kubelet[2718]: I0813 01:45:44.566613 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/46ae53ba-59c0-4daa-8742-9c72b0d04e0c-calico-apiserver-certs\") pod \"calico-apiserver-5669b4977d-swwjw\" (UID: \"46ae53ba-59c0-4daa-8742-9c72b0d04e0c\") " pod="calico-apiserver/calico-apiserver-5669b4977d-swwjw" Aug 13 01:45:44.568648 kubelet[2718]: I0813 01:45:44.566625 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c79e958f-d144-4c10-a249-44ba255575ec-goldmane-ca-bundle\") pod \"goldmane-768f4c5c69-txf4m\" (UID: \"c79e958f-d144-4c10-a249-44ba255575ec\") " pod="calico-system/goldmane-768f4c5c69-txf4m" Aug 13 01:45:44.568648 kubelet[2718]: I0813 01:45:44.566638 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c79e958f-d144-4c10-a249-44ba255575ec-goldmane-key-pair\") pod \"goldmane-768f4c5c69-txf4m\" (UID: \"c79e958f-d144-4c10-a249-44ba255575ec\") " pod="calico-system/goldmane-768f4c5c69-txf4m" Aug 13 01:45:44.568648 kubelet[2718]: I0813 01:45:44.566652 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/dafdbb28-0754-4303-98bd-08c77ee94f1a-config-volume\") pod \"coredns-668d6bf9bc-dfjz8\" (UID: \"dafdbb28-0754-4303-98bd-08c77ee94f1a\") " pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:45:44.568648 kubelet[2718]: I0813 01:45:44.566666 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/0ba3c042-02d2-446d-bb82-0965919f2962-config-volume\") pod \"coredns-668d6bf9bc-j47vf\" (UID: \"0ba3c042-02d2-446d-bb82-0965919f2962\") " pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:45:44.568648 kubelet[2718]: I0813 01:45:44.566680 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6whmp\" (UniqueName: \"kubernetes.io/projected/0ba3c042-02d2-446d-bb82-0965919f2962-kube-api-access-6whmp\") pod \"coredns-668d6bf9bc-j47vf\" (UID: \"0ba3c042-02d2-446d-bb82-0965919f2962\") " pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:45:44.568779 kubelet[2718]: I0813 01:45:44.566694 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h69jw\" (UniqueName: \"kubernetes.io/projected/46ae53ba-59c0-4daa-8742-9c72b0d04e0c-kube-api-access-h69jw\") pod \"calico-apiserver-5669b4977d-swwjw\" (UID: \"46ae53ba-59c0-4daa-8742-9c72b0d04e0c\") " pod="calico-apiserver/calico-apiserver-5669b4977d-swwjw" Aug 13 01:45:44.568779 kubelet[2718]: I0813 01:45:44.566711 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5fvk7\" (UniqueName: \"kubernetes.io/projected/aea9f7c8-25a2-45fe-b92d-6e256fd0e960-kube-api-access-5fvk7\") pod \"calico-apiserver-5669b4977d-tcs4h\" (UID: \"aea9f7c8-25a2-45fe-b92d-6e256fd0e960\") " pod="calico-apiserver/calico-apiserver-5669b4977d-tcs4h" Aug 13 01:45:44.568779 kubelet[2718]: I0813 01:45:44.566727 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/aea9f7c8-25a2-45fe-b92d-6e256fd0e960-calico-apiserver-certs\") pod \"calico-apiserver-5669b4977d-tcs4h\" (UID: \"aea9f7c8-25a2-45fe-b92d-6e256fd0e960\") " pod="calico-apiserver/calico-apiserver-5669b4977d-tcs4h" Aug 13 01:45:44.568779 kubelet[2718]: I0813 01:45:44.566740 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zrxd9\" (UniqueName: \"kubernetes.io/projected/35b780d0-9cdb-470f-8c65-ede949b6d595-kube-api-access-zrxd9\") pod \"calico-kube-controllers-7f9448c8f5-ck2sf\" (UID: \"35b780d0-9cdb-470f-8c65-ede949b6d595\") " pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:45:44.568779 kubelet[2718]: I0813 01:45:44.566753 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c79e958f-d144-4c10-a249-44ba255575ec-config\") pod \"goldmane-768f4c5c69-txf4m\" (UID: \"c79e958f-d144-4c10-a249-44ba255575ec\") " pod="calico-system/goldmane-768f4c5c69-txf4m" Aug 13 01:45:44.577343 kubelet[2718]: I0813 01:45:44.566767 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6f67cd91-765b-4114-aaf7-bffde98c7853-whisker-backend-key-pair\") pod \"whisker-69f479c99d-bwt8v\" (UID: \"6f67cd91-765b-4114-aaf7-bffde98c7853\") " pod="calico-system/whisker-69f479c99d-bwt8v" Aug 13 01:45:44.577343 kubelet[2718]: I0813 01:45:44.566787 2718 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9cgvf\" (UniqueName: \"kubernetes.io/projected/6f67cd91-765b-4114-aaf7-bffde98c7853-kube-api-access-9cgvf\") pod \"whisker-69f479c99d-bwt8v\" (UID: \"6f67cd91-765b-4114-aaf7-bffde98c7853\") " 
pod="calico-system/whisker-69f479c99d-bwt8v" Aug 13 01:45:44.575420 systemd[1]: Created slice kubepods-besteffort-pod35b780d0_9cdb_470f_8c65_ede949b6d595.slice - libcontainer container kubepods-besteffort-pod35b780d0_9cdb_470f_8c65_ede949b6d595.slice. Aug 13 01:45:44.586887 systemd[1]: Created slice kubepods-besteffort-pod46ae53ba_59c0_4daa_8742_9c72b0d04e0c.slice - libcontainer container kubepods-besteffort-pod46ae53ba_59c0_4daa_8742_9c72b0d04e0c.slice. Aug 13 01:45:44.594383 systemd[1]: Created slice kubepods-besteffort-pod6f67cd91_765b_4114_aaf7_bffde98c7853.slice - libcontainer container kubepods-besteffort-pod6f67cd91_765b_4114_aaf7_bffde98c7853.slice. Aug 13 01:45:44.601925 systemd[1]: Created slice kubepods-burstable-pod0ba3c042_02d2_446d_bb82_0965919f2962.slice - libcontainer container kubepods-burstable-pod0ba3c042_02d2_446d_bb82_0965919f2962.slice. Aug 13 01:45:44.804559 systemd[1]: Created slice kubepods-besteffort-pod9674627f_b072_4139_b18d_fdf07891e1e2.slice - libcontainer container kubepods-besteffort-pod9674627f_b072_4139_b18d_fdf07891e1e2.slice. Aug 13 01:45:44.808343 containerd[1559]: time="2025-08-13T01:45:44.808298695Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbqt2,Uid:9674627f-b072-4139-b18d-fdf07891e1e2,Namespace:calico-system,Attempt:0,}" Aug 13 01:45:44.850450 kubelet[2718]: E0813 01:45:44.850217 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:45:44.851116 containerd[1559]: time="2025-08-13T01:45:44.851070471Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dfjz8,Uid:dafdbb28-0754-4303-98bd-08c77ee94f1a,Namespace:kube-system,Attempt:0,}" Aug 13 01:45:44.860519 containerd[1559]: time="2025-08-13T01:45:44.860486718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5669b4977d-tcs4h,Uid:aea9f7c8-25a2-45fe-b92d-6e256fd0e960,Namespace:calico-apiserver,Attempt:0,}" Aug 13 01:45:44.880882 containerd[1559]: time="2025-08-13T01:45:44.880447390Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f9448c8f5-ck2sf,Uid:35b780d0-9cdb-470f-8c65-ede949b6d595,Namespace:calico-system,Attempt:0,}" Aug 13 01:45:44.881138 containerd[1559]: time="2025-08-13T01:45:44.881098243Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-txf4m,Uid:c79e958f-d144-4c10-a249-44ba255575ec,Namespace:calico-system,Attempt:0,}" Aug 13 01:45:44.895741 containerd[1559]: time="2025-08-13T01:45:44.895705258Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5669b4977d-swwjw,Uid:46ae53ba-59c0-4daa-8742-9c72b0d04e0c,Namespace:calico-apiserver,Attempt:0,}" Aug 13 01:45:44.901306 containerd[1559]: time="2025-08-13T01:45:44.901281753Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69f479c99d-bwt8v,Uid:6f67cd91-765b-4114-aaf7-bffde98c7853,Namespace:calico-system,Attempt:0,}" Aug 13 01:45:44.905576 kubelet[2718]: E0813 01:45:44.905557 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:45:44.911465 containerd[1559]: time="2025-08-13T01:45:44.911414572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j47vf,Uid:0ba3c042-02d2-446d-bb82-0965919f2962,Namespace:kube-system,Attempt:0,}" Aug 13 
01:45:44.913817 containerd[1559]: time="2025-08-13T01:45:44.913548981Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 01:45:44.966652 containerd[1559]: time="2025-08-13T01:45:44.965967972Z" level=error msg="Failed to destroy network for sandbox \"a42272ae6922becdf4d09314d99a15021a50cbf158c70a357c8ea95826703dad\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:45:44.971175 containerd[1559]: time="2025-08-13T01:45:44.969684655Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbqt2,Uid:9674627f-b072-4139-b18d-fdf07891e1e2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a42272ae6922becdf4d09314d99a15021a50cbf158c70a357c8ea95826703dad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:45:44.971301 kubelet[2718]: E0813 01:45:44.969912 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a42272ae6922becdf4d09314d99a15021a50cbf158c70a357c8ea95826703dad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:45:44.971301 kubelet[2718]: E0813 01:45:44.969966 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a42272ae6922becdf4d09314d99a15021a50cbf158c70a357c8ea95826703dad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:45:44.971301 kubelet[2718]: E0813 01:45:44.969985 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a42272ae6922becdf4d09314d99a15021a50cbf158c70a357c8ea95826703dad\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:45:44.971452 kubelet[2718]: E0813 01:45:44.970019 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dbqt2_calico-system(9674627f-b072-4139-b18d-fdf07891e1e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dbqt2_calico-system(9674627f-b072-4139-b18d-fdf07891e1e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a42272ae6922becdf4d09314d99a15021a50cbf158c70a357c8ea95826703dad\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dbqt2" podUID="9674627f-b072-4139-b18d-fdf07891e1e2" Aug 13 01:45:45.050573 containerd[1559]: time="2025-08-13T01:45:45.050531406Z" level=error msg="Failed to destroy network for sandbox \"4b1081e481a0eca7b2d99f4078f8f5cd2b3b5941a9d880e4fbc56d857a074e1f\"" error="plugin 
type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:45:45.052087 containerd[1559]: time="2025-08-13T01:45:45.052051462Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5669b4977d-tcs4h,Uid:aea9f7c8-25a2-45fe-b92d-6e256fd0e960,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b1081e481a0eca7b2d99f4078f8f5cd2b3b5941a9d880e4fbc56d857a074e1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:45:45.052424 kubelet[2718]: E0813 01:45:45.052391 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b1081e481a0eca7b2d99f4078f8f5cd2b3b5941a9d880e4fbc56d857a074e1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:45:45.052839 kubelet[2718]: E0813 01:45:45.052816 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b1081e481a0eca7b2d99f4078f8f5cd2b3b5941a9d880e4fbc56d857a074e1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5669b4977d-tcs4h" Aug 13 01:45:45.052992 kubelet[2718]: E0813 01:45:45.052972 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4b1081e481a0eca7b2d99f4078f8f5cd2b3b5941a9d880e4fbc56d857a074e1f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5669b4977d-tcs4h" Aug 13 01:45:45.053722 kubelet[2718]: E0813 01:45:45.053094 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5669b4977d-tcs4h_calico-apiserver(aea9f7c8-25a2-45fe-b92d-6e256fd0e960)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5669b4977d-tcs4h_calico-apiserver(aea9f7c8-25a2-45fe-b92d-6e256fd0e960)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4b1081e481a0eca7b2d99f4078f8f5cd2b3b5941a9d880e4fbc56d857a074e1f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5669b4977d-tcs4h" podUID="aea9f7c8-25a2-45fe-b92d-6e256fd0e960" Aug 13 01:45:45.116539 containerd[1559]: time="2025-08-13T01:45:45.115940919Z" level=error msg="Failed to destroy network for sandbox \"8d34f6d56c93d6ab99bbe89bc310f3ba6c303eace04181cdf04abbeede40f6e5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:45:45.116726 containerd[1559]: time="2025-08-13T01:45:45.116691493Z" level=error msg="Failed to 
destroy network for sandbox \"a01d039736d64604c40e6d4c1061e4638d14d3b2ded545b1dc47dfb78437e2aa\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:45:45.117829 containerd[1559]: time="2025-08-13T01:45:45.117283247Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dfjz8,Uid:dafdbb28-0754-4303-98bd-08c77ee94f1a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d34f6d56c93d6ab99bbe89bc310f3ba6c303eace04181cdf04abbeede40f6e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:45:45.117829 containerd[1559]: time="2025-08-13T01:45:45.117744963Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f9448c8f5-ck2sf,Uid:35b780d0-9cdb-470f-8c65-ede949b6d595,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a01d039736d64604c40e6d4c1061e4638d14d3b2ded545b1dc47dfb78437e2aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:45:45.119082 kubelet[2718]: E0813 01:45:45.118078 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a01d039736d64604c40e6d4c1061e4638d14d3b2ded545b1dc47dfb78437e2aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:45:45.119082 kubelet[2718]: E0813 01:45:45.118132 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a01d039736d64604c40e6d4c1061e4638d14d3b2ded545b1dc47dfb78437e2aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:45:45.119082 kubelet[2718]: E0813 01:45:45.118150 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a01d039736d64604c40e6d4c1061e4638d14d3b2ded545b1dc47dfb78437e2aa\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:45:45.119195 kubelet[2718]: E0813 01:45:45.118184 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f9448c8f5-ck2sf_calico-system(35b780d0-9cdb-470f-8c65-ede949b6d595)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f9448c8f5-ck2sf_calico-system(35b780d0-9cdb-470f-8c65-ede949b6d595)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a01d039736d64604c40e6d4c1061e4638d14d3b2ded545b1dc47dfb78437e2aa\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no 
such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" podUID="35b780d0-9cdb-470f-8c65-ede949b6d595" Aug 13 01:45:45.119195 kubelet[2718]: E0813 01:45:45.118972 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d34f6d56c93d6ab99bbe89bc310f3ba6c303eace04181cdf04abbeede40f6e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:45:45.119195 kubelet[2718]: E0813 01:45:45.118999 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d34f6d56c93d6ab99bbe89bc310f3ba6c303eace04181cdf04abbeede40f6e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:45:45.119286 kubelet[2718]: E0813 01:45:45.119028 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8d34f6d56c93d6ab99bbe89bc310f3ba6c303eace04181cdf04abbeede40f6e5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:45:45.119286 kubelet[2718]: E0813 01:45:45.119051 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-dfjz8_kube-system(dafdbb28-0754-4303-98bd-08c77ee94f1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-dfjz8_kube-system(dafdbb28-0754-4303-98bd-08c77ee94f1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8d34f6d56c93d6ab99bbe89bc310f3ba6c303eace04181cdf04abbeede40f6e5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dfjz8" podUID="dafdbb28-0754-4303-98bd-08c77ee94f1a" Aug 13 01:45:45.127200 containerd[1559]: time="2025-08-13T01:45:45.127164006Z" level=error msg="Failed to destroy network for sandbox \"4dd17133786128dad7dd22cb9406f1c6503f8ed0cac335ff305342b6bfd77451\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:45:45.128136 containerd[1559]: time="2025-08-13T01:45:45.127919748Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-txf4m,Uid:c79e958f-d144-4c10-a249-44ba255575ec,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dd17133786128dad7dd22cb9406f1c6503f8ed0cac335ff305342b6bfd77451\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:45:45.128736 kubelet[2718]: E0813 01:45:45.128443 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"4dd17133786128dad7dd22cb9406f1c6503f8ed0cac335ff305342b6bfd77451\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:45:45.129094 kubelet[2718]: E0813 01:45:45.128977 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dd17133786128dad7dd22cb9406f1c6503f8ed0cac335ff305342b6bfd77451\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-txf4m" Aug 13 01:45:45.129282 kubelet[2718]: E0813 01:45:45.129210 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4dd17133786128dad7dd22cb9406f1c6503f8ed0cac335ff305342b6bfd77451\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-txf4m" Aug 13 01:45:45.129686 kubelet[2718]: E0813 01:45:45.129440 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-txf4m_calico-system(c79e958f-d144-4c10-a249-44ba255575ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-txf4m_calico-system(c79e958f-d144-4c10-a249-44ba255575ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4dd17133786128dad7dd22cb9406f1c6503f8ed0cac335ff305342b6bfd77451\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-txf4m" podUID="c79e958f-d144-4c10-a249-44ba255575ec" Aug 13 01:45:45.132647 containerd[1559]: time="2025-08-13T01:45:45.132589016Z" level=error msg="Failed to destroy network for sandbox \"f003af37c44202d6f3ad0ae40babc118df339086af77c1df2476138588370b5d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:45:45.134470 containerd[1559]: time="2025-08-13T01:45:45.134236671Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j47vf,Uid:0ba3c042-02d2-446d-bb82-0965919f2962,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f003af37c44202d6f3ad0ae40babc118df339086af77c1df2476138588370b5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:45:45.135063 kubelet[2718]: E0813 01:45:45.134982 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f003af37c44202d6f3ad0ae40babc118df339086af77c1df2476138588370b5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:45:45.135418 kubelet[2718]: E0813 
01:45:45.135361 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f003af37c44202d6f3ad0ae40babc118df339086af77c1df2476138588370b5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:45:45.135418 kubelet[2718]: E0813 01:45:45.135387 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f003af37c44202d6f3ad0ae40babc118df339086af77c1df2476138588370b5d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:45:45.135638 kubelet[2718]: E0813 01:45:45.135526 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-j47vf_kube-system(0ba3c042-02d2-446d-bb82-0965919f2962)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-j47vf_kube-system(0ba3c042-02d2-446d-bb82-0965919f2962)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f003af37c44202d6f3ad0ae40babc118df339086af77c1df2476138588370b5d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-j47vf" podUID="0ba3c042-02d2-446d-bb82-0965919f2962" Aug 13 01:45:45.140768 containerd[1559]: time="2025-08-13T01:45:45.140550792Z" level=error msg="Failed to destroy network for sandbox \"a6e60dfd52a1c30fa549837bf61f42e2eaf50e4864a8468927ec2aae27a30b20\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:45:45.142691 containerd[1559]: time="2025-08-13T01:45:45.142650123Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5669b4977d-swwjw,Uid:46ae53ba-59c0-4daa-8742-9c72b0d04e0c,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6e60dfd52a1c30fa549837bf61f42e2eaf50e4864a8468927ec2aae27a30b20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:45:45.143847 kubelet[2718]: E0813 01:45:45.143529 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6e60dfd52a1c30fa549837bf61f42e2eaf50e4864a8468927ec2aae27a30b20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:45:45.143847 kubelet[2718]: E0813 01:45:45.143704 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6e60dfd52a1c30fa549837bf61f42e2eaf50e4864a8468927ec2aae27a30b20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5669b4977d-swwjw" Aug 13 01:45:45.143847 kubelet[2718]: E0813 01:45:45.143724 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a6e60dfd52a1c30fa549837bf61f42e2eaf50e4864a8468927ec2aae27a30b20\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5669b4977d-swwjw" Aug 13 01:45:45.144111 kubelet[2718]: E0813 01:45:45.143797 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5669b4977d-swwjw_calico-apiserver(46ae53ba-59c0-4daa-8742-9c72b0d04e0c)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5669b4977d-swwjw_calico-apiserver(46ae53ba-59c0-4daa-8742-9c72b0d04e0c)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a6e60dfd52a1c30fa549837bf61f42e2eaf50e4864a8468927ec2aae27a30b20\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5669b4977d-swwjw" podUID="46ae53ba-59c0-4daa-8742-9c72b0d04e0c" Aug 13 01:45:45.146591 containerd[1559]: time="2025-08-13T01:45:45.146449808Z" level=error msg="Failed to destroy network for sandbox \"ca4e7fe383ee8f07ba75cdff86f7de75d37ca9279b46ecd90de442164b91c8ab\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:45:45.147347 containerd[1559]: time="2025-08-13T01:45:45.147317439Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69f479c99d-bwt8v,Uid:6f67cd91-765b-4114-aaf7-bffde98c7853,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca4e7fe383ee8f07ba75cdff86f7de75d37ca9279b46ecd90de442164b91c8ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:45:45.147541 kubelet[2718]: E0813 01:45:45.147505 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca4e7fe383ee8f07ba75cdff86f7de75d37ca9279b46ecd90de442164b91c8ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:45:45.147585 kubelet[2718]: E0813 01:45:45.147561 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ca4e7fe383ee8f07ba75cdff86f7de75d37ca9279b46ecd90de442164b91c8ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-69f479c99d-bwt8v" Aug 13 01:45:45.147647 kubelet[2718]: E0813 01:45:45.147582 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for 
sandbox \"ca4e7fe383ee8f07ba75cdff86f7de75d37ca9279b46ecd90de442164b91c8ab\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-69f479c99d-bwt8v" Aug 13 01:45:45.147647 kubelet[2718]: E0813 01:45:45.147625 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-69f479c99d-bwt8v_calico-system(6f67cd91-765b-4114-aaf7-bffde98c7853)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-69f479c99d-bwt8v_calico-system(6f67cd91-765b-4114-aaf7-bffde98c7853)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ca4e7fe383ee8f07ba75cdff86f7de75d37ca9279b46ecd90de442164b91c8ab\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-69f479c99d-bwt8v" podUID="6f67cd91-765b-4114-aaf7-bffde98c7853" Aug 13 01:45:45.793788 systemd[1]: run-netns-cni\x2dc12b3bd3\x2dd380\x2d6f50\x2d633e\x2d6988d9885e1d.mount: Deactivated successfully. Aug 13 01:45:45.793993 systemd[1]: run-netns-cni\x2d77ab873a\x2d3d98\x2d9263\x2d2e2a\x2d3031ab709d86.mount: Deactivated successfully. Aug 13 01:45:45.794070 systemd[1]: run-netns-cni\x2da8b24216\x2d67e9\x2d1a02\x2d9d09\x2db9a130853f62.mount: Deactivated successfully. Aug 13 01:45:45.794134 systemd[1]: run-netns-cni\x2de4ac0584\x2df422\x2d3d93\x2d8212\x2d50054f48972b.mount: Deactivated successfully. Aug 13 01:45:47.448296 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2356513677.mount: Deactivated successfully. Aug 13 01:45:47.451405 containerd[1559]: time="2025-08-13T01:45:47.451342739Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2356513677: write /var/lib/containerd/tmpmounts/containerd-mount2356513677/usr/bin/calico-node: no space left on device" Aug 13 01:45:47.452177 containerd[1559]: time="2025-08-13T01:45:47.451428838Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 01:45:47.452209 kubelet[2718]: E0813 01:45:47.451566 2718 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2356513677: write /var/lib/containerd/tmpmounts/containerd-mount2356513677/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 01:45:47.452209 kubelet[2718]: E0813 01:45:47.451623 2718 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2356513677: write /var/lib/containerd/tmpmounts/containerd-mount2356513677/usr/bin/calico-node: no space left on device" 
image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 01:45:47.453136 kubelet[2718]: E0813 01:45:47.453055 2718 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSGOLDMANESERVER,Value:goldmane.calico-system.svc:7443,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSFLUSHINTERVAL,Value:15,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:first-found,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOn
ly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.lock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2s4z7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-qgskr_calico-system(9adfb11c-9977-45e9-b78f-00f4995e46c5): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2356513677: write /var/lib/containerd/tmpmounts/containerd-mount2356513677/usr/bin/calico-node: no space left on device" logger="UnhandledError" Aug 13 01:45:47.454991 kubelet[2718]: E0813 01:45:47.454935 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2356513677: write /var/lib/containerd/tmpmounts/containerd-mount2356513677/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-qgskr" podUID="9adfb11c-9977-45e9-b78f-00f4995e46c5" Aug 13 01:45:47.915770 kubelet[2718]: E0813 01:45:47.915703 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount2356513677: write 
/var/lib/containerd/tmpmounts/containerd-mount2356513677/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-qgskr" podUID="9adfb11c-9977-45e9-b78f-00f4995e46c5" Aug 13 01:45:52.931107 kubelet[2718]: I0813 01:45:52.931035 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:45:52.931107 kubelet[2718]: I0813 01:45:52.931074 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:45:52.933664 kubelet[2718]: I0813 01:45:52.933632 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:45:52.946440 kubelet[2718]: I0813 01:45:52.946415 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:45:52.946542 kubelet[2718]: I0813 01:45:52.946498 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-apiserver/calico-apiserver-5669b4977d-swwjw","calico-system/goldmane-768f4c5c69-txf4m","calico-apiserver/calico-apiserver-5669b4977d-tcs4h","calico-system/whisker-69f479c99d-bwt8v","calico-system/calico-kube-controllers-7f9448c8f5-ck2sf","kube-system/coredns-668d6bf9bc-dfjz8","kube-system/coredns-668d6bf9bc-j47vf","calico-system/calico-node-qgskr","calico-system/csi-node-driver-dbqt2","tigera-operator/tigera-operator-747864d56d-xmx78","calico-system/calico-typha-b5b9867b4-p6jwz","kube-system/kube-controller-manager-172-232-7-133","kube-system/kube-proxy-fw2dv","kube-system/kube-apiserver-172-232-7-133","kube-system/kube-scheduler-172-232-7-133"] Aug 13 01:45:52.953050 kubelet[2718]: I0813 01:45:52.953020 2718 eviction_manager.go:627] "Eviction manager: pod is evicted successfully" pod="calico-apiserver/calico-apiserver-5669b4977d-swwjw" Aug 13 01:45:52.953050 kubelet[2718]: I0813 01:45:52.953046 2718 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-apiserver/calico-apiserver-5669b4977d-swwjw"] Aug 13 01:45:52.977944 kubelet[2718]: I0813 01:45:52.977850 2718 kubelet.go:2351] "Pod admission denied" podUID="c203a083-89c6-4c3a-bfd4-08282edc40e2" pod="calico-apiserver/calico-apiserver-5669b4977d-9vdt8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:45:53.009539 kubelet[2718]: I0813 01:45:53.009422 2718 kubelet.go:2351] "Pod admission denied" podUID="55aaa196-ddf5-4fa9-9eb8-4c05769371d7" pod="calico-apiserver/calico-apiserver-5669b4977d-rk7gc" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:45:53.019386 kubelet[2718]: I0813 01:45:53.019332 2718 status_manager.go:890] "Failed to get status for pod" podUID="55aaa196-ddf5-4fa9-9eb8-4c05769371d7" pod="calico-apiserver/calico-apiserver-5669b4977d-rk7gc" err="pods \"calico-apiserver-5669b4977d-rk7gc\" is forbidden: User \"system:node:172-232-7-133\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-apiserver\": no relationship found between node '172-232-7-133' and this object" Aug 13 01:45:53.023885 kubelet[2718]: I0813 01:45:53.023381 2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h69jw\" (UniqueName: \"kubernetes.io/projected/46ae53ba-59c0-4daa-8742-9c72b0d04e0c-kube-api-access-h69jw\") pod \"46ae53ba-59c0-4daa-8742-9c72b0d04e0c\" (UID: \"46ae53ba-59c0-4daa-8742-9c72b0d04e0c\") " Aug 13 01:45:53.023885 kubelet[2718]: I0813 01:45:53.023458 2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/46ae53ba-59c0-4daa-8742-9c72b0d04e0c-calico-apiserver-certs\") pod \"46ae53ba-59c0-4daa-8742-9c72b0d04e0c\" (UID: \"46ae53ba-59c0-4daa-8742-9c72b0d04e0c\") " Aug 13 01:45:53.033138 kubelet[2718]: I0813 01:45:53.033077 2718 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46ae53ba-59c0-4daa-8742-9c72b0d04e0c-kube-api-access-h69jw" (OuterVolumeSpecName: "kube-api-access-h69jw") pod "46ae53ba-59c0-4daa-8742-9c72b0d04e0c" (UID: "46ae53ba-59c0-4daa-8742-9c72b0d04e0c"). InnerVolumeSpecName "kube-api-access-h69jw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:45:53.036083 systemd[1]: var-lib-kubelet-pods-46ae53ba\x2d59c0\x2d4daa\x2d8742\x2d9c72b0d04e0c-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dh69jw.mount: Deactivated successfully. Aug 13 01:45:53.043901 kubelet[2718]: I0813 01:45:53.041286 2718 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46ae53ba-59c0-4daa-8742-9c72b0d04e0c-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "46ae53ba-59c0-4daa-8742-9c72b0d04e0c" (UID: "46ae53ba-59c0-4daa-8742-9c72b0d04e0c"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 01:45:53.044000 systemd[1]: var-lib-kubelet-pods-46ae53ba\x2d59c0\x2d4daa\x2d8742\x2d9c72b0d04e0c-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Aug 13 01:45:53.124722 kubelet[2718]: I0813 01:45:53.124659 2718 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-h69jw\" (UniqueName: \"kubernetes.io/projected/46ae53ba-59c0-4daa-8742-9c72b0d04e0c-kube-api-access-h69jw\") on node \"172-232-7-133\" DevicePath \"\"" Aug 13 01:45:53.124992 kubelet[2718]: I0813 01:45:53.124970 2718 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/46ae53ba-59c0-4daa-8742-9c72b0d04e0c-calico-apiserver-certs\") on node \"172-232-7-133\" DevicePath \"\"" Aug 13 01:45:53.934407 systemd[1]: Removed slice kubepods-besteffort-pod46ae53ba_59c0_4daa_8742_9c72b0d04e0c.slice - libcontainer container kubepods-besteffort-pod46ae53ba_59c0_4daa_8742_9c72b0d04e0c.slice. 
Aug 13 01:45:53.953839 kubelet[2718]: I0813 01:45:53.953789 2718 eviction_manager.go:458] "Eviction manager: pods successfully cleaned up" pods=["calico-apiserver/calico-apiserver-5669b4977d-swwjw"] Aug 13 01:45:53.976624 kubelet[2718]: I0813 01:45:53.975738 2718 kubelet.go:2351] "Pod admission denied" podUID="274dddbd-9a14-434b-be64-07f3e7f40852" pod="calico-apiserver/calico-apiserver-5669b4977d-sdmc4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:45:54.844030 kubelet[2718]: I0813 01:45:54.843704 2718 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Aug 13 01:45:54.844399 kubelet[2718]: E0813 01:45:54.844383 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:45:54.928592 kubelet[2718]: E0813 01:45:54.928561 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:45:55.781126 containerd[1559]: time="2025-08-13T01:45:55.781061529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f9448c8f5-ck2sf,Uid:35b780d0-9cdb-470f-8c65-ede949b6d595,Namespace:calico-system,Attempt:0,}" Aug 13 01:45:55.782431 containerd[1559]: time="2025-08-13T01:45:55.781429618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69f479c99d-bwt8v,Uid:6f67cd91-765b-4114-aaf7-bffde98c7853,Namespace:calico-system,Attempt:0,}" Aug 13 01:45:55.782431 containerd[1559]: time="2025-08-13T01:45:55.781583527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-txf4m,Uid:c79e958f-d144-4c10-a249-44ba255575ec,Namespace:calico-system,Attempt:0,}" Aug 13 01:45:55.882300 containerd[1559]: time="2025-08-13T01:45:55.882162572Z" level=error msg="Failed to destroy network for sandbox \"94e595b1680e3792efffdd725ef4e2c4f60948dcfbd9e490eef89255b50b381a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:45:55.885481 containerd[1559]: time="2025-08-13T01:45:55.885453667Z" level=error msg="Failed to destroy network for sandbox \"aec47665fb6701c6850d3aa100b9a9ad1ef2659a6dae85ba7bb497d21e1dd51f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:45:55.885474 systemd[1]: run-netns-cni\x2d448c3e7d\x2df7d0\x2d9201\x2d1b35\x2d634f54f7d1da.mount: Deactivated successfully. 
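
The recurring "Nameserver limits exceeded" warnings occur because the kubelet applies at most three nameservers from the node's resolv.conf and drops the rest; the three addresses it kept are the ones quoted in the log. A small Python sketch of that truncation (the fourth address in the example is a hypothetical placeholder for whatever entry was actually dropped):

    # Keep only the first three nameserver entries, as the kubelet does.
    MAX_NAMESERVERS = 3

    def applied_nameservers(resolv_conf):
        servers = [line.split()[1] for line in resolv_conf.splitlines()
                   if line.startswith("nameserver") and len(line.split()) > 1]
        return servers[:MAX_NAMESERVERS]

    example = ("nameserver 172.232.0.13\n"
               "nameserver 172.232.0.22\n"
               "nameserver 172.232.0.9\n"
               "nameserver 192.0.2.1\n")          # hypothetical dropped entry
    print(applied_nameservers(example))           # the three addresses named in the log
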
Aug 13 01:45:55.888063 containerd[1559]: time="2025-08-13T01:45:55.887971305Z" level=error msg="Failed to destroy network for sandbox \"36d5f23c8d63527ed85078425aebae35865ce39d2314cc3a34b81f2ba80a757f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:45:55.888423 containerd[1559]: time="2025-08-13T01:45:55.888249944Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f9448c8f5-ck2sf,Uid:35b780d0-9cdb-470f-8c65-ede949b6d595,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"94e595b1680e3792efffdd725ef4e2c4f60948dcfbd9e490eef89255b50b381a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:45:55.888577 kubelet[2718]: E0813 01:45:55.888523 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94e595b1680e3792efffdd725ef4e2c4f60948dcfbd9e490eef89255b50b381a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:45:55.888827 kubelet[2718]: E0813 01:45:55.888591 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94e595b1680e3792efffdd725ef4e2c4f60948dcfbd9e490eef89255b50b381a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:45:55.888827 kubelet[2718]: E0813 01:45:55.888615 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"94e595b1680e3792efffdd725ef4e2c4f60948dcfbd9e490eef89255b50b381a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:45:55.888827 kubelet[2718]: E0813 01:45:55.888654 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f9448c8f5-ck2sf_calico-system(35b780d0-9cdb-470f-8c65-ede949b6d595)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f9448c8f5-ck2sf_calico-system(35b780d0-9cdb-470f-8c65-ede949b6d595)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"94e595b1680e3792efffdd725ef4e2c4f60948dcfbd9e490eef89255b50b381a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" podUID="35b780d0-9cdb-470f-8c65-ede949b6d595" Aug 13 01:45:55.890949 containerd[1559]: time="2025-08-13T01:45:55.890919451Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-txf4m,Uid:c79e958f-d144-4c10-a249-44ba255575ec,Namespace:calico-system,Attempt:0,} failed, error" 
error="rpc error: code = Unknown desc = failed to setup network for sandbox \"36d5f23c8d63527ed85078425aebae35865ce39d2314cc3a34b81f2ba80a757f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:45:55.891223 kubelet[2718]: E0813 01:45:55.891176 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36d5f23c8d63527ed85078425aebae35865ce39d2314cc3a34b81f2ba80a757f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:45:55.891301 kubelet[2718]: E0813 01:45:55.891231 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36d5f23c8d63527ed85078425aebae35865ce39d2314cc3a34b81f2ba80a757f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-txf4m" Aug 13 01:45:55.891301 kubelet[2718]: E0813 01:45:55.891245 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"36d5f23c8d63527ed85078425aebae35865ce39d2314cc3a34b81f2ba80a757f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-txf4m" Aug 13 01:45:55.891301 kubelet[2718]: E0813 01:45:55.891270 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-txf4m_calico-system(c79e958f-d144-4c10-a249-44ba255575ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-txf4m_calico-system(c79e958f-d144-4c10-a249-44ba255575ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"36d5f23c8d63527ed85078425aebae35865ce39d2314cc3a34b81f2ba80a757f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-txf4m" podUID="c79e958f-d144-4c10-a249-44ba255575ec" Aug 13 01:45:55.891745 systemd[1]: run-netns-cni\x2d1ae5e9b5\x2d1d07\x2d72b1\x2d662a\x2d0c72ef83ecb2.mount: Deactivated successfully. Aug 13 01:45:55.892219 systemd[1]: run-netns-cni\x2d716e3e9b\x2dbeaa\x2d380b\x2d7468\x2dfd713953217c.mount: Deactivated successfully. 
Aug 13 01:45:55.892814 containerd[1559]: time="2025-08-13T01:45:55.891825917Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69f479c99d-bwt8v,Uid:6f67cd91-765b-4114-aaf7-bffde98c7853,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"aec47665fb6701c6850d3aa100b9a9ad1ef2659a6dae85ba7bb497d21e1dd51f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:45:55.893130 kubelet[2718]: E0813 01:45:55.892971 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aec47665fb6701c6850d3aa100b9a9ad1ef2659a6dae85ba7bb497d21e1dd51f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:45:55.893130 kubelet[2718]: E0813 01:45:55.893000 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aec47665fb6701c6850d3aa100b9a9ad1ef2659a6dae85ba7bb497d21e1dd51f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-69f479c99d-bwt8v" Aug 13 01:45:55.893130 kubelet[2718]: E0813 01:45:55.893015 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"aec47665fb6701c6850d3aa100b9a9ad1ef2659a6dae85ba7bb497d21e1dd51f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-69f479c99d-bwt8v" Aug 13 01:45:55.893130 kubelet[2718]: E0813 01:45:55.893038 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-69f479c99d-bwt8v_calico-system(6f67cd91-765b-4114-aaf7-bffde98c7853)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-69f479c99d-bwt8v_calico-system(6f67cd91-765b-4114-aaf7-bffde98c7853)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"aec47665fb6701c6850d3aa100b9a9ad1ef2659a6dae85ba7bb497d21e1dd51f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-69f479c99d-bwt8v" podUID="6f67cd91-765b-4114-aaf7-bffde98c7853" Aug 13 01:45:56.781000 containerd[1559]: time="2025-08-13T01:45:56.780938458Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5669b4977d-tcs4h,Uid:aea9f7c8-25a2-45fe-b92d-6e256fd0e960,Namespace:calico-apiserver,Attempt:0,}" Aug 13 01:45:56.846441 containerd[1559]: time="2025-08-13T01:45:56.846374197Z" level=error msg="Failed to destroy network for sandbox \"312b20d651acb0824855f52dd68fd2f64c243d317f683b02cadfe6869f607e24\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:45:56.849721 systemd[1]: 
run-netns-cni\x2d2dbc1600\x2dee07\x2dc794\x2d5580\x2dddef2a13a719.mount: Deactivated successfully. Aug 13 01:45:56.850628 containerd[1559]: time="2025-08-13T01:45:56.850510079Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5669b4977d-tcs4h,Uid:aea9f7c8-25a2-45fe-b92d-6e256fd0e960,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"312b20d651acb0824855f52dd68fd2f64c243d317f683b02cadfe6869f607e24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:45:56.851407 kubelet[2718]: E0813 01:45:56.851368 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"312b20d651acb0824855f52dd68fd2f64c243d317f683b02cadfe6869f607e24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:45:56.851539 kubelet[2718]: E0813 01:45:56.851423 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"312b20d651acb0824855f52dd68fd2f64c243d317f683b02cadfe6869f607e24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5669b4977d-tcs4h" Aug 13 01:45:56.851539 kubelet[2718]: E0813 01:45:56.851444 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"312b20d651acb0824855f52dd68fd2f64c243d317f683b02cadfe6869f607e24\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5669b4977d-tcs4h" Aug 13 01:45:56.851539 kubelet[2718]: E0813 01:45:56.851484 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5669b4977d-tcs4h_calico-apiserver(aea9f7c8-25a2-45fe-b92d-6e256fd0e960)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5669b4977d-tcs4h_calico-apiserver(aea9f7c8-25a2-45fe-b92d-6e256fd0e960)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"312b20d651acb0824855f52dd68fd2f64c243d317f683b02cadfe6869f607e24\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5669b4977d-tcs4h" podUID="aea9f7c8-25a2-45fe-b92d-6e256fd0e960" Aug 13 01:45:59.781603 kubelet[2718]: E0813 01:45:59.779398 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:45:59.781603 kubelet[2718]: E0813 01:45:59.779964 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:45:59.786190 containerd[1559]: 
time="2025-08-13T01:45:59.781118796Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dfjz8,Uid:dafdbb28-0754-4303-98bd-08c77ee94f1a,Namespace:kube-system,Attempt:0,}" Aug 13 01:45:59.786190 containerd[1559]: time="2025-08-13T01:45:59.781404715Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j47vf,Uid:0ba3c042-02d2-446d-bb82-0965919f2962,Namespace:kube-system,Attempt:0,}" Aug 13 01:45:59.786190 containerd[1559]: time="2025-08-13T01:45:59.781486745Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbqt2,Uid:9674627f-b072-4139-b18d-fdf07891e1e2,Namespace:calico-system,Attempt:0,}" Aug 13 01:45:59.896221 containerd[1559]: time="2025-08-13T01:45:59.895275192Z" level=error msg="Failed to destroy network for sandbox \"b057c7a21872a229b344751c17ea25b9137694c6dbf1d9fcccb8c48e7f4a284a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:45:59.902341 containerd[1559]: time="2025-08-13T01:45:59.899404537Z" level=error msg="Failed to destroy network for sandbox \"153eefbb1f5fcf24d7a602f651eed3e7d194900741167ed00ac4b0d1537d7e13\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:45:59.899946 systemd[1]: run-netns-cni\x2d8deb4068\x2d9792\x2d41bf\x2d744c\x2dc1a0f5f56380.mount: Deactivated successfully. Aug 13 01:45:59.906036 containerd[1559]: time="2025-08-13T01:45:59.903498933Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbqt2,Uid:9674627f-b072-4139-b18d-fdf07891e1e2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"153eefbb1f5fcf24d7a602f651eed3e7d194900741167ed00ac4b0d1537d7e13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:45:59.906036 containerd[1559]: time="2025-08-13T01:45:59.904532560Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j47vf,Uid:0ba3c042-02d2-446d-bb82-0965919f2962,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b057c7a21872a229b344751c17ea25b9137694c6dbf1d9fcccb8c48e7f4a284a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:45:59.906183 kubelet[2718]: E0813 01:45:59.904421 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"153eefbb1f5fcf24d7a602f651eed3e7d194900741167ed00ac4b0d1537d7e13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:45:59.906183 kubelet[2718]: E0813 01:45:59.904560 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"153eefbb1f5fcf24d7a602f651eed3e7d194900741167ed00ac4b0d1537d7e13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: 
check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:45:59.906183 kubelet[2718]: E0813 01:45:59.904580 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"153eefbb1f5fcf24d7a602f651eed3e7d194900741167ed00ac4b0d1537d7e13\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:45:59.906183 kubelet[2718]: E0813 01:45:59.905238 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b057c7a21872a229b344751c17ea25b9137694c6dbf1d9fcccb8c48e7f4a284a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:45:59.906183 kubelet[2718]: E0813 01:45:59.905262 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b057c7a21872a229b344751c17ea25b9137694c6dbf1d9fcccb8c48e7f4a284a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:45:59.906183 kubelet[2718]: E0813 01:45:59.905276 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b057c7a21872a229b344751c17ea25b9137694c6dbf1d9fcccb8c48e7f4a284a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:45:59.906183 kubelet[2718]: E0813 01:45:59.905319 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dbqt2_calico-system(9674627f-b072-4139-b18d-fdf07891e1e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dbqt2_calico-system(9674627f-b072-4139-b18d-fdf07891e1e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"153eefbb1f5fcf24d7a602f651eed3e7d194900741167ed00ac4b0d1537d7e13\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dbqt2" podUID="9674627f-b072-4139-b18d-fdf07891e1e2" Aug 13 01:45:59.903781 systemd[1]: run-netns-cni\x2da1ba3511\x2d8856\x2d166b\x2dedbc\x2dbe18680cbb62.mount: Deactivated successfully. 
Aug 13 01:45:59.908477 kubelet[2718]: E0813 01:45:59.906473 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-j47vf_kube-system(0ba3c042-02d2-446d-bb82-0965919f2962)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-j47vf_kube-system(0ba3c042-02d2-446d-bb82-0965919f2962)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b057c7a21872a229b344751c17ea25b9137694c6dbf1d9fcccb8c48e7f4a284a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-j47vf" podUID="0ba3c042-02d2-446d-bb82-0965919f2962" Aug 13 01:45:59.918903 containerd[1559]: time="2025-08-13T01:45:59.918350372Z" level=error msg="Failed to destroy network for sandbox \"7d1fe689493714d739d5e07756dc0b3b9ffbba2deec12f51149a4c4a0d10b96d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:45:59.920266 containerd[1559]: time="2025-08-13T01:45:59.920231096Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dfjz8,Uid:dafdbb28-0754-4303-98bd-08c77ee94f1a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d1fe689493714d739d5e07756dc0b3b9ffbba2deec12f51149a4c4a0d10b96d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:45:59.920560 kubelet[2718]: E0813 01:45:59.920512 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d1fe689493714d739d5e07756dc0b3b9ffbba2deec12f51149a4c4a0d10b96d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:45:59.920609 kubelet[2718]: E0813 01:45:59.920595 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d1fe689493714d739d5e07756dc0b3b9ffbba2deec12f51149a4c4a0d10b96d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:45:59.920641 kubelet[2718]: E0813 01:45:59.920623 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7d1fe689493714d739d5e07756dc0b3b9ffbba2deec12f51149a4c4a0d10b96d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:45:59.921329 kubelet[2718]: E0813 01:45:59.920934 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-dfjz8_kube-system(dafdbb28-0754-4303-98bd-08c77ee94f1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"coredns-668d6bf9bc-dfjz8_kube-system(dafdbb28-0754-4303-98bd-08c77ee94f1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7d1fe689493714d739d5e07756dc0b3b9ffbba2deec12f51149a4c4a0d10b96d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dfjz8" podUID="dafdbb28-0754-4303-98bd-08c77ee94f1a" Aug 13 01:46:00.787065 systemd[1]: run-netns-cni\x2d287d2c9f\x2d451c\x2d67b8\x2d1839\x2ddd38f8108ef0.mount: Deactivated successfully. Aug 13 01:46:01.790716 containerd[1559]: time="2025-08-13T01:46:01.790248887Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 01:46:03.797969 containerd[1559]: time="2025-08-13T01:46:03.797887294Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3689701265: write /var/lib/containerd/tmpmounts/containerd-mount3689701265/usr/bin/calico-node: no space left on device" Aug 13 01:46:03.799410 containerd[1559]: time="2025-08-13T01:46:03.797984034Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 01:46:03.798273 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3689701265.mount: Deactivated successfully. Aug 13 01:46:03.800415 kubelet[2718]: E0813 01:46:03.798132 2718 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3689701265: write /var/lib/containerd/tmpmounts/containerd-mount3689701265/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 01:46:03.800415 kubelet[2718]: E0813 01:46:03.798199 2718 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3689701265: write /var/lib/containerd/tmpmounts/containerd-mount3689701265/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 01:46:03.800957 kubelet[2718]: E0813 01:46:03.798394 2718 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSGOLDMANESERVER,Value:goldmane.calico-system.svc:7443,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSFLUSHINTERVAL,Value:15,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:first-found,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.l
ock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2s4z7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-qgskr_calico-system(9adfb11c-9977-45e9-b78f-00f4995e46c5): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3689701265: write /var/lib/containerd/tmpmounts/containerd-mount3689701265/usr/bin/calico-node: no space left on device" logger="UnhandledError" Aug 13 01:46:03.801071 kubelet[2718]: E0813 01:46:03.800193 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3689701265: write /var/lib/containerd/tmpmounts/containerd-mount3689701265/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-qgskr" podUID="9adfb11c-9977-45e9-b78f-00f4995e46c5" Aug 13 01:46:04.005604 kubelet[2718]: I0813 01:46:04.005550 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:46:04.005604 kubelet[2718]: I0813 01:46:04.005596 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:46:04.009279 kubelet[2718]: I0813 01:46:04.009239 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:46:04.027997 kubelet[2718]: I0813 01:46:04.027975 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:46:04.028093 kubelet[2718]: I0813 01:46:04.028066 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" 
pods=["calico-system/whisker-69f479c99d-bwt8v","calico-system/goldmane-768f4c5c69-txf4m","calico-apiserver/calico-apiserver-5669b4977d-tcs4h","kube-system/coredns-668d6bf9bc-dfjz8","kube-system/coredns-668d6bf9bc-j47vf","calico-system/calico-kube-controllers-7f9448c8f5-ck2sf","calico-system/calico-node-qgskr","calico-system/csi-node-driver-dbqt2","tigera-operator/tigera-operator-747864d56d-xmx78","calico-system/calico-typha-b5b9867b4-p6jwz","kube-system/kube-controller-manager-172-232-7-133","kube-system/kube-proxy-fw2dv","kube-system/kube-apiserver-172-232-7-133","kube-system/kube-scheduler-172-232-7-133"] Aug 13 01:46:04.034013 kubelet[2718]: I0813 01:46:04.033984 2718 eviction_manager.go:627] "Eviction manager: pod is evicted successfully" pod="calico-system/whisker-69f479c99d-bwt8v" Aug 13 01:46:04.034013 kubelet[2718]: I0813 01:46:04.034007 2718 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-system/whisker-69f479c99d-bwt8v"] Aug 13 01:46:04.098399 kubelet[2718]: I0813 01:46:04.097936 2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f67cd91-765b-4114-aaf7-bffde98c7853-whisker-ca-bundle\") pod \"6f67cd91-765b-4114-aaf7-bffde98c7853\" (UID: \"6f67cd91-765b-4114-aaf7-bffde98c7853\") " Aug 13 01:46:04.098399 kubelet[2718]: I0813 01:46:04.097979 2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6f67cd91-765b-4114-aaf7-bffde98c7853-whisker-backend-key-pair\") pod \"6f67cd91-765b-4114-aaf7-bffde98c7853\" (UID: \"6f67cd91-765b-4114-aaf7-bffde98c7853\") " Aug 13 01:46:04.098399 kubelet[2718]: I0813 01:46:04.098001 2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9cgvf\" (UniqueName: \"kubernetes.io/projected/6f67cd91-765b-4114-aaf7-bffde98c7853-kube-api-access-9cgvf\") pod \"6f67cd91-765b-4114-aaf7-bffde98c7853\" (UID: \"6f67cd91-765b-4114-aaf7-bffde98c7853\") " Aug 13 01:46:04.100082 kubelet[2718]: I0813 01:46:04.099505 2718 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6f67cd91-765b-4114-aaf7-bffde98c7853-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "6f67cd91-765b-4114-aaf7-bffde98c7853" (UID: "6f67cd91-765b-4114-aaf7-bffde98c7853"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 01:46:04.103692 systemd[1]: var-lib-kubelet-pods-6f67cd91\x2d765b\x2d4114\x2daaf7\x2dbffde98c7853-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9cgvf.mount: Deactivated successfully. Aug 13 01:46:04.104266 kubelet[2718]: I0813 01:46:04.104228 2718 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6f67cd91-765b-4114-aaf7-bffde98c7853-kube-api-access-9cgvf" (OuterVolumeSpecName: "kube-api-access-9cgvf") pod "6f67cd91-765b-4114-aaf7-bffde98c7853" (UID: "6f67cd91-765b-4114-aaf7-bffde98c7853"). InnerVolumeSpecName "kube-api-access-9cgvf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:46:04.108566 systemd[1]: var-lib-kubelet-pods-6f67cd91\x2d765b\x2d4114\x2daaf7\x2dbffde98c7853-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Aug 13 01:46:04.108957 kubelet[2718]: I0813 01:46:04.108917 2718 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6f67cd91-765b-4114-aaf7-bffde98c7853-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "6f67cd91-765b-4114-aaf7-bffde98c7853" (UID: "6f67cd91-765b-4114-aaf7-bffde98c7853"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 01:46:04.198918 kubelet[2718]: I0813 01:46:04.198839 2718 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6f67cd91-765b-4114-aaf7-bffde98c7853-whisker-ca-bundle\") on node \"172-232-7-133\" DevicePath \"\"" Aug 13 01:46:04.198918 kubelet[2718]: I0813 01:46:04.198887 2718 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/6f67cd91-765b-4114-aaf7-bffde98c7853-whisker-backend-key-pair\") on node \"172-232-7-133\" DevicePath \"\"" Aug 13 01:46:04.198918 kubelet[2718]: I0813 01:46:04.198922 2718 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9cgvf\" (UniqueName: \"kubernetes.io/projected/6f67cd91-765b-4114-aaf7-bffde98c7853-kube-api-access-9cgvf\") on node \"172-232-7-133\" DevicePath \"\"" Aug 13 01:46:04.790131 systemd[1]: Removed slice kubepods-besteffort-pod6f67cd91_765b_4114_aaf7_bffde98c7853.slice - libcontainer container kubepods-besteffort-pod6f67cd91_765b_4114_aaf7_bffde98c7853.slice. Aug 13 01:46:05.035193 kubelet[2718]: I0813 01:46:05.035125 2718 eviction_manager.go:458] "Eviction manager: pods successfully cleaned up" pods=["calico-system/whisker-69f479c99d-bwt8v"] Aug 13 01:46:06.779885 containerd[1559]: time="2025-08-13T01:46:06.779809052Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-txf4m,Uid:c79e958f-d144-4c10-a249-44ba255575ec,Namespace:calico-system,Attempt:0,}" Aug 13 01:46:06.875195 containerd[1559]: time="2025-08-13T01:46:06.875058301Z" level=error msg="Failed to destroy network for sandbox \"a185575f40984b0b7ca941a82cf711daeefbac5b7df2a663af9a4385e72a35fc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:06.878125 containerd[1559]: time="2025-08-13T01:46:06.878075204Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-768f4c5c69-txf4m,Uid:c79e958f-d144-4c10-a249-44ba255575ec,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a185575f40984b0b7ca941a82cf711daeefbac5b7df2a663af9a4385e72a35fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:06.878644 kubelet[2718]: E0813 01:46:06.878530 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a185575f40984b0b7ca941a82cf711daeefbac5b7df2a663af9a4385e72a35fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:06.878644 kubelet[2718]: E0813 01:46:06.878591 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup 
network for sandbox \"a185575f40984b0b7ca941a82cf711daeefbac5b7df2a663af9a4385e72a35fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-txf4m" Aug 13 01:46:06.878644 kubelet[2718]: E0813 01:46:06.878616 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a185575f40984b0b7ca941a82cf711daeefbac5b7df2a663af9a4385e72a35fc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-768f4c5c69-txf4m" Aug 13 01:46:06.879811 kubelet[2718]: E0813 01:46:06.879259 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-768f4c5c69-txf4m_calico-system(c79e958f-d144-4c10-a249-44ba255575ec)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-768f4c5c69-txf4m_calico-system(c79e958f-d144-4c10-a249-44ba255575ec)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a185575f40984b0b7ca941a82cf711daeefbac5b7df2a663af9a4385e72a35fc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-768f4c5c69-txf4m" podUID="c79e958f-d144-4c10-a249-44ba255575ec" Aug 13 01:46:06.882439 systemd[1]: run-netns-cni\x2dd22b6aca\x2d370f\x2d9c99\x2d1836\x2d0d3f4e405307.mount: Deactivated successfully. Aug 13 01:46:07.780478 containerd[1559]: time="2025-08-13T01:46:07.780432956Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5669b4977d-tcs4h,Uid:aea9f7c8-25a2-45fe-b92d-6e256fd0e960,Namespace:calico-apiserver,Attempt:0,}" Aug 13 01:46:07.825158 containerd[1559]: time="2025-08-13T01:46:07.825095593Z" level=error msg="Failed to destroy network for sandbox \"886912a7fbdd97f0b9757f4c740e9c38ebf229e682d1e7b6945f94da01b66c93\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:07.827244 systemd[1]: run-netns-cni\x2d256f00d2\x2de1bd\x2dee51\x2ded82\x2d3f7e733bf06c.mount: Deactivated successfully. 
Aug 13 01:46:07.828425 containerd[1559]: time="2025-08-13T01:46:07.828383777Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5669b4977d-tcs4h,Uid:aea9f7c8-25a2-45fe-b92d-6e256fd0e960,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"886912a7fbdd97f0b9757f4c740e9c38ebf229e682d1e7b6945f94da01b66c93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:07.829066 kubelet[2718]: E0813 01:46:07.828577 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"886912a7fbdd97f0b9757f4c740e9c38ebf229e682d1e7b6945f94da01b66c93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:07.829066 kubelet[2718]: E0813 01:46:07.828621 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"886912a7fbdd97f0b9757f4c740e9c38ebf229e682d1e7b6945f94da01b66c93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5669b4977d-tcs4h" Aug 13 01:46:07.829066 kubelet[2718]: E0813 01:46:07.828643 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"886912a7fbdd97f0b9757f4c740e9c38ebf229e682d1e7b6945f94da01b66c93\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5669b4977d-tcs4h" Aug 13 01:46:07.829066 kubelet[2718]: E0813 01:46:07.828690 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5669b4977d-tcs4h_calico-apiserver(aea9f7c8-25a2-45fe-b92d-6e256fd0e960)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5669b4977d-tcs4h_calico-apiserver(aea9f7c8-25a2-45fe-b92d-6e256fd0e960)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"886912a7fbdd97f0b9757f4c740e9c38ebf229e682d1e7b6945f94da01b66c93\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5669b4977d-tcs4h" podUID="aea9f7c8-25a2-45fe-b92d-6e256fd0e960" Aug 13 01:46:09.783983 containerd[1559]: time="2025-08-13T01:46:09.783297912Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f9448c8f5-ck2sf,Uid:35b780d0-9cdb-470f-8c65-ede949b6d595,Namespace:calico-system,Attempt:0,}" Aug 13 01:46:09.831285 containerd[1559]: time="2025-08-13T01:46:09.831221267Z" level=error msg="Failed to destroy network for sandbox \"800d3269bc61637b8a539bca41d93f009d635bfbf64ec82d3833187364a18fe5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:09.834111 
containerd[1559]: time="2025-08-13T01:46:09.833965562Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f9448c8f5-ck2sf,Uid:35b780d0-9cdb-470f-8c65-ede949b6d595,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"800d3269bc61637b8a539bca41d93f009d635bfbf64ec82d3833187364a18fe5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:09.834700 kubelet[2718]: E0813 01:46:09.834198 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"800d3269bc61637b8a539bca41d93f009d635bfbf64ec82d3833187364a18fe5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:09.834700 kubelet[2718]: E0813 01:46:09.834244 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"800d3269bc61637b8a539bca41d93f009d635bfbf64ec82d3833187364a18fe5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:46:09.834700 kubelet[2718]: E0813 01:46:09.834264 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"800d3269bc61637b8a539bca41d93f009d635bfbf64ec82d3833187364a18fe5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:46:09.834700 kubelet[2718]: E0813 01:46:09.834301 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f9448c8f5-ck2sf_calico-system(35b780d0-9cdb-470f-8c65-ede949b6d595)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f9448c8f5-ck2sf_calico-system(35b780d0-9cdb-470f-8c65-ede949b6d595)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"800d3269bc61637b8a539bca41d93f009d635bfbf64ec82d3833187364a18fe5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" podUID="35b780d0-9cdb-470f-8c65-ede949b6d595" Aug 13 01:46:09.834405 systemd[1]: run-netns-cni\x2db12fb846\x2de5bc\x2da696\x2d7951\x2d854544ef58c0.mount: Deactivated successfully. 
Aug 13 01:46:11.780772 kubelet[2718]: E0813 01:46:11.780730 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:46:11.781937 containerd[1559]: time="2025-08-13T01:46:11.781907330Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j47vf,Uid:0ba3c042-02d2-446d-bb82-0965919f2962,Namespace:kube-system,Attempt:0,}" Aug 13 01:46:11.830245 containerd[1559]: time="2025-08-13T01:46:11.830190716Z" level=error msg="Failed to destroy network for sandbox \"fb3a6de8895bee68e41ae2775c61133e8b1acbc54fff687403f7421473800e0e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:11.833108 systemd[1]: run-netns-cni\x2d3c6c61de\x2d0199\x2dbab4\x2dee20\x2d4afcf9f4de24.mount: Deactivated successfully. Aug 13 01:46:11.833435 containerd[1559]: time="2025-08-13T01:46:11.833404102Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j47vf,Uid:0ba3c042-02d2-446d-bb82-0965919f2962,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb3a6de8895bee68e41ae2775c61133e8b1acbc54fff687403f7421473800e0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:11.834991 kubelet[2718]: E0813 01:46:11.834283 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb3a6de8895bee68e41ae2775c61133e8b1acbc54fff687403f7421473800e0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:11.834991 kubelet[2718]: E0813 01:46:11.834346 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb3a6de8895bee68e41ae2775c61133e8b1acbc54fff687403f7421473800e0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:46:11.834991 kubelet[2718]: E0813 01:46:11.834366 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fb3a6de8895bee68e41ae2775c61133e8b1acbc54fff687403f7421473800e0e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:46:11.834991 kubelet[2718]: E0813 01:46:11.834446 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-j47vf_kube-system(0ba3c042-02d2-446d-bb82-0965919f2962)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-j47vf_kube-system(0ba3c042-02d2-446d-bb82-0965919f2962)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fb3a6de8895bee68e41ae2775c61133e8b1acbc54fff687403f7421473800e0e\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-j47vf" podUID="0ba3c042-02d2-446d-bb82-0965919f2962" Aug 13 01:46:12.781390 kubelet[2718]: E0813 01:46:12.780512 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:46:12.782270 containerd[1559]: time="2025-08-13T01:46:12.781438987Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dfjz8,Uid:dafdbb28-0754-4303-98bd-08c77ee94f1a,Namespace:kube-system,Attempt:0,}" Aug 13 01:46:12.834467 containerd[1559]: time="2025-08-13T01:46:12.834417432Z" level=error msg="Failed to destroy network for sandbox \"484491535ef5a25e24137110576f58f5ceb840f65d59aa5063121f74b6979030\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:12.840088 systemd[1]: run-netns-cni\x2d7ccaf07f\x2d65f5\x2d7d47\x2d0fbf\x2d88299942d284.mount: Deactivated successfully. Aug 13 01:46:12.841216 containerd[1559]: time="2025-08-13T01:46:12.841081555Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dfjz8,Uid:dafdbb28-0754-4303-98bd-08c77ee94f1a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"484491535ef5a25e24137110576f58f5ceb840f65d59aa5063121f74b6979030\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:12.841422 kubelet[2718]: E0813 01:46:12.841390 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"484491535ef5a25e24137110576f58f5ceb840f65d59aa5063121f74b6979030\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:12.841481 kubelet[2718]: E0813 01:46:12.841467 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"484491535ef5a25e24137110576f58f5ceb840f65d59aa5063121f74b6979030\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:46:12.842545 kubelet[2718]: E0813 01:46:12.841491 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"484491535ef5a25e24137110576f58f5ceb840f65d59aa5063121f74b6979030\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:46:12.842545 kubelet[2718]: E0813 01:46:12.841638 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-dfjz8_kube-system(dafdbb28-0754-4303-98bd-08c77ee94f1a)\" with CreatePodSandboxError: \"Failed 
to create sandbox for pod \\\"coredns-668d6bf9bc-dfjz8_kube-system(dafdbb28-0754-4303-98bd-08c77ee94f1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"484491535ef5a25e24137110576f58f5ceb840f65d59aa5063121f74b6979030\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dfjz8" podUID="dafdbb28-0754-4303-98bd-08c77ee94f1a" Aug 13 01:46:14.780927 containerd[1559]: time="2025-08-13T01:46:14.780289865Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbqt2,Uid:9674627f-b072-4139-b18d-fdf07891e1e2,Namespace:calico-system,Attempt:0,}" Aug 13 01:46:14.855239 containerd[1559]: time="2025-08-13T01:46:14.853456663Z" level=error msg="Failed to destroy network for sandbox \"820bfe08213bb5d243cda5ffab27c9bf8558d5430006eb92422c5abdbac4e9a1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:14.863414 containerd[1559]: time="2025-08-13T01:46:14.856029620Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbqt2,Uid:9674627f-b072-4139-b18d-fdf07891e1e2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"820bfe08213bb5d243cda5ffab27c9bf8558d5430006eb92422c5abdbac4e9a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:14.862144 systemd[1]: run-netns-cni\x2d7f9efdc6\x2df2f5\x2d5958\x2d8087\x2d702f700376af.mount: Deactivated successfully. 
Aug 13 01:46:14.864705 kubelet[2718]: E0813 01:46:14.857110 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"820bfe08213bb5d243cda5ffab27c9bf8558d5430006eb92422c5abdbac4e9a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:14.864705 kubelet[2718]: E0813 01:46:14.857198 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"820bfe08213bb5d243cda5ffab27c9bf8558d5430006eb92422c5abdbac4e9a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:46:14.864705 kubelet[2718]: E0813 01:46:14.857219 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"820bfe08213bb5d243cda5ffab27c9bf8558d5430006eb92422c5abdbac4e9a1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:46:14.864705 kubelet[2718]: E0813 01:46:14.857278 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dbqt2_calico-system(9674627f-b072-4139-b18d-fdf07891e1e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dbqt2_calico-system(9674627f-b072-4139-b18d-fdf07891e1e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"820bfe08213bb5d243cda5ffab27c9bf8558d5430006eb92422c5abdbac4e9a1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dbqt2" podUID="9674627f-b072-4139-b18d-fdf07891e1e2" Aug 13 01:46:15.066577 kubelet[2718]: I0813 01:46:15.066480 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:46:15.066577 kubelet[2718]: I0813 01:46:15.066517 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:46:15.068719 kubelet[2718]: I0813 01:46:15.068699 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:46:15.079185 kubelet[2718]: I0813 01:46:15.079170 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:46:15.079266 kubelet[2718]: I0813 01:46:15.079237 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/goldmane-768f4c5c69-txf4m","calico-apiserver/calico-apiserver-5669b4977d-tcs4h","kube-system/coredns-668d6bf9bc-j47vf","kube-system/coredns-668d6bf9bc-dfjz8","calico-system/calico-kube-controllers-7f9448c8f5-ck2sf","calico-system/calico-node-qgskr","calico-system/csi-node-driver-dbqt2","tigera-operator/tigera-operator-747864d56d-xmx78","calico-system/calico-typha-b5b9867b4-p6jwz","kube-system/kube-controller-manager-172-232-7-133","kube-system/kube-proxy-fw2dv","kube-system/kube-apiserver-172-232-7-133","kube-system/kube-scheduler-172-232-7-133"] Aug 13 01:46:15.084492 kubelet[2718]: 
I0813 01:46:15.084475 2718 eviction_manager.go:627] "Eviction manager: pod is evicted successfully" pod="calico-system/goldmane-768f4c5c69-txf4m" Aug 13 01:46:15.085137 kubelet[2718]: I0813 01:46:15.085124 2718 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-system/goldmane-768f4c5c69-txf4m"] Aug 13 01:46:15.169181 kubelet[2718]: I0813 01:46:15.169151 2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c79e958f-d144-4c10-a249-44ba255575ec-goldmane-ca-bundle\") pod \"c79e958f-d144-4c10-a249-44ba255575ec\" (UID: \"c79e958f-d144-4c10-a249-44ba255575ec\") " Aug 13 01:46:15.170145 kubelet[2718]: I0813 01:46:15.169188 2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c79e958f-d144-4c10-a249-44ba255575ec-config\") pod \"c79e958f-d144-4c10-a249-44ba255575ec\" (UID: \"c79e958f-d144-4c10-a249-44ba255575ec\") " Aug 13 01:46:15.170145 kubelet[2718]: I0813 01:46:15.169219 2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sjrsx\" (UniqueName: \"kubernetes.io/projected/c79e958f-d144-4c10-a249-44ba255575ec-kube-api-access-sjrsx\") pod \"c79e958f-d144-4c10-a249-44ba255575ec\" (UID: \"c79e958f-d144-4c10-a249-44ba255575ec\") " Aug 13 01:46:15.170145 kubelet[2718]: I0813 01:46:15.169242 2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c79e958f-d144-4c10-a249-44ba255575ec-goldmane-key-pair\") pod \"c79e958f-d144-4c10-a249-44ba255575ec\" (UID: \"c79e958f-d144-4c10-a249-44ba255575ec\") " Aug 13 01:46:15.170145 kubelet[2718]: I0813 01:46:15.169564 2718 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c79e958f-d144-4c10-a249-44ba255575ec-goldmane-ca-bundle" (OuterVolumeSpecName: "goldmane-ca-bundle") pod "c79e958f-d144-4c10-a249-44ba255575ec" (UID: "c79e958f-d144-4c10-a249-44ba255575ec"). InnerVolumeSpecName "goldmane-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 01:46:15.170145 kubelet[2718]: I0813 01:46:15.169811 2718 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c79e958f-d144-4c10-a249-44ba255575ec-config" (OuterVolumeSpecName: "config") pod "c79e958f-d144-4c10-a249-44ba255575ec" (UID: "c79e958f-d144-4c10-a249-44ba255575ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Aug 13 01:46:15.173650 kubelet[2718]: I0813 01:46:15.173624 2718 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c79e958f-d144-4c10-a249-44ba255575ec-goldmane-key-pair" (OuterVolumeSpecName: "goldmane-key-pair") pod "c79e958f-d144-4c10-a249-44ba255575ec" (UID: "c79e958f-d144-4c10-a249-44ba255575ec"). InnerVolumeSpecName "goldmane-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 01:46:15.175349 kubelet[2718]: I0813 01:46:15.175318 2718 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c79e958f-d144-4c10-a249-44ba255575ec-kube-api-access-sjrsx" (OuterVolumeSpecName: "kube-api-access-sjrsx") pod "c79e958f-d144-4c10-a249-44ba255575ec" (UID: "c79e958f-d144-4c10-a249-44ba255575ec"). InnerVolumeSpecName "kube-api-access-sjrsx". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:46:15.175392 systemd[1]: var-lib-kubelet-pods-c79e958f\x2dd144\x2d4c10\x2da249\x2d44ba255575ec-volumes-kubernetes.io\x7esecret-goldmane\x2dkey\x2dpair.mount: Deactivated successfully. Aug 13 01:46:15.178585 systemd[1]: var-lib-kubelet-pods-c79e958f\x2dd144\x2d4c10\x2da249\x2d44ba255575ec-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dsjrsx.mount: Deactivated successfully. Aug 13 01:46:15.269653 kubelet[2718]: I0813 01:46:15.269632 2718 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sjrsx\" (UniqueName: \"kubernetes.io/projected/c79e958f-d144-4c10-a249-44ba255575ec-kube-api-access-sjrsx\") on node \"172-232-7-133\" DevicePath \"\"" Aug 13 01:46:15.269653 kubelet[2718]: I0813 01:46:15.269653 2718 reconciler_common.go:299] "Volume detached for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/c79e958f-d144-4c10-a249-44ba255575ec-goldmane-key-pair\") on node \"172-232-7-133\" DevicePath \"\"" Aug 13 01:46:15.269771 kubelet[2718]: I0813 01:46:15.269664 2718 reconciler_common.go:299] "Volume detached for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c79e958f-d144-4c10-a249-44ba255575ec-goldmane-ca-bundle\") on node \"172-232-7-133\" DevicePath \"\"" Aug 13 01:46:15.269771 kubelet[2718]: I0813 01:46:15.269672 2718 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c79e958f-d144-4c10-a249-44ba255575ec-config\") on node \"172-232-7-133\" DevicePath \"\"" Aug 13 01:46:15.982202 systemd[1]: Removed slice kubepods-besteffort-podc79e958f_d144_4c10_a249_44ba255575ec.slice - libcontainer container kubepods-besteffort-podc79e958f_d144_4c10_a249_44ba255575ec.slice. Aug 13 01:46:16.085873 kubelet[2718]: I0813 01:46:16.085649 2718 eviction_manager.go:458] "Eviction manager: pods successfully cleaned up" pods=["calico-system/goldmane-768f4c5c69-txf4m"] Aug 13 01:46:16.781936 kubelet[2718]: E0813 01:46:16.781588 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount3689701265: write /var/lib/containerd/tmpmounts/containerd-mount3689701265/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-qgskr" podUID="9adfb11c-9977-45e9-b78f-00f4995e46c5" Aug 13 01:46:18.781349 containerd[1559]: time="2025-08-13T01:46:18.781307563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5669b4977d-tcs4h,Uid:aea9f7c8-25a2-45fe-b92d-6e256fd0e960,Namespace:calico-apiserver,Attempt:0,}" Aug 13 01:46:18.831324 containerd[1559]: time="2025-08-13T01:46:18.831268600Z" level=error msg="Failed to destroy network for sandbox \"8ab45f736e018fbd7bf8b014354d20c31ac7815bea8cbce3aa9957cc33f704c4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:18.833341 systemd[1]: run-netns-cni\x2dadaa84bd\x2d1848\x2dc9e4\x2d180f\x2d6e992caf0c96.mount: Deactivated successfully. 
Aug 13 01:46:18.835282 containerd[1559]: time="2025-08-13T01:46:18.835226778Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5669b4977d-tcs4h,Uid:aea9f7c8-25a2-45fe-b92d-6e256fd0e960,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ab45f736e018fbd7bf8b014354d20c31ac7815bea8cbce3aa9957cc33f704c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:18.835749 kubelet[2718]: E0813 01:46:18.835707 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ab45f736e018fbd7bf8b014354d20c31ac7815bea8cbce3aa9957cc33f704c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:18.837028 kubelet[2718]: E0813 01:46:18.835773 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ab45f736e018fbd7bf8b014354d20c31ac7815bea8cbce3aa9957cc33f704c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5669b4977d-tcs4h" Aug 13 01:46:18.837028 kubelet[2718]: E0813 01:46:18.835800 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8ab45f736e018fbd7bf8b014354d20c31ac7815bea8cbce3aa9957cc33f704c4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5669b4977d-tcs4h" Aug 13 01:46:18.837028 kubelet[2718]: E0813 01:46:18.835846 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5669b4977d-tcs4h_calico-apiserver(aea9f7c8-25a2-45fe-b92d-6e256fd0e960)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5669b4977d-tcs4h_calico-apiserver(aea9f7c8-25a2-45fe-b92d-6e256fd0e960)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8ab45f736e018fbd7bf8b014354d20c31ac7815bea8cbce3aa9957cc33f704c4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5669b4977d-tcs4h" podUID="aea9f7c8-25a2-45fe-b92d-6e256fd0e960" Aug 13 01:46:21.780419 containerd[1559]: time="2025-08-13T01:46:21.780377282Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f9448c8f5-ck2sf,Uid:35b780d0-9cdb-470f-8c65-ede949b6d595,Namespace:calico-system,Attempt:0,}" Aug 13 01:46:21.832578 containerd[1559]: time="2025-08-13T01:46:21.832522448Z" level=error msg="Failed to destroy network for sandbox \"ffc0c92c46bb2f805d4a6dce19aebca687ee15384751aa516659d5af5b36d71f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:21.834888 
systemd[1]: run-netns-cni\x2d815e73c3\x2d8ff2\x2dbfa4\x2da0c9\x2d4483e6cf2804.mount: Deactivated successfully. Aug 13 01:46:21.835888 containerd[1559]: time="2025-08-13T01:46:21.835618117Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f9448c8f5-ck2sf,Uid:35b780d0-9cdb-470f-8c65-ede949b6d595,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffc0c92c46bb2f805d4a6dce19aebca687ee15384751aa516659d5af5b36d71f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:21.836291 kubelet[2718]: E0813 01:46:21.836251 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffc0c92c46bb2f805d4a6dce19aebca687ee15384751aa516659d5af5b36d71f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:21.838139 kubelet[2718]: E0813 01:46:21.836299 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffc0c92c46bb2f805d4a6dce19aebca687ee15384751aa516659d5af5b36d71f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:46:21.838139 kubelet[2718]: E0813 01:46:21.836321 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ffc0c92c46bb2f805d4a6dce19aebca687ee15384751aa516659d5af5b36d71f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:46:21.838139 kubelet[2718]: E0813 01:46:21.836359 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f9448c8f5-ck2sf_calico-system(35b780d0-9cdb-470f-8c65-ede949b6d595)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f9448c8f5-ck2sf_calico-system(35b780d0-9cdb-470f-8c65-ede949b6d595)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ffc0c92c46bb2f805d4a6dce19aebca687ee15384751aa516659d5af5b36d71f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" podUID="35b780d0-9cdb-470f-8c65-ede949b6d595" Aug 13 01:46:23.779992 kubelet[2718]: E0813 01:46:23.779567 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:46:23.779992 kubelet[2718]: E0813 01:46:23.779686 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:46:23.780551 
containerd[1559]: time="2025-08-13T01:46:23.780102609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dfjz8,Uid:dafdbb28-0754-4303-98bd-08c77ee94f1a,Namespace:kube-system,Attempt:0,}" Aug 13 01:46:23.780756 containerd[1559]: time="2025-08-13T01:46:23.780546249Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j47vf,Uid:0ba3c042-02d2-446d-bb82-0965919f2962,Namespace:kube-system,Attempt:0,}" Aug 13 01:46:23.847926 containerd[1559]: time="2025-08-13T01:46:23.845044197Z" level=error msg="Failed to destroy network for sandbox \"9e8065318a5cd8ecc31861b014ae4a91463a6290f2c5055c639d14a595136307\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:23.848089 systemd[1]: run-netns-cni\x2d5f5a6a47\x2db0de\x2d85ff\x2d4a7b\x2db0cd404cff8e.mount: Deactivated successfully. Aug 13 01:46:23.850095 containerd[1559]: time="2025-08-13T01:46:23.850065155Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j47vf,Uid:0ba3c042-02d2-446d-bb82-0965919f2962,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e8065318a5cd8ecc31861b014ae4a91463a6290f2c5055c639d14a595136307\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:23.850613 kubelet[2718]: E0813 01:46:23.850471 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e8065318a5cd8ecc31861b014ae4a91463a6290f2c5055c639d14a595136307\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:23.850613 kubelet[2718]: E0813 01:46:23.850518 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e8065318a5cd8ecc31861b014ae4a91463a6290f2c5055c639d14a595136307\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:46:23.850613 kubelet[2718]: E0813 01:46:23.850538 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9e8065318a5cd8ecc31861b014ae4a91463a6290f2c5055c639d14a595136307\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:46:23.850613 kubelet[2718]: E0813 01:46:23.850577 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-j47vf_kube-system(0ba3c042-02d2-446d-bb82-0965919f2962)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-j47vf_kube-system(0ba3c042-02d2-446d-bb82-0965919f2962)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9e8065318a5cd8ecc31861b014ae4a91463a6290f2c5055c639d14a595136307\\\": plugin type=\\\"calico\\\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-j47vf" podUID="0ba3c042-02d2-446d-bb82-0965919f2962" Aug 13 01:46:23.851001 containerd[1559]: time="2025-08-13T01:46:23.850922555Z" level=error msg="Failed to destroy network for sandbox \"93dc1b92c01f30912149ff99d3dde95603c143b0658f06b3c32dff5b43e7759d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:23.853756 containerd[1559]: time="2025-08-13T01:46:23.853383494Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dfjz8,Uid:dafdbb28-0754-4303-98bd-08c77ee94f1a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"93dc1b92c01f30912149ff99d3dde95603c143b0658f06b3c32dff5b43e7759d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:23.854290 systemd[1]: run-netns-cni\x2d0d8cba5a\x2dac97\x2dd323\x2d45c2\x2d12605fa33b1c.mount: Deactivated successfully. Aug 13 01:46:23.854400 kubelet[2718]: E0813 01:46:23.854381 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93dc1b92c01f30912149ff99d3dde95603c143b0658f06b3c32dff5b43e7759d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:23.854428 kubelet[2718]: E0813 01:46:23.854411 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93dc1b92c01f30912149ff99d3dde95603c143b0658f06b3c32dff5b43e7759d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:46:23.854543 kubelet[2718]: E0813 01:46:23.854428 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"93dc1b92c01f30912149ff99d3dde95603c143b0658f06b3c32dff5b43e7759d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:46:23.854543 kubelet[2718]: E0813 01:46:23.854455 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-dfjz8_kube-system(dafdbb28-0754-4303-98bd-08c77ee94f1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-dfjz8_kube-system(dafdbb28-0754-4303-98bd-08c77ee94f1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"93dc1b92c01f30912149ff99d3dde95603c143b0658f06b3c32dff5b43e7759d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dfjz8" podUID="dafdbb28-0754-4303-98bd-08c77ee94f1a" Aug 
13 01:46:26.112324 kubelet[2718]: I0813 01:46:26.112239 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:46:26.112324 kubelet[2718]: I0813 01:46:26.112277 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:46:26.115646 kubelet[2718]: I0813 01:46:26.115622 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:46:26.127644 kubelet[2718]: I0813 01:46:26.127603 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:46:26.127776 kubelet[2718]: I0813 01:46:26.127687 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-apiserver/calico-apiserver-5669b4977d-tcs4h","kube-system/coredns-668d6bf9bc-j47vf","kube-system/coredns-668d6bf9bc-dfjz8","calico-system/calico-kube-controllers-7f9448c8f5-ck2sf","calico-system/csi-node-driver-dbqt2","calico-system/calico-node-qgskr","tigera-operator/tigera-operator-747864d56d-xmx78","calico-system/calico-typha-b5b9867b4-p6jwz","kube-system/kube-controller-manager-172-232-7-133","kube-system/kube-proxy-fw2dv","kube-system/kube-apiserver-172-232-7-133","kube-system/kube-scheduler-172-232-7-133"] Aug 13 01:46:26.133549 kubelet[2718]: I0813 01:46:26.133527 2718 eviction_manager.go:627] "Eviction manager: pod is evicted successfully" pod="calico-apiserver/calico-apiserver-5669b4977d-tcs4h" Aug 13 01:46:26.133549 kubelet[2718]: I0813 01:46:26.133546 2718 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["calico-apiserver/calico-apiserver-5669b4977d-tcs4h"] Aug 13 01:46:26.237983 kubelet[2718]: I0813 01:46:26.237927 2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5fvk7\" (UniqueName: \"kubernetes.io/projected/aea9f7c8-25a2-45fe-b92d-6e256fd0e960-kube-api-access-5fvk7\") pod \"aea9f7c8-25a2-45fe-b92d-6e256fd0e960\" (UID: \"aea9f7c8-25a2-45fe-b92d-6e256fd0e960\") " Aug 13 01:46:26.238481 kubelet[2718]: I0813 01:46:26.238457 2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/aea9f7c8-25a2-45fe-b92d-6e256fd0e960-calico-apiserver-certs\") pod \"aea9f7c8-25a2-45fe-b92d-6e256fd0e960\" (UID: \"aea9f7c8-25a2-45fe-b92d-6e256fd0e960\") " Aug 13 01:46:26.243367 kubelet[2718]: I0813 01:46:26.243341 2718 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aea9f7c8-25a2-45fe-b92d-6e256fd0e960-calico-apiserver-certs" (OuterVolumeSpecName: "calico-apiserver-certs") pod "aea9f7c8-25a2-45fe-b92d-6e256fd0e960" (UID: "aea9f7c8-25a2-45fe-b92d-6e256fd0e960"). InnerVolumeSpecName "calico-apiserver-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Aug 13 01:46:26.244763 systemd[1]: var-lib-kubelet-pods-aea9f7c8\x2d25a2\x2d45fe\x2db92d\x2d6e256fd0e960-volumes-kubernetes.io\x7esecret-calico\x2dapiserver\x2dcerts.mount: Deactivated successfully. Aug 13 01:46:26.246266 kubelet[2718]: I0813 01:46:26.246188 2718 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aea9f7c8-25a2-45fe-b92d-6e256fd0e960-kube-api-access-5fvk7" (OuterVolumeSpecName: "kube-api-access-5fvk7") pod "aea9f7c8-25a2-45fe-b92d-6e256fd0e960" (UID: "aea9f7c8-25a2-45fe-b92d-6e256fd0e960"). InnerVolumeSpecName "kube-api-access-5fvk7". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:46:26.249284 systemd[1]: var-lib-kubelet-pods-aea9f7c8\x2d25a2\x2d45fe\x2db92d\x2d6e256fd0e960-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d5fvk7.mount: Deactivated successfully. Aug 13 01:46:26.339399 kubelet[2718]: I0813 01:46:26.339357 2718 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5fvk7\" (UniqueName: \"kubernetes.io/projected/aea9f7c8-25a2-45fe-b92d-6e256fd0e960-kube-api-access-5fvk7\") on node \"172-232-7-133\" DevicePath \"\"" Aug 13 01:46:26.339399 kubelet[2718]: I0813 01:46:26.339380 2718 reconciler_common.go:299] "Volume detached for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/aea9f7c8-25a2-45fe-b92d-6e256fd0e960-calico-apiserver-certs\") on node \"172-232-7-133\" DevicePath \"\"" Aug 13 01:46:26.780592 containerd[1559]: time="2025-08-13T01:46:26.780319076Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbqt2,Uid:9674627f-b072-4139-b18d-fdf07891e1e2,Namespace:calico-system,Attempt:0,}" Aug 13 01:46:26.795580 systemd[1]: Removed slice kubepods-besteffort-podaea9f7c8_25a2_45fe_b92d_6e256fd0e960.slice - libcontainer container kubepods-besteffort-podaea9f7c8_25a2_45fe_b92d_6e256fd0e960.slice. Aug 13 01:46:26.836711 containerd[1559]: time="2025-08-13T01:46:26.836665736Z" level=error msg="Failed to destroy network for sandbox \"bc379211dbe2857d5da412e8933e81d27ef05c886e0ef6adadb591a1a55201fd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:26.841492 containerd[1559]: time="2025-08-13T01:46:26.840952695Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbqt2,Uid:9674627f-b072-4139-b18d-fdf07891e1e2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc379211dbe2857d5da412e8933e81d27ef05c886e0ef6adadb591a1a55201fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:26.841438 systemd[1]: run-netns-cni\x2d11237c88\x2d2d8a\x2da53f\x2dbd67\x2de34ab9d3d95c.mount: Deactivated successfully. 
Aug 13 01:46:26.842050 kubelet[2718]: E0813 01:46:26.841938 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc379211dbe2857d5da412e8933e81d27ef05c886e0ef6adadb591a1a55201fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:26.842645 kubelet[2718]: E0813 01:46:26.842225 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc379211dbe2857d5da412e8933e81d27ef05c886e0ef6adadb591a1a55201fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:46:26.842645 kubelet[2718]: E0813 01:46:26.842248 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bc379211dbe2857d5da412e8933e81d27ef05c886e0ef6adadb591a1a55201fd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:46:26.842645 kubelet[2718]: E0813 01:46:26.842296 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dbqt2_calico-system(9674627f-b072-4139-b18d-fdf07891e1e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dbqt2_calico-system(9674627f-b072-4139-b18d-fdf07891e1e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bc379211dbe2857d5da412e8933e81d27ef05c886e0ef6adadb591a1a55201fd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dbqt2" podUID="9674627f-b072-4139-b18d-fdf07891e1e2" Aug 13 01:46:27.134581 kubelet[2718]: I0813 01:46:27.134459 2718 eviction_manager.go:458] "Eviction manager: pods successfully cleaned up" pods=["calico-apiserver/calico-apiserver-5669b4977d-tcs4h"] Aug 13 01:46:27.149128 kubelet[2718]: I0813 01:46:27.149109 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:46:27.149203 kubelet[2718]: I0813 01:46:27.149142 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:46:27.152094 kubelet[2718]: I0813 01:46:27.151553 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:46:27.162017 kubelet[2718]: I0813 01:46:27.162002 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:46:27.162163 kubelet[2718]: I0813 01:46:27.162132 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" 
pods=["calico-system/calico-kube-controllers-7f9448c8f5-ck2sf","kube-system/coredns-668d6bf9bc-j47vf","kube-system/coredns-668d6bf9bc-dfjz8","calico-system/csi-node-driver-dbqt2","calico-system/calico-node-qgskr","tigera-operator/tigera-operator-747864d56d-xmx78","calico-system/calico-typha-b5b9867b4-p6jwz","kube-system/kube-controller-manager-172-232-7-133","kube-system/kube-proxy-fw2dv","kube-system/kube-apiserver-172-232-7-133","kube-system/kube-scheduler-172-232-7-133"] Aug 13 01:46:27.162163 kubelet[2718]: E0813 01:46:27.162162 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:46:27.162475 kubelet[2718]: E0813 01:46:27.162171 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:46:27.162475 kubelet[2718]: E0813 01:46:27.162177 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:46:27.162475 kubelet[2718]: E0813 01:46:27.162184 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:46:27.162475 kubelet[2718]: E0813 01:46:27.162191 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-qgskr" Aug 13 01:46:27.162916 containerd[1559]: time="2025-08-13T01:46:27.162892630Z" level=info msg="StopContainer for \"791f4e0b9fb5d166424e71cf19fe156e098183010c642bf34a609399458696dc\" with timeout 2 (s)" Aug 13 01:46:27.163573 containerd[1559]: time="2025-08-13T01:46:27.163489700Z" level=info msg="Stop container \"791f4e0b9fb5d166424e71cf19fe156e098183010c642bf34a609399458696dc\" with signal terminated" Aug 13 01:46:27.184735 systemd[1]: cri-containerd-791f4e0b9fb5d166424e71cf19fe156e098183010c642bf34a609399458696dc.scope: Deactivated successfully. Aug 13 01:46:27.185495 systemd[1]: cri-containerd-791f4e0b9fb5d166424e71cf19fe156e098183010c642bf34a609399458696dc.scope: Consumed 4.274s CPU time, 85M memory peak. Aug 13 01:46:27.187243 containerd[1559]: time="2025-08-13T01:46:27.187195187Z" level=info msg="received exit event container_id:\"791f4e0b9fb5d166424e71cf19fe156e098183010c642bf34a609399458696dc\" id:\"791f4e0b9fb5d166424e71cf19fe156e098183010c642bf34a609399458696dc\" pid:3038 exited_at:{seconds:1755049587 nanos:186692716}" Aug 13 01:46:27.187443 containerd[1559]: time="2025-08-13T01:46:27.187415886Z" level=info msg="TaskExit event in podsandbox handler container_id:\"791f4e0b9fb5d166424e71cf19fe156e098183010c642bf34a609399458696dc\" id:\"791f4e0b9fb5d166424e71cf19fe156e098183010c642bf34a609399458696dc\" pid:3038 exited_at:{seconds:1755049587 nanos:186692716}" Aug 13 01:46:27.207670 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-791f4e0b9fb5d166424e71cf19fe156e098183010c642bf34a609399458696dc-rootfs.mount: Deactivated successfully. 
Aug 13 01:46:27.216550 containerd[1559]: time="2025-08-13T01:46:27.216521242Z" level=info msg="StopContainer for \"791f4e0b9fb5d166424e71cf19fe156e098183010c642bf34a609399458696dc\" returns successfully" Aug 13 01:46:27.217361 containerd[1559]: time="2025-08-13T01:46:27.217323072Z" level=info msg="StopPodSandbox for \"2ca4c96c3f2bc5d7d11a98330829be80628ad125b77438e59f926cbe512d5b5a\"" Aug 13 01:46:27.217412 containerd[1559]: time="2025-08-13T01:46:27.217367882Z" level=info msg="Container to stop \"791f4e0b9fb5d166424e71cf19fe156e098183010c642bf34a609399458696dc\" must be in running or unknown state, current state \"CONTAINER_EXITED\"" Aug 13 01:46:27.223821 systemd[1]: cri-containerd-2ca4c96c3f2bc5d7d11a98330829be80628ad125b77438e59f926cbe512d5b5a.scope: Deactivated successfully. Aug 13 01:46:27.225896 containerd[1559]: time="2025-08-13T01:46:27.225470391Z" level=info msg="TaskExit event in podsandbox handler container_id:\"2ca4c96c3f2bc5d7d11a98330829be80628ad125b77438e59f926cbe512d5b5a\" id:\"2ca4c96c3f2bc5d7d11a98330829be80628ad125b77438e59f926cbe512d5b5a\" pid:2983 exit_status:137 exited_at:{seconds:1755049587 nanos:225044421}" Aug 13 01:46:27.258376 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-2ca4c96c3f2bc5d7d11a98330829be80628ad125b77438e59f926cbe512d5b5a-rootfs.mount: Deactivated successfully. Aug 13 01:46:27.263130 containerd[1559]: time="2025-08-13T01:46:27.263105385Z" level=info msg="shim disconnected" id=2ca4c96c3f2bc5d7d11a98330829be80628ad125b77438e59f926cbe512d5b5a namespace=k8s.io Aug 13 01:46:27.263130 containerd[1559]: time="2025-08-13T01:46:27.263128605Z" level=warning msg="cleaning up after shim disconnected" id=2ca4c96c3f2bc5d7d11a98330829be80628ad125b77438e59f926cbe512d5b5a namespace=k8s.io Aug 13 01:46:27.263291 containerd[1559]: time="2025-08-13T01:46:27.263135935Z" level=info msg="cleaning up dead shim" namespace=k8s.io Aug 13 01:46:27.276897 containerd[1559]: time="2025-08-13T01:46:27.276777732Z" level=info msg="received exit event sandbox_id:\"2ca4c96c3f2bc5d7d11a98330829be80628ad125b77438e59f926cbe512d5b5a\" exit_status:137 exited_at:{seconds:1755049587 nanos:225044421}" Aug 13 01:46:27.278808 containerd[1559]: time="2025-08-13T01:46:27.278776692Z" level=info msg="TearDown network for sandbox \"2ca4c96c3f2bc5d7d11a98330829be80628ad125b77438e59f926cbe512d5b5a\" successfully" Aug 13 01:46:27.279134 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-2ca4c96c3f2bc5d7d11a98330829be80628ad125b77438e59f926cbe512d5b5a-shm.mount: Deactivated successfully. Aug 13 01:46:27.279295 containerd[1559]: time="2025-08-13T01:46:27.279134743Z" level=info msg="StopPodSandbox for \"2ca4c96c3f2bc5d7d11a98330829be80628ad125b77438e59f926cbe512d5b5a\" returns successfully" Aug 13 01:46:27.284352 kubelet[2718]: I0813 01:46:27.284325 2718 eviction_manager.go:627] "Eviction manager: pod is evicted successfully" pod="tigera-operator/tigera-operator-747864d56d-xmx78" Aug 13 01:46:27.284352 kubelet[2718]: I0813 01:46:27.284348 2718 eviction_manager.go:208] "Eviction manager: pods evicted, waiting for pod to be cleaned up" pods=["tigera-operator/tigera-operator-747864d56d-xmx78"] Aug 13 01:46:27.308481 kubelet[2718]: I0813 01:46:27.308452 2718 kubelet.go:2351] "Pod admission denied" podUID="9b0e6ffd-4ac6-4c48-8b0c-8674cf62714a" pod="tigera-operator/tigera-operator-747864d56d-zwk8x" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:27.331487 kubelet[2718]: I0813 01:46:27.331451 2718 kubelet.go:2351] "Pod admission denied" podUID="3e79e092-575f-4a2d-8ce9-dc62a282156c" pod="tigera-operator/tigera-operator-747864d56d-zxqxc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:27.345767 kubelet[2718]: I0813 01:46:27.345743 2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v7zvm\" (UniqueName: \"kubernetes.io/projected/d05cedbe-da49-4dae-84df-e86dd09b0e02-kube-api-access-v7zvm\") pod \"d05cedbe-da49-4dae-84df-e86dd09b0e02\" (UID: \"d05cedbe-da49-4dae-84df-e86dd09b0e02\") " Aug 13 01:46:27.345991 kubelet[2718]: I0813 01:46:27.345777 2718 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d05cedbe-da49-4dae-84df-e86dd09b0e02-var-lib-calico\") pod \"d05cedbe-da49-4dae-84df-e86dd09b0e02\" (UID: \"d05cedbe-da49-4dae-84df-e86dd09b0e02\") " Aug 13 01:46:27.345991 kubelet[2718]: I0813 01:46:27.345882 2718 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d05cedbe-da49-4dae-84df-e86dd09b0e02-var-lib-calico" (OuterVolumeSpecName: "var-lib-calico") pod "d05cedbe-da49-4dae-84df-e86dd09b0e02" (UID: "d05cedbe-da49-4dae-84df-e86dd09b0e02"). InnerVolumeSpecName "var-lib-calico". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Aug 13 01:46:27.354782 kubelet[2718]: I0813 01:46:27.354237 2718 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d05cedbe-da49-4dae-84df-e86dd09b0e02-kube-api-access-v7zvm" (OuterVolumeSpecName: "kube-api-access-v7zvm") pod "d05cedbe-da49-4dae-84df-e86dd09b0e02" (UID: "d05cedbe-da49-4dae-84df-e86dd09b0e02"). InnerVolumeSpecName "kube-api-access-v7zvm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Aug 13 01:46:27.355393 systemd[1]: var-lib-kubelet-pods-d05cedbe\x2dda49\x2d4dae\x2d84df\x2de86dd09b0e02-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dv7zvm.mount: Deactivated successfully. Aug 13 01:46:27.360967 kubelet[2718]: I0813 01:46:27.360870 2718 kubelet.go:2351] "Pod admission denied" podUID="ed1e1022-6c6c-428f-bd6e-535e2d8869e7" pod="tigera-operator/tigera-operator-747864d56d-cncqw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:27.379536 kubelet[2718]: I0813 01:46:27.379491 2718 kubelet.go:2351] "Pod admission denied" podUID="c67f0e1d-b708-4c07-89e4-cc9570051208" pod="tigera-operator/tigera-operator-747864d56d-fv25j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:27.398547 kubelet[2718]: I0813 01:46:27.398311 2718 kubelet.go:2351] "Pod admission denied" podUID="677b7d00-ce3f-4feb-809c-25295d8a4141" pod="tigera-operator/tigera-operator-747864d56d-n97j9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:27.419762 kubelet[2718]: I0813 01:46:27.419717 2718 kubelet.go:2351] "Pod admission denied" podUID="5d648740-2252-4173-b4fb-0458956c14e0" pod="tigera-operator/tigera-operator-747864d56d-ndx8z" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:27.440952 kubelet[2718]: I0813 01:46:27.440847 2718 kubelet.go:2351] "Pod admission denied" podUID="23f55535-0bca-4846-b432-16c00a3f1450" pod="tigera-operator/tigera-operator-747864d56d-xhlds" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:27.446594 kubelet[2718]: I0813 01:46:27.446573 2718 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-v7zvm\" (UniqueName: \"kubernetes.io/projected/d05cedbe-da49-4dae-84df-e86dd09b0e02-kube-api-access-v7zvm\") on node \"172-232-7-133\" DevicePath \"\"" Aug 13 01:46:27.446594 kubelet[2718]: I0813 01:46:27.446593 2718 reconciler_common.go:299] "Volume detached for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d05cedbe-da49-4dae-84df-e86dd09b0e02-var-lib-calico\") on node \"172-232-7-133\" DevicePath \"\"" Aug 13 01:46:27.464120 kubelet[2718]: I0813 01:46:27.464069 2718 kubelet.go:2351] "Pod admission denied" podUID="91c49d3d-b92b-4d25-9fb9-1a9585067b97" pod="tigera-operator/tigera-operator-747864d56d-856dv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:27.601059 kubelet[2718]: I0813 01:46:27.600839 2718 kubelet.go:2351] "Pod admission denied" podUID="4c1530ed-df1c-4ef2-a8d5-19c5a7fc7884" pod="tigera-operator/tigera-operator-747864d56d-mgs7h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:27.752798 kubelet[2718]: I0813 01:46:27.752636 2718 kubelet.go:2351] "Pod admission denied" podUID="3dd38e2c-7dec-4d70-a8a0-44abaf1bd0f0" pod="tigera-operator/tigera-operator-747864d56d-nk8z9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:27.781201 containerd[1559]: time="2025-08-13T01:46:27.780568918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 01:46:27.900793 kubelet[2718]: I0813 01:46:27.900762 2718 kubelet.go:2351] "Pod admission denied" podUID="115e571a-1f3b-4f44-b2aa-d98cc4d21484" pod="tigera-operator/tigera-operator-747864d56d-lv4f4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:27.996366 kubelet[2718]: I0813 01:46:27.996301 2718 scope.go:117] "RemoveContainer" containerID="791f4e0b9fb5d166424e71cf19fe156e098183010c642bf34a609399458696dc" Aug 13 01:46:28.002964 containerd[1559]: time="2025-08-13T01:46:28.002832075Z" level=info msg="RemoveContainer for \"791f4e0b9fb5d166424e71cf19fe156e098183010c642bf34a609399458696dc\"" Aug 13 01:46:28.005023 systemd[1]: Removed slice kubepods-besteffort-podd05cedbe_da49_4dae_84df_e86dd09b0e02.slice - libcontainer container kubepods-besteffort-podd05cedbe_da49_4dae_84df_e86dd09b0e02.slice. Aug 13 01:46:28.005337 systemd[1]: kubepods-besteffort-podd05cedbe_da49_4dae_84df_e86dd09b0e02.slice: Consumed 4.302s CPU time, 85.3M memory peak. 
Aug 13 01:46:28.007617 containerd[1559]: time="2025-08-13T01:46:28.007597855Z" level=info msg="RemoveContainer for \"791f4e0b9fb5d166424e71cf19fe156e098183010c642bf34a609399458696dc\" returns successfully" Aug 13 01:46:28.008194 kubelet[2718]: I0813 01:46:28.008156 2718 scope.go:117] "RemoveContainer" containerID="791f4e0b9fb5d166424e71cf19fe156e098183010c642bf34a609399458696dc" Aug 13 01:46:28.008593 containerd[1559]: time="2025-08-13T01:46:28.008428794Z" level=error msg="ContainerStatus for \"791f4e0b9fb5d166424e71cf19fe156e098183010c642bf34a609399458696dc\" failed" error="rpc error: code = NotFound desc = an error occurred when try to find container \"791f4e0b9fb5d166424e71cf19fe156e098183010c642bf34a609399458696dc\": not found" Aug 13 01:46:28.008682 kubelet[2718]: E0813 01:46:28.008566 2718 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = an error occurred when try to find container \"791f4e0b9fb5d166424e71cf19fe156e098183010c642bf34a609399458696dc\": not found" containerID="791f4e0b9fb5d166424e71cf19fe156e098183010c642bf34a609399458696dc" Aug 13 01:46:28.008803 kubelet[2718]: I0813 01:46:28.008726 2718 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"containerd","ID":"791f4e0b9fb5d166424e71cf19fe156e098183010c642bf34a609399458696dc"} err="failed to get container status \"791f4e0b9fb5d166424e71cf19fe156e098183010c642bf34a609399458696dc\": rpc error: code = NotFound desc = an error occurred when try to find container \"791f4e0b9fb5d166424e71cf19fe156e098183010c642bf34a609399458696dc\": not found" Aug 13 01:46:28.052676 kubelet[2718]: I0813 01:46:28.052488 2718 kubelet.go:2351] "Pod admission denied" podUID="582fedf4-1726-4a5e-bac7-872003da1b91" pod="tigera-operator/tigera-operator-747864d56d-xzpts" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:28.204811 kubelet[2718]: I0813 01:46:28.204724 2718 kubelet.go:2351] "Pod admission denied" podUID="365e6374-92a8-4906-b5bf-0770e7f79fbf" pod="tigera-operator/tigera-operator-747864d56d-ksxnq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:28.285170 kubelet[2718]: I0813 01:46:28.284892 2718 eviction_manager.go:458] "Eviction manager: pods successfully cleaned up" pods=["tigera-operator/tigera-operator-747864d56d-xmx78"] Aug 13 01:46:28.352211 kubelet[2718]: I0813 01:46:28.352158 2718 kubelet.go:2351] "Pod admission denied" podUID="8325bbd1-8616-4faa-b3a6-5f67d00f0564" pod="tigera-operator/tigera-operator-747864d56d-xhgtv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:28.603443 kubelet[2718]: I0813 01:46:28.602965 2718 kubelet.go:2351] "Pod admission denied" podUID="22dd445c-7ccd-48a9-ba78-7bd8570ab561" pod="tigera-operator/tigera-operator-747864d56d-w2fpz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:28.758198 kubelet[2718]: I0813 01:46:28.758095 2718 kubelet.go:2351] "Pod admission denied" podUID="e8c5aba7-611a-459c-8a9e-f7fc7f3bf0af" pod="tigera-operator/tigera-operator-747864d56d-jlk26" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:28.908620 kubelet[2718]: I0813 01:46:28.908220 2718 kubelet.go:2351] "Pod admission denied" podUID="5a17a98e-2477-4a4b-b452-9c5f0f000958" pod="tigera-operator/tigera-operator-747864d56d-dt6qw" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:29.058186 kubelet[2718]: I0813 01:46:29.053674 2718 kubelet.go:2351] "Pod admission denied" podUID="7764d2ae-3696-4331-92b4-3c5d277ef1f5" pod="tigera-operator/tigera-operator-747864d56d-gqbz2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:29.059108 kubelet[2718]: I0813 01:46:29.058901 2718 status_manager.go:890] "Failed to get status for pod" podUID="7764d2ae-3696-4331-92b4-3c5d277ef1f5" pod="tigera-operator/tigera-operator-747864d56d-gqbz2" err="pods \"tigera-operator-747864d56d-gqbz2\" is forbidden: User \"system:node:172-232-7-133\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node '172-232-7-133' and this object" Aug 13 01:46:30.101202 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1940071012.mount: Deactivated successfully. Aug 13 01:46:30.102343 containerd[1559]: time="2025-08-13T01:46:30.102177401Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1940071012: write /var/lib/containerd/tmpmounts/containerd-mount1940071012/usr/bin/calico-node: no space left on device" Aug 13 01:46:30.102343 containerd[1559]: time="2025-08-13T01:46:30.102265911Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 01:46:30.102889 kubelet[2718]: E0813 01:46:30.102753 2718 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1940071012: write /var/lib/containerd/tmpmounts/containerd-mount1940071012/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 01:46:30.102889 kubelet[2718]: E0813 01:46:30.102820 2718 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1940071012: write /var/lib/containerd/tmpmounts/containerd-mount1940071012/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 01:46:30.104155 kubelet[2718]: E0813 01:46:30.104044 2718 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSGOLDMANESERVER,Value:goldmane.calico-system.svc:7443,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSFLUSHINTERVAL,Value:15,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:first-found,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.l
ock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2s4z7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-qgskr_calico-system(9adfb11c-9977-45e9-b78f-00f4995e46c5): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1940071012: write /var/lib/containerd/tmpmounts/containerd-mount1940071012/usr/bin/calico-node: no space left on device" logger="UnhandledError" Aug 13 01:46:30.105422 kubelet[2718]: E0813 01:46:30.105386 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1940071012: write /var/lib/containerd/tmpmounts/containerd-mount1940071012/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-qgskr" podUID="9adfb11c-9977-45e9-b78f-00f4995e46c5" Aug 13 01:46:32.800198 kubelet[2718]: I0813 01:46:32.799162 2718 kubelet.go:2351] "Pod admission denied" podUID="be60428d-4efb-46ad-8824-a3eede40c9d8" pod="tigera-operator/tigera-operator-747864d56d-5m52d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:32.824626 kubelet[2718]: I0813 01:46:32.824560 2718 kubelet.go:2351] "Pod admission denied" podUID="1b8401ac-d326-4874-97d6-2bf36ce22b25" pod="tigera-operator/tigera-operator-747864d56d-886bq" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:32.846150 kubelet[2718]: I0813 01:46:32.846108 2718 kubelet.go:2351] "Pod admission denied" podUID="ec084df5-8bbf-447f-80b1-ba79e23af196" pod="tigera-operator/tigera-operator-747864d56d-87npw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:32.882290 kubelet[2718]: I0813 01:46:32.882233 2718 kubelet.go:2351] "Pod admission denied" podUID="c0356dd1-3882-480d-9782-1003e78521e1" pod="tigera-operator/tigera-operator-747864d56d-4ds8d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:32.900635 kubelet[2718]: I0813 01:46:32.900595 2718 kubelet.go:2351] "Pod admission denied" podUID="4189924b-aceb-4677-a4fd-c10b8c25994b" pod="tigera-operator/tigera-operator-747864d56d-6zsxf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:32.904092 kubelet[2718]: I0813 01:46:32.902825 2718 status_manager.go:890] "Failed to get status for pod" podUID="4189924b-aceb-4677-a4fd-c10b8c25994b" pod="tigera-operator/tigera-operator-747864d56d-6zsxf" err="pods \"tigera-operator-747864d56d-6zsxf\" is forbidden: User \"system:node:172-232-7-133\" cannot get resource \"pods\" in API group \"\" in the namespace \"tigera-operator\": no relationship found between node '172-232-7-133' and this object" Aug 13 01:46:36.780341 kubelet[2718]: E0813 01:46:36.779922 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:46:36.780732 containerd[1559]: time="2025-08-13T01:46:36.780671827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f9448c8f5-ck2sf,Uid:35b780d0-9cdb-470f-8c65-ede949b6d595,Namespace:calico-system,Attempt:0,}" Aug 13 01:46:36.783555 containerd[1559]: time="2025-08-13T01:46:36.780967102Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j47vf,Uid:0ba3c042-02d2-446d-bb82-0965919f2962,Namespace:kube-system,Attempt:0,}" Aug 13 01:46:36.852107 containerd[1559]: time="2025-08-13T01:46:36.852043340Z" level=error msg="Failed to destroy network for sandbox \"cb873b6fb0b044524d73aa2953bbdf00a77507732172071eb9d81905f06cc2d1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:36.855587 systemd[1]: run-netns-cni\x2d8461da7e\x2d1d7e\x2dea0b\x2da06a\x2dc968ea8461af.mount: Deactivated successfully. 
Aug 13 01:46:36.857221 containerd[1559]: time="2025-08-13T01:46:36.856874141Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j47vf,Uid:0ba3c042-02d2-446d-bb82-0965919f2962,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb873b6fb0b044524d73aa2953bbdf00a77507732172071eb9d81905f06cc2d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:36.859626 kubelet[2718]: E0813 01:46:36.857817 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb873b6fb0b044524d73aa2953bbdf00a77507732172071eb9d81905f06cc2d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:36.859626 kubelet[2718]: E0813 01:46:36.858031 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb873b6fb0b044524d73aa2953bbdf00a77507732172071eb9d81905f06cc2d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:46:36.859626 kubelet[2718]: E0813 01:46:36.858058 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cb873b6fb0b044524d73aa2953bbdf00a77507732172071eb9d81905f06cc2d1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:46:36.859626 kubelet[2718]: E0813 01:46:36.858098 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-j47vf_kube-system(0ba3c042-02d2-446d-bb82-0965919f2962)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-j47vf_kube-system(0ba3c042-02d2-446d-bb82-0965919f2962)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cb873b6fb0b044524d73aa2953bbdf00a77507732172071eb9d81905f06cc2d1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-j47vf" podUID="0ba3c042-02d2-446d-bb82-0965919f2962" Aug 13 01:46:36.860391 containerd[1559]: time="2025-08-13T01:46:36.860342715Z" level=error msg="Failed to destroy network for sandbox \"49bdd6b3b61d3d7a64c73c12eea982158ccc950b1e7f95e9c01dfbefedeb187d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:36.862028 systemd[1]: run-netns-cni\x2d400b35a6\x2d17e6\x2d5724\x2dc295\x2dd062e5f710ff.mount: Deactivated successfully. 
Aug 13 01:46:36.863704 containerd[1559]: time="2025-08-13T01:46:36.863630892Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f9448c8f5-ck2sf,Uid:35b780d0-9cdb-470f-8c65-ede949b6d595,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"49bdd6b3b61d3d7a64c73c12eea982158ccc950b1e7f95e9c01dfbefedeb187d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:36.863909 kubelet[2718]: E0813 01:46:36.863823 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49bdd6b3b61d3d7a64c73c12eea982158ccc950b1e7f95e9c01dfbefedeb187d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:36.863909 kubelet[2718]: E0813 01:46:36.863897 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49bdd6b3b61d3d7a64c73c12eea982158ccc950b1e7f95e9c01dfbefedeb187d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:46:36.863999 kubelet[2718]: E0813 01:46:36.863915 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49bdd6b3b61d3d7a64c73c12eea982158ccc950b1e7f95e9c01dfbefedeb187d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:46:36.863999 kubelet[2718]: E0813 01:46:36.863941 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f9448c8f5-ck2sf_calico-system(35b780d0-9cdb-470f-8c65-ede949b6d595)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f9448c8f5-ck2sf_calico-system(35b780d0-9cdb-470f-8c65-ede949b6d595)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"49bdd6b3b61d3d7a64c73c12eea982158ccc950b1e7f95e9c01dfbefedeb187d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" podUID="35b780d0-9cdb-470f-8c65-ede949b6d595" Aug 13 01:46:37.779474 kubelet[2718]: E0813 01:46:37.779427 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:46:37.780207 containerd[1559]: time="2025-08-13T01:46:37.780152008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dfjz8,Uid:dafdbb28-0754-4303-98bd-08c77ee94f1a,Namespace:kube-system,Attempt:0,}" Aug 13 01:46:37.852487 containerd[1559]: time="2025-08-13T01:46:37.852424499Z" level=error msg="Failed to destroy network for sandbox 
\"45bfcfdf4ce24c19cf97aa5bf8d90ee959ab471a32045e5a44edf3a7296f6261\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:37.856049 containerd[1559]: time="2025-08-13T01:46:37.855962802Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dfjz8,Uid:dafdbb28-0754-4303-98bd-08c77ee94f1a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"45bfcfdf4ce24c19cf97aa5bf8d90ee959ab471a32045e5a44edf3a7296f6261\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:37.856786 kubelet[2718]: E0813 01:46:37.856346 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45bfcfdf4ce24c19cf97aa5bf8d90ee959ab471a32045e5a44edf3a7296f6261\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:37.856786 kubelet[2718]: E0813 01:46:37.856414 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45bfcfdf4ce24c19cf97aa5bf8d90ee959ab471a32045e5a44edf3a7296f6261\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:46:37.856786 kubelet[2718]: E0813 01:46:37.856446 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"45bfcfdf4ce24c19cf97aa5bf8d90ee959ab471a32045e5a44edf3a7296f6261\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:46:37.856786 kubelet[2718]: E0813 01:46:37.856500 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-dfjz8_kube-system(dafdbb28-0754-4303-98bd-08c77ee94f1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-dfjz8_kube-system(dafdbb28-0754-4303-98bd-08c77ee94f1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"45bfcfdf4ce24c19cf97aa5bf8d90ee959ab471a32045e5a44edf3a7296f6261\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dfjz8" podUID="dafdbb28-0754-4303-98bd-08c77ee94f1a" Aug 13 01:46:37.858068 systemd[1]: run-netns-cni\x2d64f11419\x2d8f6d\x2d097f\x2d8a92\x2da0db256cc68e.mount: Deactivated successfully. 
Aug 13 01:46:38.309598 kubelet[2718]: I0813 01:46:38.309561 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:46:38.309743 kubelet[2718]: I0813 01:46:38.309614 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:46:38.312212 containerd[1559]: time="2025-08-13T01:46:38.311908920Z" level=info msg="StopPodSandbox for \"2ca4c96c3f2bc5d7d11a98330829be80628ad125b77438e59f926cbe512d5b5a\"" Aug 13 01:46:38.312212 containerd[1559]: time="2025-08-13T01:46:38.312106967Z" level=info msg="TearDown network for sandbox \"2ca4c96c3f2bc5d7d11a98330829be80628ad125b77438e59f926cbe512d5b5a\" successfully" Aug 13 01:46:38.312212 containerd[1559]: time="2025-08-13T01:46:38.312145536Z" level=info msg="StopPodSandbox for \"2ca4c96c3f2bc5d7d11a98330829be80628ad125b77438e59f926cbe512d5b5a\" returns successfully" Aug 13 01:46:38.313268 containerd[1559]: time="2025-08-13T01:46:38.312481771Z" level=info msg="RemovePodSandbox for \"2ca4c96c3f2bc5d7d11a98330829be80628ad125b77438e59f926cbe512d5b5a\"" Aug 13 01:46:38.313268 containerd[1559]: time="2025-08-13T01:46:38.312760237Z" level=info msg="Forcibly stopping sandbox \"2ca4c96c3f2bc5d7d11a98330829be80628ad125b77438e59f926cbe512d5b5a\"" Aug 13 01:46:38.313268 containerd[1559]: time="2025-08-13T01:46:38.312842706Z" level=info msg="TearDown network for sandbox \"2ca4c96c3f2bc5d7d11a98330829be80628ad125b77438e59f926cbe512d5b5a\" successfully" Aug 13 01:46:38.314446 containerd[1559]: time="2025-08-13T01:46:38.314423221Z" level=info msg="Ensure that sandbox 2ca4c96c3f2bc5d7d11a98330829be80628ad125b77438e59f926cbe512d5b5a in task-service has been cleanup successfully" Aug 13 01:46:38.316933 containerd[1559]: time="2025-08-13T01:46:38.316913124Z" level=info msg="RemovePodSandbox \"2ca4c96c3f2bc5d7d11a98330829be80628ad125b77438e59f926cbe512d5b5a\" returns successfully" Aug 13 01:46:38.317330 kubelet[2718]: I0813 01:46:38.317314 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:46:38.327259 kubelet[2718]: I0813 01:46:38.327238 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:46:38.328867 kubelet[2718]: I0813 01:46:38.328423 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-7f9448c8f5-ck2sf","kube-system/coredns-668d6bf9bc-j47vf","kube-system/coredns-668d6bf9bc-dfjz8","calico-system/csi-node-driver-dbqt2","calico-system/calico-node-qgskr","calico-system/calico-typha-b5b9867b4-p6jwz","kube-system/kube-controller-manager-172-232-7-133","kube-system/kube-proxy-fw2dv","kube-system/kube-apiserver-172-232-7-133","kube-system/kube-scheduler-172-232-7-133"] Aug 13 01:46:38.328867 kubelet[2718]: E0813 01:46:38.328533 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:46:38.328867 kubelet[2718]: E0813 01:46:38.328543 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:46:38.328867 kubelet[2718]: E0813 01:46:38.328556 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:46:38.328867 kubelet[2718]: E0813 01:46:38.328562 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:46:38.328867 kubelet[2718]: E0813 
01:46:38.328572 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-qgskr" Aug 13 01:46:38.328867 kubelet[2718]: E0813 01:46:38.328581 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-b5b9867b4-p6jwz" Aug 13 01:46:38.328867 kubelet[2718]: E0813 01:46:38.328590 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-133" Aug 13 01:46:38.328867 kubelet[2718]: E0813 01:46:38.328600 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-fw2dv" Aug 13 01:46:38.328867 kubelet[2718]: E0813 01:46:38.328630 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-133" Aug 13 01:46:38.328867 kubelet[2718]: E0813 01:46:38.328643 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-133" Aug 13 01:46:38.328867 kubelet[2718]: I0813 01:46:38.328659 2718 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:46:39.780148 kubelet[2718]: E0813 01:46:39.779551 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:46:40.781214 containerd[1559]: time="2025-08-13T01:46:40.781148327Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbqt2,Uid:9674627f-b072-4139-b18d-fdf07891e1e2,Namespace:calico-system,Attempt:0,}" Aug 13 01:46:40.782599 kubelet[2718]: E0813 01:46:40.782493 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1940071012: write /var/lib/containerd/tmpmounts/containerd-mount1940071012/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-qgskr" podUID="9adfb11c-9977-45e9-b78f-00f4995e46c5" Aug 13 01:46:40.843797 kubelet[2718]: I0813 01:46:40.843755 2718 kubelet.go:2351] "Pod admission denied" podUID="c9166c72-619a-4604-8c14-98632f990c16" pod="tigera-operator/tigera-operator-747864d56d-xq7f8" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:40.864611 containerd[1559]: time="2025-08-13T01:46:40.864569970Z" level=error msg="Failed to destroy network for sandbox \"f5b98625ddf13b041cbb9a82d1d70a9ba7114c013a0b3d8607edb42223d02ab5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:40.869421 containerd[1559]: time="2025-08-13T01:46:40.869390230Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbqt2,Uid:9674627f-b072-4139-b18d-fdf07891e1e2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5b98625ddf13b041cbb9a82d1d70a9ba7114c013a0b3d8607edb42223d02ab5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:40.870254 kubelet[2718]: E0813 01:46:40.869757 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5b98625ddf13b041cbb9a82d1d70a9ba7114c013a0b3d8607edb42223d02ab5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:40.870254 kubelet[2718]: E0813 01:46:40.869821 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5b98625ddf13b041cbb9a82d1d70a9ba7114c013a0b3d8607edb42223d02ab5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:46:40.870254 kubelet[2718]: E0813 01:46:40.869841 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f5b98625ddf13b041cbb9a82d1d70a9ba7114c013a0b3d8607edb42223d02ab5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:46:40.870254 kubelet[2718]: E0813 01:46:40.869903 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dbqt2_calico-system(9674627f-b072-4139-b18d-fdf07891e1e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dbqt2_calico-system(9674627f-b072-4139-b18d-fdf07891e1e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f5b98625ddf13b041cbb9a82d1d70a9ba7114c013a0b3d8607edb42223d02ab5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dbqt2" podUID="9674627f-b072-4139-b18d-fdf07891e1e2" Aug 13 01:46:40.870481 systemd[1]: run-netns-cni\x2d5fd4328d\x2d7337\x2d78d7\x2da9d6\x2d369cb8ecf6b9.mount: Deactivated successfully. 
Aug 13 01:46:40.881263 kubelet[2718]: I0813 01:46:40.881228 2718 kubelet.go:2351] "Pod admission denied" podUID="3d11dede-8b14-4ff0-b032-361ab1f6858e" pod="tigera-operator/tigera-operator-747864d56d-xdmm4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:40.902608 kubelet[2718]: I0813 01:46:40.902442 2718 kubelet.go:2351] "Pod admission denied" podUID="5e335e41-5a57-4c81-9a97-5425d0ee0535" pod="tigera-operator/tigera-operator-747864d56d-6bjvd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:40.918378 kubelet[2718]: I0813 01:46:40.918342 2718 kubelet.go:2351] "Pod admission denied" podUID="fb759e3d-3f55-498a-b903-8a6b8333ff83" pod="tigera-operator/tigera-operator-747864d56d-h6d2b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:40.946161 kubelet[2718]: I0813 01:46:40.944666 2718 kubelet.go:2351] "Pod admission denied" podUID="1ae3f1e7-03fb-4474-b478-adf4a02725a5" pod="tigera-operator/tigera-operator-747864d56d-kflp7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:40.964630 kubelet[2718]: I0813 01:46:40.964597 2718 kubelet.go:2351] "Pod admission denied" podUID="a18c19f8-e539-4a87-bd0f-ae533d5cc3a4" pod="tigera-operator/tigera-operator-747864d56d-hhhl7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:40.986467 kubelet[2718]: I0813 01:46:40.985899 2718 kubelet.go:2351] "Pod admission denied" podUID="02a78c82-8127-4a3e-ae88-aa27772ce9c3" pod="tigera-operator/tigera-operator-747864d56d-8kk8t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:41.001840 kubelet[2718]: I0813 01:46:41.001609 2718 kubelet.go:2351] "Pod admission denied" podUID="cad2bfe3-e60e-41ff-9049-9200e02d8a20" pod="tigera-operator/tigera-operator-747864d56d-5j898" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:41.021393 kubelet[2718]: I0813 01:46:41.021215 2718 kubelet.go:2351] "Pod admission denied" podUID="ec293b70-d834-48ff-b744-0d54514b82ca" pod="tigera-operator/tigera-operator-747864d56d-28mcl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:41.038594 kubelet[2718]: I0813 01:46:41.038323 2718 kubelet.go:2351] "Pod admission denied" podUID="59153fb2-eebd-4c96-8236-bef85483c59c" pod="tigera-operator/tigera-operator-747864d56d-dklln" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:41.058522 kubelet[2718]: I0813 01:46:41.058481 2718 kubelet.go:2351] "Pod admission denied" podUID="579265fc-29da-4f54-935b-cb499286c611" pod="tigera-operator/tigera-operator-747864d56d-f62l8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:41.085237 kubelet[2718]: I0813 01:46:41.085201 2718 kubelet.go:2351] "Pod admission denied" podUID="1c4aba78-2b51-441c-b60a-d0b9a0c12f2f" pod="tigera-operator/tigera-operator-747864d56d-6spb7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:41.182374 kubelet[2718]: I0813 01:46:41.182310 2718 kubelet.go:2351] "Pod admission denied" podUID="a42ac378-79a3-4f56-a3c5-29fdc664e77b" pod="tigera-operator/tigera-operator-747864d56d-n26tf" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:41.283452 kubelet[2718]: I0813 01:46:41.283391 2718 kubelet.go:2351] "Pod admission denied" podUID="17b399df-5d76-406b-81b9-18d98ee74716" pod="tigera-operator/tigera-operator-747864d56d-kpblc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:41.382313 kubelet[2718]: I0813 01:46:41.382179 2718 kubelet.go:2351] "Pod admission denied" podUID="7e898fd7-a539-459e-baa3-54e12782c5c4" pod="tigera-operator/tigera-operator-747864d56d-59gcz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:41.484626 kubelet[2718]: I0813 01:46:41.484388 2718 kubelet.go:2351] "Pod admission denied" podUID="9a8623d3-a6a5-47f0-becc-95da06b47933" pod="tigera-operator/tigera-operator-747864d56d-5m9dg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:41.536667 kubelet[2718]: I0813 01:46:41.536581 2718 kubelet.go:2351] "Pod admission denied" podUID="736eeb56-f9ec-47b3-ba73-f377aeca0fb5" pod="tigera-operator/tigera-operator-747864d56d-9pt4n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:41.637924 kubelet[2718]: I0813 01:46:41.636616 2718 kubelet.go:2351] "Pod admission denied" podUID="9ee447c8-32f5-4239-8c81-b2fcf7a6faac" pod="tigera-operator/tigera-operator-747864d56d-cnvqh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:41.733732 kubelet[2718]: I0813 01:46:41.733667 2718 kubelet.go:2351] "Pod admission denied" podUID="fe62852a-093c-4b35-a062-eaa170768e46" pod="tigera-operator/tigera-operator-747864d56d-7xx7m" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:41.832902 kubelet[2718]: I0813 01:46:41.832838 2718 kubelet.go:2351] "Pod admission denied" podUID="7ca75018-72ea-4f6d-b28a-38b03452b63f" pod="tigera-operator/tigera-operator-747864d56d-4w9ql" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:41.933618 kubelet[2718]: I0813 01:46:41.933480 2718 kubelet.go:2351] "Pod admission denied" podUID="7b2aac9a-fa8b-463e-aeb0-41721a5c946d" pod="tigera-operator/tigera-operator-747864d56d-6pdfh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:42.035390 kubelet[2718]: I0813 01:46:42.035149 2718 kubelet.go:2351] "Pod admission denied" podUID="a8b6d5fe-200e-4a8e-9ca2-096448ab7d63" pod="tigera-operator/tigera-operator-747864d56d-k4s5r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:42.239725 kubelet[2718]: I0813 01:46:42.239566 2718 kubelet.go:2351] "Pod admission denied" podUID="0df63f3f-9011-4903-aac6-89addd0aea58" pod="tigera-operator/tigera-operator-747864d56d-ctzqs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:42.334771 kubelet[2718]: I0813 01:46:42.334724 2718 kubelet.go:2351] "Pod admission denied" podUID="ac4fa855-c342-4d5b-946d-07447c903acb" pod="tigera-operator/tigera-operator-747864d56d-mdv9t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:42.432531 kubelet[2718]: I0813 01:46:42.432475 2718 kubelet.go:2351] "Pod admission denied" podUID="6ebf7ec6-e9ff-4da7-a9fd-b9dc3fa9ab97" pod="tigera-operator/tigera-operator-747864d56d-5jzps" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:42.532687 kubelet[2718]: I0813 01:46:42.532566 2718 kubelet.go:2351] "Pod admission denied" podUID="4fdfaa94-ad11-46ae-a798-e3bcb18296e4" pod="tigera-operator/tigera-operator-747864d56d-svmzq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:42.633153 kubelet[2718]: I0813 01:46:42.633092 2718 kubelet.go:2351] "Pod admission denied" podUID="96f651ac-ac52-425c-9ded-56de7208738e" pod="tigera-operator/tigera-operator-747864d56d-l7647" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:42.735188 kubelet[2718]: I0813 01:46:42.735136 2718 kubelet.go:2351] "Pod admission denied" podUID="3823f842-a51e-47f3-8ba1-cc580649e1c3" pod="tigera-operator/tigera-operator-747864d56d-gk8p7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:42.781249 kubelet[2718]: E0813 01:46:42.780844 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:46:42.835981 kubelet[2718]: I0813 01:46:42.835639 2718 kubelet.go:2351] "Pod admission denied" podUID="2ccd4008-94b0-4a22-8d0b-e9e13c42ada6" pod="tigera-operator/tigera-operator-747864d56d-55qks" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:42.932886 kubelet[2718]: I0813 01:46:42.932644 2718 kubelet.go:2351] "Pod admission denied" podUID="a126bc2a-f4b4-40d4-8298-b7b00166666e" pod="tigera-operator/tigera-operator-747864d56d-jmrt5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:43.033735 kubelet[2718]: I0813 01:46:43.033694 2718 kubelet.go:2351] "Pod admission denied" podUID="ef3b553e-1782-48ea-bd88-33575c12e46c" pod="tigera-operator/tigera-operator-747864d56d-n64vz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:43.134660 kubelet[2718]: I0813 01:46:43.134530 2718 kubelet.go:2351] "Pod admission denied" podUID="4c6290bd-7b08-4c5e-a184-47cdf59ad6a5" pod="tigera-operator/tigera-operator-747864d56d-npm9j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:43.183809 kubelet[2718]: I0813 01:46:43.183748 2718 kubelet.go:2351] "Pod admission denied" podUID="20298bde-55db-44c9-8cec-6032e3172af8" pod="tigera-operator/tigera-operator-747864d56d-stmmh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:43.283509 kubelet[2718]: I0813 01:46:43.283264 2718 kubelet.go:2351] "Pod admission denied" podUID="e52ee9b2-6067-494c-ba67-1519517dd588" pod="tigera-operator/tigera-operator-747864d56d-qfzzv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:43.484615 kubelet[2718]: I0813 01:46:43.484563 2718 kubelet.go:2351] "Pod admission denied" podUID="e8368053-fa7e-4f74-a4fa-38e5e58732d5" pod="tigera-operator/tigera-operator-747864d56d-bb9pb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:43.586496 kubelet[2718]: I0813 01:46:43.586438 2718 kubelet.go:2351] "Pod admission denied" podUID="2d8ce128-025c-48d3-8275-649755902850" pod="tigera-operator/tigera-operator-747864d56d-78mzl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:43.686951 kubelet[2718]: I0813 01:46:43.686894 2718 kubelet.go:2351] "Pod admission denied" podUID="f82b3738-1f72-4f1e-8400-497687189c51" pod="tigera-operator/tigera-operator-747864d56d-kn686" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:43.790832 kubelet[2718]: I0813 01:46:43.790691 2718 kubelet.go:2351] "Pod admission denied" podUID="896010f3-549c-4ec1-833e-c4dc39f32029" pod="tigera-operator/tigera-operator-747864d56d-bgzpz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:43.884322 kubelet[2718]: I0813 01:46:43.884275 2718 kubelet.go:2351] "Pod admission denied" podUID="4839d8f3-1b45-42d7-8315-57c2dbc0a4ae" pod="tigera-operator/tigera-operator-747864d56d-rmmfb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:44.088181 kubelet[2718]: I0813 01:46:44.087919 2718 kubelet.go:2351] "Pod admission denied" podUID="504198be-a24c-46cc-99a8-62083821b191" pod="tigera-operator/tigera-operator-747864d56d-ctpmx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:44.186392 kubelet[2718]: I0813 01:46:44.186334 2718 kubelet.go:2351] "Pod admission denied" podUID="ffcc8cdd-d30d-4304-b060-39d1a80c2ed6" pod="tigera-operator/tigera-operator-747864d56d-gvhqj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:44.234925 kubelet[2718]: I0813 01:46:44.234849 2718 kubelet.go:2351] "Pod admission denied" podUID="859280a2-8ae8-4368-8f4e-ac46a217c0c7" pod="tigera-operator/tigera-operator-747864d56d-72hxk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:44.336910 kubelet[2718]: I0813 01:46:44.336799 2718 kubelet.go:2351] "Pod admission denied" podUID="9c967910-81c7-43b9-86b3-3f7cb22ba67b" pod="tigera-operator/tigera-operator-747864d56d-cgdxz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:44.434582 kubelet[2718]: I0813 01:46:44.434464 2718 kubelet.go:2351] "Pod admission denied" podUID="6931b449-0fff-461f-a6f6-86c0f687f36c" pod="tigera-operator/tigera-operator-747864d56d-mdp6b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:44.544003 kubelet[2718]: I0813 01:46:44.543367 2718 kubelet.go:2351] "Pod admission denied" podUID="3acfc30d-6f49-4972-aa0e-30f6283dc4af" pod="tigera-operator/tigera-operator-747864d56d-6xq75" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:44.633174 kubelet[2718]: I0813 01:46:44.633137 2718 kubelet.go:2351] "Pod admission denied" podUID="37807729-4932-41c6-9ebe-a053b6146e0e" pod="tigera-operator/tigera-operator-747864d56d-njvz9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:44.733846 kubelet[2718]: I0813 01:46:44.733813 2718 kubelet.go:2351] "Pod admission denied" podUID="6675c889-c225-46a6-81ac-27bc36b4e26c" pod="tigera-operator/tigera-operator-747864d56d-bf4qt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:44.783174 kubelet[2718]: E0813 01:46:44.779651 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:46:44.834126 kubelet[2718]: I0813 01:46:44.833887 2718 kubelet.go:2351] "Pod admission denied" podUID="834015a9-e67f-45e5-8b93-4c83a276886e" pod="tigera-operator/tigera-operator-747864d56d-lk44v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:44.883505 kubelet[2718]: I0813 01:46:44.883224 2718 kubelet.go:2351] "Pod admission denied" podUID="56148248-7b62-455a-a22c-3e111112abb4" pod="tigera-operator/tigera-operator-747864d56d-qdm4d" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:44.983650 kubelet[2718]: I0813 01:46:44.983611 2718 kubelet.go:2351] "Pod admission denied" podUID="c98a22c0-22ad-4f2b-b3d4-16be55695e73" pod="tigera-operator/tigera-operator-747864d56d-5nd67" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:45.082602 kubelet[2718]: I0813 01:46:45.082482 2718 kubelet.go:2351] "Pod admission denied" podUID="8709ce58-b662-4a2b-a042-fb2179dd7656" pod="tigera-operator/tigera-operator-747864d56d-r8c8h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:45.183157 kubelet[2718]: I0813 01:46:45.182499 2718 kubelet.go:2351] "Pod admission denied" podUID="f033a8fb-328b-4bc5-aad8-a254b14b682c" pod="tigera-operator/tigera-operator-747864d56d-pg6l2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:45.283122 kubelet[2718]: I0813 01:46:45.283014 2718 kubelet.go:2351] "Pod admission denied" podUID="eabcd85e-6f39-4a34-beb8-35616da46849" pod="tigera-operator/tigera-operator-747864d56d-nd5sr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:45.331187 kubelet[2718]: I0813 01:46:45.331155 2718 kubelet.go:2351] "Pod admission denied" podUID="25eab757-9f6e-46a1-98af-33eb322dc694" pod="tigera-operator/tigera-operator-747864d56d-dshnn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:45.435004 kubelet[2718]: I0813 01:46:45.434088 2718 kubelet.go:2351] "Pod admission denied" podUID="2d1a12b5-f449-4aca-ab61-ef5b98a42f51" pod="tigera-operator/tigera-operator-747864d56d-8n7mm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:45.641840 kubelet[2718]: I0813 01:46:45.641758 2718 kubelet.go:2351] "Pod admission denied" podUID="cd2a0ac9-9533-4873-b2d6-03ec14ed664d" pod="tigera-operator/tigera-operator-747864d56d-qzn6m" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:45.735348 kubelet[2718]: I0813 01:46:45.735307 2718 kubelet.go:2351] "Pod admission denied" podUID="d75500eb-e2d2-4e25-a990-8c6d96b36ed7" pod="tigera-operator/tigera-operator-747864d56d-vnm87" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:45.832103 kubelet[2718]: I0813 01:46:45.832068 2718 kubelet.go:2351] "Pod admission denied" podUID="a420f337-1484-4a8f-af5a-482fcbc9982e" pod="tigera-operator/tigera-operator-747864d56d-9k728" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:46.038827 kubelet[2718]: I0813 01:46:46.038092 2718 kubelet.go:2351] "Pod admission denied" podUID="d4822bd7-8d7e-4620-ad02-579015d8010e" pod="tigera-operator/tigera-operator-747864d56d-7x7v4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:46.135925 kubelet[2718]: I0813 01:46:46.135874 2718 kubelet.go:2351] "Pod admission denied" podUID="089a346f-17ef-47ae-b6e3-71ccdb9e3090" pod="tigera-operator/tigera-operator-747864d56d-8kv75" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:46.235656 kubelet[2718]: I0813 01:46:46.235613 2718 kubelet.go:2351] "Pod admission denied" podUID="1704e166-300a-41db-80e7-7a0a1481b214" pod="tigera-operator/tigera-operator-747864d56d-9tp84" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:46.335559 kubelet[2718]: I0813 01:46:46.335304 2718 kubelet.go:2351] "Pod admission denied" podUID="52a311e1-0eb9-4e8d-adc8-f5741b592930" pod="tigera-operator/tigera-operator-747864d56d-h22tp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:46.383490 kubelet[2718]: I0813 01:46:46.383452 2718 kubelet.go:2351] "Pod admission denied" podUID="e98a59ac-0e0f-4cf4-bd03-4968086aabac" pod="tigera-operator/tigera-operator-747864d56d-qhz4s" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:46.486554 kubelet[2718]: I0813 01:46:46.486506 2718 kubelet.go:2351] "Pod admission denied" podUID="7d8d13b6-c9f2-4119-bbff-dbae48cc7976" pod="tigera-operator/tigera-operator-747864d56d-pq7g5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:46.587888 kubelet[2718]: I0813 01:46:46.587287 2718 kubelet.go:2351] "Pod admission denied" podUID="e0e53e87-a6d5-475d-b6ae-7c6e741cfa22" pod="tigera-operator/tigera-operator-747864d56d-jgxg5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:46.688599 kubelet[2718]: I0813 01:46:46.688544 2718 kubelet.go:2351] "Pod admission denied" podUID="d77321dd-be7f-4e7d-a8c9-d30d5f222bf3" pod="tigera-operator/tigera-operator-747864d56d-xlrzn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:46.784758 kubelet[2718]: I0813 01:46:46.784710 2718 kubelet.go:2351] "Pod admission denied" podUID="ac7fa5f7-f8b3-4902-820b-8ee37cf59630" pod="tigera-operator/tigera-operator-747864d56d-lbbqs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:46.835890 kubelet[2718]: I0813 01:46:46.835818 2718 kubelet.go:2351] "Pod admission denied" podUID="c6acc759-dfb4-4cee-ae04-7ca7ce4c0f3a" pod="tigera-operator/tigera-operator-747864d56d-sdp6w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:46.938392 kubelet[2718]: I0813 01:46:46.937505 2718 kubelet.go:2351] "Pod admission denied" podUID="ea7344ea-c5c8-4990-a2eb-3d473d8285b4" pod="tigera-operator/tigera-operator-747864d56d-hxfnk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:47.035988 kubelet[2718]: I0813 01:46:47.035743 2718 kubelet.go:2351] "Pod admission denied" podUID="225f943e-642c-458c-a6c1-54a7c0d160d9" pod="tigera-operator/tigera-operator-747864d56d-q89k4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:47.136653 kubelet[2718]: I0813 01:46:47.136613 2718 kubelet.go:2351] "Pod admission denied" podUID="8e4f7aaa-c659-485c-862e-ade01feb12d1" pod="tigera-operator/tigera-operator-747864d56d-dzz7z" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:47.338531 kubelet[2718]: I0813 01:46:47.338455 2718 kubelet.go:2351] "Pod admission denied" podUID="c7ceb1c3-5034-408e-a627-fb14f50d9682" pod="tigera-operator/tigera-operator-747864d56d-zqfv6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:47.434580 kubelet[2718]: I0813 01:46:47.434535 2718 kubelet.go:2351] "Pod admission denied" podUID="ee4b1d6a-4c9e-4e95-bac9-3bdbab204ca2" pod="tigera-operator/tigera-operator-747864d56d-rrdts" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:47.534274 kubelet[2718]: I0813 01:46:47.534230 2718 kubelet.go:2351] "Pod admission denied" podUID="d355004c-2b58-48c0-b418-503e41b2f986" pod="tigera-operator/tigera-operator-747864d56d-c5sh2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:47.638477 kubelet[2718]: I0813 01:46:47.638274 2718 kubelet.go:2351] "Pod admission denied" podUID="a17bc6c4-2e10-46af-81fd-fa970d43a370" pod="tigera-operator/tigera-operator-747864d56d-mz7hv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:47.687770 kubelet[2718]: I0813 01:46:47.687708 2718 kubelet.go:2351] "Pod admission denied" podUID="7c267a3a-9bad-4c87-8314-39899addd581" pod="tigera-operator/tigera-operator-747864d56d-lwnks" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:47.786804 kubelet[2718]: I0813 01:46:47.786755 2718 kubelet.go:2351] "Pod admission denied" podUID="90629bfa-09d0-4483-bdbd-1e6343202d53" pod="tigera-operator/tigera-operator-747864d56d-5ndzw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:47.988292 kubelet[2718]: I0813 01:46:47.988227 2718 kubelet.go:2351] "Pod admission denied" podUID="7abb27f6-f21f-429f-ae2f-2dbde84353cd" pod="tigera-operator/tigera-operator-747864d56d-czlvk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:48.086235 kubelet[2718]: I0813 01:46:48.086170 2718 kubelet.go:2351] "Pod admission denied" podUID="270590ec-2441-4e6d-afe3-d302c5b7142d" pod="tigera-operator/tigera-operator-747864d56d-wpn2c" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:48.186622 kubelet[2718]: I0813 01:46:48.186567 2718 kubelet.go:2351] "Pod admission denied" podUID="624a2ec7-a50d-469e-b1e1-971cbe9bbe5e" pod="tigera-operator/tigera-operator-747864d56d-527m8" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:48.344606 kubelet[2718]: I0813 01:46:48.343913 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:46:48.344606 kubelet[2718]: I0813 01:46:48.343953 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:46:48.347266 kubelet[2718]: I0813 01:46:48.347246 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:46:48.357067 kubelet[2718]: I0813 01:46:48.357051 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:46:48.357137 kubelet[2718]: I0813 01:46:48.357123 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-668d6bf9bc-dfjz8","kube-system/coredns-668d6bf9bc-j47vf","calico-system/calico-kube-controllers-7f9448c8f5-ck2sf","calico-system/calico-node-qgskr","calico-system/csi-node-driver-dbqt2","calico-system/calico-typha-b5b9867b4-p6jwz","kube-system/kube-controller-manager-172-232-7-133","kube-system/kube-proxy-fw2dv","kube-system/kube-apiserver-172-232-7-133","kube-system/kube-scheduler-172-232-7-133"] Aug 13 01:46:48.357216 kubelet[2718]: E0813 01:46:48.357150 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:46:48.357216 kubelet[2718]: E0813 01:46:48.357160 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:46:48.357216 kubelet[2718]: E0813 01:46:48.357167 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:46:48.357216 kubelet[2718]: E0813 01:46:48.357174 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-qgskr" Aug 13 01:46:48.357216 kubelet[2718]: E0813 01:46:48.357181 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:46:48.357216 kubelet[2718]: E0813 01:46:48.357191 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-b5b9867b4-p6jwz" Aug 13 01:46:48.357216 kubelet[2718]: E0813 01:46:48.357200 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-133" Aug 13 01:46:48.357216 kubelet[2718]: E0813 01:46:48.357208 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-fw2dv" Aug 13 01:46:48.357216 kubelet[2718]: E0813 01:46:48.357216 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-133" Aug 13 01:46:48.357588 kubelet[2718]: E0813 01:46:48.357223 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-133" Aug 13 01:46:48.357588 kubelet[2718]: I0813 01:46:48.357232 2718 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:46:48.385452 kubelet[2718]: I0813 01:46:48.385422 2718 kubelet.go:2351] "Pod admission denied" podUID="51486b02-f395-4e7c-b3eb-b4562aaf452d" pod="tigera-operator/tigera-operator-747864d56d-qfnns" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:48.496209 kubelet[2718]: I0813 01:46:48.496155 2718 kubelet.go:2351] "Pod admission denied" podUID="2e845dd6-08d0-4a73-9103-f66a0e609efe" pod="tigera-operator/tigera-operator-747864d56d-cvlxs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:48.587969 kubelet[2718]: I0813 01:46:48.587920 2718 kubelet.go:2351] "Pod admission denied" podUID="0c5221fa-dfb9-4882-8e63-417f7661dce5" pod="tigera-operator/tigera-operator-747864d56d-zb69p" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:48.686411 kubelet[2718]: I0813 01:46:48.686246 2718 kubelet.go:2351] "Pod admission denied" podUID="b8540c3b-be94-4deb-9e3b-d94f2edf2857" pod="tigera-operator/tigera-operator-747864d56d-p684t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:48.780528 kubelet[2718]: E0813 01:46:48.780087 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:46:48.782443 kubelet[2718]: E0813 01:46:48.782417 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:46:48.782694 containerd[1559]: time="2025-08-13T01:46:48.782658696Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dfjz8,Uid:dafdbb28-0754-4303-98bd-08c77ee94f1a,Namespace:kube-system,Attempt:0,}" Aug 13 01:46:48.783415 containerd[1559]: time="2025-08-13T01:46:48.783325718Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j47vf,Uid:0ba3c042-02d2-446d-bb82-0965919f2962,Namespace:kube-system,Attempt:0,}" Aug 13 01:46:48.809172 kubelet[2718]: I0813 01:46:48.807714 2718 kubelet.go:2351] "Pod admission denied" podUID="39e308fb-514c-492e-a1d6-c4ca85168f47" pod="tigera-operator/tigera-operator-747864d56d-rkx59" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:48.927847 containerd[1559]: time="2025-08-13T01:46:48.927783500Z" level=error msg="Failed to destroy network for sandbox \"f9603eae8af0f2e73a6774daf333e982af32ad7021a42fe5661214ec621ef59e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:48.928043 containerd[1559]: time="2025-08-13T01:46:48.927979548Z" level=error msg="Failed to destroy network for sandbox \"033285e7f57952e84a0b9c618d815ad428679393b62a885c8eaa0dc40947c852\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:48.930460 containerd[1559]: time="2025-08-13T01:46:48.930250271Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j47vf,Uid:0ba3c042-02d2-446d-bb82-0965919f2962,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"033285e7f57952e84a0b9c618d815ad428679393b62a885c8eaa0dc40947c852\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:48.931429 kubelet[2718]: E0813 01:46:48.931353 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"033285e7f57952e84a0b9c618d815ad428679393b62a885c8eaa0dc40947c852\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:48.932087 kubelet[2718]: E0813 01:46:48.931638 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"033285e7f57952e84a0b9c618d815ad428679393b62a885c8eaa0dc40947c852\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:46:48.932087 kubelet[2718]: E0813 01:46:48.931675 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"033285e7f57952e84a0b9c618d815ad428679393b62a885c8eaa0dc40947c852\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:46:48.932087 kubelet[2718]: E0813 01:46:48.931728 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-j47vf_kube-system(0ba3c042-02d2-446d-bb82-0965919f2962)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-j47vf_kube-system(0ba3c042-02d2-446d-bb82-0965919f2962)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"033285e7f57952e84a0b9c618d815ad428679393b62a885c8eaa0dc40947c852\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-j47vf" 
podUID="0ba3c042-02d2-446d-bb82-0965919f2962" Aug 13 01:46:48.932484 containerd[1559]: time="2025-08-13T01:46:48.932352717Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dfjz8,Uid:dafdbb28-0754-4303-98bd-08c77ee94f1a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9603eae8af0f2e73a6774daf333e982af32ad7021a42fe5661214ec621ef59e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:48.933099 kubelet[2718]: E0813 01:46:48.933075 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9603eae8af0f2e73a6774daf333e982af32ad7021a42fe5661214ec621ef59e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:48.933180 kubelet[2718]: E0813 01:46:48.933164 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9603eae8af0f2e73a6774daf333e982af32ad7021a42fe5661214ec621ef59e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:46:48.933311 kubelet[2718]: E0813 01:46:48.933236 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9603eae8af0f2e73a6774daf333e982af32ad7021a42fe5661214ec621ef59e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:46:48.933311 kubelet[2718]: E0813 01:46:48.933277 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-dfjz8_kube-system(dafdbb28-0754-4303-98bd-08c77ee94f1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-dfjz8_kube-system(dafdbb28-0754-4303-98bd-08c77ee94f1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f9603eae8af0f2e73a6774daf333e982af32ad7021a42fe5661214ec621ef59e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dfjz8" podUID="dafdbb28-0754-4303-98bd-08c77ee94f1a" Aug 13 01:46:48.933372 systemd[1]: run-netns-cni\x2d5d6a67ff\x2d8240\x2da65e\x2d5105\x2dba94de57fbe4.mount: Deactivated successfully. Aug 13 01:46:48.933777 systemd[1]: run-netns-cni\x2d31ad97d5\x2dcd28\x2d5ea0\x2d391d\x2dcc862aa0c539.mount: Deactivated successfully. Aug 13 01:46:48.985734 kubelet[2718]: I0813 01:46:48.985691 2718 kubelet.go:2351] "Pod admission denied" podUID="8178e60b-92a9-417b-8428-f55d27b029de" pod="tigera-operator/tigera-operator-747864d56d-gx9mk" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:49.083771 kubelet[2718]: I0813 01:46:49.083729 2718 kubelet.go:2351] "Pod admission denied" podUID="57c72b3c-2b45-4c80-9d51-4f925b01c007" pod="tigera-operator/tigera-operator-747864d56d-btt9x" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:49.183150 kubelet[2718]: I0813 01:46:49.183098 2718 kubelet.go:2351] "Pod admission denied" podUID="4447762f-40c5-48ae-997c-e16584109c2b" pod="tigera-operator/tigera-operator-747864d56d-6lpc9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:49.282929 kubelet[2718]: I0813 01:46:49.282791 2718 kubelet.go:2351] "Pod admission denied" podUID="1ab20ee5-7267-4691-b2cb-77e738d8bc90" pod="tigera-operator/tigera-operator-747864d56d-k5gb7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:49.384830 kubelet[2718]: I0813 01:46:49.384772 2718 kubelet.go:2351] "Pod admission denied" podUID="fc1fdae3-8706-43ed-b745-83cbba8f9546" pod="tigera-operator/tigera-operator-747864d56d-xwxlc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:49.589064 kubelet[2718]: I0813 01:46:49.588896 2718 kubelet.go:2351] "Pod admission denied" podUID="c93d0567-bfd9-4137-9d0e-8451da1e7e1d" pod="tigera-operator/tigera-operator-747864d56d-6snrj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:49.687661 kubelet[2718]: I0813 01:46:49.687613 2718 kubelet.go:2351] "Pod admission denied" podUID="cc557ae9-c40f-4146-9fe3-65a7fc954486" pod="tigera-operator/tigera-operator-747864d56d-rnnxn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:49.786989 kubelet[2718]: I0813 01:46:49.786923 2718 kubelet.go:2351] "Pod admission denied" podUID="b95784ff-225d-4dce-89dc-fe2de6b8c5dd" pod="tigera-operator/tigera-operator-747864d56d-lfpdh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:49.885808 kubelet[2718]: I0813 01:46:49.885639 2718 kubelet.go:2351] "Pod admission denied" podUID="74bb7ebb-4654-4836-a785-9bb0fbd4fb65" pod="tigera-operator/tigera-operator-747864d56d-8584n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:49.988811 kubelet[2718]: I0813 01:46:49.988753 2718 kubelet.go:2351] "Pod admission denied" podUID="0b80a8bb-6219-4b81-8c59-3f6c15504979" pod="tigera-operator/tigera-operator-747864d56d-v4bmw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:50.084816 kubelet[2718]: I0813 01:46:50.084776 2718 kubelet.go:2351] "Pod admission denied" podUID="a7fb5d9d-814f-42da-a154-06471a091be0" pod="tigera-operator/tigera-operator-747864d56d-8hq7j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:50.135019 kubelet[2718]: I0813 01:46:50.134979 2718 kubelet.go:2351] "Pod admission denied" podUID="de7cf07c-fe33-46e6-af8a-408f92874ee9" pod="tigera-operator/tigera-operator-747864d56d-4dbqp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:50.234783 kubelet[2718]: I0813 01:46:50.234739 2718 kubelet.go:2351] "Pod admission denied" podUID="8ff757d2-c6bb-44eb-a591-eaca57cb75a1" pod="tigera-operator/tigera-operator-747864d56d-7m88f" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:50.440887 kubelet[2718]: I0813 01:46:50.440726 2718 kubelet.go:2351] "Pod admission denied" podUID="dabf1f9d-bdcc-4733-865c-cfcc8a1764c4" pod="tigera-operator/tigera-operator-747864d56d-r4795" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:50.536978 kubelet[2718]: I0813 01:46:50.536798 2718 kubelet.go:2351] "Pod admission denied" podUID="75874e23-96f0-422c-88aa-4c22f3cf3746" pod="tigera-operator/tigera-operator-747864d56d-7rmzr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:50.586162 kubelet[2718]: I0813 01:46:50.586099 2718 kubelet.go:2351] "Pod admission denied" podUID="68f389fc-139b-413b-aba9-00061f3cabc6" pod="tigera-operator/tigera-operator-747864d56d-s757h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:50.684034 kubelet[2718]: I0813 01:46:50.683983 2718 kubelet.go:2351] "Pod admission denied" podUID="6080bcc9-a317-4c9d-9d8c-5ce632dad47d" pod="tigera-operator/tigera-operator-747864d56d-2ccpc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:50.784445 kubelet[2718]: I0813 01:46:50.784405 2718 kubelet.go:2351] "Pod admission denied" podUID="853a6bae-c1db-4971-9954-6a05c102385a" pod="tigera-operator/tigera-operator-747864d56d-hrxs6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:50.884432 kubelet[2718]: I0813 01:46:50.884091 2718 kubelet.go:2351] "Pod admission denied" podUID="5f876591-41c1-4cdd-9537-84cf0a5932d7" pod="tigera-operator/tigera-operator-747864d56d-msrcv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:50.986537 kubelet[2718]: I0813 01:46:50.986163 2718 kubelet.go:2351] "Pod admission denied" podUID="cb46a642-0fb8-408e-83f7-0294b3d4f359" pod="tigera-operator/tigera-operator-747864d56d-55tbp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:51.081959 kubelet[2718]: I0813 01:46:51.081916 2718 kubelet.go:2351] "Pod admission denied" podUID="b55279dd-7491-4cfe-a03b-d2e9f119d4ef" pod="tigera-operator/tigera-operator-747864d56d-9z85p" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:51.184445 kubelet[2718]: I0813 01:46:51.184331 2718 kubelet.go:2351] "Pod admission denied" podUID="6ab15088-8074-4c13-aca4-5a2453dab38d" pod="tigera-operator/tigera-operator-747864d56d-hq8xz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:51.238484 kubelet[2718]: I0813 01:46:51.238446 2718 kubelet.go:2351] "Pod admission denied" podUID="f1c5d0eb-c722-407f-8c9c-07ff27311f0f" pod="tigera-operator/tigera-operator-747864d56d-ngsvc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:51.336470 kubelet[2718]: I0813 01:46:51.336144 2718 kubelet.go:2351] "Pod admission denied" podUID="8410fc2c-f034-4cea-85af-cc60a6caf068" pod="tigera-operator/tigera-operator-747864d56d-cvtm2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:51.435663 kubelet[2718]: I0813 01:46:51.435419 2718 kubelet.go:2351] "Pod admission denied" podUID="94c65fcd-2022-4e6f-9d54-5445dc9620e9" pod="tigera-operator/tigera-operator-747864d56d-8k6vm" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:51.538115 kubelet[2718]: I0813 01:46:51.538053 2718 kubelet.go:2351] "Pod admission denied" podUID="5203a122-e157-4cbc-8e25-757fc7fa7bf4" pod="tigera-operator/tigera-operator-747864d56d-pksq8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:51.638575 kubelet[2718]: I0813 01:46:51.638501 2718 kubelet.go:2351] "Pod admission denied" podUID="b186233e-dd92-46ff-ace1-b5ebe1b43725" pod="tigera-operator/tigera-operator-747864d56d-6xq6j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:51.736328 kubelet[2718]: I0813 01:46:51.736272 2718 kubelet.go:2351] "Pod admission denied" podUID="2d5016d8-0c35-459a-9999-739338c6a298" pod="tigera-operator/tigera-operator-747864d56d-89dlw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:51.780934 containerd[1559]: time="2025-08-13T01:46:51.780691585Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f9448c8f5-ck2sf,Uid:35b780d0-9cdb-470f-8c65-ede949b6d595,Namespace:calico-system,Attempt:0,}" Aug 13 01:46:51.840629 containerd[1559]: time="2025-08-13T01:46:51.840301851Z" level=error msg="Failed to destroy network for sandbox \"e9a9592cbea0f3324af3b301d9bdd0c317109cae3b3d51b7a46f4e755f0cdab5\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:51.842666 systemd[1]: run-netns-cni\x2d3ad75f80\x2d422d\x2de963\x2de72c\x2dbcd826cc90eb.mount: Deactivated successfully. Aug 13 01:46:51.844712 containerd[1559]: time="2025-08-13T01:46:51.844268869Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f9448c8f5-ck2sf,Uid:35b780d0-9cdb-470f-8c65-ede949b6d595,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9a9592cbea0f3324af3b301d9bdd0c317109cae3b3d51b7a46f4e755f0cdab5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:51.845908 kubelet[2718]: E0813 01:46:51.845799 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9a9592cbea0f3324af3b301d9bdd0c317109cae3b3d51b7a46f4e755f0cdab5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:51.845908 kubelet[2718]: E0813 01:46:51.845873 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9a9592cbea0f3324af3b301d9bdd0c317109cae3b3d51b7a46f4e755f0cdab5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:46:51.845908 kubelet[2718]: E0813 01:46:51.845895 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9a9592cbea0f3324af3b301d9bdd0c317109cae3b3d51b7a46f4e755f0cdab5\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container 
is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:46:51.846231 kubelet[2718]: E0813 01:46:51.845931 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f9448c8f5-ck2sf_calico-system(35b780d0-9cdb-470f-8c65-ede949b6d595)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f9448c8f5-ck2sf_calico-system(35b780d0-9cdb-470f-8c65-ede949b6d595)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e9a9592cbea0f3324af3b301d9bdd0c317109cae3b3d51b7a46f4e755f0cdab5\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" podUID="35b780d0-9cdb-470f-8c65-ede949b6d595" Aug 13 01:46:51.847750 kubelet[2718]: I0813 01:46:51.847727 2718 kubelet.go:2351] "Pod admission denied" podUID="1584ec0b-85cb-482b-b286-9c643c896db7" pod="tigera-operator/tigera-operator-747864d56d-mvlst" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:51.933606 kubelet[2718]: I0813 01:46:51.933551 2718 kubelet.go:2351] "Pod admission denied" podUID="8faa0d97-535c-4ff8-b336-2108ef09a83e" pod="tigera-operator/tigera-operator-747864d56d-tjc4w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:52.034155 kubelet[2718]: I0813 01:46:52.034037 2718 kubelet.go:2351] "Pod admission denied" podUID="e7c16b93-b5f7-48a9-a513-c2af0f574ae4" pod="tigera-operator/tigera-operator-747864d56d-jq8pq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:52.133260 kubelet[2718]: I0813 01:46:52.132999 2718 kubelet.go:2351] "Pod admission denied" podUID="c7b2e5fe-8e27-43e6-baed-6eddecc10435" pod="tigera-operator/tigera-operator-747864d56d-wwxtk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:52.335484 kubelet[2718]: I0813 01:46:52.335356 2718 kubelet.go:2351] "Pod admission denied" podUID="70d6cd15-0ab5-4f33-ab68-dca39cd4cc68" pod="tigera-operator/tigera-operator-747864d56d-hp7hc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:52.434837 kubelet[2718]: I0813 01:46:52.434801 2718 kubelet.go:2351] "Pod admission denied" podUID="02fb5b30-420f-465f-9553-f03daed2150a" pod="tigera-operator/tigera-operator-747864d56d-jws6j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:52.484690 kubelet[2718]: I0813 01:46:52.484653 2718 kubelet.go:2351] "Pod admission denied" podUID="9101ae20-ebfc-4e73-af88-919ed925c599" pod="tigera-operator/tigera-operator-747864d56d-6lz9l" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:52.586395 kubelet[2718]: I0813 01:46:52.586306 2718 kubelet.go:2351] "Pod admission denied" podUID="80686442-9f00-45f4-b6bb-4e2f57d2224b" pod="tigera-operator/tigera-operator-747864d56d-dhxds" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:52.684740 kubelet[2718]: I0813 01:46:52.684706 2718 kubelet.go:2351] "Pod admission denied" podUID="86b2d281-33a9-45e6-b264-c28f46b82566" pod="tigera-operator/tigera-operator-747864d56d-t9wlz" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:52.792008 kubelet[2718]: I0813 01:46:52.791344 2718 kubelet.go:2351] "Pod admission denied" podUID="54c32a97-7a04-4bb6-8fa0-42a198e4ce2f" pod="tigera-operator/tigera-operator-747864d56d-w2bzq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:52.991287 kubelet[2718]: I0813 01:46:52.991222 2718 kubelet.go:2351] "Pod admission denied" podUID="fc971802-d97e-407b-91e0-ccfd9ffa43a2" pod="tigera-operator/tigera-operator-747864d56d-fwwrj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:53.085394 kubelet[2718]: I0813 01:46:53.085338 2718 kubelet.go:2351] "Pod admission denied" podUID="7b1e4c7c-1276-48f3-a695-5c21f7e1c2a9" pod="tigera-operator/tigera-operator-747864d56d-nkjrk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:53.135128 kubelet[2718]: I0813 01:46:53.135091 2718 kubelet.go:2351] "Pod admission denied" podUID="fbe550e3-ac88-43f2-a780-6ece820f21ea" pod="tigera-operator/tigera-operator-747864d56d-4xfwf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:53.236248 kubelet[2718]: I0813 01:46:53.236173 2718 kubelet.go:2351] "Pod admission denied" podUID="5093f12b-e652-4476-bbd9-07717d0d1c61" pod="tigera-operator/tigera-operator-747864d56d-f8gmp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:53.334434 kubelet[2718]: I0813 01:46:53.334301 2718 kubelet.go:2351] "Pod admission denied" podUID="499fa646-729c-43b8-adf6-086935cece0f" pod="tigera-operator/tigera-operator-747864d56d-qmtwm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:53.435347 kubelet[2718]: I0813 01:46:53.435300 2718 kubelet.go:2351] "Pod admission denied" podUID="17714dc2-1ad9-45e7-ac92-502ad8cf597d" pod="tigera-operator/tigera-operator-747864d56d-hf4zb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:53.643448 kubelet[2718]: I0813 01:46:53.642885 2718 kubelet.go:2351] "Pod admission denied" podUID="49866499-0892-4bc0-9723-0689650d953a" pod="tigera-operator/tigera-operator-747864d56d-n6t5f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:53.739884 kubelet[2718]: I0813 01:46:53.739812 2718 kubelet.go:2351] "Pod admission denied" podUID="da6f7b49-2abd-4646-93b5-c2ace9ec1120" pod="tigera-operator/tigera-operator-747864d56d-9hljk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:53.783356 containerd[1559]: time="2025-08-13T01:46:53.783105239Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbqt2,Uid:9674627f-b072-4139-b18d-fdf07891e1e2,Namespace:calico-system,Attempt:0,}" Aug 13 01:46:53.840180 containerd[1559]: time="2025-08-13T01:46:53.837829378Z" level=error msg="Failed to destroy network for sandbox \"fa5ec82b3de1003ca28f65fb018478b3b65a7d230d0991211f891d5df34ba7da\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:53.839769 systemd[1]: run-netns-cni\x2d8f87c73c\x2db94b\x2d4a4d\x2d207a\x2dc789b42d280b.mount: Deactivated successfully. 
Aug 13 01:46:53.843975 containerd[1559]: time="2025-08-13T01:46:53.842689948Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbqt2,Uid:9674627f-b072-4139-b18d-fdf07891e1e2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa5ec82b3de1003ca28f65fb018478b3b65a7d230d0991211f891d5df34ba7da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:53.844238 kubelet[2718]: E0813 01:46:53.844208 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa5ec82b3de1003ca28f65fb018478b3b65a7d230d0991211f891d5df34ba7da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:53.844369 kubelet[2718]: E0813 01:46:53.844351 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa5ec82b3de1003ca28f65fb018478b3b65a7d230d0991211f891d5df34ba7da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:46:53.844431 kubelet[2718]: E0813 01:46:53.844415 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fa5ec82b3de1003ca28f65fb018478b3b65a7d230d0991211f891d5df34ba7da\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:46:53.844515 kubelet[2718]: E0813 01:46:53.844494 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dbqt2_calico-system(9674627f-b072-4139-b18d-fdf07891e1e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dbqt2_calico-system(9674627f-b072-4139-b18d-fdf07891e1e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fa5ec82b3de1003ca28f65fb018478b3b65a7d230d0991211f891d5df34ba7da\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dbqt2" podUID="9674627f-b072-4139-b18d-fdf07891e1e2" Aug 13 01:46:53.848883 kubelet[2718]: I0813 01:46:53.848850 2718 kubelet.go:2351] "Pod admission denied" podUID="fff724bb-8cbb-4774-8169-26f9c192e6e3" pod="tigera-operator/tigera-operator-747864d56d-d8tv2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:54.044193 kubelet[2718]: I0813 01:46:54.044128 2718 kubelet.go:2351] "Pod admission denied" podUID="ec076214-dd14-4827-887a-41b328bce6ad" pod="tigera-operator/tigera-operator-747864d56d-46mvw" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:54.148885 kubelet[2718]: I0813 01:46:54.147890 2718 kubelet.go:2351] "Pod admission denied" podUID="3edfc88c-d905-4268-8986-8c50f64d7c58" pod="tigera-operator/tigera-operator-747864d56d-t7xz4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:54.238095 kubelet[2718]: I0813 01:46:54.238043 2718 kubelet.go:2351] "Pod admission denied" podUID="130e1f0d-231b-4138-9578-a93e57168231" pod="tigera-operator/tigera-operator-747864d56d-lpd9h" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:54.437783 kubelet[2718]: I0813 01:46:54.437658 2718 kubelet.go:2351] "Pod admission denied" podUID="671a813f-be55-4523-aeeb-df7438a7148a" pod="tigera-operator/tigera-operator-747864d56d-hhzgt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:54.535946 kubelet[2718]: I0813 01:46:54.535900 2718 kubelet.go:2351] "Pod admission denied" podUID="3b855370-d974-4d03-b41e-b76727f8c717" pod="tigera-operator/tigera-operator-747864d56d-4dc4m" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:54.585626 kubelet[2718]: I0813 01:46:54.585572 2718 kubelet.go:2351] "Pod admission denied" podUID="11218c6b-71e7-4d96-adcc-cf9a3232b549" pod="tigera-operator/tigera-operator-747864d56d-x6nfd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:54.688940 kubelet[2718]: I0813 01:46:54.688441 2718 kubelet.go:2351] "Pod admission denied" podUID="64a0daf1-0a4c-49f9-b684-88b91925a68d" pod="tigera-operator/tigera-operator-747864d56d-lkrk5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:54.781914 kubelet[2718]: E0813 01:46:54.781376 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1940071012: write /var/lib/containerd/tmpmounts/containerd-mount1940071012/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-qgskr" podUID="9adfb11c-9977-45e9-b78f-00f4995e46c5" Aug 13 01:46:54.793876 kubelet[2718]: I0813 01:46:54.792789 2718 kubelet.go:2351] "Pod admission denied" podUID="0239343a-2779-4b99-8a18-ab757e242389" pod="tigera-operator/tigera-operator-747864d56d-278gr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:54.891600 kubelet[2718]: I0813 01:46:54.891544 2718 kubelet.go:2351] "Pod admission denied" podUID="48c9e4c5-6f58-48b4-b9b4-a80a79ffb511" pod="tigera-operator/tigera-operator-747864d56d-5dd4w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:54.988719 kubelet[2718]: I0813 01:46:54.988664 2718 kubelet.go:2351] "Pod admission denied" podUID="d11a97b1-44ec-458b-9da6-43e62b01320f" pod="tigera-operator/tigera-operator-747864d56d-86lpn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:55.037727 kubelet[2718]: I0813 01:46:55.037672 2718 kubelet.go:2351] "Pod admission denied" podUID="8d35b09d-f8b6-4ea6-a5c2-39f0bd4f8949" pod="tigera-operator/tigera-operator-747864d56d-zx86p" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:55.138876 kubelet[2718]: I0813 01:46:55.137828 2718 kubelet.go:2351] "Pod admission denied" podUID="e53fb001-cb20-4696-ad7d-8b4edbf2aad0" pod="tigera-operator/tigera-operator-747864d56d-cqlpg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:55.339949 kubelet[2718]: I0813 01:46:55.339429 2718 kubelet.go:2351] "Pod admission denied" podUID="e6fde7fe-21c9-486c-8d6b-0d0dc5ee1a5d" pod="tigera-operator/tigera-operator-747864d56d-tv8xb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:55.442638 kubelet[2718]: I0813 01:46:55.442440 2718 kubelet.go:2351] "Pod admission denied" podUID="d7b025a8-bdc4-444e-a430-3e3ad8136c29" pod="tigera-operator/tigera-operator-747864d56d-4mwcm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:55.543533 kubelet[2718]: I0813 01:46:55.543461 2718 kubelet.go:2351] "Pod admission denied" podUID="e3457c7e-17b8-4a53-bfa0-6e4b2ccb49b7" pod="tigera-operator/tigera-operator-747864d56d-jnhck" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:55.639197 kubelet[2718]: I0813 01:46:55.638774 2718 kubelet.go:2351] "Pod admission denied" podUID="7c81219b-2fc6-41c9-ab90-47c3029cc1bc" pod="tigera-operator/tigera-operator-747864d56d-q7rvv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:55.740910 kubelet[2718]: I0813 01:46:55.740840 2718 kubelet.go:2351] "Pod admission denied" podUID="e766ce19-d730-4648-ab79-fd82fca25406" pod="tigera-operator/tigera-operator-747864d56d-tbfjz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:55.941511 kubelet[2718]: I0813 01:46:55.940099 2718 kubelet.go:2351] "Pod admission denied" podUID="25081f23-3638-40bc-b147-60e75e995d30" pod="tigera-operator/tigera-operator-747864d56d-q75dj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:56.037570 kubelet[2718]: I0813 01:46:56.037526 2718 kubelet.go:2351] "Pod admission denied" podUID="e2e943f2-225b-4722-891f-1a6e48ca7a17" pod="tigera-operator/tigera-operator-747864d56d-q82jg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:56.139637 kubelet[2718]: I0813 01:46:56.139576 2718 kubelet.go:2351] "Pod admission denied" podUID="2ac4a4f7-4b57-4e52-9427-64091b2b28e7" pod="tigera-operator/tigera-operator-747864d56d-bcwpp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:56.240788 kubelet[2718]: I0813 01:46:56.240731 2718 kubelet.go:2351] "Pod admission denied" podUID="78cea037-6ef6-4958-b287-de1f3098117c" pod="tigera-operator/tigera-operator-747864d56d-ngs54" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:56.336267 kubelet[2718]: I0813 01:46:56.336226 2718 kubelet.go:2351] "Pod admission denied" podUID="f64ca373-ee5e-4b65-bc70-2abc1d47936d" pod="tigera-operator/tigera-operator-747864d56d-558rn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:56.444740 kubelet[2718]: I0813 01:46:56.444695 2718 kubelet.go:2351] "Pod admission denied" podUID="e5733521-ead3-42fc-a826-b5fdac15072d" pod="tigera-operator/tigera-operator-747864d56d-4b4jd" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:56.535463 kubelet[2718]: I0813 01:46:56.535367 2718 kubelet.go:2351] "Pod admission denied" podUID="1ad6fe10-b956-47db-8d57-552e3ec3a76c" pod="tigera-operator/tigera-operator-747864d56d-gwsn8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:56.635293 kubelet[2718]: I0813 01:46:56.635095 2718 kubelet.go:2351] "Pod admission denied" podUID="8cca6071-0457-4879-b996-1b75ed5719d3" pod="tigera-operator/tigera-operator-747864d56d-npzm2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:56.685645 kubelet[2718]: I0813 01:46:56.685607 2718 kubelet.go:2351] "Pod admission denied" podUID="e685600d-f757-4fcb-be55-0ed3ef89d53c" pod="tigera-operator/tigera-operator-747864d56d-bflvz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:56.782659 kubelet[2718]: E0813 01:46:56.782633 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:46:56.798100 kubelet[2718]: I0813 01:46:56.798000 2718 kubelet.go:2351] "Pod admission denied" podUID="294943c4-615b-4431-b912-1db5a2f04726" pod="tigera-operator/tigera-operator-747864d56d-hldjz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:56.988657 kubelet[2718]: I0813 01:46:56.988609 2718 kubelet.go:2351] "Pod admission denied" podUID="ffea7924-55ca-4fbe-92c6-7b1a27240f0f" pod="tigera-operator/tigera-operator-747864d56d-ch426" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:57.090608 kubelet[2718]: I0813 01:46:57.090484 2718 kubelet.go:2351] "Pod admission denied" podUID="0e9ba05d-86e8-4b19-91a7-b31828f0debd" pod="tigera-operator/tigera-operator-747864d56d-6mxsh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:57.134633 kubelet[2718]: I0813 01:46:57.134599 2718 kubelet.go:2351] "Pod admission denied" podUID="044c3af3-a5ce-45b9-94f3-7d05cf5754b9" pod="tigera-operator/tigera-operator-747864d56d-j4x8g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:57.235604 kubelet[2718]: I0813 01:46:57.235566 2718 kubelet.go:2351] "Pod admission denied" podUID="3f6e8545-acb1-438b-b6c5-b7f10f50a438" pod="tigera-operator/tigera-operator-747864d56d-j8lfg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:57.445894 kubelet[2718]: I0813 01:46:57.444787 2718 kubelet.go:2351] "Pod admission denied" podUID="e25304cb-f512-4b76-b353-d78925f7e3b6" pod="tigera-operator/tigera-operator-747864d56d-pzq2s" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:57.538164 kubelet[2718]: I0813 01:46:57.538123 2718 kubelet.go:2351] "Pod admission denied" podUID="6b6db6bf-19eb-4b93-b6d6-af68ed4e7768" pod="tigera-operator/tigera-operator-747864d56d-r654f" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:57.585625 kubelet[2718]: I0813 01:46:57.585582 2718 kubelet.go:2351] "Pod admission denied" podUID="a2e24a3f-ee81-4c8c-b23c-260a0c62ffdc" pod="tigera-operator/tigera-operator-747864d56d-d9wgj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:57.686259 kubelet[2718]: I0813 01:46:57.686213 2718 kubelet.go:2351] "Pod admission denied" podUID="f9c7bbef-3623-4e1a-9d34-20370821c409" pod="tigera-operator/tigera-operator-747864d56d-jl4h5" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:57.784921 kubelet[2718]: I0813 01:46:57.784877 2718 kubelet.go:2351] "Pod admission denied" podUID="e381792c-bdaf-46dc-89c3-fb5bb943ad71" pod="tigera-operator/tigera-operator-747864d56d-cbkvw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:57.889876 kubelet[2718]: I0813 01:46:57.889815 2718 kubelet.go:2351] "Pod admission denied" podUID="eeec3095-ba88-436b-8691-9b1d2d417df5" pod="tigera-operator/tigera-operator-747864d56d-mbsc4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:57.991510 kubelet[2718]: I0813 01:46:57.991441 2718 kubelet.go:2351] "Pod admission denied" podUID="4ea137e2-12ce-4e09-9248-21337cc3b8ad" pod="tigera-operator/tigera-operator-747864d56d-xpclg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:58.039097 kubelet[2718]: I0813 01:46:58.038909 2718 kubelet.go:2351] "Pod admission denied" podUID="3161d096-f3d0-464e-b925-61f2564e381e" pod="tigera-operator/tigera-operator-747864d56d-cpqbj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:58.143059 kubelet[2718]: I0813 01:46:58.143005 2718 kubelet.go:2351] "Pod admission denied" podUID="c5227eaa-b42c-4960-b4bb-5f0ce6db8adf" pod="tigera-operator/tigera-operator-747864d56d-dl4tm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:58.237984 kubelet[2718]: I0813 01:46:58.237622 2718 kubelet.go:2351] "Pod admission denied" podUID="3cf8bfb6-1d96-4d3b-9a5e-644e99b2a707" pod="tigera-operator/tigera-operator-747864d56d-mjvdz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:58.338887 kubelet[2718]: I0813 01:46:58.338742 2718 kubelet.go:2351] "Pod admission denied" podUID="d1ca805c-06ed-41e5-9225-490b83b490c6" pod="tigera-operator/tigera-operator-747864d56d-6cthp" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:58.372837 kubelet[2718]: I0813 01:46:58.372796 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:46:58.372837 kubelet[2718]: I0813 01:46:58.372833 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:46:58.376553 kubelet[2718]: I0813 01:46:58.376513 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:46:58.388653 kubelet[2718]: I0813 01:46:58.388618 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:46:58.388779 kubelet[2718]: I0813 01:46:58.388738 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["kube-system/coredns-668d6bf9bc-j47vf","calico-system/calico-kube-controllers-7f9448c8f5-ck2sf","kube-system/coredns-668d6bf9bc-dfjz8","calico-system/calico-node-qgskr","calico-system/csi-node-driver-dbqt2","calico-system/calico-typha-b5b9867b4-p6jwz","kube-system/kube-controller-manager-172-232-7-133","kube-system/kube-proxy-fw2dv","kube-system/kube-apiserver-172-232-7-133","kube-system/kube-scheduler-172-232-7-133"] Aug 13 01:46:58.388886 kubelet[2718]: E0813 01:46:58.388778 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:46:58.388886 kubelet[2718]: E0813 01:46:58.388792 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:46:58.388886 kubelet[2718]: E0813 01:46:58.388809 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:46:58.388886 kubelet[2718]: E0813 01:46:58.388820 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-qgskr" Aug 13 01:46:58.388886 kubelet[2718]: E0813 01:46:58.388834 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:46:58.388886 kubelet[2718]: E0813 01:46:58.388848 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-b5b9867b4-p6jwz" Aug 13 01:46:58.389027 kubelet[2718]: E0813 01:46:58.388905 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-133" Aug 13 01:46:58.389027 kubelet[2718]: E0813 01:46:58.388919 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-fw2dv" Aug 13 01:46:58.389027 kubelet[2718]: E0813 01:46:58.388932 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-133" Aug 13 01:46:58.389027 kubelet[2718]: E0813 01:46:58.388944 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-133" Aug 13 01:46:58.389027 kubelet[2718]: I0813 01:46:58.388958 2718 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:46:58.541963 kubelet[2718]: I0813 01:46:58.541901 2718 kubelet.go:2351] "Pod admission denied" podUID="9515a4a7-2a33-4a2a-b1f2-eb2e54997e1f" pod="tigera-operator/tigera-operator-747864d56d-67stj" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:58.636248 kubelet[2718]: I0813 01:46:58.635714 2718 kubelet.go:2351] "Pod admission denied" podUID="cc21aa07-b2d5-4f7e-a62c-a28d9eca8ec8" pod="tigera-operator/tigera-operator-747864d56d-xjzk5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:58.738650 kubelet[2718]: I0813 01:46:58.737608 2718 kubelet.go:2351] "Pod admission denied" podUID="c6093b77-3769-4040-b3d5-93fed388c7d2" pod="tigera-operator/tigera-operator-747864d56d-mzf55" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:58.936752 kubelet[2718]: I0813 01:46:58.936358 2718 kubelet.go:2351] "Pod admission denied" podUID="c5ca20f1-1ebb-4a45-ace0-33fa111b33d3" pod="tigera-operator/tigera-operator-747864d56d-r82wz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:59.040227 kubelet[2718]: I0813 01:46:59.040043 2718 kubelet.go:2351] "Pod admission denied" podUID="d4e1b2fb-ae19-4bc0-a271-2bd56af05eff" pod="tigera-operator/tigera-operator-747864d56d-d96jl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:59.140030 kubelet[2718]: I0813 01:46:59.139216 2718 kubelet.go:2351] "Pod admission denied" podUID="1d75aa6c-c839-4932-9cdd-e5a1694b5a44" pod="tigera-operator/tigera-operator-747864d56d-b9zxk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:59.237074 kubelet[2718]: I0813 01:46:59.237033 2718 kubelet.go:2351] "Pod admission denied" podUID="6cda6dde-39ed-47f3-ac99-e01e78f089c8" pod="tigera-operator/tigera-operator-747864d56d-jsfkq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:59.288475 kubelet[2718]: I0813 01:46:59.288427 2718 kubelet.go:2351] "Pod admission denied" podUID="559b3eba-4e09-4686-8923-2838dc9b3058" pod="tigera-operator/tigera-operator-747864d56d-phfg6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:59.388038 kubelet[2718]: I0813 01:46:59.387987 2718 kubelet.go:2351] "Pod admission denied" podUID="12a9da5a-9c5f-4188-b25f-159901577ca2" pod="tigera-operator/tigera-operator-747864d56d-vdktd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:59.592361 kubelet[2718]: I0813 01:46:59.592215 2718 kubelet.go:2351] "Pod admission denied" podUID="c708c13b-e5f8-443b-a300-5ee179781f5e" pod="tigera-operator/tigera-operator-747864d56d-pr5jj" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:59.702549 kubelet[2718]: I0813 01:46:59.701975 2718 kubelet.go:2351] "Pod admission denied" podUID="0bb9b71d-56e7-4f34-b781-0e20db8266fe" pod="tigera-operator/tigera-operator-747864d56d-ttmd8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:46:59.739510 kubelet[2718]: I0813 01:46:59.739445 2718 kubelet.go:2351] "Pod admission denied" podUID="95352f94-2a59-4c2b-a900-08c0f1075770" pod="tigera-operator/tigera-operator-747864d56d-6z588" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:46:59.779944 kubelet[2718]: E0813 01:46:59.779652 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:46:59.781334 containerd[1559]: time="2025-08-13T01:46:59.781285905Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j47vf,Uid:0ba3c042-02d2-446d-bb82-0965919f2962,Namespace:kube-system,Attempt:0,}" Aug 13 01:46:59.848796 containerd[1559]: time="2025-08-13T01:46:59.848574247Z" level=error msg="Failed to destroy network for sandbox \"368369a14f343815e52bbaa8c8007672f5c44a6680b487b6802ece4c4a2aaf6f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:59.851255 systemd[1]: run-netns-cni\x2d5bd655db\x2d37c3\x2db0ce\x2d3f4f\x2d436a3bf13d6a.mount: Deactivated successfully. Aug 13 01:46:59.853939 containerd[1559]: time="2025-08-13T01:46:59.852740791Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j47vf,Uid:0ba3c042-02d2-446d-bb82-0965919f2962,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"368369a14f343815e52bbaa8c8007672f5c44a6680b487b6802ece4c4a2aaf6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:59.854232 kubelet[2718]: E0813 01:46:59.853328 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"368369a14f343815e52bbaa8c8007672f5c44a6680b487b6802ece4c4a2aaf6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:46:59.854232 kubelet[2718]: E0813 01:46:59.853380 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"368369a14f343815e52bbaa8c8007672f5c44a6680b487b6802ece4c4a2aaf6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:46:59.854232 kubelet[2718]: E0813 01:46:59.853402 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"368369a14f343815e52bbaa8c8007672f5c44a6680b487b6802ece4c4a2aaf6f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:46:59.854232 kubelet[2718]: E0813 01:46:59.853437 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-j47vf_kube-system(0ba3c042-02d2-446d-bb82-0965919f2962)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-j47vf_kube-system(0ba3c042-02d2-446d-bb82-0965919f2962)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"368369a14f343815e52bbaa8c8007672f5c44a6680b487b6802ece4c4a2aaf6f\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-j47vf" podUID="0ba3c042-02d2-446d-bb82-0965919f2962" Aug 13 01:46:59.855185 kubelet[2718]: I0813 01:46:59.855148 2718 kubelet.go:2351] "Pod admission denied" podUID="9396c500-0303-49b1-9776-99c41e9c9d91" pod="tigera-operator/tigera-operator-747864d56d-mlhxk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:00.038222 kubelet[2718]: I0813 01:47:00.038169 2718 kubelet.go:2351] "Pod admission denied" podUID="1495dd1b-473c-438d-aead-f35d62e813f7" pod="tigera-operator/tigera-operator-747864d56d-6cvm8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:00.135383 kubelet[2718]: I0813 01:47:00.134410 2718 kubelet.go:2351] "Pod admission denied" podUID="0b8d1a02-4372-4e17-a82f-c1fef9843c54" pod="tigera-operator/tigera-operator-747864d56d-6828v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:00.245886 kubelet[2718]: I0813 01:47:00.245758 2718 kubelet.go:2351] "Pod admission denied" podUID="53bea7ef-e53b-468e-8d57-3b0ec5feab31" pod="tigera-operator/tigera-operator-747864d56d-4855z" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:00.337752 kubelet[2718]: I0813 01:47:00.337701 2718 kubelet.go:2351] "Pod admission denied" podUID="e9a37121-f653-4571-ab9c-41357bf89ec5" pod="tigera-operator/tigera-operator-747864d56d-wstnq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:00.435599 kubelet[2718]: I0813 01:47:00.434510 2718 kubelet.go:2351] "Pod admission denied" podUID="ff2a9e9d-e233-4986-af72-c110cf37e85c" pod="tigera-operator/tigera-operator-747864d56d-7xmcw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:00.539411 kubelet[2718]: I0813 01:47:00.539349 2718 kubelet.go:2351] "Pod admission denied" podUID="c0c09b6e-2edd-40b0-99a0-a86cdab18b14" pod="tigera-operator/tigera-operator-747864d56d-vw6m7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:00.637480 kubelet[2718]: I0813 01:47:00.637420 2718 kubelet.go:2351] "Pod admission denied" podUID="08ceed39-fcaf-4925-9d2c-7bc1898d3557" pod="tigera-operator/tigera-operator-747864d56d-ngc64" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:00.780873 kubelet[2718]: E0813 01:47:00.780412 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:47:00.781815 containerd[1559]: time="2025-08-13T01:47:00.781789415Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dfjz8,Uid:dafdbb28-0754-4303-98bd-08c77ee94f1a,Namespace:kube-system,Attempt:0,}" Aug 13 01:47:00.845612 kubelet[2718]: I0813 01:47:00.845505 2718 kubelet.go:2351] "Pod admission denied" podUID="8637cbc6-000d-4cb5-bf44-335812083037" pod="tigera-operator/tigera-operator-747864d56d-8jgdn" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:00.857764 containerd[1559]: time="2025-08-13T01:47:00.857710360Z" level=error msg="Failed to destroy network for sandbox \"0213e1c8a6f14a1c2a05f7095886de403ed8346c9b1ab7d424644bd087303594\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:00.862658 systemd[1]: run-netns-cni\x2d36fbede5\x2da3e5\x2d921a\x2dc0d9\x2de06bf75ae087.mount: Deactivated successfully. Aug 13 01:47:00.864048 containerd[1559]: time="2025-08-13T01:47:00.863924139Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dfjz8,Uid:dafdbb28-0754-4303-98bd-08c77ee94f1a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"0213e1c8a6f14a1c2a05f7095886de403ed8346c9b1ab7d424644bd087303594\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:00.864374 kubelet[2718]: E0813 01:47:00.864264 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0213e1c8a6f14a1c2a05f7095886de403ed8346c9b1ab7d424644bd087303594\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:00.864996 kubelet[2718]: E0813 01:47:00.864890 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0213e1c8a6f14a1c2a05f7095886de403ed8346c9b1ab7d424644bd087303594\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:47:00.864996 kubelet[2718]: E0813 01:47:00.864931 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0213e1c8a6f14a1c2a05f7095886de403ed8346c9b1ab7d424644bd087303594\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:47:00.864996 kubelet[2718]: E0813 01:47:00.864966 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-dfjz8_kube-system(dafdbb28-0754-4303-98bd-08c77ee94f1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-dfjz8_kube-system(dafdbb28-0754-4303-98bd-08c77ee94f1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0213e1c8a6f14a1c2a05f7095886de403ed8346c9b1ab7d424644bd087303594\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dfjz8" podUID="dafdbb28-0754-4303-98bd-08c77ee94f1a" Aug 13 01:47:00.938109 kubelet[2718]: I0813 01:47:00.938040 2718 kubelet.go:2351] "Pod admission denied" podUID="081833d7-8801-4fbf-9c79-7e23a13feb1e" pod="tigera-operator/tigera-operator-747864d56d-gxq5t" reason="Evicted" 
message="The node had condition: [DiskPressure]. " Aug 13 01:47:01.037749 kubelet[2718]: I0813 01:47:01.036194 2718 kubelet.go:2351] "Pod admission denied" podUID="d5a9db8b-6050-48d3-8c4e-c840993cfd1f" pod="tigera-operator/tigera-operator-747864d56d-xjz88" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:01.136955 kubelet[2718]: I0813 01:47:01.136914 2718 kubelet.go:2351] "Pod admission denied" podUID="994540fb-11d5-4380-83ed-b44118370e40" pod="tigera-operator/tigera-operator-747864d56d-xqtp2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:01.246116 kubelet[2718]: I0813 01:47:01.245596 2718 kubelet.go:2351] "Pod admission denied" podUID="aa6976f2-4721-4b04-85d1-7dd8114b7042" pod="tigera-operator/tigera-operator-747864d56d-cgv7n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:01.441561 kubelet[2718]: I0813 01:47:01.441423 2718 kubelet.go:2351] "Pod admission denied" podUID="5adea3ea-31d7-44b1-8511-479681484469" pod="tigera-operator/tigera-operator-747864d56d-wx75g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:01.538136 kubelet[2718]: I0813 01:47:01.538067 2718 kubelet.go:2351] "Pod admission denied" podUID="2d9a5793-bdd4-45ca-9e55-13a4eb4d61ed" pod="tigera-operator/tigera-operator-747864d56d-l925x" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:01.639136 kubelet[2718]: I0813 01:47:01.639079 2718 kubelet.go:2351] "Pod admission denied" podUID="73bb4fa6-50b5-4a93-beb9-3e6b7ee7785c" pod="tigera-operator/tigera-operator-747864d56d-wbwhc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:01.746976 kubelet[2718]: I0813 01:47:01.746915 2718 kubelet.go:2351] "Pod admission denied" podUID="d641321c-f4a1-407b-9b1e-a8087b9a1956" pod="tigera-operator/tigera-operator-747864d56d-46jk9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:01.796358 kubelet[2718]: I0813 01:47:01.796315 2718 kubelet.go:2351] "Pod admission denied" podUID="2a39517d-a01d-4b29-a267-38450c213ec2" pod="tigera-operator/tigera-operator-747864d56d-f4mgt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:01.889554 kubelet[2718]: I0813 01:47:01.889503 2718 kubelet.go:2351] "Pod admission denied" podUID="f99b2d51-2c50-42da-b4a3-a1c184ff7dae" pod="tigera-operator/tigera-operator-747864d56d-d85gd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:01.989413 kubelet[2718]: I0813 01:47:01.989360 2718 kubelet.go:2351] "Pod admission denied" podUID="eb82dd30-14d0-4cfa-8541-71ea1c2f5713" pod="tigera-operator/tigera-operator-747864d56d-9ktbp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:02.036990 kubelet[2718]: I0813 01:47:02.036255 2718 kubelet.go:2351] "Pod admission denied" podUID="ed508dfe-957f-4ec0-8e46-25373e64e0b0" pod="tigera-operator/tigera-operator-747864d56d-749nt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:02.144787 kubelet[2718]: I0813 01:47:02.144415 2718 kubelet.go:2351] "Pod admission denied" podUID="00d3a2b4-6ba0-4b7a-9bcf-900c45ac770d" pod="tigera-operator/tigera-operator-747864d56d-5vszx" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:02.236089 kubelet[2718]: I0813 01:47:02.236045 2718 kubelet.go:2351] "Pod admission denied" podUID="cf1a60f2-287d-46d5-be81-a541e405e349" pod="tigera-operator/tigera-operator-747864d56d-5wzd4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:02.334572 kubelet[2718]: I0813 01:47:02.334446 2718 kubelet.go:2351] "Pod admission denied" podUID="30a75152-aba9-4be1-a84c-f31e89769df3" pod="tigera-operator/tigera-operator-747864d56d-6ft45" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:02.436777 kubelet[2718]: I0813 01:47:02.436728 2718 kubelet.go:2351] "Pod admission denied" podUID="a31855c7-2bb6-4051-89bb-4664fc0536a9" pod="tigera-operator/tigera-operator-747864d56d-tklx8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:02.536923 kubelet[2718]: I0813 01:47:02.536875 2718 kubelet.go:2351] "Pod admission denied" podUID="47c01404-80f3-4e39-a35a-4106650b5c63" pod="tigera-operator/tigera-operator-747864d56d-9xhbp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:02.637704 kubelet[2718]: I0813 01:47:02.637218 2718 kubelet.go:2351] "Pod admission denied" podUID="4ad29266-9cc1-4130-95f6-49623ebb3607" pod="tigera-operator/tigera-operator-747864d56d-d9m5j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:02.737410 kubelet[2718]: I0813 01:47:02.737346 2718 kubelet.go:2351] "Pod admission denied" podUID="23dcb77b-082c-46a1-a4b7-7955cfd090cd" pod="tigera-operator/tigera-operator-747864d56d-fsn4p" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:02.837124 kubelet[2718]: I0813 01:47:02.837048 2718 kubelet.go:2351] "Pod admission denied" podUID="00c2a27b-42a1-41e8-947d-fb9546cdcc12" pod="tigera-operator/tigera-operator-747864d56d-bbwng" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:02.938430 kubelet[2718]: I0813 01:47:02.938174 2718 kubelet.go:2351] "Pod admission denied" podUID="f703dd33-8b0c-4eb7-b9c9-a37d93c26ce7" pod="tigera-operator/tigera-operator-747864d56d-x7whb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:03.034962 kubelet[2718]: I0813 01:47:03.034921 2718 kubelet.go:2351] "Pod admission denied" podUID="3bcf5c1b-6975-4bdc-9c4d-ec2eb508e23a" pod="tigera-operator/tigera-operator-747864d56d-kndx9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:03.134143 kubelet[2718]: I0813 01:47:03.134103 2718 kubelet.go:2351] "Pod admission denied" podUID="e0761dc1-ce6b-4520-b686-be8fc3e17d62" pod="tigera-operator/tigera-operator-747864d56d-kxsnb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:03.240781 kubelet[2718]: I0813 01:47:03.240710 2718 kubelet.go:2351] "Pod admission denied" podUID="bbeea15c-84df-4960-a0c9-36e133e31775" pod="tigera-operator/tigera-operator-747864d56d-grh5l" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:03.338103 kubelet[2718]: I0813 01:47:03.338036 2718 kubelet.go:2351] "Pod admission denied" podUID="91928434-e925-40ca-a66a-f64445bad9d5" pod="tigera-operator/tigera-operator-747864d56d-trttv" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:03.446692 kubelet[2718]: I0813 01:47:03.446255 2718 kubelet.go:2351] "Pod admission denied" podUID="55186fe9-3190-42f1-98ef-c4a26e050b38" pod="tigera-operator/tigera-operator-747864d56d-dftm6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:03.485501 kubelet[2718]: I0813 01:47:03.485456 2718 kubelet.go:2351] "Pod admission denied" podUID="ecfb45a4-fba7-4fb9-ab6f-848bf30b18ac" pod="tigera-operator/tigera-operator-747864d56d-x8rhc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:03.589323 kubelet[2718]: I0813 01:47:03.589174 2718 kubelet.go:2351] "Pod admission denied" podUID="4aeaef83-5374-4c6e-8878-562a6a9337a4" pod="tigera-operator/tigera-operator-747864d56d-9qv2j" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:03.688775 kubelet[2718]: I0813 01:47:03.688730 2718 kubelet.go:2351] "Pod admission denied" podUID="2728d07c-df31-484b-9fb7-71b5d16fb4d9" pod="tigera-operator/tigera-operator-747864d56d-v6stw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:03.792481 kubelet[2718]: I0813 01:47:03.792416 2718 kubelet.go:2351] "Pod admission denied" podUID="cdf902f8-921a-4220-b31e-4410d144f497" pod="tigera-operator/tigera-operator-747864d56d-7mhv5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:03.888753 kubelet[2718]: I0813 01:47:03.888635 2718 kubelet.go:2351] "Pod admission denied" podUID="a8945b48-e673-48f2-ab61-cb08127de386" pod="tigera-operator/tigera-operator-747864d56d-swvk4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:03.986633 kubelet[2718]: I0813 01:47:03.986576 2718 kubelet.go:2351] "Pod admission denied" podUID="13615df3-01ce-43dc-9cdc-a6e2b0364aa2" pod="tigera-operator/tigera-operator-747864d56d-4shxx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:04.186120 kubelet[2718]: I0813 01:47:04.186010 2718 kubelet.go:2351] "Pod admission denied" podUID="7907fb67-a003-4eac-9f2e-0fda0cd443ef" pod="tigera-operator/tigera-operator-747864d56d-zx46b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:04.283955 kubelet[2718]: I0813 01:47:04.283906 2718 kubelet.go:2351] "Pod admission denied" podUID="e41739af-e087-4b12-9581-ccd3924d8af8" pod="tigera-operator/tigera-operator-747864d56d-bpdsq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:04.343550 kubelet[2718]: I0813 01:47:04.342888 2718 kubelet.go:2351] "Pod admission denied" podUID="2bd48a59-a9ae-4045-8c14-1ce1185c1fe2" pod="tigera-operator/tigera-operator-747864d56d-mr8z7" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:04.437224 kubelet[2718]: I0813 01:47:04.436916 2718 kubelet.go:2351] "Pod admission denied" podUID="dfa17452-55d9-40f9-a76c-3e6622a5da51" pod="tigera-operator/tigera-operator-747864d56d-rr9jl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:04.641497 kubelet[2718]: I0813 01:47:04.641157 2718 kubelet.go:2351] "Pod admission denied" podUID="e6dc9c7e-5353-4c13-9c95-b3cff2fb3478" pod="tigera-operator/tigera-operator-747864d56d-2f8r5" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:04.738644 kubelet[2718]: I0813 01:47:04.738603 2718 kubelet.go:2351] "Pod admission denied" podUID="53f9fe68-6758-4c8e-8269-bfc1690a0986" pod="tigera-operator/tigera-operator-747864d56d-gqcjl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:04.786222 kubelet[2718]: I0813 01:47:04.786182 2718 kubelet.go:2351] "Pod admission denied" podUID="78c277d2-df5b-4bdf-991f-3eafc74622c4" pod="tigera-operator/tigera-operator-747864d56d-hpnjz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:04.892147 kubelet[2718]: I0813 01:47:04.892103 2718 kubelet.go:2351] "Pod admission denied" podUID="0378a043-d875-4ae7-8b3e-d94eb0e15302" pod="tigera-operator/tigera-operator-747864d56d-r9lrz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:05.090622 kubelet[2718]: I0813 01:47:05.090530 2718 kubelet.go:2351] "Pod admission denied" podUID="6a17ad25-a84e-4d3d-af78-567a69067fc0" pod="tigera-operator/tigera-operator-747864d56d-2hg75" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:05.185000 kubelet[2718]: I0813 01:47:05.184947 2718 kubelet.go:2351] "Pod admission denied" podUID="d0807916-0e2f-459a-b56e-c473a4489182" pod="tigera-operator/tigera-operator-747864d56d-fcjk6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:05.286210 kubelet[2718]: I0813 01:47:05.286168 2718 kubelet.go:2351] "Pod admission denied" podUID="ac6e4e04-893f-4654-b3e1-b47ebb1d09ad" pod="tigera-operator/tigera-operator-747864d56d-xtrv9" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:05.489250 kubelet[2718]: I0813 01:47:05.489185 2718 kubelet.go:2351] "Pod admission denied" podUID="17dd773c-72fc-4e4c-80dd-4cc8add2dec5" pod="tigera-operator/tigera-operator-747864d56d-fdcgv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:05.589575 kubelet[2718]: I0813 01:47:05.589506 2718 kubelet.go:2351] "Pod admission denied" podUID="8925f841-b025-4d54-969d-5e860679c255" pod="tigera-operator/tigera-operator-747864d56d-28qdq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:05.691254 kubelet[2718]: I0813 01:47:05.691189 2718 kubelet.go:2351] "Pod admission denied" podUID="8870ac2a-41bd-42c8-bd97-cf9255c77764" pod="tigera-operator/tigera-operator-747864d56d-x24b4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:05.782055 containerd[1559]: time="2025-08-13T01:47:05.781374125Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f9448c8f5-ck2sf,Uid:35b780d0-9cdb-470f-8c65-ede949b6d595,Namespace:calico-system,Attempt:0,}" Aug 13 01:47:05.796098 kubelet[2718]: I0813 01:47:05.796060 2718 kubelet.go:2351] "Pod admission denied" podUID="3fadd26b-4de6-4a2b-89b2-1361f969f070" pod="tigera-operator/tigera-operator-747864d56d-cgqq7" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:05.852754 containerd[1559]: time="2025-08-13T01:47:05.852701591Z" level=error msg="Failed to destroy network for sandbox \"8849e8e7639644410ac11cac9645f67ac6148394ce49d5988a782cb60571195d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:05.856029 containerd[1559]: time="2025-08-13T01:47:05.853839143Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f9448c8f5-ck2sf,Uid:35b780d0-9cdb-470f-8c65-ede949b6d595,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8849e8e7639644410ac11cac9645f67ac6148394ce49d5988a782cb60571195d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:05.856510 systemd[1]: run-netns-cni\x2d7d83ccae\x2d4ce0\x2d4310\x2d9078\x2d5ae80e233f65.mount: Deactivated successfully. Aug 13 01:47:05.857033 kubelet[2718]: E0813 01:47:05.856989 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8849e8e7639644410ac11cac9645f67ac6148394ce49d5988a782cb60571195d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:05.857139 kubelet[2718]: E0813 01:47:05.857122 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8849e8e7639644410ac11cac9645f67ac6148394ce49d5988a782cb60571195d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:47:05.857202 kubelet[2718]: E0813 01:47:05.857187 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8849e8e7639644410ac11cac9645f67ac6148394ce49d5988a782cb60571195d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:47:05.857295 kubelet[2718]: E0813 01:47:05.857273 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f9448c8f5-ck2sf_calico-system(35b780d0-9cdb-470f-8c65-ede949b6d595)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f9448c8f5-ck2sf_calico-system(35b780d0-9cdb-470f-8c65-ede949b6d595)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8849e8e7639644410ac11cac9645f67ac6148394ce49d5988a782cb60571195d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" podUID="35b780d0-9cdb-470f-8c65-ede949b6d595" Aug 13 01:47:05.887990 kubelet[2718]: I0813 01:47:05.887954 2718 kubelet.go:2351] "Pod admission denied" 
podUID="9131951e-f10b-40e8-8ea8-7b70d065b48b" pod="tigera-operator/tigera-operator-747864d56d-r8lwp" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:05.985983 kubelet[2718]: I0813 01:47:05.985935 2718 kubelet.go:2351] "Pod admission denied" podUID="356a95bd-f51e-4a43-a0b0-73de9f300053" pod="tigera-operator/tigera-operator-747864d56d-psph5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:06.089333 kubelet[2718]: I0813 01:47:06.089188 2718 kubelet.go:2351] "Pod admission denied" podUID="683066c3-b517-4f5e-bbaf-4d0666a5f408" pod="tigera-operator/tigera-operator-747864d56d-xv77r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:06.187067 kubelet[2718]: I0813 01:47:06.187025 2718 kubelet.go:2351] "Pod admission denied" podUID="e5b207ca-f4e8-4b80-93c0-c11566a9e48d" pod="tigera-operator/tigera-operator-747864d56d-pmfn5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:06.286253 kubelet[2718]: I0813 01:47:06.286211 2718 kubelet.go:2351] "Pod admission denied" podUID="2972a352-3a4f-430c-a679-8564abd8f05a" pod="tigera-operator/tigera-operator-747864d56d-bc9jg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:06.386835 kubelet[2718]: I0813 01:47:06.386613 2718 kubelet.go:2351] "Pod admission denied" podUID="2760b003-4cba-4e6f-8ef7-dd5a372284c6" pod="tigera-operator/tigera-operator-747864d56d-cff6r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:06.433436 kubelet[2718]: I0813 01:47:06.433397 2718 kubelet.go:2351] "Pod admission denied" podUID="90df2352-687b-4c9e-8f6e-c5d3562c197f" pod="tigera-operator/tigera-operator-747864d56d-cjp6g" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:06.535437 kubelet[2718]: I0813 01:47:06.535401 2718 kubelet.go:2351] "Pod admission denied" podUID="ca8b1d4a-5d31-41ba-8e81-4afa7e9b70b8" pod="tigera-operator/tigera-operator-747864d56d-wwxxg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:06.637300 kubelet[2718]: I0813 01:47:06.637068 2718 kubelet.go:2351] "Pod admission denied" podUID="6538c3b5-f9a1-4d6f-916d-d075b9e7be70" pod="tigera-operator/tigera-operator-747864d56d-ttp6t" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:06.737345 kubelet[2718]: I0813 01:47:06.737298 2718 kubelet.go:2351] "Pod admission denied" podUID="9ed6b70c-849d-428b-a5eb-254fc3a8346f" pod="tigera-operator/tigera-operator-747864d56d-q7b9w" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:06.838226 kubelet[2718]: I0813 01:47:06.838180 2718 kubelet.go:2351] "Pod admission denied" podUID="6cbed7dc-eda2-4704-aa40-189071380f3e" pod="tigera-operator/tigera-operator-747864d56d-j4s5d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:06.938955 kubelet[2718]: I0813 01:47:06.938796 2718 kubelet.go:2351] "Pod admission denied" podUID="213cb441-15ec-4a88-98ab-09a44c64107e" pod="tigera-operator/tigera-operator-747864d56d-w868v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:07.139509 kubelet[2718]: I0813 01:47:07.139447 2718 kubelet.go:2351] "Pod admission denied" podUID="9eb6c2af-6041-4a1e-a791-65320d8d884c" pod="tigera-operator/tigera-operator-747864d56d-g65ss" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:07.236604 kubelet[2718]: I0813 01:47:07.236564 2718 kubelet.go:2351] "Pod admission denied" podUID="941247f6-582e-4cf5-b71a-feb6fce4a40a" pod="tigera-operator/tigera-operator-747864d56d-96vj4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:07.337936 kubelet[2718]: I0813 01:47:07.337882 2718 kubelet.go:2351] "Pod admission denied" podUID="9ea1e2bb-7dff-4ea4-9d81-3fdb0cda052f" pod="tigera-operator/tigera-operator-747864d56d-jbbb8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:07.543113 kubelet[2718]: I0813 01:47:07.542875 2718 kubelet.go:2351] "Pod admission denied" podUID="e984a449-9c38-4184-ab23-027ecfc0145a" pod="tigera-operator/tigera-operator-747864d56d-56gdx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:07.638870 kubelet[2718]: I0813 01:47:07.638786 2718 kubelet.go:2351] "Pod admission denied" podUID="c3b0eca3-f37d-4e90-8add-5e3b380a2d66" pod="tigera-operator/tigera-operator-747864d56d-pzkc8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:07.743877 kubelet[2718]: I0813 01:47:07.743801 2718 kubelet.go:2351] "Pod admission denied" podUID="ba91cdd6-1d8c-41d2-a957-203c69330bfe" pod="tigera-operator/tigera-operator-747864d56d-xcx7q" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:07.780407 containerd[1559]: time="2025-08-13T01:47:07.780359620Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbqt2,Uid:9674627f-b072-4139-b18d-fdf07891e1e2,Namespace:calico-system,Attempt:0,}" Aug 13 01:47:07.837810 containerd[1559]: time="2025-08-13T01:47:07.833483060Z" level=error msg="Failed to destroy network for sandbox \"57b14074fa1cf4949201800c0f72388e286ba37eb8aa8652d4e83375e744261c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:07.837217 systemd[1]: run-netns-cni\x2ddbc26714\x2d13d5\x2daf0f\x2deb4e\x2d3bde9113ac9a.mount: Deactivated successfully. 
Aug 13 01:47:07.839633 containerd[1559]: time="2025-08-13T01:47:07.839581367Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbqt2,Uid:9674627f-b072-4139-b18d-fdf07891e1e2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"57b14074fa1cf4949201800c0f72388e286ba37eb8aa8652d4e83375e744261c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:07.840383 kubelet[2718]: E0813 01:47:07.840288 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57b14074fa1cf4949201800c0f72388e286ba37eb8aa8652d4e83375e744261c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:07.840383 kubelet[2718]: E0813 01:47:07.840343 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57b14074fa1cf4949201800c0f72388e286ba37eb8aa8652d4e83375e744261c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:47:07.840383 kubelet[2718]: E0813 01:47:07.840363 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"57b14074fa1cf4949201800c0f72388e286ba37eb8aa8652d4e83375e744261c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:47:07.840494 kubelet[2718]: E0813 01:47:07.840396 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dbqt2_calico-system(9674627f-b072-4139-b18d-fdf07891e1e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dbqt2_calico-system(9674627f-b072-4139-b18d-fdf07891e1e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"57b14074fa1cf4949201800c0f72388e286ba37eb8aa8652d4e83375e744261c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dbqt2" podUID="9674627f-b072-4139-b18d-fdf07891e1e2" Aug 13 01:47:07.848795 kubelet[2718]: I0813 01:47:07.848763 2718 kubelet.go:2351] "Pod admission denied" podUID="deeea3af-aad9-41f9-9d33-b2d10b1a1eda" pod="tigera-operator/tigera-operator-747864d56d-ddcgd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:07.937228 kubelet[2718]: I0813 01:47:07.937182 2718 kubelet.go:2351] "Pod admission denied" podUID="264100bf-ce4f-4a85-81af-ea27bda9f59f" pod="tigera-operator/tigera-operator-747864d56d-xc2wz" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:08.139391 kubelet[2718]: I0813 01:47:08.138674 2718 kubelet.go:2351] "Pod admission denied" podUID="eedde45d-b5f9-406c-beec-75ac2aca1185" pod="tigera-operator/tigera-operator-747864d56d-2sh6r" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:08.236415 kubelet[2718]: I0813 01:47:08.236371 2718 kubelet.go:2351] "Pod admission denied" podUID="37b036a1-1c35-437d-bf37-5e1a971059d8" pod="tigera-operator/tigera-operator-747864d56d-l6r6p" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:08.338226 kubelet[2718]: I0813 01:47:08.338176 2718 kubelet.go:2351] "Pod admission denied" podUID="43a4a9de-e0e0-40f4-9235-c1a3b6594941" pod="tigera-operator/tigera-operator-747864d56d-sbhvz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:08.405025 kubelet[2718]: I0813 01:47:08.404629 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:47:08.405025 kubelet[2718]: I0813 01:47:08.404672 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:47:08.406316 kubelet[2718]: I0813 01:47:08.406255 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:47:08.416393 kubelet[2718]: I0813 01:47:08.416370 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:47:08.416470 kubelet[2718]: I0813 01:47:08.416440 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-7f9448c8f5-ck2sf","kube-system/coredns-668d6bf9bc-dfjz8","kube-system/coredns-668d6bf9bc-j47vf","calico-system/calico-node-qgskr","calico-system/csi-node-driver-dbqt2","calico-system/calico-typha-b5b9867b4-p6jwz","kube-system/kube-controller-manager-172-232-7-133","kube-system/kube-proxy-fw2dv","kube-system/kube-apiserver-172-232-7-133","kube-system/kube-scheduler-172-232-7-133"] Aug 13 01:47:08.416470 kubelet[2718]: E0813 01:47:08.416464 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:47:08.416558 kubelet[2718]: E0813 01:47:08.416474 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:47:08.416558 kubelet[2718]: E0813 01:47:08.416481 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:47:08.416558 kubelet[2718]: E0813 01:47:08.416487 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-qgskr" Aug 13 01:47:08.416558 kubelet[2718]: E0813 01:47:08.416494 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:47:08.416558 kubelet[2718]: E0813 01:47:08.416504 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-b5b9867b4-p6jwz" Aug 13 01:47:08.416558 kubelet[2718]: E0813 01:47:08.416512 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-133" Aug 13 01:47:08.416558 kubelet[2718]: E0813 01:47:08.416520 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-fw2dv" Aug 13 01:47:08.416558 kubelet[2718]: E0813 01:47:08.416529 2718 eviction_manager.go:609] "Eviction 
manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-133" Aug 13 01:47:08.416558 kubelet[2718]: E0813 01:47:08.416536 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-133" Aug 13 01:47:08.416558 kubelet[2718]: I0813 01:47:08.416544 2718 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:47:08.445515 kubelet[2718]: I0813 01:47:08.444995 2718 kubelet.go:2351] "Pod admission denied" podUID="8ef53b43-998d-40a8-a3a6-ba325560bd8e" pod="tigera-operator/tigera-operator-747864d56d-nl82d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:08.537156 kubelet[2718]: I0813 01:47:08.536813 2718 kubelet.go:2351] "Pod admission denied" podUID="1257c49d-81b7-410a-aab3-631e53baea02" pod="tigera-operator/tigera-operator-747864d56d-kq4ck" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:08.645368 kubelet[2718]: I0813 01:47:08.645280 2718 kubelet.go:2351] "Pod admission denied" podUID="e9bc5fcd-0dff-4505-9d0f-bf262ddaceb5" pod="tigera-operator/tigera-operator-747864d56d-s8fcb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:08.741037 kubelet[2718]: I0813 01:47:08.740964 2718 kubelet.go:2351] "Pod admission denied" podUID="946322b6-dd7b-4edb-bec3-be2fd5254564" pod="tigera-operator/tigera-operator-747864d56d-9k67z" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:08.780483 kubelet[2718]: E0813 01:47:08.780414 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1940071012: write /var/lib/containerd/tmpmounts/containerd-mount1940071012/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-qgskr" podUID="9adfb11c-9977-45e9-b78f-00f4995e46c5" Aug 13 01:47:08.942732 kubelet[2718]: I0813 01:47:08.940630 2718 kubelet.go:2351] "Pod admission denied" podUID="fe8bae61-191b-44b1-920a-f279fee867f1" pod="tigera-operator/tigera-operator-747864d56d-q7fpc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:09.038974 kubelet[2718]: I0813 01:47:09.038833 2718 kubelet.go:2351] "Pod admission denied" podUID="c10cee58-d2d0-41c8-90bd-715ecb2681d9" pod="tigera-operator/tigera-operator-747864d56d-k9xs6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:09.141055 kubelet[2718]: I0813 01:47:09.141012 2718 kubelet.go:2351] "Pod admission denied" podUID="7eaa010d-d473-4d8e-aacb-e727f2b1bcd8" pod="tigera-operator/tigera-operator-747864d56d-mvqng" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:09.238439 kubelet[2718]: I0813 01:47:09.238179 2718 kubelet.go:2351] "Pod admission denied" podUID="88afd190-e13d-47e0-b6e5-3420e2d338ae" pod="tigera-operator/tigera-operator-747864d56d-vxbjf" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:09.340243 kubelet[2718]: I0813 01:47:09.340122 2718 kubelet.go:2351] "Pod admission denied" podUID="87439921-2485-44b5-bb60-6d72635c8680" pod="tigera-operator/tigera-operator-747864d56d-xwgkf" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:09.440674 kubelet[2718]: I0813 01:47:09.440626 2718 kubelet.go:2351] "Pod admission denied" podUID="48a1a3ac-c698-4b1d-86a4-b6c989cc4976" pod="tigera-operator/tigera-operator-747864d56d-tpbws" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:09.541418 kubelet[2718]: I0813 01:47:09.541360 2718 kubelet.go:2351] "Pod admission denied" podUID="fbdec751-cfcb-4367-87e0-13abb6fd06d3" pod="tigera-operator/tigera-operator-747864d56d-74spk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:09.640262 kubelet[2718]: I0813 01:47:09.639690 2718 kubelet.go:2351] "Pod admission denied" podUID="18e79602-83c8-43a5-95f9-163ab9a17a2a" pod="tigera-operator/tigera-operator-747864d56d-75x9m" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:09.690706 kubelet[2718]: I0813 01:47:09.690647 2718 kubelet.go:2351] "Pod admission denied" podUID="08b5556d-02e0-4ca1-bbc6-f246ff2c2e28" pod="tigera-operator/tigera-operator-747864d56d-kbfzs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:09.795367 kubelet[2718]: I0813 01:47:09.794757 2718 kubelet.go:2351] "Pod admission denied" podUID="820ae342-82d5-4a69-9651-5196dd6142aa" pod="tigera-operator/tigera-operator-747864d56d-g7s9b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:09.990533 kubelet[2718]: I0813 01:47:09.990480 2718 kubelet.go:2351] "Pod admission denied" podUID="074b4e45-ef13-4890-b079-77fa93689302" pod="tigera-operator/tigera-operator-747864d56d-w97pl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:10.089826 kubelet[2718]: I0813 01:47:10.089330 2718 kubelet.go:2351] "Pod admission denied" podUID="cd5039ad-0562-46ac-bb00-7bf169a01a32" pod="tigera-operator/tigera-operator-747864d56d-69slh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:10.194629 kubelet[2718]: I0813 01:47:10.194555 2718 kubelet.go:2351] "Pod admission denied" podUID="fceb1051-34cd-4312-9466-c915489c4bec" pod="tigera-operator/tigera-operator-747864d56d-5m6ss" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:10.290677 kubelet[2718]: I0813 01:47:10.290556 2718 kubelet.go:2351] "Pod admission denied" podUID="e3e99a17-e3a7-446a-983b-75e458bff19f" pod="tigera-operator/tigera-operator-747864d56d-znvrz" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:10.397700 kubelet[2718]: I0813 01:47:10.397618 2718 kubelet.go:2351] "Pod admission denied" podUID="4c78a844-2611-4d50-8a8c-f171d05bbb2f" pod="tigera-operator/tigera-operator-747864d56d-8642n" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:10.488980 kubelet[2718]: I0813 01:47:10.488911 2718 kubelet.go:2351] "Pod admission denied" podUID="a4c66cc4-6c12-4fcd-8369-3b8366351018" pod="tigera-operator/tigera-operator-747864d56d-bkznc" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:10.592618 kubelet[2718]: I0813 01:47:10.592076 2718 kubelet.go:2351] "Pod admission denied" podUID="4c949467-9c87-4aaa-8da4-5110ae44dd3a" pod="tigera-operator/tigera-operator-747864d56d-7486v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:10.800743 kubelet[2718]: I0813 01:47:10.800674 2718 kubelet.go:2351] "Pod admission denied" podUID="95fbfc13-c6a7-4993-9273-854d1ab1f8f7" pod="tigera-operator/tigera-operator-747864d56d-chnvq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:10.892881 kubelet[2718]: I0813 01:47:10.892384 2718 kubelet.go:2351] "Pod admission denied" podUID="b26716ba-675c-43dd-a676-b587dbf8c133" pod="tigera-operator/tigera-operator-747864d56d-88rkm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:10.988282 kubelet[2718]: I0813 01:47:10.988238 2718 kubelet.go:2351] "Pod admission denied" podUID="edc96b7d-a014-465a-bdf6-8f3b94e6874e" pod="tigera-operator/tigera-operator-747864d56d-gq4vw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:11.196762 kubelet[2718]: I0813 01:47:11.196204 2718 kubelet.go:2351] "Pod admission denied" podUID="72ab4da4-8a55-450b-9e96-043f3ef60147" pod="tigera-operator/tigera-operator-747864d56d-xl6ff" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:11.291643 kubelet[2718]: I0813 01:47:11.291595 2718 kubelet.go:2351] "Pod admission denied" podUID="20c8878b-043c-42df-aaba-86874cd39756" pod="tigera-operator/tigera-operator-747864d56d-xx8d8" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:11.390789 kubelet[2718]: I0813 01:47:11.390735 2718 kubelet.go:2351] "Pod admission denied" podUID="a07f571d-4be6-4691-8027-f5c6d6cb6ed5" pod="tigera-operator/tigera-operator-747864d56d-dvdvb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:11.488467 kubelet[2718]: I0813 01:47:11.488423 2718 kubelet.go:2351] "Pod admission denied" podUID="28ba88be-ead2-4dd3-a970-d0f2741e4e99" pod="tigera-operator/tigera-operator-747864d56d-hchnl" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:11.587072 kubelet[2718]: I0813 01:47:11.587034 2718 kubelet.go:2351] "Pod admission denied" podUID="979c405d-0962-41b3-ab7d-587671d1e49a" pod="tigera-operator/tigera-operator-747864d56d-tts6l" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:11.781614 kubelet[2718]: E0813 01:47:11.781339 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:47:11.783354 containerd[1559]: time="2025-08-13T01:47:11.783065115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j47vf,Uid:0ba3c042-02d2-446d-bb82-0965919f2962,Namespace:kube-system,Attempt:0,}" Aug 13 01:47:11.801568 kubelet[2718]: I0813 01:47:11.801467 2718 kubelet.go:2351] "Pod admission denied" podUID="8c552b5a-d0e8-4abd-976a-3817a4cc2f8a" pod="tigera-operator/tigera-operator-747864d56d-72z7z" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:11.855690 containerd[1559]: time="2025-08-13T01:47:11.855645526Z" level=error msg="Failed to destroy network for sandbox \"f2658cb508f71c5efc46e6fd37ec0119e19ad8498439197c4449fc878148d894\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:11.857467 containerd[1559]: time="2025-08-13T01:47:11.856871528Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j47vf,Uid:0ba3c042-02d2-446d-bb82-0965919f2962,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2658cb508f71c5efc46e6fd37ec0119e19ad8498439197c4449fc878148d894\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:11.857537 kubelet[2718]: E0813 01:47:11.857045 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2658cb508f71c5efc46e6fd37ec0119e19ad8498439197c4449fc878148d894\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:11.857537 kubelet[2718]: E0813 01:47:11.857095 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2658cb508f71c5efc46e6fd37ec0119e19ad8498439197c4449fc878148d894\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:47:11.857537 kubelet[2718]: E0813 01:47:11.857118 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f2658cb508f71c5efc46e6fd37ec0119e19ad8498439197c4449fc878148d894\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:47:11.857537 kubelet[2718]: E0813 01:47:11.857151 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-j47vf_kube-system(0ba3c042-02d2-446d-bb82-0965919f2962)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-j47vf_kube-system(0ba3c042-02d2-446d-bb82-0965919f2962)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f2658cb508f71c5efc46e6fd37ec0119e19ad8498439197c4449fc878148d894\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-j47vf" podUID="0ba3c042-02d2-446d-bb82-0965919f2962" Aug 13 01:47:11.858268 systemd[1]: run-netns-cni\x2d76d89f4c\x2d8fcf\x2d9692\x2d198b\x2dbedc761e492f.mount: Deactivated successfully. 
Aug 13 01:47:11.893255 kubelet[2718]: I0813 01:47:11.892845 2718 kubelet.go:2351] "Pod admission denied" podUID="0e01782e-e32b-4553-9331-b1a4b2f92b0a" pod="tigera-operator/tigera-operator-747864d56d-8mgz5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:11.992253 kubelet[2718]: I0813 01:47:11.992187 2718 kubelet.go:2351] "Pod admission denied" podUID="1e5db989-1b93-4963-94c7-34ef7d1256ee" pod="tigera-operator/tigera-operator-747864d56d-v2tf4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:12.190316 kubelet[2718]: I0813 01:47:12.190149 2718 kubelet.go:2351] "Pod admission denied" podUID="311e745f-8bbb-4572-b0c5-c073147fe511" pod="tigera-operator/tigera-operator-747864d56d-dbrxq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:12.288376 kubelet[2718]: I0813 01:47:12.288325 2718 kubelet.go:2351] "Pod admission denied" podUID="369c79e3-7038-4da9-a3f8-c147292b7385" pod="tigera-operator/tigera-operator-747864d56d-gjrxn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:12.388690 kubelet[2718]: I0813 01:47:12.388645 2718 kubelet.go:2351] "Pod admission denied" podUID="e892085e-a53c-40dc-905e-a1203ae53573" pod="tigera-operator/tigera-operator-747864d56d-62plm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:12.491164 kubelet[2718]: I0813 01:47:12.491100 2718 kubelet.go:2351] "Pod admission denied" podUID="9ef3343a-991a-4b14-b63f-5c675b252dac" pod="tigera-operator/tigera-operator-747864d56d-kktd6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:12.586481 kubelet[2718]: I0813 01:47:12.586445 2718 kubelet.go:2351] "Pod admission denied" podUID="a3e9ce95-2ee5-46db-ba70-a7bbbe056a22" pod="tigera-operator/tigera-operator-747864d56d-8zbcq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:12.686432 kubelet[2718]: I0813 01:47:12.686377 2718 kubelet.go:2351] "Pod admission denied" podUID="de6f8c8f-ace4-4607-8d9c-1ee324a68d84" pod="tigera-operator/tigera-operator-747864d56d-lfpch" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:12.789367 kubelet[2718]: I0813 01:47:12.789257 2718 kubelet.go:2351] "Pod admission denied" podUID="229ebac7-49d2-4d41-bcdb-0a630f65f9f8" pod="tigera-operator/tigera-operator-747864d56d-kngn2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:12.885631 kubelet[2718]: I0813 01:47:12.885589 2718 kubelet.go:2351] "Pod admission denied" podUID="8c5736af-e578-43a2-b5d3-aba8ea9ffe7d" pod="tigera-operator/tigera-operator-747864d56d-dgtv6" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:12.984872 kubelet[2718]: I0813 01:47:12.984796 2718 kubelet.go:2351] "Pod admission denied" podUID="9c784fb2-92b0-4401-abcf-293b70503cb6" pod="tigera-operator/tigera-operator-747864d56d-2ldh4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:13.097159 kubelet[2718]: I0813 01:47:13.096058 2718 kubelet.go:2351] "Pod admission denied" podUID="1a18129c-4239-4d33-8cd7-2dc15cba4af3" pod="tigera-operator/tigera-operator-747864d56d-xmvlx" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:13.199878 kubelet[2718]: I0813 01:47:13.198341 2718 kubelet.go:2351] "Pod admission denied" podUID="c95c29b5-8345-472f-b129-94600fea3754" pod="tigera-operator/tigera-operator-747864d56d-gmgjw" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:13.293301 kubelet[2718]: I0813 01:47:13.293250 2718 kubelet.go:2351] "Pod admission denied" podUID="5189fa66-5e9c-4d0f-9c84-f9ddb5ec7adb" pod="tigera-operator/tigera-operator-747864d56d-2g5xt" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:13.388235 kubelet[2718]: I0813 01:47:13.388001 2718 kubelet.go:2351] "Pod admission denied" podUID="559c4f81-acf3-4a27-b49b-9af736ff9470" pod="tigera-operator/tigera-operator-747864d56d-x6pdb" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:13.489870 kubelet[2718]: I0813 01:47:13.489809 2718 kubelet.go:2351] "Pod admission denied" podUID="00e07db9-f9ef-4453-8962-8fc7e8ef4aaf" pod="tigera-operator/tigera-operator-747864d56d-8sjhv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:13.538344 kubelet[2718]: I0813 01:47:13.538302 2718 kubelet.go:2351] "Pod admission denied" podUID="eb4dce1d-0746-4563-9bb1-050e4998a8a3" pod="tigera-operator/tigera-operator-747864d56d-7d4cd" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:13.637002 kubelet[2718]: I0813 01:47:13.636964 2718 kubelet.go:2351] "Pod admission denied" podUID="a91a4d89-d10f-4bc0-8f64-bcac4460a3bb" pod="tigera-operator/tigera-operator-747864d56d-g6lgh" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:13.742875 kubelet[2718]: I0813 01:47:13.742601 2718 kubelet.go:2351] "Pod admission denied" podUID="8ff0b9ca-da0c-43a2-be5b-f780af4b3e5e" pod="tigera-operator/tigera-operator-747864d56d-wg9rk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:13.779125 kubelet[2718]: E0813 01:47:13.779081 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:47:13.780246 containerd[1559]: time="2025-08-13T01:47:13.780196497Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dfjz8,Uid:dafdbb28-0754-4303-98bd-08c77ee94f1a,Namespace:kube-system,Attempt:0,}" Aug 13 01:47:13.837069 containerd[1559]: time="2025-08-13T01:47:13.837003945Z" level=error msg="Failed to destroy network for sandbox \"832d51ede97b62e1974f49ef4fbb7e69fc676f1a8662e5b3c8b941e592a03d92\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:13.840378 containerd[1559]: time="2025-08-13T01:47:13.840245575Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dfjz8,Uid:dafdbb28-0754-4303-98bd-08c77ee94f1a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"832d51ede97b62e1974f49ef4fbb7e69fc676f1a8662e5b3c8b941e592a03d92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:13.841201 systemd[1]: run-netns-cni\x2d22b84e5a\x2dab53\x2dcd65\x2db693\x2da728a46c7806.mount: Deactivated successfully. 
Aug 13 01:47:13.842649 kubelet[2718]: E0813 01:47:13.842584 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"832d51ede97b62e1974f49ef4fbb7e69fc676f1a8662e5b3c8b941e592a03d92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:13.842704 kubelet[2718]: E0813 01:47:13.842668 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"832d51ede97b62e1974f49ef4fbb7e69fc676f1a8662e5b3c8b941e592a03d92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:47:13.842730 kubelet[2718]: E0813 01:47:13.842713 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"832d51ede97b62e1974f49ef4fbb7e69fc676f1a8662e5b3c8b941e592a03d92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:47:13.842824 kubelet[2718]: E0813 01:47:13.842749 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-dfjz8_kube-system(dafdbb28-0754-4303-98bd-08c77ee94f1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-dfjz8_kube-system(dafdbb28-0754-4303-98bd-08c77ee94f1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"832d51ede97b62e1974f49ef4fbb7e69fc676f1a8662e5b3c8b941e592a03d92\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dfjz8" podUID="dafdbb28-0754-4303-98bd-08c77ee94f1a" Aug 13 01:47:13.850036 kubelet[2718]: I0813 01:47:13.850004 2718 kubelet.go:2351] "Pod admission denied" podUID="f5d582ec-0ae3-4e47-bd95-ba83b62be594" pod="tigera-operator/tigera-operator-747864d56d-cw8d5" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:14.041266 kubelet[2718]: I0813 01:47:14.041179 2718 kubelet.go:2351] "Pod admission denied" podUID="d6f1ecde-be8c-46d8-b3dd-96d2297c9f9e" pod="tigera-operator/tigera-operator-747864d56d-cgnvm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:14.138545 kubelet[2718]: I0813 01:47:14.138513 2718 kubelet.go:2351] "Pod admission denied" podUID="2c5e3805-9cc6-4b58-b167-8c1d2021c414" pod="tigera-operator/tigera-operator-747864d56d-7wfx4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:14.189155 kubelet[2718]: I0813 01:47:14.189097 2718 kubelet.go:2351] "Pod admission denied" podUID="96506a97-7877-4823-9431-c43b21c1f137" pod="tigera-operator/tigera-operator-747864d56d-skr7s" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:14.287386 kubelet[2718]: I0813 01:47:14.287350 2718 kubelet.go:2351] "Pod admission denied" podUID="22420737-0f29-4def-93f7-5b4e82fa38c1" pod="tigera-operator/tigera-operator-747864d56d-2kscn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:14.491754 kubelet[2718]: I0813 01:47:14.491680 2718 kubelet.go:2351] "Pod admission denied" podUID="35eded64-16d8-40ea-8ef0-ceee6f9b93e0" pod="tigera-operator/tigera-operator-747864d56d-65lb2" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:14.600877 kubelet[2718]: I0813 01:47:14.600664 2718 kubelet.go:2351] "Pod admission denied" podUID="46d1fa0a-b29d-4c0e-bfdb-d6c184074a4a" pod="tigera-operator/tigera-operator-747864d56d-n6ptm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:14.691726 kubelet[2718]: I0813 01:47:14.691660 2718 kubelet.go:2351] "Pod admission denied" podUID="90b32691-a199-4a68-a0fe-771bee615278" pod="tigera-operator/tigera-operator-747864d56d-8prtg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:14.798079 kubelet[2718]: I0813 01:47:14.797956 2718 kubelet.go:2351] "Pod admission denied" podUID="4d7ab367-024c-4a66-972d-9d8cc1bbde21" pod="tigera-operator/tigera-operator-747864d56d-8jlwx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:14.887564 kubelet[2718]: I0813 01:47:14.887510 2718 kubelet.go:2351] "Pod admission denied" podUID="086909da-e5bd-4c3e-8f89-a006c2f619ad" pod="tigera-operator/tigera-operator-747864d56d-bh6xn" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:14.987684 kubelet[2718]: I0813 01:47:14.987641 2718 kubelet.go:2351] "Pod admission denied" podUID="f763d394-edc1-4c08-b1d0-5db86a05e576" pod="tigera-operator/tigera-operator-747864d56d-v2kxq" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:15.091383 kubelet[2718]: I0813 01:47:15.091032 2718 kubelet.go:2351] "Pod admission denied" podUID="81cc6448-64fd-4fd7-9df2-4ed51f49e53d" pod="tigera-operator/tigera-operator-747864d56d-55x74" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:15.185955 kubelet[2718]: I0813 01:47:15.185904 2718 kubelet.go:2351] "Pod admission denied" podUID="db1a1c49-6586-429e-a269-0dc7e153ad3f" pod="tigera-operator/tigera-operator-747864d56d-rfh49" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:15.294787 kubelet[2718]: I0813 01:47:15.294196 2718 kubelet.go:2351] "Pod admission denied" podUID="7b6837c8-ed1b-463c-beee-f4395f2e8766" pod="tigera-operator/tigera-operator-747864d56d-prb7b" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:15.390754 kubelet[2718]: I0813 01:47:15.390201 2718 kubelet.go:2351] "Pod admission denied" podUID="faba6c7b-4cfe-4db7-8228-c93d5c9db6d6" pod="tigera-operator/tigera-operator-747864d56d-6bxqg" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:15.493599 kubelet[2718]: I0813 01:47:15.493570 2718 kubelet.go:2351] "Pod admission denied" podUID="6dee85b1-3baf-4e8c-9b0b-3211afb2374e" pod="tigera-operator/tigera-operator-747864d56d-82qbs" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:15.587201 kubelet[2718]: I0813 01:47:15.587138 2718 kubelet.go:2351] "Pod admission denied" podUID="3f1ab19f-c193-42bb-8eff-12b502513754" pod="tigera-operator/tigera-operator-747864d56d-ddt6c" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:15.705441 kubelet[2718]: I0813 01:47:15.705193 2718 kubelet.go:2351] "Pod admission denied" podUID="fa7d7ef2-13e7-4961-837c-815c449e9526" pod="tigera-operator/tigera-operator-747864d56d-lrvnk" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:15.790501 kubelet[2718]: I0813 01:47:15.790462 2718 kubelet.go:2351] "Pod admission denied" podUID="65f7a04b-8966-4d29-ac24-483bcce02bd7" pod="tigera-operator/tigera-operator-747864d56d-qfl47" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:15.890638 kubelet[2718]: I0813 01:47:15.890580 2718 kubelet.go:2351] "Pod admission denied" podUID="c2b303ab-6846-4c4e-8198-89260fcb3b88" pod="tigera-operator/tigera-operator-747864d56d-wkg5v" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:15.989182 kubelet[2718]: I0813 01:47:15.989131 2718 kubelet.go:2351] "Pod admission denied" podUID="2e4c9d7f-7efa-4f6e-aa43-7878bb4c63ae" pod="tigera-operator/tigera-operator-747864d56d-4w7bc" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:16.044250 kubelet[2718]: I0813 01:47:16.044188 2718 kubelet.go:2351] "Pod admission denied" podUID="2097be2b-0b64-4858-8e2b-14dde3197630" pod="tigera-operator/tigera-operator-747864d56d-t9vgs" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:16.140006 kubelet[2718]: I0813 01:47:16.139949 2718 kubelet.go:2351] "Pod admission denied" podUID="d9158004-1a3b-4d98-b1ee-9f2b1f913d14" pod="tigera-operator/tigera-operator-747864d56d-z66xv" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:16.343838 kubelet[2718]: I0813 01:47:16.343702 2718 kubelet.go:2351] "Pod admission denied" podUID="d4ba1099-ae44-4639-a8c5-7c60d2726a9d" pod="tigera-operator/tigera-operator-747864d56d-d7fr4" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:16.443492 kubelet[2718]: I0813 01:47:16.443432 2718 kubelet.go:2351] "Pod admission denied" podUID="1c0b8fdd-2445-4b7a-83b7-d741437bad40" pod="tigera-operator/tigera-operator-747864d56d-r4sdx" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:16.541727 kubelet[2718]: I0813 01:47:16.541652 2718 kubelet.go:2351] "Pod admission denied" podUID="9b1440e0-a947-4171-abe6-342b3466366e" pod="tigera-operator/tigera-operator-747864d56d-fhbxr" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:16.647670 kubelet[2718]: I0813 01:47:16.647266 2718 kubelet.go:2351] "Pod admission denied" podUID="2b39dd69-ad1b-4556-a0f8-da987af40a88" pod="tigera-operator/tigera-operator-747864d56d-4mm2d" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:16.741672 kubelet[2718]: I0813 01:47:16.741624 2718 kubelet.go:2351] "Pod admission denied" podUID="2607e339-2586-45a7-866d-dc11cbfc8392" pod="tigera-operator/tigera-operator-747864d56d-kw7fp" reason="Evicted" message="The node had condition: [DiskPressure]. 
" Aug 13 01:47:16.846879 kubelet[2718]: I0813 01:47:16.846459 2718 kubelet.go:2351] "Pod admission denied" podUID="3f36f10c-4046-4f7c-a8dd-8199a9f6d7d0" pod="tigera-operator/tigera-operator-747864d56d-8g7lm" reason="Evicted" message="The node had condition: [DiskPressure]. " Aug 13 01:47:17.243633 systemd[1]: Started sshd@7-172.232.7.133:22-147.75.109.163:34662.service - OpenSSH per-connection server daemon (147.75.109.163:34662). Aug 13 01:47:17.585364 sshd[4677]: Accepted publickey for core from 147.75.109.163 port 34662 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:47:17.586880 sshd-session[4677]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:47:17.592914 systemd-logind[1539]: New session 8 of user core. Aug 13 01:47:17.599990 systemd[1]: Started session-8.scope - Session 8 of User core. Aug 13 01:47:17.901272 sshd[4679]: Connection closed by 147.75.109.163 port 34662 Aug 13 01:47:17.902922 sshd-session[4677]: pam_unix(sshd:session): session closed for user core Aug 13 01:47:17.906663 systemd-logind[1539]: Session 8 logged out. Waiting for processes to exit. Aug 13 01:47:17.907543 systemd[1]: sshd@7-172.232.7.133:22-147.75.109.163:34662.service: Deactivated successfully. Aug 13 01:47:17.909542 systemd[1]: session-8.scope: Deactivated successfully. Aug 13 01:47:17.911495 systemd-logind[1539]: Removed session 8. Aug 13 01:47:18.431011 kubelet[2718]: I0813 01:47:18.430982 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:47:18.431011 kubelet[2718]: I0813 01:47:18.431009 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:47:18.434235 kubelet[2718]: I0813 01:47:18.434223 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:47:18.449427 kubelet[2718]: I0813 01:47:18.449405 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:47:18.449470 kubelet[2718]: I0813 01:47:18.449462 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-7f9448c8f5-ck2sf","kube-system/coredns-668d6bf9bc-dfjz8","kube-system/coredns-668d6bf9bc-j47vf","calico-system/calico-node-qgskr","calico-system/csi-node-driver-dbqt2","calico-system/calico-typha-b5b9867b4-p6jwz","kube-system/kube-controller-manager-172-232-7-133","kube-system/kube-proxy-fw2dv","kube-system/kube-apiserver-172-232-7-133","kube-system/kube-scheduler-172-232-7-133"] Aug 13 01:47:18.449524 kubelet[2718]: E0813 01:47:18.449484 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:47:18.449524 kubelet[2718]: E0813 01:47:18.449493 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:47:18.449524 kubelet[2718]: E0813 01:47:18.449499 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:47:18.449524 kubelet[2718]: E0813 01:47:18.449505 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-qgskr" Aug 13 01:47:18.449524 kubelet[2718]: E0813 01:47:18.449511 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:47:18.449524 kubelet[2718]: E0813 01:47:18.449519 2718 
eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-b5b9867b4-p6jwz" Aug 13 01:47:18.449524 kubelet[2718]: E0813 01:47:18.449527 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-133" Aug 13 01:47:18.449671 kubelet[2718]: E0813 01:47:18.449536 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-fw2dv" Aug 13 01:47:18.449671 kubelet[2718]: E0813 01:47:18.449542 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-133" Aug 13 01:47:18.449671 kubelet[2718]: E0813 01:47:18.449551 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-133" Aug 13 01:47:18.449671 kubelet[2718]: I0813 01:47:18.449559 2718 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:47:19.780834 containerd[1559]: time="2025-08-13T01:47:19.780790475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f9448c8f5-ck2sf,Uid:35b780d0-9cdb-470f-8c65-ede949b6d595,Namespace:calico-system,Attempt:0,}" Aug 13 01:47:19.831921 containerd[1559]: time="2025-08-13T01:47:19.831872637Z" level=error msg="Failed to destroy network for sandbox \"8b689f459777364e3aaa0e6346b4102a121e9f5cf66763ed16e43a39ce1970dd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:19.835201 containerd[1559]: time="2025-08-13T01:47:19.834957021Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f9448c8f5-ck2sf,Uid:35b780d0-9cdb-470f-8c65-ede949b6d595,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b689f459777364e3aaa0e6346b4102a121e9f5cf66763ed16e43a39ce1970dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:19.837654 kubelet[2718]: E0813 01:47:19.835369 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b689f459777364e3aaa0e6346b4102a121e9f5cf66763ed16e43a39ce1970dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:19.837654 kubelet[2718]: E0813 01:47:19.835418 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b689f459777364e3aaa0e6346b4102a121e9f5cf66763ed16e43a39ce1970dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:47:19.837654 kubelet[2718]: E0813 01:47:19.835439 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8b689f459777364e3aaa0e6346b4102a121e9f5cf66763ed16e43a39ce1970dd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file 
or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:47:19.837654 kubelet[2718]: E0813 01:47:19.835483 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f9448c8f5-ck2sf_calico-system(35b780d0-9cdb-470f-8c65-ede949b6d595)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f9448c8f5-ck2sf_calico-system(35b780d0-9cdb-470f-8c65-ede949b6d595)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8b689f459777364e3aaa0e6346b4102a121e9f5cf66763ed16e43a39ce1970dd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" podUID="35b780d0-9cdb-470f-8c65-ede949b6d595" Aug 13 01:47:19.835428 systemd[1]: run-netns-cni\x2dd440aac5\x2d2e90\x2d626a\x2d9083\x2de4a656e57ef3.mount: Deactivated successfully. Aug 13 01:47:20.782056 containerd[1559]: time="2025-08-13T01:47:20.781635258Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 01:47:22.732592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1136792411.mount: Deactivated successfully. Aug 13 01:47:22.734012 containerd[1559]: time="2025-08-13T01:47:22.733910604Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1136792411: write /var/lib/containerd/tmpmounts/containerd-mount1136792411/usr/bin/calico-node: no space left on device" Aug 13 01:47:22.734012 containerd[1559]: time="2025-08-13T01:47:22.733946484Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 01:47:22.734668 kubelet[2718]: E0813 01:47:22.734610 2718 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1136792411: write /var/lib/containerd/tmpmounts/containerd-mount1136792411/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 01:47:22.735234 kubelet[2718]: E0813 01:47:22.734682 2718 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1136792411: write /var/lib/containerd/tmpmounts/containerd-mount1136792411/usr/bin/calico-node: no space left on device" image="ghcr.io/flatcar/calico/node:v3.30.2" Aug 13 01:47:22.735268 kubelet[2718]: E0813 01:47:22.734875 2718 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-node,Image:ghcr.io/flatcar/calico/node:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:WAIT_FOR_DATASTORE,Value:true,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:k8s,operator,bgp,ValueFrom:nil,},EnvVar{Name:CALICO_DISABLE_FILE_LOGGING,Value:false,ValueFrom:nil,},EnvVar{Name:FELIX_DEFAULTENDPOINTTOHOSTACTION,Value:ACCEPT,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHENABLED,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_HEALTHPORT,Value:9099,ValueFrom:nil,},EnvVar{Name:NODENAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:FELIX_TYPHAK8SNAMESPACE,Value:calico-system,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAK8SSERVICENAME,Value:calico-typha,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACAFILE,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACERTFILE,Value:/node-certs/tls.crt,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHAKEYFILE,Value:/node-certs/tls.key,ValueFrom:nil,},EnvVar{Name:NO_DEFAULT_POOLS,Value:true,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSGOLDMANESERVER,Value:goldmane.calico-system.svc:7443,ValueFrom:nil,},EnvVar{Name:FELIX_FLOWLOGSFLUSHINTERVAL,Value:15,ValueFrom:nil,},EnvVar{Name:FELIX_TYPHACN,Value:typha-server,ValueFrom:nil,},EnvVar{Name:CALICO_MANAGE_CNI,Value:true,ValueFrom:nil,},EnvVar{Name:CALICO_NETWORKING_BACKEND,Value:bird,ValueFrom:nil,},EnvVar{Name:IP,Value:autodetect,ValueFrom:nil,},EnvVar{Name:IP_AUTODETECTION_METHOD,Value:first-found,ValueFrom:nil,},EnvVar{Name:IP6,Value:none,ValueFrom:nil,},EnvVar{Name:FELIX_IPV6SUPPORT,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:cni-log-dir,ReadOnly:false,MountPath:/var/log/calico/cni,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cni-net-dir,ReadOnly:false,MountPath:/host/etc/cni/net.d,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:lib-modules,ReadOnly:true,MountPath:/lib/modules,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:node-certs,ReadOnly:true,MountPath:/node-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:policysync,ReadOnly:false,MountPath:/var/run/nodeagent,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-lib-calico,ReadOnly:false,MountPath:/var/lib/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:var-run-calico,ReadOnly:false,MountPath:/var/run/calico,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:xtables-lock,ReadOnly:false,MountPath:/run/xtables.l
ock,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2s4z7,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/liveness,Port:{0 9099 },Host:localhost,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/bin/calico-node -bird-ready -felix-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:&Lifecycle{PostStart:nil,PreStop:&LifecycleHandler{Exec:&ExecAction{Command:[/bin/calico-node -shutdown],},HTTPGet:nil,TCPSocket:nil,Sleep:nil,},},TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-node-qgskr_calico-system(9adfb11c-9977-45e9-b78f-00f4995e46c5): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node:v3.30.2\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1136792411: write /var/lib/containerd/tmpmounts/containerd-mount1136792411/usr/bin/calico-node: no space left on device" logger="UnhandledError" Aug 13 01:47:22.736556 kubelet[2718]: E0813 01:47:22.736507 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1136792411: write /var/lib/containerd/tmpmounts/containerd-mount1136792411/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-qgskr" podUID="9adfb11c-9977-45e9-b78f-00f4995e46c5" Aug 13 01:47:22.785116 kubelet[2718]: E0813 01:47:22.783754 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:47:22.797831 containerd[1559]: time="2025-08-13T01:47:22.797804411Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbqt2,Uid:9674627f-b072-4139-b18d-fdf07891e1e2,Namespace:calico-system,Attempt:0,}" Aug 13 01:47:22.898311 containerd[1559]: time="2025-08-13T01:47:22.898262989Z" level=error msg="Failed to destroy network for sandbox \"89fcafc97c357dd8fea97105a01d301f2db82aeaa3864dd99483e059f9cc2914\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:22.900993 containerd[1559]: time="2025-08-13T01:47:22.900958937Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbqt2,Uid:9674627f-b072-4139-b18d-fdf07891e1e2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"89fcafc97c357dd8fea97105a01d301f2db82aeaa3864dd99483e059f9cc2914\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:22.901060 systemd[1]: run-netns-cni\x2dbf0f00c0\x2d81cc\x2d857a\x2d2bde\x2d9a1ce676fd87.mount: Deactivated successfully. Aug 13 01:47:22.901357 kubelet[2718]: E0813 01:47:22.901327 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89fcafc97c357dd8fea97105a01d301f2db82aeaa3864dd99483e059f9cc2914\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:22.901421 kubelet[2718]: E0813 01:47:22.901371 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89fcafc97c357dd8fea97105a01d301f2db82aeaa3864dd99483e059f9cc2914\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:47:22.901421 kubelet[2718]: E0813 01:47:22.901392 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"89fcafc97c357dd8fea97105a01d301f2db82aeaa3864dd99483e059f9cc2914\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:47:22.901479 kubelet[2718]: E0813 01:47:22.901443 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dbqt2_calico-system(9674627f-b072-4139-b18d-fdf07891e1e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dbqt2_calico-system(9674627f-b072-4139-b18d-fdf07891e1e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"89fcafc97c357dd8fea97105a01d301f2db82aeaa3864dd99483e059f9cc2914\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dbqt2" podUID="9674627f-b072-4139-b18d-fdf07891e1e2" Aug 13 01:47:22.969299 systemd[1]: Started sshd@8-172.232.7.133:22-147.75.109.163:55204.service - OpenSSH per-connection server daemon (147.75.109.163:55204). Aug 13 01:47:23.309877 sshd[4753]: Accepted publickey for core from 147.75.109.163 port 55204 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:47:23.310449 sshd-session[4753]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:47:23.316075 systemd-logind[1539]: New session 9 of user core. 
Aug 13 01:47:23.319985 systemd[1]: Started session-9.scope - Session 9 of User core. Aug 13 01:47:23.615548 sshd[4755]: Connection closed by 147.75.109.163 port 55204 Aug 13 01:47:23.616322 sshd-session[4753]: pam_unix(sshd:session): session closed for user core Aug 13 01:47:23.620172 systemd-logind[1539]: Session 9 logged out. Waiting for processes to exit. Aug 13 01:47:23.620893 systemd[1]: sshd@8-172.232.7.133:22-147.75.109.163:55204.service: Deactivated successfully. Aug 13 01:47:23.623626 systemd[1]: session-9.scope: Deactivated successfully. Aug 13 01:47:23.625932 systemd-logind[1539]: Removed session 9. Aug 13 01:47:25.780412 kubelet[2718]: E0813 01:47:25.780342 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:47:25.782103 containerd[1559]: time="2025-08-13T01:47:25.781807450Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j47vf,Uid:0ba3c042-02d2-446d-bb82-0965919f2962,Namespace:kube-system,Attempt:0,}" Aug 13 01:47:25.836215 containerd[1559]: time="2025-08-13T01:47:25.836151651Z" level=error msg="Failed to destroy network for sandbox \"d52bcf6d4401b32a9aa701e9d1f8031581f1cd083569d6e5d3958263f45212a8\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:25.840580 containerd[1559]: time="2025-08-13T01:47:25.839327976Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j47vf,Uid:0ba3c042-02d2-446d-bb82-0965919f2962,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d52bcf6d4401b32a9aa701e9d1f8031581f1cd083569d6e5d3958263f45212a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:25.840690 kubelet[2718]: E0813 01:47:25.840253 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d52bcf6d4401b32a9aa701e9d1f8031581f1cd083569d6e5d3958263f45212a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:25.840690 kubelet[2718]: E0813 01:47:25.840303 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d52bcf6d4401b32a9aa701e9d1f8031581f1cd083569d6e5d3958263f45212a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:47:25.840690 kubelet[2718]: E0813 01:47:25.840326 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d52bcf6d4401b32a9aa701e9d1f8031581f1cd083569d6e5d3958263f45212a8\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:47:25.840690 kubelet[2718]: E0813 
01:47:25.840365 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-j47vf_kube-system(0ba3c042-02d2-446d-bb82-0965919f2962)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-j47vf_kube-system(0ba3c042-02d2-446d-bb82-0965919f2962)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d52bcf6d4401b32a9aa701e9d1f8031581f1cd083569d6e5d3958263f45212a8\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-j47vf" podUID="0ba3c042-02d2-446d-bb82-0965919f2962" Aug 13 01:47:25.839527 systemd[1]: run-netns-cni\x2d2a95f33b\x2d0f9b\x2d8204\x2d00e8\x2df3000a839c67.mount: Deactivated successfully. Aug 13 01:47:28.461472 kubelet[2718]: I0813 01:47:28.461438 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:47:28.461472 kubelet[2718]: I0813 01:47:28.461475 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:47:28.463451 kubelet[2718]: I0813 01:47:28.463394 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:47:28.464740 kubelet[2718]: I0813 01:47:28.464713 2718 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc" size=57680541 runtimeHandler="" Aug 13 01:47:28.464946 containerd[1559]: time="2025-08-13T01:47:28.464920188Z" level=info msg="RemoveImage \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Aug 13 01:47:28.466053 containerd[1559]: time="2025-08-13T01:47:28.466031553Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd:3.5.16-0\"" Aug 13 01:47:28.466788 containerd[1559]: time="2025-08-13T01:47:28.466767191Z" level=info msg="ImageDelete event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\"" Aug 13 01:47:28.467200 containerd[1559]: time="2025-08-13T01:47:28.467181279Z" level=info msg="RemoveImage \"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\" returns successfully" Aug 13 01:47:28.467272 containerd[1559]: time="2025-08-13T01:47:28.467255818Z" level=info msg="ImageDelete event name:\"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc\"" Aug 13 01:47:28.467376 kubelet[2718]: I0813 01:47:28.467355 2718 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6" size=18562039 runtimeHandler="" Aug 13 01:47:28.467477 containerd[1559]: time="2025-08-13T01:47:28.467459837Z" level=info msg="RemoveImage \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 01:47:28.468104 containerd[1559]: time="2025-08-13T01:47:28.468055174Z" level=info msg="ImageDelete event name:\"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 01:47:28.468526 containerd[1559]: time="2025-08-13T01:47:28.468504112Z" level=info msg="ImageDelete event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\"" Aug 13 01:47:28.468834 containerd[1559]: time="2025-08-13T01:47:28.468814682Z" level=info msg="RemoveImage \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" returns successfully" Aug 13 01:47:28.468919 containerd[1559]: 
time="2025-08-13T01:47:28.468892081Z" level=info msg="ImageDelete event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 01:47:28.475749 kubelet[2718]: I0813 01:47:28.475729 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:47:28.475836 kubelet[2718]: I0813 01:47:28.475818 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-7f9448c8f5-ck2sf","kube-system/coredns-668d6bf9bc-dfjz8","kube-system/coredns-668d6bf9bc-j47vf","calico-system/calico-node-qgskr","calico-system/csi-node-driver-dbqt2","calico-system/calico-typha-b5b9867b4-p6jwz","kube-system/kube-controller-manager-172-232-7-133","kube-system/kube-proxy-fw2dv","kube-system/kube-apiserver-172-232-7-133","kube-system/kube-scheduler-172-232-7-133"] Aug 13 01:47:28.475911 kubelet[2718]: E0813 01:47:28.475845 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:47:28.475911 kubelet[2718]: E0813 01:47:28.475876 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:47:28.475911 kubelet[2718]: E0813 01:47:28.475883 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:47:28.475911 kubelet[2718]: E0813 01:47:28.475890 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-qgskr" Aug 13 01:47:28.475911 kubelet[2718]: E0813 01:47:28.475896 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:47:28.475911 kubelet[2718]: E0813 01:47:28.475905 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-b5b9867b4-p6jwz" Aug 13 01:47:28.475911 kubelet[2718]: E0813 01:47:28.475912 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-133" Aug 13 01:47:28.476050 kubelet[2718]: E0813 01:47:28.475920 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-fw2dv" Aug 13 01:47:28.476050 kubelet[2718]: E0813 01:47:28.475927 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-133" Aug 13 01:47:28.476050 kubelet[2718]: E0813 01:47:28.475934 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-133" Aug 13 01:47:28.476050 kubelet[2718]: I0813 01:47:28.475943 2718 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:47:28.684242 systemd[1]: Started sshd@9-172.232.7.133:22-147.75.109.163:48552.service - OpenSSH per-connection server daemon (147.75.109.163:48552). 
Aug 13 01:47:28.780205 kubelet[2718]: E0813 01:47:28.780098 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:47:28.781245 containerd[1559]: time="2025-08-13T01:47:28.781060909Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dfjz8,Uid:dafdbb28-0754-4303-98bd-08c77ee94f1a,Namespace:kube-system,Attempt:0,}" Aug 13 01:47:28.828997 containerd[1559]: time="2025-08-13T01:47:28.828938633Z" level=error msg="Failed to destroy network for sandbox \"c2a779a35479ee1f522ec6c766cc2ada10a595bc3955b6bd145517c6fca266f7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:28.831087 containerd[1559]: time="2025-08-13T01:47:28.829931459Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dfjz8,Uid:dafdbb28-0754-4303-98bd-08c77ee94f1a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2a779a35479ee1f522ec6c766cc2ada10a595bc3955b6bd145517c6fca266f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:28.831185 kubelet[2718]: E0813 01:47:28.831096 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2a779a35479ee1f522ec6c766cc2ada10a595bc3955b6bd145517c6fca266f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:28.831185 kubelet[2718]: E0813 01:47:28.831153 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2a779a35479ee1f522ec6c766cc2ada10a595bc3955b6bd145517c6fca266f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:47:28.831185 kubelet[2718]: E0813 01:47:28.831179 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2a779a35479ee1f522ec6c766cc2ada10a595bc3955b6bd145517c6fca266f7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:47:28.831303 kubelet[2718]: E0813 01:47:28.831228 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-dfjz8_kube-system(dafdbb28-0754-4303-98bd-08c77ee94f1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-dfjz8_kube-system(dafdbb28-0754-4303-98bd-08c77ee94f1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c2a779a35479ee1f522ec6c766cc2ada10a595bc3955b6bd145517c6fca266f7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dfjz8" podUID="dafdbb28-0754-4303-98bd-08c77ee94f1a" Aug 13 01:47:28.832229 systemd[1]: run-netns-cni\x2d5b1e5f0f\x2df5dd\x2ddce9\x2d27c2\x2da53580ae37db.mount: Deactivated successfully. Aug 13 01:47:29.014839 sshd[4794]: Accepted publickey for core from 147.75.109.163 port 48552 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:47:29.016622 sshd-session[4794]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:47:29.022260 systemd-logind[1539]: New session 10 of user core. Aug 13 01:47:29.033999 systemd[1]: Started session-10.scope - Session 10 of User core. Aug 13 01:47:29.307973 sshd[4822]: Connection closed by 147.75.109.163 port 48552 Aug 13 01:47:29.308176 sshd-session[4794]: pam_unix(sshd:session): session closed for user core Aug 13 01:47:29.312380 systemd-logind[1539]: Session 10 logged out. Waiting for processes to exit. Aug 13 01:47:29.313167 systemd[1]: sshd@9-172.232.7.133:22-147.75.109.163:48552.service: Deactivated successfully. Aug 13 01:47:29.315183 systemd[1]: session-10.scope: Deactivated successfully. Aug 13 01:47:29.316614 systemd-logind[1539]: Removed session 10. Aug 13 01:47:29.371042 systemd[1]: Started sshd@10-172.232.7.133:22-147.75.109.163:48564.service - OpenSSH per-connection server daemon (147.75.109.163:48564). Aug 13 01:47:29.704409 sshd[4835]: Accepted publickey for core from 147.75.109.163 port 48564 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:47:29.706207 sshd-session[4835]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:47:29.712173 systemd-logind[1539]: New session 11 of user core. Aug 13 01:47:29.719029 systemd[1]: Started session-11.scope - Session 11 of User core. Aug 13 01:47:30.043940 sshd[4837]: Connection closed by 147.75.109.163 port 48564 Aug 13 01:47:30.044527 sshd-session[4835]: pam_unix(sshd:session): session closed for user core Aug 13 01:47:30.047950 systemd[1]: sshd@10-172.232.7.133:22-147.75.109.163:48564.service: Deactivated successfully. Aug 13 01:47:30.050177 systemd[1]: session-11.scope: Deactivated successfully. Aug 13 01:47:30.051106 systemd-logind[1539]: Session 11 logged out. Waiting for processes to exit. Aug 13 01:47:30.052672 systemd-logind[1539]: Removed session 11. Aug 13 01:47:30.101203 systemd[1]: Started sshd@11-172.232.7.133:22-147.75.109.163:48578.service - OpenSSH per-connection server daemon (147.75.109.163:48578). Aug 13 01:47:30.428732 sshd[4847]: Accepted publickey for core from 147.75.109.163 port 48578 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:47:30.431357 sshd-session[4847]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:47:30.438928 systemd-logind[1539]: New session 12 of user core. Aug 13 01:47:30.447990 systemd[1]: Started session-12.scope - Session 12 of User core. Aug 13 01:47:30.733551 sshd[4849]: Connection closed by 147.75.109.163 port 48578 Aug 13 01:47:30.735142 sshd-session[4847]: pam_unix(sshd:session): session closed for user core Aug 13 01:47:30.738825 systemd[1]: sshd@11-172.232.7.133:22-147.75.109.163:48578.service: Deactivated successfully. Aug 13 01:47:30.739107 systemd-logind[1539]: Session 12 logged out. Waiting for processes to exit. Aug 13 01:47:30.741961 systemd[1]: session-12.scope: Deactivated successfully. Aug 13 01:47:30.746528 systemd-logind[1539]: Removed session 12. 
Aug 13 01:47:32.783169 containerd[1559]: time="2025-08-13T01:47:32.783083127Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f9448c8f5-ck2sf,Uid:35b780d0-9cdb-470f-8c65-ede949b6d595,Namespace:calico-system,Attempt:0,}" Aug 13 01:47:32.831840 containerd[1559]: time="2025-08-13T01:47:32.831780725Z" level=error msg="Failed to destroy network for sandbox \"bea4a58dfd74b87c00402c1c1ea50be6f20608feac538f7ce2c285cf4075166d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:32.833831 systemd[1]: run-netns-cni\x2db44764f5\x2d6b51\x2de83a\x2d6532\x2dea84793d0512.mount: Deactivated successfully. Aug 13 01:47:32.835360 containerd[1559]: time="2025-08-13T01:47:32.835320670Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f9448c8f5-ck2sf,Uid:35b780d0-9cdb-470f-8c65-ede949b6d595,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bea4a58dfd74b87c00402c1c1ea50be6f20608feac538f7ce2c285cf4075166d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:32.835842 kubelet[2718]: E0813 01:47:32.835799 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bea4a58dfd74b87c00402c1c1ea50be6f20608feac538f7ce2c285cf4075166d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:32.836130 kubelet[2718]: E0813 01:47:32.835885 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bea4a58dfd74b87c00402c1c1ea50be6f20608feac538f7ce2c285cf4075166d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:47:32.836130 kubelet[2718]: E0813 01:47:32.835908 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bea4a58dfd74b87c00402c1c1ea50be6f20608feac538f7ce2c285cf4075166d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:47:32.836130 kubelet[2718]: E0813 01:47:32.835952 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f9448c8f5-ck2sf_calico-system(35b780d0-9cdb-470f-8c65-ede949b6d595)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f9448c8f5-ck2sf_calico-system(35b780d0-9cdb-470f-8c65-ede949b6d595)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bea4a58dfd74b87c00402c1c1ea50be6f20608feac538f7ce2c285cf4075166d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" podUID="35b780d0-9cdb-470f-8c65-ede949b6d595" Aug 13 01:47:34.784311 kubelet[2718]: E0813 01:47:34.784226 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1136792411: write /var/lib/containerd/tmpmounts/containerd-mount1136792411/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-qgskr" podUID="9adfb11c-9977-45e9-b78f-00f4995e46c5" Aug 13 01:47:35.780348 containerd[1559]: time="2025-08-13T01:47:35.780287464Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbqt2,Uid:9674627f-b072-4139-b18d-fdf07891e1e2,Namespace:calico-system,Attempt:0,}" Aug 13 01:47:35.800166 systemd[1]: Started sshd@12-172.232.7.133:22-147.75.109.163:48592.service - OpenSSH per-connection server daemon (147.75.109.163:48592). Aug 13 01:47:35.839006 containerd[1559]: time="2025-08-13T01:47:35.838973825Z" level=error msg="Failed to destroy network for sandbox \"b47a54717cd9bab33925f69abd87026908d930e8c9dfe6b12dbbae91ac88a320\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:35.842267 systemd[1]: run-netns-cni\x2da246430a\x2d2efd\x2d747a\x2d187a\x2d3a373cbb7dfd.mount: Deactivated successfully. 
Aug 13 01:47:35.842631 containerd[1559]: time="2025-08-13T01:47:35.842518782Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbqt2,Uid:9674627f-b072-4139-b18d-fdf07891e1e2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b47a54717cd9bab33925f69abd87026908d930e8c9dfe6b12dbbae91ac88a320\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:35.842829 kubelet[2718]: E0813 01:47:35.842800 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b47a54717cd9bab33925f69abd87026908d930e8c9dfe6b12dbbae91ac88a320\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:35.843289 kubelet[2718]: E0813 01:47:35.842848 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b47a54717cd9bab33925f69abd87026908d930e8c9dfe6b12dbbae91ac88a320\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:47:35.843289 kubelet[2718]: E0813 01:47:35.842917 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b47a54717cd9bab33925f69abd87026908d930e8c9dfe6b12dbbae91ac88a320\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:47:35.843289 kubelet[2718]: E0813 01:47:35.842955 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dbqt2_calico-system(9674627f-b072-4139-b18d-fdf07891e1e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dbqt2_calico-system(9674627f-b072-4139-b18d-fdf07891e1e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b47a54717cd9bab33925f69abd87026908d930e8c9dfe6b12dbbae91ac88a320\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dbqt2" podUID="9674627f-b072-4139-b18d-fdf07891e1e2" Aug 13 01:47:36.141000 sshd[4895]: Accepted publickey for core from 147.75.109.163 port 48592 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:47:36.141465 sshd-session[4895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:47:36.146532 systemd-logind[1539]: New session 13 of user core. Aug 13 01:47:36.148988 systemd[1]: Started session-13.scope - Session 13 of User core. Aug 13 01:47:36.447114 sshd[4912]: Connection closed by 147.75.109.163 port 48592 Aug 13 01:47:36.448127 sshd-session[4895]: pam_unix(sshd:session): session closed for user core Aug 13 01:47:36.452897 systemd[1]: sshd@12-172.232.7.133:22-147.75.109.163:48592.service: Deactivated successfully. 
Aug 13 01:47:36.455055 systemd[1]: session-13.scope: Deactivated successfully. Aug 13 01:47:36.455802 systemd-logind[1539]: Session 13 logged out. Waiting for processes to exit. Aug 13 01:47:36.457392 systemd-logind[1539]: Removed session 13. Aug 13 01:47:36.509982 systemd[1]: Started sshd@13-172.232.7.133:22-147.75.109.163:48608.service - OpenSSH per-connection server daemon (147.75.109.163:48608). Aug 13 01:47:36.843882 sshd[4924]: Accepted publickey for core from 147.75.109.163 port 48608 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:47:36.845060 sshd-session[4924]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:47:36.850915 systemd-logind[1539]: New session 14 of user core. Aug 13 01:47:36.858959 systemd[1]: Started session-14.scope - Session 14 of User core. Aug 13 01:47:37.186105 sshd[4926]: Connection closed by 147.75.109.163 port 48608 Aug 13 01:47:37.186906 sshd-session[4924]: pam_unix(sshd:session): session closed for user core Aug 13 01:47:37.190962 systemd[1]: sshd@13-172.232.7.133:22-147.75.109.163:48608.service: Deactivated successfully. Aug 13 01:47:37.193128 systemd[1]: session-14.scope: Deactivated successfully. Aug 13 01:47:37.193972 systemd-logind[1539]: Session 14 logged out. Waiting for processes to exit. Aug 13 01:47:37.195558 systemd-logind[1539]: Removed session 14. Aug 13 01:47:37.247228 systemd[1]: Started sshd@14-172.232.7.133:22-147.75.109.163:48624.service - OpenSSH per-connection server daemon (147.75.109.163:48624). Aug 13 01:47:37.588780 sshd[4936]: Accepted publickey for core from 147.75.109.163 port 48624 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:47:37.590547 sshd-session[4936]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:47:37.596467 systemd-logind[1539]: New session 15 of user core. Aug 13 01:47:37.604000 systemd[1]: Started session-15.scope - Session 15 of User core. Aug 13 01:47:38.335308 sshd[4938]: Connection closed by 147.75.109.163 port 48624 Aug 13 01:47:38.335973 sshd-session[4936]: pam_unix(sshd:session): session closed for user core Aug 13 01:47:38.340464 systemd-logind[1539]: Session 15 logged out. Waiting for processes to exit. Aug 13 01:47:38.341588 systemd[1]: sshd@14-172.232.7.133:22-147.75.109.163:48624.service: Deactivated successfully. Aug 13 01:47:38.344253 systemd[1]: session-15.scope: Deactivated successfully. Aug 13 01:47:38.346551 systemd-logind[1539]: Removed session 15. Aug 13 01:47:38.394327 systemd[1]: Started sshd@15-172.232.7.133:22-147.75.109.163:56542.service - OpenSSH per-connection server daemon (147.75.109.163:56542). Aug 13 01:47:38.733181 sshd[4955]: Accepted publickey for core from 147.75.109.163 port 56542 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:47:38.734934 sshd-session[4955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:47:38.740766 systemd-logind[1539]: New session 16 of user core. Aug 13 01:47:38.745158 systemd[1]: Started session-16.scope - Session 16 of User core. Aug 13 01:47:39.149103 sshd[4957]: Connection closed by 147.75.109.163 port 56542 Aug 13 01:47:39.150638 sshd-session[4955]: pam_unix(sshd:session): session closed for user core Aug 13 01:47:39.155190 systemd[1]: sshd@15-172.232.7.133:22-147.75.109.163:56542.service: Deactivated successfully. Aug 13 01:47:39.156943 systemd[1]: session-16.scope: Deactivated successfully. Aug 13 01:47:39.158428 systemd-logind[1539]: Session 16 logged out. 
Waiting for processes to exit. Aug 13 01:47:39.159751 systemd-logind[1539]: Removed session 16. Aug 13 01:47:39.211880 systemd[1]: Started sshd@16-172.232.7.133:22-147.75.109.163:56556.service - OpenSSH per-connection server daemon (147.75.109.163:56556). Aug 13 01:47:39.548717 sshd[4967]: Accepted publickey for core from 147.75.109.163 port 56556 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:47:39.550290 sshd-session[4967]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:47:39.554839 systemd-logind[1539]: New session 17 of user core. Aug 13 01:47:39.557961 systemd[1]: Started session-17.scope - Session 17 of User core. Aug 13 01:47:39.779095 kubelet[2718]: E0813 01:47:39.779042 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:47:39.779463 kubelet[2718]: E0813 01:47:39.779335 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:47:39.781203 containerd[1559]: time="2025-08-13T01:47:39.780650332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j47vf,Uid:0ba3c042-02d2-446d-bb82-0965919f2962,Namespace:kube-system,Attempt:0,}" Aug 13 01:47:39.781896 containerd[1559]: time="2025-08-13T01:47:39.781876527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dfjz8,Uid:dafdbb28-0754-4303-98bd-08c77ee94f1a,Namespace:kube-system,Attempt:0,}" Aug 13 01:47:39.856406 containerd[1559]: time="2025-08-13T01:47:39.855508092Z" level=error msg="Failed to destroy network for sandbox \"b2b30ed47bd650875724378b0d3c8d670244183cb8080e3492cf3a4784eaf8cd\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:39.857756 systemd[1]: run-netns-cni\x2dd8e1e396\x2d720e\x2dcf47\x2d4c3b\x2d74a272e3cce7.mount: Deactivated successfully. 
Aug 13 01:47:39.860574 containerd[1559]: time="2025-08-13T01:47:39.860453305Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j47vf,Uid:0ba3c042-02d2-446d-bb82-0965919f2962,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2b30ed47bd650875724378b0d3c8d670244183cb8080e3492cf3a4784eaf8cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:39.860921 kubelet[2718]: E0813 01:47:39.860791 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2b30ed47bd650875724378b0d3c8d670244183cb8080e3492cf3a4784eaf8cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:39.860921 kubelet[2718]: E0813 01:47:39.860835 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2b30ed47bd650875724378b0d3c8d670244183cb8080e3492cf3a4784eaf8cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:47:39.862221 kubelet[2718]: E0813 01:47:39.861609 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2b30ed47bd650875724378b0d3c8d670244183cb8080e3492cf3a4784eaf8cd\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:47:39.862221 kubelet[2718]: E0813 01:47:39.861653 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-j47vf_kube-system(0ba3c042-02d2-446d-bb82-0965919f2962)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-j47vf_kube-system(0ba3c042-02d2-446d-bb82-0965919f2962)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b2b30ed47bd650875724378b0d3c8d670244183cb8080e3492cf3a4784eaf8cd\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-j47vf" podUID="0ba3c042-02d2-446d-bb82-0965919f2962" Aug 13 01:47:39.863017 sshd[4969]: Connection closed by 147.75.109.163 port 56556 Aug 13 01:47:39.864039 sshd-session[4967]: pam_unix(sshd:session): session closed for user core Aug 13 01:47:39.864390 containerd[1559]: time="2025-08-13T01:47:39.864255602Z" level=error msg="Failed to destroy network for sandbox \"c0a640bdccc8ea949984514c192dfecb11ed6b88059a2675222f8e2656020147\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:39.865849 systemd[1]: run-netns-cni\x2db6c691d8\x2d7dac\x2d6c8f\x2d9771\x2db51fa8e98a00.mount: Deactivated successfully. 
Aug 13 01:47:39.867592 containerd[1559]: time="2025-08-13T01:47:39.867542571Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dfjz8,Uid:dafdbb28-0754-4303-98bd-08c77ee94f1a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0a640bdccc8ea949984514c192dfecb11ed6b88059a2675222f8e2656020147\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:39.869780 kubelet[2718]: E0813 01:47:39.867754 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0a640bdccc8ea949984514c192dfecb11ed6b88059a2675222f8e2656020147\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:39.869780 kubelet[2718]: E0813 01:47:39.868399 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0a640bdccc8ea949984514c192dfecb11ed6b88059a2675222f8e2656020147\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:47:39.869780 kubelet[2718]: E0813 01:47:39.868417 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0a640bdccc8ea949984514c192dfecb11ed6b88059a2675222f8e2656020147\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:47:39.869780 kubelet[2718]: E0813 01:47:39.868927 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-dfjz8_kube-system(dafdbb28-0754-4303-98bd-08c77ee94f1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-dfjz8_kube-system(dafdbb28-0754-4303-98bd-08c77ee94f1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c0a640bdccc8ea949984514c192dfecb11ed6b88059a2675222f8e2656020147\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dfjz8" podUID="dafdbb28-0754-4303-98bd-08c77ee94f1a" Aug 13 01:47:39.871551 systemd[1]: sshd@16-172.232.7.133:22-147.75.109.163:56556.service: Deactivated successfully. Aug 13 01:47:39.874023 systemd[1]: session-17.scope: Deactivated successfully. Aug 13 01:47:39.875095 systemd-logind[1539]: Session 17 logged out. Waiting for processes to exit. Aug 13 01:47:39.876750 systemd-logind[1539]: Removed session 17. 
Aug 13 01:47:43.780726 containerd[1559]: time="2025-08-13T01:47:43.780544730Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f9448c8f5-ck2sf,Uid:35b780d0-9cdb-470f-8c65-ede949b6d595,Namespace:calico-system,Attempt:0,}" Aug 13 01:47:43.843221 containerd[1559]: time="2025-08-13T01:47:43.843160098Z" level=error msg="Failed to destroy network for sandbox \"4691348eedd03be90f67f7b6e8fcb2f4a1789ffbee264454968fb1f8bb8c8c3e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:43.846953 containerd[1559]: time="2025-08-13T01:47:43.844711903Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f9448c8f5-ck2sf,Uid:35b780d0-9cdb-470f-8c65-ede949b6d595,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4691348eedd03be90f67f7b6e8fcb2f4a1789ffbee264454968fb1f8bb8c8c3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:43.846450 systemd[1]: run-netns-cni\x2d2ef1b46a\x2dab6e\x2d2f44\x2d6274\x2dd14e4937dc17.mount: Deactivated successfully. Aug 13 01:47:43.847915 kubelet[2718]: E0813 01:47:43.847822 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4691348eedd03be90f67f7b6e8fcb2f4a1789ffbee264454968fb1f8bb8c8c3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:43.848325 kubelet[2718]: E0813 01:47:43.847935 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4691348eedd03be90f67f7b6e8fcb2f4a1789ffbee264454968fb1f8bb8c8c3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:47:43.848325 kubelet[2718]: E0813 01:47:43.847957 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4691348eedd03be90f67f7b6e8fcb2f4a1789ffbee264454968fb1f8bb8c8c3e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:47:43.848325 kubelet[2718]: E0813 01:47:43.848186 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f9448c8f5-ck2sf_calico-system(35b780d0-9cdb-470f-8c65-ede949b6d595)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f9448c8f5-ck2sf_calico-system(35b780d0-9cdb-470f-8c65-ede949b6d595)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4691348eedd03be90f67f7b6e8fcb2f4a1789ffbee264454968fb1f8bb8c8c3e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has 
mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" podUID="35b780d0-9cdb-470f-8c65-ede949b6d595" Aug 13 01:47:44.926067 systemd[1]: Started sshd@17-172.232.7.133:22-147.75.109.163:56558.service - OpenSSH per-connection server daemon (147.75.109.163:56558). Aug 13 01:47:45.273515 sshd[5072]: Accepted publickey for core from 147.75.109.163 port 56558 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:47:45.275015 sshd-session[5072]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:47:45.279701 systemd-logind[1539]: New session 18 of user core. Aug 13 01:47:45.285218 systemd[1]: Started session-18.scope - Session 18 of User core. Aug 13 01:47:45.592695 sshd[5074]: Connection closed by 147.75.109.163 port 56558 Aug 13 01:47:45.593678 sshd-session[5072]: pam_unix(sshd:session): session closed for user core Aug 13 01:47:45.598349 systemd[1]: sshd@17-172.232.7.133:22-147.75.109.163:56558.service: Deactivated successfully. Aug 13 01:47:45.601066 systemd[1]: session-18.scope: Deactivated successfully. Aug 13 01:47:45.602535 systemd-logind[1539]: Session 18 logged out. Waiting for processes to exit. Aug 13 01:47:45.604779 systemd-logind[1539]: Removed session 18. Aug 13 01:47:45.784613 kubelet[2718]: E0813 01:47:45.784483 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1136792411: write /var/lib/containerd/tmpmounts/containerd-mount1136792411/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-qgskr" podUID="9adfb11c-9977-45e9-b78f-00f4995e46c5" Aug 13 01:47:48.782194 containerd[1559]: time="2025-08-13T01:47:48.782119291Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbqt2,Uid:9674627f-b072-4139-b18d-fdf07891e1e2,Namespace:calico-system,Attempt:0,}" Aug 13 01:47:48.831680 containerd[1559]: time="2025-08-13T01:47:48.831620494Z" level=error msg="Failed to destroy network for sandbox \"29b3ef00e10a00710600153ae277b4308e636b2e5b915aba55e7740d897088f0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:48.833653 systemd[1]: run-netns-cni\x2d0d6c1efb\x2d872e\x2df18f\x2dda56\x2d45753a1d3a2f.mount: Deactivated successfully. 
Aug 13 01:47:48.835102 containerd[1559]: time="2025-08-13T01:47:48.835043854Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbqt2,Uid:9674627f-b072-4139-b18d-fdf07891e1e2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"29b3ef00e10a00710600153ae277b4308e636b2e5b915aba55e7740d897088f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:48.835693 kubelet[2718]: E0813 01:47:48.835647 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29b3ef00e10a00710600153ae277b4308e636b2e5b915aba55e7740d897088f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:48.835998 kubelet[2718]: E0813 01:47:48.835713 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29b3ef00e10a00710600153ae277b4308e636b2e5b915aba55e7740d897088f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:47:48.835998 kubelet[2718]: E0813 01:47:48.835736 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"29b3ef00e10a00710600153ae277b4308e636b2e5b915aba55e7740d897088f0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:47:48.835998 kubelet[2718]: E0813 01:47:48.835787 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dbqt2_calico-system(9674627f-b072-4139-b18d-fdf07891e1e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dbqt2_calico-system(9674627f-b072-4139-b18d-fdf07891e1e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"29b3ef00e10a00710600153ae277b4308e636b2e5b915aba55e7740d897088f0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dbqt2" podUID="9674627f-b072-4139-b18d-fdf07891e1e2" Aug 13 01:47:50.658367 systemd[1]: Started sshd@18-172.232.7.133:22-147.75.109.163:43496.service - OpenSSH per-connection server daemon (147.75.109.163:43496). 
Aug 13 01:47:50.780669 kubelet[2718]: E0813 01:47:50.780560 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:47:50.782046 kubelet[2718]: E0813 01:47:50.781813 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:47:50.994335 sshd[5113]: Accepted publickey for core from 147.75.109.163 port 43496 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:47:50.995728 sshd-session[5113]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:47:51.000824 systemd-logind[1539]: New session 19 of user core. Aug 13 01:47:51.007065 systemd[1]: Started session-19.scope - Session 19 of User core. Aug 13 01:47:51.315924 sshd[5115]: Connection closed by 147.75.109.163 port 43496 Aug 13 01:47:51.316745 sshd-session[5113]: pam_unix(sshd:session): session closed for user core Aug 13 01:47:51.322522 systemd[1]: sshd@18-172.232.7.133:22-147.75.109.163:43496.service: Deactivated successfully. Aug 13 01:47:51.324615 systemd[1]: session-19.scope: Deactivated successfully. Aug 13 01:47:51.327212 systemd-logind[1539]: Session 19 logged out. Waiting for processes to exit. Aug 13 01:47:51.329527 systemd-logind[1539]: Removed session 19. Aug 13 01:47:54.780768 kubelet[2718]: E0813 01:47:54.780177 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:47:54.782827 kubelet[2718]: E0813 01:47:54.781742 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:47:54.783215 containerd[1559]: time="2025-08-13T01:47:54.782507324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dfjz8,Uid:dafdbb28-0754-4303-98bd-08c77ee94f1a,Namespace:kube-system,Attempt:0,}" Aug 13 01:47:54.783215 containerd[1559]: time="2025-08-13T01:47:54.782630704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j47vf,Uid:0ba3c042-02d2-446d-bb82-0965919f2962,Namespace:kube-system,Attempt:0,}" Aug 13 01:47:54.849582 containerd[1559]: time="2025-08-13T01:47:54.849496073Z" level=error msg="Failed to destroy network for sandbox \"58d59ff48ff638fa39f26e36faf9396ada3b3a6f9340ffb736f6b02b520f6753\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:54.853603 systemd[1]: run-netns-cni\x2d70b50b63\x2d955a\x2d82cc\x2d6dc3\x2d8beee5e88524.mount: Deactivated successfully. 
Aug 13 01:47:54.854457 containerd[1559]: time="2025-08-13T01:47:54.853729941Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dfjz8,Uid:dafdbb28-0754-4303-98bd-08c77ee94f1a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"58d59ff48ff638fa39f26e36faf9396ada3b3a6f9340ffb736f6b02b520f6753\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:54.855026 kubelet[2718]: E0813 01:47:54.854763 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58d59ff48ff638fa39f26e36faf9396ada3b3a6f9340ffb736f6b02b520f6753\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:54.855026 kubelet[2718]: E0813 01:47:54.854823 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58d59ff48ff638fa39f26e36faf9396ada3b3a6f9340ffb736f6b02b520f6753\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:47:54.855026 kubelet[2718]: E0813 01:47:54.854846 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"58d59ff48ff638fa39f26e36faf9396ada3b3a6f9340ffb736f6b02b520f6753\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:47:54.855026 kubelet[2718]: E0813 01:47:54.854922 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-dfjz8_kube-system(dafdbb28-0754-4303-98bd-08c77ee94f1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-dfjz8_kube-system(dafdbb28-0754-4303-98bd-08c77ee94f1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"58d59ff48ff638fa39f26e36faf9396ada3b3a6f9340ffb736f6b02b520f6753\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dfjz8" podUID="dafdbb28-0754-4303-98bd-08c77ee94f1a" Aug 13 01:47:54.878777 containerd[1559]: time="2025-08-13T01:47:54.878667773Z" level=error msg="Failed to destroy network for sandbox \"c2380c83f4029ec2a8fc37cc63195ad3069c4361268f7983af2316357b9016f1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:54.879911 containerd[1559]: time="2025-08-13T01:47:54.879883070Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j47vf,Uid:0ba3c042-02d2-446d-bb82-0965919f2962,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c2380c83f4029ec2a8fc37cc63195ad3069c4361268f7983af2316357b9016f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:54.881071 kubelet[2718]: E0813 01:47:54.881038 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2380c83f4029ec2a8fc37cc63195ad3069c4361268f7983af2316357b9016f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:54.881181 kubelet[2718]: E0813 01:47:54.881166 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2380c83f4029ec2a8fc37cc63195ad3069c4361268f7983af2316357b9016f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:47:54.881280 kubelet[2718]: E0813 01:47:54.881231 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c2380c83f4029ec2a8fc37cc63195ad3069c4361268f7983af2316357b9016f1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:47:54.881414 kubelet[2718]: E0813 01:47:54.881321 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-j47vf_kube-system(0ba3c042-02d2-446d-bb82-0965919f2962)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-j47vf_kube-system(0ba3c042-02d2-446d-bb82-0965919f2962)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c2380c83f4029ec2a8fc37cc63195ad3069c4361268f7983af2316357b9016f1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-j47vf" podUID="0ba3c042-02d2-446d-bb82-0965919f2962" Aug 13 01:47:54.882621 systemd[1]: run-netns-cni\x2d4a83d675\x2d0c83\x2d900a\x2dfcfa\x2d3767f9a41768.mount: Deactivated successfully. Aug 13 01:47:56.377500 systemd[1]: Started sshd@19-172.232.7.133:22-147.75.109.163:43502.service - OpenSSH per-connection server daemon (147.75.109.163:43502). Aug 13 01:47:56.718281 sshd[5182]: Accepted publickey for core from 147.75.109.163 port 43502 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:47:56.719889 sshd-session[5182]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:47:56.725372 systemd-logind[1539]: New session 20 of user core. Aug 13 01:47:56.737009 systemd[1]: Started session-20.scope - Session 20 of User core. 
Aug 13 01:47:56.780388 containerd[1559]: time="2025-08-13T01:47:56.780347625Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f9448c8f5-ck2sf,Uid:35b780d0-9cdb-470f-8c65-ede949b6d595,Namespace:calico-system,Attempt:0,}" Aug 13 01:47:56.838996 containerd[1559]: time="2025-08-13T01:47:56.838953220Z" level=error msg="Failed to destroy network for sandbox \"ac106ccdde55f380eef8b5a06d9beb75b417a0c7ccdad413e2fafbe1f871680b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:56.842298 containerd[1559]: time="2025-08-13T01:47:56.842056481Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f9448c8f5-ck2sf,Uid:35b780d0-9cdb-470f-8c65-ede949b6d595,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac106ccdde55f380eef8b5a06d9beb75b417a0c7ccdad413e2fafbe1f871680b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:56.844445 kubelet[2718]: E0813 01:47:56.844418 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac106ccdde55f380eef8b5a06d9beb75b417a0c7ccdad413e2fafbe1f871680b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:47:56.844723 kubelet[2718]: E0813 01:47:56.844463 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac106ccdde55f380eef8b5a06d9beb75b417a0c7ccdad413e2fafbe1f871680b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:47:56.844723 kubelet[2718]: E0813 01:47:56.844482 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac106ccdde55f380eef8b5a06d9beb75b417a0c7ccdad413e2fafbe1f871680b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:47:56.844723 kubelet[2718]: E0813 01:47:56.844515 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f9448c8f5-ck2sf_calico-system(35b780d0-9cdb-470f-8c65-ede949b6d595)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f9448c8f5-ck2sf_calico-system(35b780d0-9cdb-470f-8c65-ede949b6d595)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac106ccdde55f380eef8b5a06d9beb75b417a0c7ccdad413e2fafbe1f871680b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" podUID="35b780d0-9cdb-470f-8c65-ede949b6d595" 
Aug 13 01:47:56.845294 systemd[1]: run-netns-cni\x2dabf5cf87\x2dfaf9\x2d8bfb\x2dcae4\x2d7c1924e5ea75.mount: Deactivated successfully. Aug 13 01:47:57.018406 sshd[5184]: Connection closed by 147.75.109.163 port 43502 Aug 13 01:47:57.017594 sshd-session[5182]: pam_unix(sshd:session): session closed for user core Aug 13 01:47:57.022561 systemd[1]: sshd@19-172.232.7.133:22-147.75.109.163:43502.service: Deactivated successfully. Aug 13 01:47:57.024796 systemd[1]: session-20.scope: Deactivated successfully. Aug 13 01:47:57.025810 systemd-logind[1539]: Session 20 logged out. Waiting for processes to exit. Aug 13 01:47:57.027628 systemd-logind[1539]: Removed session 20. Aug 13 01:47:58.780574 kubelet[2718]: E0813 01:47:58.780518 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1136792411: write /var/lib/containerd/tmpmounts/containerd-mount1136792411/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-qgskr" podUID="9adfb11c-9977-45e9-b78f-00f4995e46c5" Aug 13 01:48:02.077586 systemd[1]: Started sshd@20-172.232.7.133:22-147.75.109.163:43858.service - OpenSSH per-connection server daemon (147.75.109.163:43858). Aug 13 01:48:02.415964 sshd[5223]: Accepted publickey for core from 147.75.109.163 port 43858 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:48:02.417248 sshd-session[5223]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:48:02.422218 systemd-logind[1539]: New session 21 of user core. Aug 13 01:48:02.436000 systemd[1]: Started session-21.scope - Session 21 of User core. Aug 13 01:48:02.722250 sshd[5225]: Connection closed by 147.75.109.163 port 43858 Aug 13 01:48:02.722747 sshd-session[5223]: pam_unix(sshd:session): session closed for user core Aug 13 01:48:02.726463 systemd-logind[1539]: Session 21 logged out. Waiting for processes to exit. Aug 13 01:48:02.727186 systemd[1]: sshd@20-172.232.7.133:22-147.75.109.163:43858.service: Deactivated successfully. Aug 13 01:48:02.732628 systemd[1]: session-21.scope: Deactivated successfully. Aug 13 01:48:02.733925 systemd-logind[1539]: Removed session 21. 
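Editor's note: the ImagePullBackOff entry above shows the underlying fault on this node: extracting layer sha256:0df84c... of ghcr.io/flatcar/calico/node:v3.30.2 fails with "no space left on device" while writing under /var/lib/containerd/tmpmounts/. A minimal Linux-only sketch that reports free space on the filesystem backing containerd (the path and the 1 GiB warning threshold are assumptions for illustration, not containerd logic):

package main

import (
	"fmt"
	"os"
	"syscall"
)

func main() {
	// containerd's default root; the tmpmounts path in the error above lives here.
	path := "/var/lib/containerd"
	var st syscall.Statfs_t
	if err := syscall.Statfs(path, &st); err != nil {
		fmt.Fprintln(os.Stderr, "statfs failed:", err)
		os.Exit(1)
	}
	free := st.Bavail * uint64(st.Bsize)
	total := st.Blocks * uint64(st.Bsize)
	fmt.Printf("%s: %.1f GiB free of %.1f GiB\n",
		path, float64(free)/(1<<30), float64(total)/(1<<30))
	if free < 1<<30 {
		fmt.Println("warning: under 1 GiB free; image pulls are likely to fail with ENOSPC")
	}
}

Freeing space on this filesystem (or pruning unused images) is what lets the calico/node pull succeed, which in turn clears the /var/lib/calico/nodename failures noted earlier.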
Aug 13 01:48:04.781771 containerd[1559]: time="2025-08-13T01:48:04.781364029Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbqt2,Uid:9674627f-b072-4139-b18d-fdf07891e1e2,Namespace:calico-system,Attempt:0,}" Aug 13 01:48:04.782164 kubelet[2718]: E0813 01:48:04.781451 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:48:04.848997 containerd[1559]: time="2025-08-13T01:48:04.848944149Z" level=error msg="Failed to destroy network for sandbox \"4a54154932c8a7a78e2f3734dc3bdd6756a1d8b1501cff2fec5afd8d30cbc02f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:04.852846 containerd[1559]: time="2025-08-13T01:48:04.852800160Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbqt2,Uid:9674627f-b072-4139-b18d-fdf07891e1e2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a54154932c8a7a78e2f3734dc3bdd6756a1d8b1501cff2fec5afd8d30cbc02f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:04.853643 kubelet[2718]: E0813 01:48:04.853386 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a54154932c8a7a78e2f3734dc3bdd6756a1d8b1501cff2fec5afd8d30cbc02f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:04.853807 kubelet[2718]: E0813 01:48:04.853600 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a54154932c8a7a78e2f3734dc3bdd6756a1d8b1501cff2fec5afd8d30cbc02f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:48:04.853807 kubelet[2718]: E0813 01:48:04.853743 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4a54154932c8a7a78e2f3734dc3bdd6756a1d8b1501cff2fec5afd8d30cbc02f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:48:04.854087 systemd[1]: run-netns-cni\x2d9d72f82c\x2d2089\x2d5346\x2dba39\x2d3bbc7a98f036.mount: Deactivated successfully. 
Aug 13 01:48:04.855144 kubelet[2718]: E0813 01:48:04.855101 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dbqt2_calico-system(9674627f-b072-4139-b18d-fdf07891e1e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dbqt2_calico-system(9674627f-b072-4139-b18d-fdf07891e1e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4a54154932c8a7a78e2f3734dc3bdd6756a1d8b1501cff2fec5afd8d30cbc02f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dbqt2" podUID="9674627f-b072-4139-b18d-fdf07891e1e2" Aug 13 01:48:07.787209 systemd[1]: Started sshd@21-172.232.7.133:22-147.75.109.163:43862.service - OpenSSH per-connection server daemon (147.75.109.163:43862). Aug 13 01:48:08.135045 sshd[5263]: Accepted publickey for core from 147.75.109.163 port 43862 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:48:08.136976 sshd-session[5263]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:48:08.144563 systemd-logind[1539]: New session 22 of user core. Aug 13 01:48:08.151076 systemd[1]: Started session-22.scope - Session 22 of User core. Aug 13 01:48:08.433892 sshd[5265]: Connection closed by 147.75.109.163 port 43862 Aug 13 01:48:08.435070 sshd-session[5263]: pam_unix(sshd:session): session closed for user core Aug 13 01:48:08.441478 systemd-logind[1539]: Session 22 logged out. Waiting for processes to exit. Aug 13 01:48:08.442412 systemd[1]: sshd@21-172.232.7.133:22-147.75.109.163:43862.service: Deactivated successfully. Aug 13 01:48:08.444511 systemd[1]: session-22.scope: Deactivated successfully. Aug 13 01:48:08.446303 systemd-logind[1539]: Removed session 22. 
Aug 13 01:48:08.780744 kubelet[2718]: E0813 01:48:08.779983 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:48:09.780099 kubelet[2718]: E0813 01:48:09.779778 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:48:09.780099 kubelet[2718]: E0813 01:48:09.779876 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:48:09.780326 containerd[1559]: time="2025-08-13T01:48:09.780241860Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j47vf,Uid:0ba3c042-02d2-446d-bb82-0965919f2962,Namespace:kube-system,Attempt:0,}" Aug 13 01:48:09.781248 containerd[1559]: time="2025-08-13T01:48:09.780823189Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dfjz8,Uid:dafdbb28-0754-4303-98bd-08c77ee94f1a,Namespace:kube-system,Attempt:0,}" Aug 13 01:48:09.856992 containerd[1559]: time="2025-08-13T01:48:09.856953108Z" level=error msg="Failed to destroy network for sandbox \"10cb0bf0d59d28306132e26811aa1ec0492486e016f0e5e76096fefddb8ff200\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:09.860780 systemd[1]: run-netns-cni\x2d89d3347e\x2ddf8e\x2de8f9\x2db32f\x2d3c5994d029cb.mount: Deactivated successfully. Aug 13 01:48:09.861234 containerd[1559]: time="2025-08-13T01:48:09.860954120Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j47vf,Uid:0ba3c042-02d2-446d-bb82-0965919f2962,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"10cb0bf0d59d28306132e26811aa1ec0492486e016f0e5e76096fefddb8ff200\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:09.862139 kubelet[2718]: E0813 01:48:09.862063 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10cb0bf0d59d28306132e26811aa1ec0492486e016f0e5e76096fefddb8ff200\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:09.863234 kubelet[2718]: E0813 01:48:09.862248 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10cb0bf0d59d28306132e26811aa1ec0492486e016f0e5e76096fefddb8ff200\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:48:09.863234 kubelet[2718]: E0813 01:48:09.862272 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"10cb0bf0d59d28306132e26811aa1ec0492486e016f0e5e76096fefddb8ff200\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:48:09.863234 kubelet[2718]: E0813 01:48:09.862513 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-j47vf_kube-system(0ba3c042-02d2-446d-bb82-0965919f2962)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-j47vf_kube-system(0ba3c042-02d2-446d-bb82-0965919f2962)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"10cb0bf0d59d28306132e26811aa1ec0492486e016f0e5e76096fefddb8ff200\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-j47vf" podUID="0ba3c042-02d2-446d-bb82-0965919f2962" Aug 13 01:48:09.866913 containerd[1559]: time="2025-08-13T01:48:09.866881836Z" level=error msg="Failed to destroy network for sandbox \"5323b01405590c62bb5b4a38a711ad98e9d3892447065955d911a909433fb29c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:09.870103 containerd[1559]: time="2025-08-13T01:48:09.870076010Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dfjz8,Uid:dafdbb28-0754-4303-98bd-08c77ee94f1a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"5323b01405590c62bb5b4a38a711ad98e9d3892447065955d911a909433fb29c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:09.870550 kubelet[2718]: E0813 01:48:09.870214 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5323b01405590c62bb5b4a38a711ad98e9d3892447065955d911a909433fb29c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:09.870550 kubelet[2718]: E0813 01:48:09.870248 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5323b01405590c62bb5b4a38a711ad98e9d3892447065955d911a909433fb29c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:48:09.870550 kubelet[2718]: E0813 01:48:09.870265 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5323b01405590c62bb5b4a38a711ad98e9d3892447065955d911a909433fb29c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:48:09.870550 kubelet[2718]: E0813 01:48:09.870292 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" 
for \"coredns-668d6bf9bc-dfjz8_kube-system(dafdbb28-0754-4303-98bd-08c77ee94f1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-dfjz8_kube-system(dafdbb28-0754-4303-98bd-08c77ee94f1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5323b01405590c62bb5b4a38a711ad98e9d3892447065955d911a909433fb29c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dfjz8" podUID="dafdbb28-0754-4303-98bd-08c77ee94f1a" Aug 13 01:48:09.870571 systemd[1]: run-netns-cni\x2de5612693\x2df837\x2db3d9\x2dd967\x2d334873f7338e.mount: Deactivated successfully. Aug 13 01:48:10.780369 containerd[1559]: time="2025-08-13T01:48:10.780069814Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f9448c8f5-ck2sf,Uid:35b780d0-9cdb-470f-8c65-ede949b6d595,Namespace:calico-system,Attempt:0,}" Aug 13 01:48:10.825821 containerd[1559]: time="2025-08-13T01:48:10.825770992Z" level=error msg="Failed to destroy network for sandbox \"c207717859f64c56843bb0fd7bb7bc7f2a4ae3d4b8887f1102c719437cd20076\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:10.827567 systemd[1]: run-netns-cni\x2d380d01ed\x2dfac2\x2d915f\x2d1435\x2d83c451fd70b2.mount: Deactivated successfully. Aug 13 01:48:10.829811 containerd[1559]: time="2025-08-13T01:48:10.829779073Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f9448c8f5-ck2sf,Uid:35b780d0-9cdb-470f-8c65-ede949b6d595,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c207717859f64c56843bb0fd7bb7bc7f2a4ae3d4b8887f1102c719437cd20076\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:10.830730 kubelet[2718]: E0813 01:48:10.830694 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c207717859f64c56843bb0fd7bb7bc7f2a4ae3d4b8887f1102c719437cd20076\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:10.830788 kubelet[2718]: E0813 01:48:10.830748 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c207717859f64c56843bb0fd7bb7bc7f2a4ae3d4b8887f1102c719437cd20076\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:48:10.830788 kubelet[2718]: E0813 01:48:10.830767 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c207717859f64c56843bb0fd7bb7bc7f2a4ae3d4b8887f1102c719437cd20076\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:48:10.830835 kubelet[2718]: E0813 01:48:10.830805 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f9448c8f5-ck2sf_calico-system(35b780d0-9cdb-470f-8c65-ede949b6d595)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f9448c8f5-ck2sf_calico-system(35b780d0-9cdb-470f-8c65-ede949b6d595)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c207717859f64c56843bb0fd7bb7bc7f2a4ae3d4b8887f1102c719437cd20076\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" podUID="35b780d0-9cdb-470f-8c65-ede949b6d595" Aug 13 01:48:12.781957 kubelet[2718]: E0813 01:48:12.781913 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1136792411: write /var/lib/containerd/tmpmounts/containerd-mount1136792411/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-qgskr" podUID="9adfb11c-9977-45e9-b78f-00f4995e46c5" Aug 13 01:48:13.494759 systemd[1]: Started sshd@22-172.232.7.133:22-147.75.109.163:43276.service - OpenSSH per-connection server daemon (147.75.109.163:43276). Aug 13 01:48:13.824213 sshd[5359]: Accepted publickey for core from 147.75.109.163 port 43276 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:48:13.825839 sshd-session[5359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:48:13.830185 systemd-logind[1539]: New session 23 of user core. Aug 13 01:48:13.836995 systemd[1]: Started session-23.scope - Session 23 of User core. Aug 13 01:48:14.116585 sshd[5361]: Connection closed by 147.75.109.163 port 43276 Aug 13 01:48:14.117207 sshd-session[5359]: pam_unix(sshd:session): session closed for user core Aug 13 01:48:14.120810 systemd[1]: sshd@22-172.232.7.133:22-147.75.109.163:43276.service: Deactivated successfully. Aug 13 01:48:14.122712 systemd[1]: session-23.scope: Deactivated successfully. Aug 13 01:48:14.123704 systemd-logind[1539]: Session 23 logged out. Waiting for processes to exit. Aug 13 01:48:14.125045 systemd-logind[1539]: Removed session 23. Aug 13 01:48:19.184096 systemd[1]: Started sshd@23-172.232.7.133:22-147.75.109.163:44332.service - OpenSSH per-connection server daemon (147.75.109.163:44332). Aug 13 01:48:19.523334 sshd[5373]: Accepted publickey for core from 147.75.109.163 port 44332 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:48:19.524732 sshd-session[5373]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:48:19.530012 systemd-logind[1539]: New session 24 of user core. Aug 13 01:48:19.538995 systemd[1]: Started session-24.scope - Session 24 of User core. 
Aug 13 01:48:19.780599 containerd[1559]: time="2025-08-13T01:48:19.780421653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbqt2,Uid:9674627f-b072-4139-b18d-fdf07891e1e2,Namespace:calico-system,Attempt:0,}" Aug 13 01:48:19.838959 sshd[5375]: Connection closed by 147.75.109.163 port 44332 Aug 13 01:48:19.839219 sshd-session[5373]: pam_unix(sshd:session): session closed for user core Aug 13 01:48:19.841044 containerd[1559]: time="2025-08-13T01:48:19.841013450Z" level=error msg="Failed to destroy network for sandbox \"8caebd897d8014abcf4aa0658ab6b33ec72c8f8c3da47cf1c88a036a2633c6cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:19.844364 systemd[1]: run-netns-cni\x2d456200ca\x2d69a8\x2db056\x2dfe1d\x2d8eae68d848c3.mount: Deactivated successfully. Aug 13 01:48:19.845550 containerd[1559]: time="2025-08-13T01:48:19.845523421Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbqt2,Uid:9674627f-b072-4139-b18d-fdf07891e1e2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"8caebd897d8014abcf4aa0658ab6b33ec72c8f8c3da47cf1c88a036a2633c6cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:19.846444 kubelet[2718]: E0813 01:48:19.845959 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8caebd897d8014abcf4aa0658ab6b33ec72c8f8c3da47cf1c88a036a2633c6cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:19.846889 kubelet[2718]: E0813 01:48:19.846468 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8caebd897d8014abcf4aa0658ab6b33ec72c8f8c3da47cf1c88a036a2633c6cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:48:19.846969 kubelet[2718]: E0813 01:48:19.846948 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8caebd897d8014abcf4aa0658ab6b33ec72c8f8c3da47cf1c88a036a2633c6cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:48:19.847954 kubelet[2718]: E0813 01:48:19.847917 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dbqt2_calico-system(9674627f-b072-4139-b18d-fdf07891e1e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dbqt2_calico-system(9674627f-b072-4139-b18d-fdf07891e1e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8caebd897d8014abcf4aa0658ab6b33ec72c8f8c3da47cf1c88a036a2633c6cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dbqt2" podUID="9674627f-b072-4139-b18d-fdf07891e1e2" Aug 13 01:48:19.851149 systemd[1]: sshd@23-172.232.7.133:22-147.75.109.163:44332.service: Deactivated successfully. Aug 13 01:48:19.853271 systemd[1]: session-24.scope: Deactivated successfully. Aug 13 01:48:19.858031 systemd-logind[1539]: Session 24 logged out. Waiting for processes to exit. Aug 13 01:48:19.859698 systemd-logind[1539]: Removed session 24. Aug 13 01:48:22.782246 kubelet[2718]: E0813 01:48:22.781933 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:48:22.782952 containerd[1559]: time="2025-08-13T01:48:22.782905478Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f9448c8f5-ck2sf,Uid:35b780d0-9cdb-470f-8c65-ede949b6d595,Namespace:calico-system,Attempt:0,}" Aug 13 01:48:22.783960 containerd[1559]: time="2025-08-13T01:48:22.783922786Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dfjz8,Uid:dafdbb28-0754-4303-98bd-08c77ee94f1a,Namespace:kube-system,Attempt:0,}" Aug 13 01:48:22.867030 containerd[1559]: time="2025-08-13T01:48:22.866896622Z" level=error msg="Failed to destroy network for sandbox \"eb85dcecc8a3c9a61bfdfb8915378f2db2d4482a5de7107903ae39ec8d3b27b7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:22.871919 systemd[1]: run-netns-cni\x2d29f3fffb\x2dcf6b\x2dc87b\x2da122\x2d4f5950663830.mount: Deactivated successfully. 
Aug 13 01:48:22.872575 containerd[1559]: time="2025-08-13T01:48:22.872539251Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dfjz8,Uid:dafdbb28-0754-4303-98bd-08c77ee94f1a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb85dcecc8a3c9a61bfdfb8915378f2db2d4482a5de7107903ae39ec8d3b27b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:22.876775 kubelet[2718]: E0813 01:48:22.876716 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb85dcecc8a3c9a61bfdfb8915378f2db2d4482a5de7107903ae39ec8d3b27b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:22.877867 kubelet[2718]: E0813 01:48:22.876894 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb85dcecc8a3c9a61bfdfb8915378f2db2d4482a5de7107903ae39ec8d3b27b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:48:22.878218 kubelet[2718]: E0813 01:48:22.877968 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"eb85dcecc8a3c9a61bfdfb8915378f2db2d4482a5de7107903ae39ec8d3b27b7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:48:22.878218 kubelet[2718]: E0813 01:48:22.878041 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-dfjz8_kube-system(dafdbb28-0754-4303-98bd-08c77ee94f1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-dfjz8_kube-system(dafdbb28-0754-4303-98bd-08c77ee94f1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"eb85dcecc8a3c9a61bfdfb8915378f2db2d4482a5de7107903ae39ec8d3b27b7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dfjz8" podUID="dafdbb28-0754-4303-98bd-08c77ee94f1a" Aug 13 01:48:22.887222 containerd[1559]: time="2025-08-13T01:48:22.887194782Z" level=error msg="Failed to destroy network for sandbox \"de98cbc1e4819acd2bd3d6ee73314c7236f249c5da865d2e6aa054687d17918b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:22.889094 systemd[1]: run-netns-cni\x2d79c0b2e9\x2dcded\x2d2bd8\x2da44a\x2d91efb9a30afd.mount: Deactivated successfully. 
Aug 13 01:48:22.890770 containerd[1559]: time="2025-08-13T01:48:22.890152396Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f9448c8f5-ck2sf,Uid:35b780d0-9cdb-470f-8c65-ede949b6d595,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"de98cbc1e4819acd2bd3d6ee73314c7236f249c5da865d2e6aa054687d17918b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:22.890984 kubelet[2718]: E0813 01:48:22.890312 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de98cbc1e4819acd2bd3d6ee73314c7236f249c5da865d2e6aa054687d17918b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:22.890984 kubelet[2718]: E0813 01:48:22.890348 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de98cbc1e4819acd2bd3d6ee73314c7236f249c5da865d2e6aa054687d17918b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:48:22.890984 kubelet[2718]: E0813 01:48:22.890367 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"de98cbc1e4819acd2bd3d6ee73314c7236f249c5da865d2e6aa054687d17918b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:48:22.890984 kubelet[2718]: E0813 01:48:22.890443 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f9448c8f5-ck2sf_calico-system(35b780d0-9cdb-470f-8c65-ede949b6d595)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f9448c8f5-ck2sf_calico-system(35b780d0-9cdb-470f-8c65-ede949b6d595)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"de98cbc1e4819acd2bd3d6ee73314c7236f249c5da865d2e6aa054687d17918b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" podUID="35b780d0-9cdb-470f-8c65-ede949b6d595" Aug 13 01:48:23.780056 kubelet[2718]: E0813 01:48:23.779999 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:48:23.780888 kubelet[2718]: E0813 01:48:23.780519 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer 
sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1136792411: write /var/lib/containerd/tmpmounts/containerd-mount1136792411/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-qgskr" podUID="9adfb11c-9977-45e9-b78f-00f4995e46c5" Aug 13 01:48:23.781193 containerd[1559]: time="2025-08-13T01:48:23.781139868Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j47vf,Uid:0ba3c042-02d2-446d-bb82-0965919f2962,Namespace:kube-system,Attempt:0,}" Aug 13 01:48:23.836292 containerd[1559]: time="2025-08-13T01:48:23.836234359Z" level=error msg="Failed to destroy network for sandbox \"f9cfffca3a001078982e7ff6e97826f48e5e9e176c538655c6afec6e835c5827\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:23.840003 systemd[1]: run-netns-cni\x2d51b4a796\x2d3372\x2d7d56\x2dab29\x2dbbaf2bfac57d.mount: Deactivated successfully. Aug 13 01:48:23.841085 containerd[1559]: time="2025-08-13T01:48:23.840926711Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j47vf,Uid:0ba3c042-02d2-446d-bb82-0965919f2962,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9cfffca3a001078982e7ff6e97826f48e5e9e176c538655c6afec6e835c5827\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:23.841712 kubelet[2718]: E0813 01:48:23.841680 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9cfffca3a001078982e7ff6e97826f48e5e9e176c538655c6afec6e835c5827\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:23.842334 kubelet[2718]: E0813 01:48:23.841727 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9cfffca3a001078982e7ff6e97826f48e5e9e176c538655c6afec6e835c5827\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:48:23.842334 kubelet[2718]: E0813 01:48:23.841746 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f9cfffca3a001078982e7ff6e97826f48e5e9e176c538655c6afec6e835c5827\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:48:23.842334 kubelet[2718]: E0813 01:48:23.841780 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-j47vf_kube-system(0ba3c042-02d2-446d-bb82-0965919f2962)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-j47vf_kube-system(0ba3c042-02d2-446d-bb82-0965919f2962)\\\": rpc error: code = Unknown desc = failed to setup network 
for sandbox \\\"f9cfffca3a001078982e7ff6e97826f48e5e9e176c538655c6afec6e835c5827\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-j47vf" podUID="0ba3c042-02d2-446d-bb82-0965919f2962" Aug 13 01:48:24.907985 systemd[1]: Started sshd@24-172.232.7.133:22-147.75.109.163:44338.service - OpenSSH per-connection server daemon (147.75.109.163:44338). Aug 13 01:48:25.257253 sshd[5491]: Accepted publickey for core from 147.75.109.163 port 44338 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:48:25.258910 sshd-session[5491]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:48:25.264227 systemd-logind[1539]: New session 25 of user core. Aug 13 01:48:25.273981 systemd[1]: Started session-25.scope - Session 25 of User core. Aug 13 01:48:25.561509 sshd[5493]: Connection closed by 147.75.109.163 port 44338 Aug 13 01:48:25.562203 sshd-session[5491]: pam_unix(sshd:session): session closed for user core Aug 13 01:48:25.566398 systemd-logind[1539]: Session 25 logged out. Waiting for processes to exit. Aug 13 01:48:25.567137 systemd[1]: sshd@24-172.232.7.133:22-147.75.109.163:44338.service: Deactivated successfully. Aug 13 01:48:25.569145 systemd[1]: session-25.scope: Deactivated successfully. Aug 13 01:48:25.571782 systemd-logind[1539]: Removed session 25. Aug 13 01:48:30.623603 systemd[1]: Started sshd@25-172.232.7.133:22-147.75.109.163:32920.service - OpenSSH per-connection server daemon (147.75.109.163:32920). Aug 13 01:48:30.960338 sshd[5507]: Accepted publickey for core from 147.75.109.163 port 32920 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:48:30.962013 sshd-session[5507]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:48:30.967677 systemd-logind[1539]: New session 26 of user core. Aug 13 01:48:30.975981 systemd[1]: Started session-26.scope - Session 26 of User core. Aug 13 01:48:31.273993 sshd[5509]: Connection closed by 147.75.109.163 port 32920 Aug 13 01:48:31.275544 sshd-session[5507]: pam_unix(sshd:session): session closed for user core Aug 13 01:48:31.280339 systemd-logind[1539]: Session 26 logged out. Waiting for processes to exit. Aug 13 01:48:31.281058 systemd[1]: sshd@25-172.232.7.133:22-147.75.109.163:32920.service: Deactivated successfully. Aug 13 01:48:31.283573 systemd[1]: session-26.scope: Deactivated successfully. Aug 13 01:48:31.286368 systemd-logind[1539]: Removed session 26. 
Aug 13 01:48:34.783830 containerd[1559]: time="2025-08-13T01:48:34.783750917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbqt2,Uid:9674627f-b072-4139-b18d-fdf07891e1e2,Namespace:calico-system,Attempt:0,}" Aug 13 01:48:34.784772 containerd[1559]: time="2025-08-13T01:48:34.784629386Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f9448c8f5-ck2sf,Uid:35b780d0-9cdb-470f-8c65-ede949b6d595,Namespace:calico-system,Attempt:0,}" Aug 13 01:48:34.785786 kubelet[2718]: E0813 01:48:34.785743 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-node\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node:v3.30.2\\\": failed to extract layer sha256:0df84c1170997f4f9d64efde556fda4ea5e552df4ec6d03b6cdd975940ab2b14: mount callback failed on /var/lib/containerd/tmpmounts/containerd-mount1136792411: write /var/lib/containerd/tmpmounts/containerd-mount1136792411/usr/bin/calico-node: no space left on device\"" pod="calico-system/calico-node-qgskr" podUID="9adfb11c-9977-45e9-b78f-00f4995e46c5" Aug 13 01:48:34.865675 containerd[1559]: time="2025-08-13T01:48:34.865623079Z" level=error msg="Failed to destroy network for sandbox \"e80eb84cbf3ea45031ba25c06241115950479f8e420c5193de708e91b2798d55\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:34.868750 systemd[1]: run-netns-cni\x2d1766a020\x2dd716\x2dfdc0\x2d0a28\x2d2e794c5b9103.mount: Deactivated successfully. Aug 13 01:48:34.870463 containerd[1559]: time="2025-08-13T01:48:34.867972185Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f9448c8f5-ck2sf,Uid:35b780d0-9cdb-470f-8c65-ede949b6d595,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e80eb84cbf3ea45031ba25c06241115950479f8e420c5193de708e91b2798d55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:34.871051 kubelet[2718]: E0813 01:48:34.869104 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e80eb84cbf3ea45031ba25c06241115950479f8e420c5193de708e91b2798d55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:34.871051 kubelet[2718]: E0813 01:48:34.869171 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e80eb84cbf3ea45031ba25c06241115950479f8e420c5193de708e91b2798d55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:48:34.871051 kubelet[2718]: E0813 01:48:34.869191 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"e80eb84cbf3ea45031ba25c06241115950479f8e420c5193de708e91b2798d55\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:48:34.871051 kubelet[2718]: E0813 01:48:34.869231 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7f9448c8f5-ck2sf_calico-system(35b780d0-9cdb-470f-8c65-ede949b6d595)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f9448c8f5-ck2sf_calico-system(35b780d0-9cdb-470f-8c65-ede949b6d595)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e80eb84cbf3ea45031ba25c06241115950479f8e420c5193de708e91b2798d55\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" podUID="35b780d0-9cdb-470f-8c65-ede949b6d595" Aug 13 01:48:34.882846 containerd[1559]: time="2025-08-13T01:48:34.882809598Z" level=error msg="Failed to destroy network for sandbox \"717183539fc24b74082cc19efaf835b795c41990ed85e478cbbdc79403aedc0c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:34.884835 systemd[1]: run-netns-cni\x2d32c9c359\x2d16f6\x2dbc92\x2d6ca6\x2d47718c597ffd.mount: Deactivated successfully. Aug 13 01:48:34.885977 containerd[1559]: time="2025-08-13T01:48:34.885929552Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbqt2,Uid:9674627f-b072-4139-b18d-fdf07891e1e2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"717183539fc24b74082cc19efaf835b795c41990ed85e478cbbdc79403aedc0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:34.886154 kubelet[2718]: E0813 01:48:34.886120 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"717183539fc24b74082cc19efaf835b795c41990ed85e478cbbdc79403aedc0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:34.886215 kubelet[2718]: E0813 01:48:34.886157 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"717183539fc24b74082cc19efaf835b795c41990ed85e478cbbdc79403aedc0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:48:34.886215 kubelet[2718]: E0813 01:48:34.886174 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"717183539fc24b74082cc19efaf835b795c41990ed85e478cbbdc79403aedc0c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:48:34.886215 kubelet[2718]: E0813 01:48:34.886204 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dbqt2_calico-system(9674627f-b072-4139-b18d-fdf07891e1e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dbqt2_calico-system(9674627f-b072-4139-b18d-fdf07891e1e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"717183539fc24b74082cc19efaf835b795c41990ed85e478cbbdc79403aedc0c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dbqt2" podUID="9674627f-b072-4139-b18d-fdf07891e1e2" Aug 13 01:48:35.779876 kubelet[2718]: E0813 01:48:35.779795 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:48:35.780640 containerd[1559]: time="2025-08-13T01:48:35.780584817Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j47vf,Uid:0ba3c042-02d2-446d-bb82-0965919f2962,Namespace:kube-system,Attempt:0,}" Aug 13 01:48:35.847078 containerd[1559]: time="2025-08-13T01:48:35.847012677Z" level=error msg="Failed to destroy network for sandbox \"fe249a2f7981126de72e78d3c7cf51a662a85fa88b8dbd305670b2203fef7f92\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:35.851094 containerd[1559]: time="2025-08-13T01:48:35.850889080Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j47vf,Uid:0ba3c042-02d2-446d-bb82-0965919f2962,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe249a2f7981126de72e78d3c7cf51a662a85fa88b8dbd305670b2203fef7f92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:35.850994 systemd[1]: run-netns-cni\x2de19c68fa\x2d50d7\x2d696a\x2d436e\x2d3fe8659e43e9.mount: Deactivated successfully. 
Aug 13 01:48:35.851543 kubelet[2718]: E0813 01:48:35.851189 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe249a2f7981126de72e78d3c7cf51a662a85fa88b8dbd305670b2203fef7f92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:35.851543 kubelet[2718]: E0813 01:48:35.851435 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe249a2f7981126de72e78d3c7cf51a662a85fa88b8dbd305670b2203fef7f92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:48:35.851543 kubelet[2718]: E0813 01:48:35.851461 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe249a2f7981126de72e78d3c7cf51a662a85fa88b8dbd305670b2203fef7f92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:48:35.852093 kubelet[2718]: E0813 01:48:35.852053 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-j47vf_kube-system(0ba3c042-02d2-446d-bb82-0965919f2962)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-j47vf_kube-system(0ba3c042-02d2-446d-bb82-0965919f2962)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe249a2f7981126de72e78d3c7cf51a662a85fa88b8dbd305670b2203fef7f92\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-j47vf" podUID="0ba3c042-02d2-446d-bb82-0965919f2962" Aug 13 01:48:36.337262 systemd[1]: Started sshd@26-172.232.7.133:22-147.75.109.163:32928.service - OpenSSH per-connection server daemon (147.75.109.163:32928). Aug 13 01:48:36.686701 sshd[5603]: Accepted publickey for core from 147.75.109.163 port 32928 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:48:36.689312 sshd-session[5603]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:48:36.695904 systemd-logind[1539]: New session 27 of user core. Aug 13 01:48:36.701144 systemd[1]: Started session-27.scope - Session 27 of User core. 
Aug 13 01:48:36.780106 kubelet[2718]: E0813 01:48:36.780080 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:48:36.781671 containerd[1559]: time="2025-08-13T01:48:36.781642799Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dfjz8,Uid:dafdbb28-0754-4303-98bd-08c77ee94f1a,Namespace:kube-system,Attempt:0,}" Aug 13 01:48:36.843187 containerd[1559]: time="2025-08-13T01:48:36.843074090Z" level=error msg="Failed to destroy network for sandbox \"971e4733dc6db52a1ace1d35364e6abac5dc3780eafd64ddca0275e8cb023d41\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:36.846333 systemd[1]: run-netns-cni\x2d35e707f9\x2d4421\x2daef3\x2df390\x2d680d83b5c725.mount: Deactivated successfully. Aug 13 01:48:36.847128 containerd[1559]: time="2025-08-13T01:48:36.847064833Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dfjz8,Uid:dafdbb28-0754-4303-98bd-08c77ee94f1a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"971e4733dc6db52a1ace1d35364e6abac5dc3780eafd64ddca0275e8cb023d41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:36.847453 kubelet[2718]: E0813 01:48:36.847279 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"971e4733dc6db52a1ace1d35364e6abac5dc3780eafd64ddca0275e8cb023d41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:36.847453 kubelet[2718]: E0813 01:48:36.847324 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"971e4733dc6db52a1ace1d35364e6abac5dc3780eafd64ddca0275e8cb023d41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:48:36.847453 kubelet[2718]: E0813 01:48:36.847344 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"971e4733dc6db52a1ace1d35364e6abac5dc3780eafd64ddca0275e8cb023d41\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:48:36.847453 kubelet[2718]: E0813 01:48:36.847378 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-dfjz8_kube-system(dafdbb28-0754-4303-98bd-08c77ee94f1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-dfjz8_kube-system(dafdbb28-0754-4303-98bd-08c77ee94f1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"971e4733dc6db52a1ace1d35364e6abac5dc3780eafd64ddca0275e8cb023d41\\\": plugin 
type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dfjz8" podUID="dafdbb28-0754-4303-98bd-08c77ee94f1a" Aug 13 01:48:37.001043 sshd[5605]: Connection closed by 147.75.109.163 port 32928 Aug 13 01:48:37.003765 sshd-session[5603]: pam_unix(sshd:session): session closed for user core Aug 13 01:48:37.010765 systemd-logind[1539]: Session 27 logged out. Waiting for processes to exit. Aug 13 01:48:37.011610 systemd[1]: sshd@26-172.232.7.133:22-147.75.109.163:32928.service: Deactivated successfully. Aug 13 01:48:37.016051 systemd[1]: session-27.scope: Deactivated successfully. Aug 13 01:48:37.018507 systemd-logind[1539]: Removed session 27. Aug 13 01:48:42.069436 systemd[1]: Started sshd@27-172.232.7.133:22-147.75.109.163:48258.service - OpenSSH per-connection server daemon (147.75.109.163:48258). Aug 13 01:48:42.414014 sshd[5644]: Accepted publickey for core from 147.75.109.163 port 48258 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:48:42.415776 sshd-session[5644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:48:42.421200 systemd-logind[1539]: New session 28 of user core. Aug 13 01:48:42.425954 systemd[1]: Started session-28.scope - Session 28 of User core. Aug 13 01:48:42.719494 sshd[5646]: Connection closed by 147.75.109.163 port 48258 Aug 13 01:48:42.720270 sshd-session[5644]: pam_unix(sshd:session): session closed for user core Aug 13 01:48:42.724794 systemd-logind[1539]: Session 28 logged out. Waiting for processes to exit. Aug 13 01:48:42.725393 systemd[1]: sshd@27-172.232.7.133:22-147.75.109.163:48258.service: Deactivated successfully. Aug 13 01:48:42.727627 systemd[1]: session-28.scope: Deactivated successfully. Aug 13 01:48:42.730004 systemd-logind[1539]: Removed session 28. Aug 13 01:48:46.780162 kubelet[2718]: E0813 01:48:46.779904 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:48:46.781137 containerd[1559]: time="2025-08-13T01:48:46.780781609Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j47vf,Uid:0ba3c042-02d2-446d-bb82-0965919f2962,Namespace:kube-system,Attempt:0,}" Aug 13 01:48:46.830722 containerd[1559]: time="2025-08-13T01:48:46.830673568Z" level=error msg="Failed to destroy network for sandbox \"3a68362932128364609e85e46885a119e9d43d0cecb17611cb6155551b0f645c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:46.832882 containerd[1559]: time="2025-08-13T01:48:46.832813156Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j47vf,Uid:0ba3c042-02d2-446d-bb82-0965919f2962,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a68362932128364609e85e46885a119e9d43d0cecb17611cb6155551b0f645c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:46.833764 systemd[1]: run-netns-cni\x2d99ece9c3\x2d085f\x2dea0c\x2df095\x2da61c8499dba7.mount: Deactivated successfully. 
Aug 13 01:48:46.834096 kubelet[2718]: E0813 01:48:46.834033 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a68362932128364609e85e46885a119e9d43d0cecb17611cb6155551b0f645c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:46.834151 kubelet[2718]: E0813 01:48:46.834115 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a68362932128364609e85e46885a119e9d43d0cecb17611cb6155551b0f645c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:48:46.834179 kubelet[2718]: E0813 01:48:46.834158 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"3a68362932128364609e85e46885a119e9d43d0cecb17611cb6155551b0f645c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:48:46.834252 kubelet[2718]: E0813 01:48:46.834197 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-j47vf_kube-system(0ba3c042-02d2-446d-bb82-0965919f2962)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-j47vf_kube-system(0ba3c042-02d2-446d-bb82-0965919f2962)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"3a68362932128364609e85e46885a119e9d43d0cecb17611cb6155551b0f645c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-j47vf" podUID="0ba3c042-02d2-446d-bb82-0965919f2962" Aug 13 01:48:47.783251 systemd[1]: Started sshd@28-172.232.7.133:22-147.75.109.163:48272.service - OpenSSH per-connection server daemon (147.75.109.163:48272). Aug 13 01:48:48.131817 sshd[5683]: Accepted publickey for core from 147.75.109.163 port 48272 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:48:48.132298 sshd-session[5683]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:48:48.141681 systemd-logind[1539]: New session 29 of user core. Aug 13 01:48:48.147015 systemd[1]: Started session-29.scope - Session 29 of User core. Aug 13 01:48:48.436428 sshd[5686]: Connection closed by 147.75.109.163 port 48272 Aug 13 01:48:48.437086 sshd-session[5683]: pam_unix(sshd:session): session closed for user core Aug 13 01:48:48.442413 systemd[1]: sshd@28-172.232.7.133:22-147.75.109.163:48272.service: Deactivated successfully. Aug 13 01:48:48.446481 systemd[1]: session-29.scope: Deactivated successfully. Aug 13 01:48:48.448305 systemd-logind[1539]: Session 29 logged out. Waiting for processes to exit. Aug 13 01:48:48.451422 systemd-logind[1539]: Removed session 29. 
Aug 13 01:48:48.781760 containerd[1559]: time="2025-08-13T01:48:48.781565750Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\"" Aug 13 01:48:49.780300 kubelet[2718]: E0813 01:48:49.780211 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:48:49.783499 containerd[1559]: time="2025-08-13T01:48:49.783442626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbqt2,Uid:9674627f-b072-4139-b18d-fdf07891e1e2,Namespace:calico-system,Attempt:0,}" Aug 13 01:48:49.783891 containerd[1559]: time="2025-08-13T01:48:49.783442626Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dfjz8,Uid:dafdbb28-0754-4303-98bd-08c77ee94f1a,Namespace:kube-system,Attempt:0,}" Aug 13 01:48:49.875597 containerd[1559]: time="2025-08-13T01:48:49.875547699Z" level=error msg="Failed to destroy network for sandbox \"a67415687c0d3305da5192d484524f9480346f188469f53090ffcaddbea83408\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:49.879423 systemd[1]: run-netns-cni\x2db73f3110\x2d1319\x2d5b54\x2d5fed\x2d0d7053b3613e.mount: Deactivated successfully. Aug 13 01:48:49.880113 containerd[1559]: time="2025-08-13T01:48:49.879597801Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dfjz8,Uid:dafdbb28-0754-4303-98bd-08c77ee94f1a,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"a67415687c0d3305da5192d484524f9480346f188469f53090ffcaddbea83408\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:49.882199 kubelet[2718]: E0813 01:48:49.881884 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a67415687c0d3305da5192d484524f9480346f188469f53090ffcaddbea83408\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:49.882199 kubelet[2718]: E0813 01:48:49.881972 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a67415687c0d3305da5192d484524f9480346f188469f53090ffcaddbea83408\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:48:49.882199 kubelet[2718]: E0813 01:48:49.881993 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"a67415687c0d3305da5192d484524f9480346f188469f53090ffcaddbea83408\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:48:49.882199 kubelet[2718]: E0813 01:48:49.882033 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" 
for \"coredns-668d6bf9bc-dfjz8_kube-system(dafdbb28-0754-4303-98bd-08c77ee94f1a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-dfjz8_kube-system(dafdbb28-0754-4303-98bd-08c77ee94f1a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"a67415687c0d3305da5192d484524f9480346f188469f53090ffcaddbea83408\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dfjz8" podUID="dafdbb28-0754-4303-98bd-08c77ee94f1a" Aug 13 01:48:49.890429 containerd[1559]: time="2025-08-13T01:48:49.890390866Z" level=error msg="Failed to destroy network for sandbox \"d1c0eafb44b228cb7a5837062fbf0e916f9fc31ea7c78e78b1e8689d0e62e576\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:49.893395 containerd[1559]: time="2025-08-13T01:48:49.893361284Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbqt2,Uid:9674627f-b072-4139-b18d-fdf07891e1e2,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1c0eafb44b228cb7a5837062fbf0e916f9fc31ea7c78e78b1e8689d0e62e576\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:49.893564 kubelet[2718]: E0813 01:48:49.893506 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1c0eafb44b228cb7a5837062fbf0e916f9fc31ea7c78e78b1e8689d0e62e576\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:49.893564 kubelet[2718]: E0813 01:48:49.893545 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1c0eafb44b228cb7a5837062fbf0e916f9fc31ea7c78e78b1e8689d0e62e576\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:48:49.893891 kubelet[2718]: E0813 01:48:49.893563 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d1c0eafb44b228cb7a5837062fbf0e916f9fc31ea7c78e78b1e8689d0e62e576\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:48:49.893891 kubelet[2718]: E0813 01:48:49.893595 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-dbqt2_calico-system(9674627f-b072-4139-b18d-fdf07891e1e2)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-dbqt2_calico-system(9674627f-b072-4139-b18d-fdf07891e1e2)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"d1c0eafb44b228cb7a5837062fbf0e916f9fc31ea7c78e78b1e8689d0e62e576\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-dbqt2" podUID="9674627f-b072-4139-b18d-fdf07891e1e2" Aug 13 01:48:49.894584 systemd[1]: run-netns-cni\x2d18b7ff8f\x2d560f\x2dbb22\x2d8e8f\x2df76713ed3225.mount: Deactivated successfully. Aug 13 01:48:50.780712 containerd[1559]: time="2025-08-13T01:48:50.780390108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f9448c8f5-ck2sf,Uid:35b780d0-9cdb-470f-8c65-ede949b6d595,Namespace:calico-system,Attempt:0,}" Aug 13 01:48:50.866953 containerd[1559]: time="2025-08-13T01:48:50.866895276Z" level=error msg="Failed to destroy network for sandbox \"bf4a7a7a1c9ad9f090ef2c4133d816670a78ee524e3e99dffbd7898689036a0d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:50.869531 systemd[1]: run-netns-cni\x2d5bbd9953\x2ddefb\x2dcc88\x2d443c\x2d82afa1ba6382.mount: Deactivated successfully. Aug 13 01:48:50.870847 containerd[1559]: time="2025-08-13T01:48:50.870759052Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f9448c8f5-ck2sf,Uid:35b780d0-9cdb-470f-8c65-ede949b6d595,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf4a7a7a1c9ad9f090ef2c4133d816670a78ee524e3e99dffbd7898689036a0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:50.871194 kubelet[2718]: E0813 01:48:50.871149 2718 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf4a7a7a1c9ad9f090ef2c4133d816670a78ee524e3e99dffbd7898689036a0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Aug 13 01:48:50.873981 kubelet[2718]: E0813 01:48:50.871337 2718 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf4a7a7a1c9ad9f090ef2c4133d816670a78ee524e3e99dffbd7898689036a0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:48:50.873981 kubelet[2718]: E0813 01:48:50.871364 2718 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"bf4a7a7a1c9ad9f090ef2c4133d816670a78ee524e3e99dffbd7898689036a0d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:48:50.873981 kubelet[2718]: E0813 01:48:50.871427 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for 
\"calico-kube-controllers-7f9448c8f5-ck2sf_calico-system(35b780d0-9cdb-470f-8c65-ede949b6d595)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7f9448c8f5-ck2sf_calico-system(35b780d0-9cdb-470f-8c65-ede949b6d595)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"bf4a7a7a1c9ad9f090ef2c4133d816670a78ee524e3e99dffbd7898689036a0d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" podUID="35b780d0-9cdb-470f-8c65-ede949b6d595" Aug 13 01:48:52.789798 kubelet[2718]: E0813 01:48:52.789739 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:48:53.269687 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1109053932.mount: Deactivated successfully. Aug 13 01:48:53.305801 containerd[1559]: time="2025-08-13T01:48:53.305760756Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:48:53.306926 containerd[1559]: time="2025-08-13T01:48:53.306740262Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.2: active requests=0, bytes read=158500163" Aug 13 01:48:53.307507 containerd[1559]: time="2025-08-13T01:48:53.307477453Z" level=info msg="ImageCreate event name:\"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:48:53.308957 containerd[1559]: time="2025-08-13T01:48:53.308908523Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:48:53.309828 containerd[1559]: time="2025-08-13T01:48:53.309372427Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.2\" with image id \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e94d49349cc361ef2216d27dda4a097278984d778279f66e79b0616c827c6760\", size \"158500025\" in 4.527183135s" Aug 13 01:48:53.309828 containerd[1559]: time="2025-08-13T01:48:53.309401337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.2\" returns image reference \"sha256:cc52550d767f73458fee2ee68db9db5de30d175e8fa4569ebdb43610127b6d20\"" Aug 13 01:48:53.324047 containerd[1559]: time="2025-08-13T01:48:53.323977668Z" level=info msg="CreateContainer within sandbox \"cf8808596e9944d86c6c06b036bdf15dc271c9e20b2942d501808df66f92f821\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Aug 13 01:48:53.334077 containerd[1559]: time="2025-08-13T01:48:53.334046711Z" level=info msg="Container c417f2abc38fb1e7f571e895b4832cd8d84c96e5d9cac2ded4c0b2b1c7bebb21: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:48:53.345398 containerd[1559]: time="2025-08-13T01:48:53.345343748Z" level=info msg="CreateContainer within sandbox \"cf8808596e9944d86c6c06b036bdf15dc271c9e20b2942d501808df66f92f821\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"c417f2abc38fb1e7f571e895b4832cd8d84c96e5d9cac2ded4c0b2b1c7bebb21\"" Aug 13 01:48:53.346197 
containerd[1559]: time="2025-08-13T01:48:53.346149396Z" level=info msg="StartContainer for \"c417f2abc38fb1e7f571e895b4832cd8d84c96e5d9cac2ded4c0b2b1c7bebb21\"" Aug 13 01:48:53.347449 containerd[1559]: time="2025-08-13T01:48:53.347416989Z" level=info msg="connecting to shim c417f2abc38fb1e7f571e895b4832cd8d84c96e5d9cac2ded4c0b2b1c7bebb21" address="unix:///run/containerd/s/622e69d3c921a9a0485f0e4b2ee545d9fd40dfe3138fc5878b4d17fe298e3d44" protocol=ttrpc version=3 Aug 13 01:48:53.376030 systemd[1]: Started cri-containerd-c417f2abc38fb1e7f571e895b4832cd8d84c96e5d9cac2ded4c0b2b1c7bebb21.scope - libcontainer container c417f2abc38fb1e7f571e895b4832cd8d84c96e5d9cac2ded4c0b2b1c7bebb21. Aug 13 01:48:53.432125 containerd[1559]: time="2025-08-13T01:48:53.432005348Z" level=info msg="StartContainer for \"c417f2abc38fb1e7f571e895b4832cd8d84c96e5d9cac2ded4c0b2b1c7bebb21\" returns successfully" Aug 13 01:48:53.504074 systemd[1]: Started sshd@29-172.232.7.133:22-147.75.109.163:51574.service - OpenSSH per-connection server daemon (147.75.109.163:51574). Aug 13 01:48:53.514717 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Aug 13 01:48:53.514788 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Aug 13 01:48:53.861421 sshd[5834]: Accepted publickey for core from 147.75.109.163 port 51574 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:48:53.863013 sshd-session[5834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:48:53.869726 systemd-logind[1539]: New session 30 of user core. Aug 13 01:48:53.877328 systemd[1]: Started session-30.scope - Session 30 of User core. Aug 13 01:48:54.176776 sshd[5855]: Connection closed by 147.75.109.163 port 51574 Aug 13 01:48:54.177053 sshd-session[5834]: pam_unix(sshd:session): session closed for user core Aug 13 01:48:54.181456 systemd[1]: sshd@29-172.232.7.133:22-147.75.109.163:51574.service: Deactivated successfully. Aug 13 01:48:54.183461 systemd[1]: session-30.scope: Deactivated successfully. Aug 13 01:48:54.185048 systemd-logind[1539]: Session 30 logged out. Waiting for processes to exit. Aug 13 01:48:54.186719 systemd-logind[1539]: Removed session 30. 
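The lines above record the calico/node image pull finishing and a container being created and started inside the existing sandbox, with containerd connecting to a runtime shim over ttrpc. In the cluster this is driven by the kubelet through CRI; the sketch below shows an equivalent pull/create/start sequence against the same containerd instance using its Go client, as a rough illustration only. The socket path, the k8s.io namespace, and the container/snapshot names are assumptions, and the import paths assume the containerd 1.x client packages.

```go
// Hedged sketch of the pull -> create -> start sequence the log records,
// using containerd's Go client directly instead of going through CRI.
package main

import (
	"context"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/cio"
	"github.com/containerd/containerd/namespaces"
	"github.com/containerd/containerd/oci"
)

func main() {
	// Assumed default socket; "k8s.io" is the namespace CRI pods live in.
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	// Pull the image the log shows being fetched.
	image, err := client.Pull(ctx, "ghcr.io/flatcar/calico/node:v3.30.2", containerd.WithPullUnpack)
	if err != nil {
		log.Fatal(err)
	}

	// Create and start a container from it. Names here are hypothetical; the
	// kubelet does this inside an existing pod sandbox rather than standalone.
	container, err := client.NewContainer(ctx, "calico-node-demo",
		containerd.WithNewSnapshot("calico-node-demo-snap", image),
		containerd.WithNewSpec(oci.WithImageConfig(image)),
	)
	if err != nil {
		log.Fatal(err)
	}
	defer container.Delete(ctx, containerd.WithSnapshotCleanup)

	task, err := container.NewTask(ctx, cio.NewCreator(cio.WithStdio))
	if err != nil {
		log.Fatal(err)
	}
	defer task.Delete(ctx)

	if err := task.Start(ctx); err != nil {
		log.Fatal(err)
	}
	log.Printf("started %s as pid %d", container.ID(), task.Pid())
}
```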
Aug 13 01:48:54.352728 kubelet[2718]: I0813 01:48:54.352681 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-qgskr" podStartSLOduration=2.336204924 podStartE2EDuration="3m16.352668305s" podCreationTimestamp="2025-08-13 01:45:38 +0000 UTC" firstStartedPulling="2025-08-13 01:45:39.293943952 +0000 UTC m=+16.624685726" lastFinishedPulling="2025-08-13 01:48:53.310407343 +0000 UTC m=+210.641149107" observedRunningTime="2025-08-13 01:48:54.345246344 +0000 UTC m=+211.675988108" watchObservedRunningTime="2025-08-13 01:48:54.352668305 +0000 UTC m=+211.683410079" Aug 13 01:48:54.427110 containerd[1559]: time="2025-08-13T01:48:54.427015265Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c417f2abc38fb1e7f571e895b4832cd8d84c96e5d9cac2ded4c0b2b1c7bebb21\" id:\"a5371dbf38039ce173bb7b1c7cc6478c1bf180ee2e48ee3380f34a00ff713945\" pid:5879 exit_status:1 exited_at:{seconds:1755049734 nanos:426710399}" Aug 13 01:48:55.451798 containerd[1559]: time="2025-08-13T01:48:55.451744000Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c417f2abc38fb1e7f571e895b4832cd8d84c96e5d9cac2ded4c0b2b1c7bebb21\" id:\"52fb7d9ea879ac8c2c2525406c1724989435c169102e9022cd14295dc9c01dd9\" pid:6023 exit_status:1 exited_at:{seconds:1755049735 nanos:451291416}" Aug 13 01:48:55.539781 systemd-networkd[1470]: vxlan.calico: Link UP Aug 13 01:48:55.539789 systemd-networkd[1470]: vxlan.calico: Gained carrier Aug 13 01:48:57.464732 systemd-networkd[1470]: vxlan.calico: Gained IPv6LL Aug 13 01:48:58.628553 kubelet[2718]: I0813 01:48:58.628525 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:48:58.628553 kubelet[2718]: I0813 01:48:58.628564 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:48:58.631025 kubelet[2718]: I0813 01:48:58.631006 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:48:58.632592 kubelet[2718]: I0813 01:48:58.632372 2718 image_gc_manager.go:487] "Removing image to free bytes" imageID="sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93" size=25052538 runtimeHandler="" Aug 13 01:48:58.632822 containerd[1559]: time="2025-08-13T01:48:58.632749163Z" level=info msg="RemoveImage \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 01:48:58.634105 containerd[1559]: time="2025-08-13T01:48:58.634069275Z" level=info msg="ImageDelete event name:\"quay.io/tigera/operator:v1.38.3\"" Aug 13 01:48:58.634836 containerd[1559]: time="2025-08-13T01:48:58.634797556Z" level=info msg="ImageDelete event name:\"quay.io/tigera/operator@sha256:dbf1bad0def7b5955dc8e4aeee96e23ead0bc5822f6872518e685cd0ed484121\"" Aug 13 01:48:58.635276 containerd[1559]: time="2025-08-13T01:48:58.635257750Z" level=info msg="RemoveImage \"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\" returns successfully" Aug 13 01:48:58.635502 containerd[1559]: time="2025-08-13T01:48:58.635470237Z" level=info msg="ImageDelete event name:\"sha256:8bde16470b09d1963e19456806d73180c9778a6c2b3c1fda2335c67c1cd4ce93\"" Aug 13 01:48:58.645745 kubelet[2718]: I0813 01:48:58.645727 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:48:58.645850 kubelet[2718]: I0813 01:48:58.645823 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" 
pods=["calico-system/calico-kube-controllers-7f9448c8f5-ck2sf","kube-system/coredns-668d6bf9bc-dfjz8","kube-system/coredns-668d6bf9bc-j47vf","calico-system/csi-node-driver-dbqt2","calico-system/calico-typha-b5b9867b4-p6jwz","calico-system/calico-node-qgskr","kube-system/kube-controller-manager-172-232-7-133","kube-system/kube-proxy-fw2dv","kube-system/kube-apiserver-172-232-7-133","kube-system/kube-scheduler-172-232-7-133"] Aug 13 01:48:58.645971 kubelet[2718]: E0813 01:48:58.645885 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:48:58.645971 kubelet[2718]: E0813 01:48:58.645895 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:48:58.645971 kubelet[2718]: E0813 01:48:58.645902 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:48:58.645971 kubelet[2718]: E0813 01:48:58.645908 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:48:58.645971 kubelet[2718]: E0813 01:48:58.645918 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-b5b9867b4-p6jwz" Aug 13 01:48:58.645971 kubelet[2718]: E0813 01:48:58.645928 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-qgskr" Aug 13 01:48:58.645971 kubelet[2718]: E0813 01:48:58.645936 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-133" Aug 13 01:48:58.645971 kubelet[2718]: E0813 01:48:58.645944 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-fw2dv" Aug 13 01:48:58.645971 kubelet[2718]: E0813 01:48:58.645952 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-133" Aug 13 01:48:58.645971 kubelet[2718]: E0813 01:48:58.645960 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-133" Aug 13 01:48:58.645971 kubelet[2718]: I0813 01:48:58.645969 2718 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:48:58.779528 kubelet[2718]: E0813 01:48:58.779202 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:48:59.242071 systemd[1]: Started sshd@30-172.232.7.133:22-147.75.109.163:39686.service - OpenSSH per-connection server daemon (147.75.109.163:39686). Aug 13 01:48:59.583007 sshd[6110]: Accepted publickey for core from 147.75.109.163 port 39686 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:48:59.584349 sshd-session[6110]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:48:59.589565 systemd-logind[1539]: New session 31 of user core. Aug 13 01:48:59.594969 systemd[1]: Started session-31.scope - Session 31 of User core. 
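The eviction-manager lines above show the kubelet trying to reclaim ephemeral storage: it ranks every pod on the node, finds each one critical (static control-plane pods, plus pods running at system-node-critical or system-cluster-critical priority), and gives up without evicting anything. A toy Go sketch of that decision follows; it is not kubelet source, and the pod list is a hand-picked subset of the ranking in the log.

```go
// Toy sketch of the decision the eviction-manager lines record: rank pods,
// refuse to evict any marked critical, and end with "unable to evict any
// pods from the node".
package main

import "fmt"

type pod struct {
	name     string
	critical bool // static/mirror pods or priority >= system-cluster-critical
}

func main() {
	ranked := []pod{
		{"calico-system/calico-kube-controllers-7f9448c8f5-ck2sf", true},
		{"kube-system/coredns-668d6bf9bc-dfjz8", true},
		{"kube-system/kube-apiserver-172-232-7-133", true},
	}
	evicted := 0
	for _, p := range ranked {
		if p.critical {
			fmt.Printf("cannot evict a critical pod: %s\n", p.name)
			continue
		}
		fmt.Printf("evicting %s\n", p.name)
		evicted++
	}
	if evicted == 0 {
		fmt.Println("unable to evict any pods from the node")
	}
}
```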
Aug 13 01:48:59.779777 kubelet[2718]: E0813 01:48:59.779747 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:48:59.780598 containerd[1559]: time="2025-08-13T01:48:59.780572496Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j47vf,Uid:0ba3c042-02d2-446d-bb82-0965919f2962,Namespace:kube-system,Attempt:0,}" Aug 13 01:48:59.914124 sshd[6112]: Connection closed by 147.75.109.163 port 39686 Aug 13 01:48:59.914182 sshd-session[6110]: pam_unix(sshd:session): session closed for user core Aug 13 01:48:59.920540 systemd-logind[1539]: Session 31 logged out. Waiting for processes to exit. Aug 13 01:48:59.921560 systemd[1]: sshd@30-172.232.7.133:22-147.75.109.163:39686.service: Deactivated successfully. Aug 13 01:48:59.924477 systemd[1]: session-31.scope: Deactivated successfully. Aug 13 01:48:59.931732 systemd-networkd[1470]: caliaae5a038286: Link UP Aug 13 01:48:59.932370 systemd-logind[1539]: Removed session 31. Aug 13 01:48:59.933447 systemd-networkd[1470]: caliaae5a038286: Gained carrier Aug 13 01:48:59.955689 containerd[1559]: 2025-08-13 01:48:59.857 [INFO][6122] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--7--133-k8s-coredns--668d6bf9bc--j47vf-eth0 coredns-668d6bf9bc- kube-system 0ba3c042-02d2-446d-bb82-0965919f2962 800 0 2025-08-13 01:45:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-232-7-133 coredns-668d6bf9bc-j47vf eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliaae5a038286 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="66a8cace6347759461047773321f988eb024d806b1b0bb5b2a009bc0d6ed15c5" Namespace="kube-system" Pod="coredns-668d6bf9bc-j47vf" WorkloadEndpoint="172--232--7--133-k8s-coredns--668d6bf9bc--j47vf-" Aug 13 01:48:59.955689 containerd[1559]: 2025-08-13 01:48:59.857 [INFO][6122] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="66a8cace6347759461047773321f988eb024d806b1b0bb5b2a009bc0d6ed15c5" Namespace="kube-system" Pod="coredns-668d6bf9bc-j47vf" WorkloadEndpoint="172--232--7--133-k8s-coredns--668d6bf9bc--j47vf-eth0" Aug 13 01:48:59.955689 containerd[1559]: 2025-08-13 01:48:59.881 [INFO][6133] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="66a8cace6347759461047773321f988eb024d806b1b0bb5b2a009bc0d6ed15c5" HandleID="k8s-pod-network.66a8cace6347759461047773321f988eb024d806b1b0bb5b2a009bc0d6ed15c5" Workload="172--232--7--133-k8s-coredns--668d6bf9bc--j47vf-eth0" Aug 13 01:48:59.955924 containerd[1559]: 2025-08-13 01:48:59.881 [INFO][6133] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="66a8cace6347759461047773321f988eb024d806b1b0bb5b2a009bc0d6ed15c5" HandleID="k8s-pod-network.66a8cace6347759461047773321f988eb024d806b1b0bb5b2a009bc0d6ed15c5" Workload="172--232--7--133-k8s-coredns--668d6bf9bc--j47vf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024f100), Attrs:map[string]string{"namespace":"kube-system", "node":"172-232-7-133", "pod":"coredns-668d6bf9bc-j47vf", "timestamp":"2025-08-13 01:48:59.88146027 +0000 UTC"}, Hostname:"172-232-7-133", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:48:59.955924 containerd[1559]: 2025-08-13 01:48:59.882 [INFO][6133] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:48:59.955924 containerd[1559]: 2025-08-13 01:48:59.882 [INFO][6133] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:48:59.955924 containerd[1559]: 2025-08-13 01:48:59.882 [INFO][6133] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-7-133' Aug 13 01:48:59.955924 containerd[1559]: 2025-08-13 01:48:59.889 [INFO][6133] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.66a8cace6347759461047773321f988eb024d806b1b0bb5b2a009bc0d6ed15c5" host="172-232-7-133" Aug 13 01:48:59.955924 containerd[1559]: 2025-08-13 01:48:59.893 [INFO][6133] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-7-133" Aug 13 01:48:59.955924 containerd[1559]: 2025-08-13 01:48:59.897 [INFO][6133] ipam/ipam.go 511: Trying affinity for 192.168.90.192/26 host="172-232-7-133" Aug 13 01:48:59.955924 containerd[1559]: 2025-08-13 01:48:59.899 [INFO][6133] ipam/ipam.go 158: Attempting to load block cidr=192.168.90.192/26 host="172-232-7-133" Aug 13 01:48:59.955924 containerd[1559]: 2025-08-13 01:48:59.902 [INFO][6133] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.90.192/26 host="172-232-7-133" Aug 13 01:48:59.955924 containerd[1559]: 2025-08-13 01:48:59.902 [INFO][6133] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.90.192/26 handle="k8s-pod-network.66a8cace6347759461047773321f988eb024d806b1b0bb5b2a009bc0d6ed15c5" host="172-232-7-133" Aug 13 01:48:59.956160 containerd[1559]: 2025-08-13 01:48:59.904 [INFO][6133] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.66a8cace6347759461047773321f988eb024d806b1b0bb5b2a009bc0d6ed15c5 Aug 13 01:48:59.956160 containerd[1559]: 2025-08-13 01:48:59.909 [INFO][6133] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.90.192/26 handle="k8s-pod-network.66a8cace6347759461047773321f988eb024d806b1b0bb5b2a009bc0d6ed15c5" host="172-232-7-133" Aug 13 01:48:59.956160 containerd[1559]: 2025-08-13 01:48:59.915 [INFO][6133] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.90.193/26] block=192.168.90.192/26 handle="k8s-pod-network.66a8cace6347759461047773321f988eb024d806b1b0bb5b2a009bc0d6ed15c5" host="172-232-7-133" Aug 13 01:48:59.956160 containerd[1559]: 2025-08-13 01:48:59.915 [INFO][6133] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.90.193/26] handle="k8s-pod-network.66a8cace6347759461047773321f988eb024d806b1b0bb5b2a009bc0d6ed15c5" host="172-232-7-133" Aug 13 01:48:59.956160 containerd[1559]: 2025-08-13 01:48:59.916 [INFO][6133] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
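With calico/node finally running, the IPAM lines above show the plugin taking the host-wide lock, confirming this node's affinity for block 192.168.90.192/26, and claiming 192.168.90.193 for the first coredns pod (the second pod receives .194 shortly after, at 01:49:00). The sketch below is an illustration only, not Calico IPAM code: it walks the same /26 with Go's net/netip, starting after the block base so the output matches the sequence observed in the log.

```go
// Illustration only: enumerate the first few addresses of the block this
// node holds an affinity for. The log shows .193 and .194 being handed to
// the two coredns pods.
package main

import (
	"fmt"
	"net/netip"
)

func main() {
	block := netip.MustParsePrefix("192.168.90.192/26")
	addr := block.Addr().Next() // start after the block base, matching the observed first assignment (.193)
	for i := 0; i < 4 && block.Contains(addr); i++ {
		fmt.Printf("candidate assignment %d: %s/26\n", i+1, addr)
		addr = addr.Next()
	}
}
```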
Aug 13 01:48:59.956160 containerd[1559]: 2025-08-13 01:48:59.916 [INFO][6133] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.90.193/26] IPv6=[] ContainerID="66a8cace6347759461047773321f988eb024d806b1b0bb5b2a009bc0d6ed15c5" HandleID="k8s-pod-network.66a8cace6347759461047773321f988eb024d806b1b0bb5b2a009bc0d6ed15c5" Workload="172--232--7--133-k8s-coredns--668d6bf9bc--j47vf-eth0" Aug 13 01:48:59.956290 containerd[1559]: 2025-08-13 01:48:59.923 [INFO][6122] cni-plugin/k8s.go 418: Populated endpoint ContainerID="66a8cace6347759461047773321f988eb024d806b1b0bb5b2a009bc0d6ed15c5" Namespace="kube-system" Pod="coredns-668d6bf9bc-j47vf" WorkloadEndpoint="172--232--7--133-k8s-coredns--668d6bf9bc--j47vf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--7--133-k8s-coredns--668d6bf9bc--j47vf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0ba3c042-02d2-446d-bb82-0965919f2962", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 45, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-7-133", ContainerID:"", Pod:"coredns-668d6bf9bc-j47vf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.90.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaae5a038286", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:48:59.956290 containerd[1559]: 2025-08-13 01:48:59.924 [INFO][6122] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.90.193/32] ContainerID="66a8cace6347759461047773321f988eb024d806b1b0bb5b2a009bc0d6ed15c5" Namespace="kube-system" Pod="coredns-668d6bf9bc-j47vf" WorkloadEndpoint="172--232--7--133-k8s-coredns--668d6bf9bc--j47vf-eth0" Aug 13 01:48:59.956290 containerd[1559]: 2025-08-13 01:48:59.924 [INFO][6122] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaae5a038286 ContainerID="66a8cace6347759461047773321f988eb024d806b1b0bb5b2a009bc0d6ed15c5" Namespace="kube-system" Pod="coredns-668d6bf9bc-j47vf" WorkloadEndpoint="172--232--7--133-k8s-coredns--668d6bf9bc--j47vf-eth0" Aug 13 01:48:59.956290 containerd[1559]: 2025-08-13 01:48:59.932 [INFO][6122] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="66a8cace6347759461047773321f988eb024d806b1b0bb5b2a009bc0d6ed15c5" Namespace="kube-system" Pod="coredns-668d6bf9bc-j47vf" 
WorkloadEndpoint="172--232--7--133-k8s-coredns--668d6bf9bc--j47vf-eth0" Aug 13 01:48:59.956290 containerd[1559]: 2025-08-13 01:48:59.934 [INFO][6122] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="66a8cace6347759461047773321f988eb024d806b1b0bb5b2a009bc0d6ed15c5" Namespace="kube-system" Pod="coredns-668d6bf9bc-j47vf" WorkloadEndpoint="172--232--7--133-k8s-coredns--668d6bf9bc--j47vf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--7--133-k8s-coredns--668d6bf9bc--j47vf-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"0ba3c042-02d2-446d-bb82-0965919f2962", ResourceVersion:"800", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 45, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-7-133", ContainerID:"66a8cace6347759461047773321f988eb024d806b1b0bb5b2a009bc0d6ed15c5", Pod:"coredns-668d6bf9bc-j47vf", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.90.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliaae5a038286", MAC:"4a:a0:7d:26:72:38", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:48:59.956290 containerd[1559]: 2025-08-13 01:48:59.947 [INFO][6122] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="66a8cace6347759461047773321f988eb024d806b1b0bb5b2a009bc0d6ed15c5" Namespace="kube-system" Pod="coredns-668d6bf9bc-j47vf" WorkloadEndpoint="172--232--7--133-k8s-coredns--668d6bf9bc--j47vf-eth0" Aug 13 01:49:00.007738 containerd[1559]: time="2025-08-13T01:49:00.007682253Z" level=info msg="connecting to shim 66a8cace6347759461047773321f988eb024d806b1b0bb5b2a009bc0d6ed15c5" address="unix:///run/containerd/s/86ba71111b7dc1845e355e44f690dace13c47750b6e346a621e9572898ed9a9c" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:49:00.042975 systemd[1]: Started cri-containerd-66a8cace6347759461047773321f988eb024d806b1b0bb5b2a009bc0d6ed15c5.scope - libcontainer container 66a8cace6347759461047773321f988eb024d806b1b0bb5b2a009bc0d6ed15c5. 
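One small reading aid for the WorkloadEndpoint dumps above and below: the container ports are printed in hex (Port:0x35, Port:0x23c1), which are simply the declared coredns ports 53 (dns and dns-tcp) and 9153 (metrics). A two-line Go check:

```go
// Decode the hex port values from the WorkloadEndpoint struct dumps.
package main

import "fmt"

func main() {
	for _, p := range []uint16{0x35, 0x23c1} {
		fmt.Printf("0x%x = %d\n", p, p) // prints 0x35 = 53 and 0x23c1 = 9153
	}
}
```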
Aug 13 01:49:00.092185 containerd[1559]: time="2025-08-13T01:49:00.092148016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-j47vf,Uid:0ba3c042-02d2-446d-bb82-0965919f2962,Namespace:kube-system,Attempt:0,} returns sandbox id \"66a8cace6347759461047773321f988eb024d806b1b0bb5b2a009bc0d6ed15c5\"" Aug 13 01:49:00.093038 kubelet[2718]: E0813 01:49:00.093009 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:49:00.094208 containerd[1559]: time="2025-08-13T01:49:00.094179211Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 01:49:00.779418 kubelet[2718]: E0813 01:49:00.779346 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:49:00.780045 containerd[1559]: time="2025-08-13T01:49:00.780007862Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dfjz8,Uid:dafdbb28-0754-4303-98bd-08c77ee94f1a,Namespace:kube-system,Attempt:0,}" Aug 13 01:49:00.876045 systemd-networkd[1470]: calieab459612de: Link UP Aug 13 01:49:00.877336 systemd-networkd[1470]: calieab459612de: Gained carrier Aug 13 01:49:00.897351 containerd[1559]: 2025-08-13 01:49:00.817 [INFO][6203] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--7--133-k8s-coredns--668d6bf9bc--dfjz8-eth0 coredns-668d6bf9bc- kube-system dafdbb28-0754-4303-98bd-08c77ee94f1a 789 0 2025-08-13 01:45:28 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s 172-232-7-133 coredns-668d6bf9bc-dfjz8 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calieab459612de [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="5389a5aac9b7c2e1a924925dd0568a0515170a086dd34032d8f572141e600bca" Namespace="kube-system" Pod="coredns-668d6bf9bc-dfjz8" WorkloadEndpoint="172--232--7--133-k8s-coredns--668d6bf9bc--dfjz8-" Aug 13 01:49:00.897351 containerd[1559]: 2025-08-13 01:49:00.818 [INFO][6203] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="5389a5aac9b7c2e1a924925dd0568a0515170a086dd34032d8f572141e600bca" Namespace="kube-system" Pod="coredns-668d6bf9bc-dfjz8" WorkloadEndpoint="172--232--7--133-k8s-coredns--668d6bf9bc--dfjz8-eth0" Aug 13 01:49:00.897351 containerd[1559]: 2025-08-13 01:49:00.838 [INFO][6215] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="5389a5aac9b7c2e1a924925dd0568a0515170a086dd34032d8f572141e600bca" HandleID="k8s-pod-network.5389a5aac9b7c2e1a924925dd0568a0515170a086dd34032d8f572141e600bca" Workload="172--232--7--133-k8s-coredns--668d6bf9bc--dfjz8-eth0" Aug 13 01:49:00.897351 containerd[1559]: 2025-08-13 01:49:00.838 [INFO][6215] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="5389a5aac9b7c2e1a924925dd0568a0515170a086dd34032d8f572141e600bca" HandleID="k8s-pod-network.5389a5aac9b7c2e1a924925dd0568a0515170a086dd34032d8f572141e600bca" Workload="172--232--7--133-k8s-coredns--668d6bf9bc--dfjz8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc00024eff0), Attrs:map[string]string{"namespace":"kube-system", "node":"172-232-7-133", "pod":"coredns-668d6bf9bc-dfjz8", 
"timestamp":"2025-08-13 01:49:00.838080825 +0000 UTC"}, Hostname:"172-232-7-133", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:49:00.897351 containerd[1559]: 2025-08-13 01:49:00.838 [INFO][6215] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:49:00.897351 containerd[1559]: 2025-08-13 01:49:00.838 [INFO][6215] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:49:00.897351 containerd[1559]: 2025-08-13 01:49:00.838 [INFO][6215] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-7-133' Aug 13 01:49:00.897351 containerd[1559]: 2025-08-13 01:49:00.844 [INFO][6215] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.5389a5aac9b7c2e1a924925dd0568a0515170a086dd34032d8f572141e600bca" host="172-232-7-133" Aug 13 01:49:00.897351 containerd[1559]: 2025-08-13 01:49:00.848 [INFO][6215] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-7-133" Aug 13 01:49:00.897351 containerd[1559]: 2025-08-13 01:49:00.856 [INFO][6215] ipam/ipam.go 511: Trying affinity for 192.168.90.192/26 host="172-232-7-133" Aug 13 01:49:00.897351 containerd[1559]: 2025-08-13 01:49:00.858 [INFO][6215] ipam/ipam.go 158: Attempting to load block cidr=192.168.90.192/26 host="172-232-7-133" Aug 13 01:49:00.897351 containerd[1559]: 2025-08-13 01:49:00.860 [INFO][6215] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.90.192/26 host="172-232-7-133" Aug 13 01:49:00.897351 containerd[1559]: 2025-08-13 01:49:00.860 [INFO][6215] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.90.192/26 handle="k8s-pod-network.5389a5aac9b7c2e1a924925dd0568a0515170a086dd34032d8f572141e600bca" host="172-232-7-133" Aug 13 01:49:00.897351 containerd[1559]: 2025-08-13 01:49:00.862 [INFO][6215] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.5389a5aac9b7c2e1a924925dd0568a0515170a086dd34032d8f572141e600bca Aug 13 01:49:00.897351 containerd[1559]: 2025-08-13 01:49:00.864 [INFO][6215] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.90.192/26 handle="k8s-pod-network.5389a5aac9b7c2e1a924925dd0568a0515170a086dd34032d8f572141e600bca" host="172-232-7-133" Aug 13 01:49:00.897351 containerd[1559]: 2025-08-13 01:49:00.869 [INFO][6215] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.90.194/26] block=192.168.90.192/26 handle="k8s-pod-network.5389a5aac9b7c2e1a924925dd0568a0515170a086dd34032d8f572141e600bca" host="172-232-7-133" Aug 13 01:49:00.897351 containerd[1559]: 2025-08-13 01:49:00.869 [INFO][6215] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.90.194/26] handle="k8s-pod-network.5389a5aac9b7c2e1a924925dd0568a0515170a086dd34032d8f572141e600bca" host="172-232-7-133" Aug 13 01:49:00.897351 containerd[1559]: 2025-08-13 01:49:00.869 [INFO][6215] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 01:49:00.897351 containerd[1559]: 2025-08-13 01:49:00.869 [INFO][6215] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.90.194/26] IPv6=[] ContainerID="5389a5aac9b7c2e1a924925dd0568a0515170a086dd34032d8f572141e600bca" HandleID="k8s-pod-network.5389a5aac9b7c2e1a924925dd0568a0515170a086dd34032d8f572141e600bca" Workload="172--232--7--133-k8s-coredns--668d6bf9bc--dfjz8-eth0" Aug 13 01:49:00.900083 containerd[1559]: 2025-08-13 01:49:00.872 [INFO][6203] cni-plugin/k8s.go 418: Populated endpoint ContainerID="5389a5aac9b7c2e1a924925dd0568a0515170a086dd34032d8f572141e600bca" Namespace="kube-system" Pod="coredns-668d6bf9bc-dfjz8" WorkloadEndpoint="172--232--7--133-k8s-coredns--668d6bf9bc--dfjz8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--7--133-k8s-coredns--668d6bf9bc--dfjz8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"dafdbb28-0754-4303-98bd-08c77ee94f1a", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 45, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-7-133", ContainerID:"", Pod:"coredns-668d6bf9bc-dfjz8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.90.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieab459612de", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:49:00.900083 containerd[1559]: 2025-08-13 01:49:00.872 [INFO][6203] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.90.194/32] ContainerID="5389a5aac9b7c2e1a924925dd0568a0515170a086dd34032d8f572141e600bca" Namespace="kube-system" Pod="coredns-668d6bf9bc-dfjz8" WorkloadEndpoint="172--232--7--133-k8s-coredns--668d6bf9bc--dfjz8-eth0" Aug 13 01:49:00.900083 containerd[1559]: 2025-08-13 01:49:00.872 [INFO][6203] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calieab459612de ContainerID="5389a5aac9b7c2e1a924925dd0568a0515170a086dd34032d8f572141e600bca" Namespace="kube-system" Pod="coredns-668d6bf9bc-dfjz8" WorkloadEndpoint="172--232--7--133-k8s-coredns--668d6bf9bc--dfjz8-eth0" Aug 13 01:49:00.900083 containerd[1559]: 2025-08-13 01:49:00.878 [INFO][6203] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="5389a5aac9b7c2e1a924925dd0568a0515170a086dd34032d8f572141e600bca" Namespace="kube-system" Pod="coredns-668d6bf9bc-dfjz8" 
WorkloadEndpoint="172--232--7--133-k8s-coredns--668d6bf9bc--dfjz8-eth0" Aug 13 01:49:00.900083 containerd[1559]: 2025-08-13 01:49:00.879 [INFO][6203] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="5389a5aac9b7c2e1a924925dd0568a0515170a086dd34032d8f572141e600bca" Namespace="kube-system" Pod="coredns-668d6bf9bc-dfjz8" WorkloadEndpoint="172--232--7--133-k8s-coredns--668d6bf9bc--dfjz8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--7--133-k8s-coredns--668d6bf9bc--dfjz8-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"dafdbb28-0754-4303-98bd-08c77ee94f1a", ResourceVersion:"789", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 45, 28, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-7-133", ContainerID:"5389a5aac9b7c2e1a924925dd0568a0515170a086dd34032d8f572141e600bca", Pod:"coredns-668d6bf9bc-dfjz8", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.90.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calieab459612de", MAC:"0e:45:71:e2:83:9f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:49:00.900083 containerd[1559]: 2025-08-13 01:49:00.891 [INFO][6203] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="5389a5aac9b7c2e1a924925dd0568a0515170a086dd34032d8f572141e600bca" Namespace="kube-system" Pod="coredns-668d6bf9bc-dfjz8" WorkloadEndpoint="172--232--7--133-k8s-coredns--668d6bf9bc--dfjz8-eth0" Aug 13 01:49:00.942539 containerd[1559]: time="2025-08-13T01:49:00.942453161Z" level=info msg="connecting to shim 5389a5aac9b7c2e1a924925dd0568a0515170a086dd34032d8f572141e600bca" address="unix:///run/containerd/s/d10654a14a26a7108381b1423cfc1ddaeb90d0cfefc0c1eecf055738bafc5a09" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:49:00.985286 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3986700051.mount: Deactivated successfully. Aug 13 01:49:01.003090 systemd[1]: Started cri-containerd-5389a5aac9b7c2e1a924925dd0568a0515170a086dd34032d8f572141e600bca.scope - libcontainer container 5389a5aac9b7c2e1a924925dd0568a0515170a086dd34032d8f572141e600bca. 
Aug 13 01:49:01.077325 containerd[1559]: time="2025-08-13T01:49:01.077225896Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dfjz8,Uid:dafdbb28-0754-4303-98bd-08c77ee94f1a,Namespace:kube-system,Attempt:0,} returns sandbox id \"5389a5aac9b7c2e1a924925dd0568a0515170a086dd34032d8f572141e600bca\"" Aug 13 01:49:01.079507 kubelet[2718]: E0813 01:49:01.079482 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:49:01.623108 systemd-networkd[1470]: caliaae5a038286: Gained IPv6LL Aug 13 01:49:01.699811 containerd[1559]: time="2025-08-13T01:49:01.699244688Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:49:01.700642 containerd[1559]: time="2025-08-13T01:49:01.700594741Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=18565241" Aug 13 01:49:01.701396 containerd[1559]: time="2025-08-13T01:49:01.701012507Z" level=info msg="ImageCreate event name:\"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:49:01.705034 containerd[1559]: time="2025-08-13T01:49:01.705013517Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:49:01.706590 containerd[1559]: time="2025-08-13T01:49:01.706568807Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 1.612336347s" Aug 13 01:49:01.707057 containerd[1559]: time="2025-08-13T01:49:01.707038952Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 01:49:01.708508 containerd[1559]: time="2025-08-13T01:49:01.708489673Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Aug 13 01:49:01.710942 containerd[1559]: time="2025-08-13T01:49:01.710916594Z" level=info msg="CreateContainer within sandbox \"66a8cace6347759461047773321f988eb024d806b1b0bb5b2a009bc0d6ed15c5\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 01:49:01.718571 containerd[1559]: time="2025-08-13T01:49:01.718553229Z" level=info msg="Container c9158913d4d262e6b68acf08579f3fad593bfec75cebc8f5958d46f75b5588f7: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:49:01.724393 containerd[1559]: time="2025-08-13T01:49:01.724373267Z" level=info msg="CreateContainer within sandbox \"66a8cace6347759461047773321f988eb024d806b1b0bb5b2a009bc0d6ed15c5\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"c9158913d4d262e6b68acf08579f3fad593bfec75cebc8f5958d46f75b5588f7\"" Aug 13 01:49:01.724774 containerd[1559]: time="2025-08-13T01:49:01.724757873Z" level=info msg="StartContainer for \"c9158913d4d262e6b68acf08579f3fad593bfec75cebc8f5958d46f75b5588f7\"" Aug 13 01:49:01.725496 containerd[1559]: time="2025-08-13T01:49:01.725418415Z" level=info 
msg="connecting to shim c9158913d4d262e6b68acf08579f3fad593bfec75cebc8f5958d46f75b5588f7" address="unix:///run/containerd/s/86ba71111b7dc1845e355e44f690dace13c47750b6e346a621e9572898ed9a9c" protocol=ttrpc version=3 Aug 13 01:49:01.747975 systemd[1]: Started cri-containerd-c9158913d4d262e6b68acf08579f3fad593bfec75cebc8f5958d46f75b5588f7.scope - libcontainer container c9158913d4d262e6b68acf08579f3fad593bfec75cebc8f5958d46f75b5588f7. Aug 13 01:49:01.790276 containerd[1559]: time="2025-08-13T01:49:01.790240763Z" level=info msg="StartContainer for \"c9158913d4d262e6b68acf08579f3fad593bfec75cebc8f5958d46f75b5588f7\" returns successfully" Aug 13 01:49:01.924993 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1035337835.mount: Deactivated successfully. Aug 13 01:49:01.939288 containerd[1559]: time="2025-08-13T01:49:01.939236632Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:49:01.940894 containerd[1559]: time="2025-08-13T01:49:01.940078771Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=0" Aug 13 01:49:01.942685 containerd[1559]: time="2025-08-13T01:49:01.942659489Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"18562039\" in 233.923588ms" Aug 13 01:49:01.942736 containerd[1559]: time="2025-08-13T01:49:01.942701069Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6\"" Aug 13 01:49:01.944659 containerd[1559]: time="2025-08-13T01:49:01.944629775Z" level=info msg="CreateContainer within sandbox \"5389a5aac9b7c2e1a924925dd0568a0515170a086dd34032d8f572141e600bca\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Aug 13 01:49:01.952956 containerd[1559]: time="2025-08-13T01:49:01.951714198Z" level=info msg="Container 9cb341511b4a0a14a55b9848e1672e41c427bb9af56a55a12c04a2fc5b212b68: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:49:01.954703 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1944532603.mount: Deactivated successfully. Aug 13 01:49:01.966476 containerd[1559]: time="2025-08-13T01:49:01.966423796Z" level=info msg="CreateContainer within sandbox \"5389a5aac9b7c2e1a924925dd0568a0515170a086dd34032d8f572141e600bca\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"9cb341511b4a0a14a55b9848e1672e41c427bb9af56a55a12c04a2fc5b212b68\"" Aug 13 01:49:01.967505 containerd[1559]: time="2025-08-13T01:49:01.967478643Z" level=info msg="StartContainer for \"9cb341511b4a0a14a55b9848e1672e41c427bb9af56a55a12c04a2fc5b212b68\"" Aug 13 01:49:01.969970 containerd[1559]: time="2025-08-13T01:49:01.969937482Z" level=info msg="connecting to shim 9cb341511b4a0a14a55b9848e1672e41c427bb9af56a55a12c04a2fc5b212b68" address="unix:///run/containerd/s/d10654a14a26a7108381b1423cfc1ddaeb90d0cfefc0c1eecf055738bafc5a09" protocol=ttrpc version=3 Aug 13 01:49:01.993991 systemd[1]: Started cri-containerd-9cb341511b4a0a14a55b9848e1672e41c427bb9af56a55a12c04a2fc5b212b68.scope - libcontainer container 9cb341511b4a0a14a55b9848e1672e41c427bb9af56a55a12c04a2fc5b212b68. 
Aug 13 01:49:02.007212 systemd-networkd[1470]: calieab459612de: Gained IPv6LL Aug 13 01:49:02.044699 containerd[1559]: time="2025-08-13T01:49:02.044669835Z" level=info msg="StartContainer for \"9cb341511b4a0a14a55b9848e1672e41c427bb9af56a55a12c04a2fc5b212b68\" returns successfully" Aug 13 01:49:02.348095 kubelet[2718]: E0813 01:49:02.347447 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:49:02.351489 kubelet[2718]: E0813 01:49:02.351440 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:49:02.365707 kubelet[2718]: I0813 01:49:02.365588 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-j47vf" podStartSLOduration=212.75143604 podStartE2EDuration="3m34.365575436s" podCreationTimestamp="2025-08-13 01:45:28 +0000 UTC" firstStartedPulling="2025-08-13 01:49:00.093552958 +0000 UTC m=+217.424294732" lastFinishedPulling="2025-08-13 01:49:01.707692364 +0000 UTC m=+219.038434128" observedRunningTime="2025-08-13 01:49:02.362066299 +0000 UTC m=+219.692808063" watchObservedRunningTime="2025-08-13 01:49:02.365575436 +0000 UTC m=+219.696317200" Aug 13 01:49:02.394921 kubelet[2718]: I0813 01:49:02.394868 2718 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-dfjz8" podStartSLOduration=213.531748225 podStartE2EDuration="3m34.394841498s" podCreationTimestamp="2025-08-13 01:45:28 +0000 UTC" firstStartedPulling="2025-08-13 01:49:01.080164209 +0000 UTC m=+218.410905973" lastFinishedPulling="2025-08-13 01:49:01.943257482 +0000 UTC m=+219.273999246" observedRunningTime="2025-08-13 01:49:02.384559534 +0000 UTC m=+219.715301308" watchObservedRunningTime="2025-08-13 01:49:02.394841498 +0000 UTC m=+219.725583272" Aug 13 01:49:03.353315 kubelet[2718]: E0813 01:49:03.353278 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:49:03.353701 kubelet[2718]: E0813 01:49:03.353540 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:49:04.357527 kubelet[2718]: E0813 01:49:04.357349 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:49:04.357527 kubelet[2718]: E0813 01:49:04.357349 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:49:04.780815 containerd[1559]: time="2025-08-13T01:49:04.780248858Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbqt2,Uid:9674627f-b072-4139-b18d-fdf07891e1e2,Namespace:calico-system,Attempt:0,}" Aug 13 01:49:04.950513 systemd-networkd[1470]: calid1811f01760: Link UP Aug 13 01:49:04.952887 systemd-networkd[1470]: calid1811f01760: Gained carrier Aug 13 01:49:04.982076 systemd[1]: Started sshd@31-172.232.7.133:22-147.75.109.163:39702.service - OpenSSH per-connection server 
daemon (147.75.109.163:39702). Aug 13 01:49:04.986004 containerd[1559]: 2025-08-13 01:49:04.835 [INFO][6408] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--7--133-k8s-csi--node--driver--dbqt2-eth0 csi-node-driver- calico-system 9674627f-b072-4139-b18d-fdf07891e1e2 701 0 2025-08-13 01:45:39 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:8967bcb6f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s 172-232-7-133 csi-node-driver-dbqt2 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] calid1811f01760 [] [] }} ContainerID="74fe3fb1af87dd3ee880ab3ae949ae8659909d38affc57031010806df1eaf795" Namespace="calico-system" Pod="csi-node-driver-dbqt2" WorkloadEndpoint="172--232--7--133-k8s-csi--node--driver--dbqt2-" Aug 13 01:49:04.986004 containerd[1559]: 2025-08-13 01:49:04.836 [INFO][6408] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="74fe3fb1af87dd3ee880ab3ae949ae8659909d38affc57031010806df1eaf795" Namespace="calico-system" Pod="csi-node-driver-dbqt2" WorkloadEndpoint="172--232--7--133-k8s-csi--node--driver--dbqt2-eth0" Aug 13 01:49:04.986004 containerd[1559]: 2025-08-13 01:49:04.888 [INFO][6420] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="74fe3fb1af87dd3ee880ab3ae949ae8659909d38affc57031010806df1eaf795" HandleID="k8s-pod-network.74fe3fb1af87dd3ee880ab3ae949ae8659909d38affc57031010806df1eaf795" Workload="172--232--7--133-k8s-csi--node--driver--dbqt2-eth0" Aug 13 01:49:04.986004 containerd[1559]: 2025-08-13 01:49:04.888 [INFO][6420] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="74fe3fb1af87dd3ee880ab3ae949ae8659909d38affc57031010806df1eaf795" HandleID="k8s-pod-network.74fe3fb1af87dd3ee880ab3ae949ae8659909d38affc57031010806df1eaf795" Workload="172--232--7--133-k8s-csi--node--driver--dbqt2-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002d5600), Attrs:map[string]string{"namespace":"calico-system", "node":"172-232-7-133", "pod":"csi-node-driver-dbqt2", "timestamp":"2025-08-13 01:49:04.888265569 +0000 UTC"}, Hostname:"172-232-7-133", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:49:04.986004 containerd[1559]: 2025-08-13 01:49:04.888 [INFO][6420] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:49:04.986004 containerd[1559]: 2025-08-13 01:49:04.888 [INFO][6420] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. 
Aug 13 01:49:04.986004 containerd[1559]: 2025-08-13 01:49:04.888 [INFO][6420] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-7-133' Aug 13 01:49:04.986004 containerd[1559]: 2025-08-13 01:49:04.900 [INFO][6420] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.74fe3fb1af87dd3ee880ab3ae949ae8659909d38affc57031010806df1eaf795" host="172-232-7-133" Aug 13 01:49:04.986004 containerd[1559]: 2025-08-13 01:49:04.908 [INFO][6420] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-7-133" Aug 13 01:49:04.986004 containerd[1559]: 2025-08-13 01:49:04.914 [INFO][6420] ipam/ipam.go 511: Trying affinity for 192.168.90.192/26 host="172-232-7-133" Aug 13 01:49:04.986004 containerd[1559]: 2025-08-13 01:49:04.924 [INFO][6420] ipam/ipam.go 158: Attempting to load block cidr=192.168.90.192/26 host="172-232-7-133" Aug 13 01:49:04.986004 containerd[1559]: 2025-08-13 01:49:04.928 [INFO][6420] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.90.192/26 host="172-232-7-133" Aug 13 01:49:04.986004 containerd[1559]: 2025-08-13 01:49:04.928 [INFO][6420] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.90.192/26 handle="k8s-pod-network.74fe3fb1af87dd3ee880ab3ae949ae8659909d38affc57031010806df1eaf795" host="172-232-7-133" Aug 13 01:49:04.986004 containerd[1559]: 2025-08-13 01:49:04.932 [INFO][6420] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.74fe3fb1af87dd3ee880ab3ae949ae8659909d38affc57031010806df1eaf795 Aug 13 01:49:04.986004 containerd[1559]: 2025-08-13 01:49:04.935 [INFO][6420] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.90.192/26 handle="k8s-pod-network.74fe3fb1af87dd3ee880ab3ae949ae8659909d38affc57031010806df1eaf795" host="172-232-7-133" Aug 13 01:49:04.986004 containerd[1559]: 2025-08-13 01:49:04.943 [INFO][6420] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.90.195/26] block=192.168.90.192/26 handle="k8s-pod-network.74fe3fb1af87dd3ee880ab3ae949ae8659909d38affc57031010806df1eaf795" host="172-232-7-133" Aug 13 01:49:04.986004 containerd[1559]: 2025-08-13 01:49:04.943 [INFO][6420] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.90.195/26] handle="k8s-pod-network.74fe3fb1af87dd3ee880ab3ae949ae8659909d38affc57031010806df1eaf795" host="172-232-7-133" Aug 13 01:49:04.986004 containerd[1559]: 2025-08-13 01:49:04.943 [INFO][6420] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 01:49:04.986004 containerd[1559]: 2025-08-13 01:49:04.943 [INFO][6420] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.90.195/26] IPv6=[] ContainerID="74fe3fb1af87dd3ee880ab3ae949ae8659909d38affc57031010806df1eaf795" HandleID="k8s-pod-network.74fe3fb1af87dd3ee880ab3ae949ae8659909d38affc57031010806df1eaf795" Workload="172--232--7--133-k8s-csi--node--driver--dbqt2-eth0" Aug 13 01:49:04.986514 containerd[1559]: 2025-08-13 01:49:04.947 [INFO][6408] cni-plugin/k8s.go 418: Populated endpoint ContainerID="74fe3fb1af87dd3ee880ab3ae949ae8659909d38affc57031010806df1eaf795" Namespace="calico-system" Pod="csi-node-driver-dbqt2" WorkloadEndpoint="172--232--7--133-k8s-csi--node--driver--dbqt2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--7--133-k8s-csi--node--driver--dbqt2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9674627f-b072-4139-b18d-fdf07891e1e2", ResourceVersion:"701", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 45, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-7-133", ContainerID:"", Pod:"csi-node-driver-dbqt2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.90.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid1811f01760", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:49:04.986514 containerd[1559]: 2025-08-13 01:49:04.947 [INFO][6408] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.90.195/32] ContainerID="74fe3fb1af87dd3ee880ab3ae949ae8659909d38affc57031010806df1eaf795" Namespace="calico-system" Pod="csi-node-driver-dbqt2" WorkloadEndpoint="172--232--7--133-k8s-csi--node--driver--dbqt2-eth0" Aug 13 01:49:04.986514 containerd[1559]: 2025-08-13 01:49:04.947 [INFO][6408] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid1811f01760 ContainerID="74fe3fb1af87dd3ee880ab3ae949ae8659909d38affc57031010806df1eaf795" Namespace="calico-system" Pod="csi-node-driver-dbqt2" WorkloadEndpoint="172--232--7--133-k8s-csi--node--driver--dbqt2-eth0" Aug 13 01:49:04.986514 containerd[1559]: 2025-08-13 01:49:04.953 [INFO][6408] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="74fe3fb1af87dd3ee880ab3ae949ae8659909d38affc57031010806df1eaf795" Namespace="calico-system" Pod="csi-node-driver-dbqt2" WorkloadEndpoint="172--232--7--133-k8s-csi--node--driver--dbqt2-eth0" Aug 13 01:49:04.986514 containerd[1559]: 2025-08-13 01:49:04.953 [INFO][6408] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="74fe3fb1af87dd3ee880ab3ae949ae8659909d38affc57031010806df1eaf795" Namespace="calico-system" 
Pod="csi-node-driver-dbqt2" WorkloadEndpoint="172--232--7--133-k8s-csi--node--driver--dbqt2-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--7--133-k8s-csi--node--driver--dbqt2-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"9674627f-b072-4139-b18d-fdf07891e1e2", ResourceVersion:"701", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 45, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"8967bcb6f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-7-133", ContainerID:"74fe3fb1af87dd3ee880ab3ae949ae8659909d38affc57031010806df1eaf795", Pod:"csi-node-driver-dbqt2", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.90.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calid1811f01760", MAC:"3e:74:0b:47:2e:72", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:49:04.986514 containerd[1559]: 2025-08-13 01:49:04.975 [INFO][6408] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="74fe3fb1af87dd3ee880ab3ae949ae8659909d38affc57031010806df1eaf795" Namespace="calico-system" Pod="csi-node-driver-dbqt2" WorkloadEndpoint="172--232--7--133-k8s-csi--node--driver--dbqt2-eth0" Aug 13 01:49:05.039074 containerd[1559]: time="2025-08-13T01:49:05.038981187Z" level=info msg="connecting to shim 74fe3fb1af87dd3ee880ab3ae949ae8659909d38affc57031010806df1eaf795" address="unix:///run/containerd/s/0bf890fa5edb51983ff0ed285e08bbe51a8803a614f315d5fb6ae1836f10700a" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:49:05.069182 systemd[1]: Started cri-containerd-74fe3fb1af87dd3ee880ab3ae949ae8659909d38affc57031010806df1eaf795.scope - libcontainer container 74fe3fb1af87dd3ee880ab3ae949ae8659909d38affc57031010806df1eaf795. Aug 13 01:49:05.104959 containerd[1559]: time="2025-08-13T01:49:05.104909499Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-dbqt2,Uid:9674627f-b072-4139-b18d-fdf07891e1e2,Namespace:calico-system,Attempt:0,} returns sandbox id \"74fe3fb1af87dd3ee880ab3ae949ae8659909d38affc57031010806df1eaf795\"" Aug 13 01:49:05.109006 containerd[1559]: time="2025-08-13T01:49:05.108915392Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\"" Aug 13 01:49:05.340243 sshd[6429]: Accepted publickey for core from 147.75.109.163 port 39702 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:49:05.341426 sshd-session[6429]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:49:05.346523 systemd-logind[1539]: New session 32 of user core. Aug 13 01:49:05.351081 systemd[1]: Started session-32.scope - Session 32 of User core. 
Aug 13 01:49:05.672194 sshd[6485]: Connection closed by 147.75.109.163 port 39702 Aug 13 01:49:05.673163 sshd-session[6429]: pam_unix(sshd:session): session closed for user core Aug 13 01:49:05.678256 systemd[1]: sshd@31-172.232.7.133:22-147.75.109.163:39702.service: Deactivated successfully. Aug 13 01:49:05.681270 systemd[1]: session-32.scope: Deactivated successfully. Aug 13 01:49:05.682581 systemd-logind[1539]: Session 32 logged out. Waiting for processes to exit. Aug 13 01:49:05.684636 systemd-logind[1539]: Removed session 32. Aug 13 01:49:05.780466 containerd[1559]: time="2025-08-13T01:49:05.780366136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f9448c8f5-ck2sf,Uid:35b780d0-9cdb-470f-8c65-ede949b6d595,Namespace:calico-system,Attempt:0,}" Aug 13 01:49:05.925210 systemd-networkd[1470]: cali5fa6f6628d7: Link UP Aug 13 01:49:05.928732 systemd-networkd[1470]: cali5fa6f6628d7: Gained carrier Aug 13 01:49:05.949150 containerd[1559]: 2025-08-13 01:49:05.825 [INFO][6497] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {172--232--7--133-k8s-calico--kube--controllers--7f9448c8f5--ck2sf-eth0 calico-kube-controllers-7f9448c8f5- calico-system 35b780d0-9cdb-470f-8c65-ede949b6d595 796 0 2025-08-13 01:45:39 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7f9448c8f5 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s 172-232-7-133 calico-kube-controllers-7f9448c8f5-ck2sf eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali5fa6f6628d7 [] [] }} ContainerID="4346eff858a2ff84b96daea8aba5f7765b6ccf2cc57e99bb74f3bb02a46e0492" Namespace="calico-system" Pod="calico-kube-controllers-7f9448c8f5-ck2sf" WorkloadEndpoint="172--232--7--133-k8s-calico--kube--controllers--7f9448c8f5--ck2sf-" Aug 13 01:49:05.949150 containerd[1559]: 2025-08-13 01:49:05.825 [INFO][6497] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="4346eff858a2ff84b96daea8aba5f7765b6ccf2cc57e99bb74f3bb02a46e0492" Namespace="calico-system" Pod="calico-kube-controllers-7f9448c8f5-ck2sf" WorkloadEndpoint="172--232--7--133-k8s-calico--kube--controllers--7f9448c8f5--ck2sf-eth0" Aug 13 01:49:05.949150 containerd[1559]: 2025-08-13 01:49:05.868 [INFO][6509] ipam/ipam_plugin.go 225: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="4346eff858a2ff84b96daea8aba5f7765b6ccf2cc57e99bb74f3bb02a46e0492" HandleID="k8s-pod-network.4346eff858a2ff84b96daea8aba5f7765b6ccf2cc57e99bb74f3bb02a46e0492" Workload="172--232--7--133-k8s-calico--kube--controllers--7f9448c8f5--ck2sf-eth0" Aug 13 01:49:05.949150 containerd[1559]: 2025-08-13 01:49:05.868 [INFO][6509] ipam/ipam_plugin.go 265: Auto assigning IP ContainerID="4346eff858a2ff84b96daea8aba5f7765b6ccf2cc57e99bb74f3bb02a46e0492" HandleID="k8s-pod-network.4346eff858a2ff84b96daea8aba5f7765b6ccf2cc57e99bb74f3bb02a46e0492" Workload="172--232--7--133-k8s-calico--kube--controllers--7f9448c8f5--ck2sf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0xc0002cd9a0), Attrs:map[string]string{"namespace":"calico-system", "node":"172-232-7-133", "pod":"calico-kube-controllers-7f9448c8f5-ck2sf", "timestamp":"2025-08-13 01:49:05.868094041 +0000 UTC"}, Hostname:"172-232-7-133", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, 
HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Aug 13 01:49:05.949150 containerd[1559]: 2025-08-13 01:49:05.868 [INFO][6509] ipam/ipam_plugin.go 353: About to acquire host-wide IPAM lock. Aug 13 01:49:05.949150 containerd[1559]: 2025-08-13 01:49:05.868 [INFO][6509] ipam/ipam_plugin.go 368: Acquired host-wide IPAM lock. Aug 13 01:49:05.949150 containerd[1559]: 2025-08-13 01:49:05.868 [INFO][6509] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host '172-232-7-133' Aug 13 01:49:05.949150 containerd[1559]: 2025-08-13 01:49:05.874 [INFO][6509] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.4346eff858a2ff84b96daea8aba5f7765b6ccf2cc57e99bb74f3bb02a46e0492" host="172-232-7-133" Aug 13 01:49:05.949150 containerd[1559]: 2025-08-13 01:49:05.879 [INFO][6509] ipam/ipam.go 394: Looking up existing affinities for host host="172-232-7-133" Aug 13 01:49:05.949150 containerd[1559]: 2025-08-13 01:49:05.885 [INFO][6509] ipam/ipam.go 511: Trying affinity for 192.168.90.192/26 host="172-232-7-133" Aug 13 01:49:05.949150 containerd[1559]: 2025-08-13 01:49:05.888 [INFO][6509] ipam/ipam.go 158: Attempting to load block cidr=192.168.90.192/26 host="172-232-7-133" Aug 13 01:49:05.949150 containerd[1559]: 2025-08-13 01:49:05.891 [INFO][6509] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.90.192/26 host="172-232-7-133" Aug 13 01:49:05.949150 containerd[1559]: 2025-08-13 01:49:05.891 [INFO][6509] ipam/ipam.go 1220: Attempting to assign 1 addresses from block block=192.168.90.192/26 handle="k8s-pod-network.4346eff858a2ff84b96daea8aba5f7765b6ccf2cc57e99bb74f3bb02a46e0492" host="172-232-7-133" Aug 13 01:49:05.949150 containerd[1559]: 2025-08-13 01:49:05.892 [INFO][6509] ipam/ipam.go 1764: Creating new handle: k8s-pod-network.4346eff858a2ff84b96daea8aba5f7765b6ccf2cc57e99bb74f3bb02a46e0492 Aug 13 01:49:05.949150 containerd[1559]: 2025-08-13 01:49:05.897 [INFO][6509] ipam/ipam.go 1243: Writing block in order to claim IPs block=192.168.90.192/26 handle="k8s-pod-network.4346eff858a2ff84b96daea8aba5f7765b6ccf2cc57e99bb74f3bb02a46e0492" host="172-232-7-133" Aug 13 01:49:05.949150 containerd[1559]: 2025-08-13 01:49:05.914 [INFO][6509] ipam/ipam.go 1256: Successfully claimed IPs: [192.168.90.196/26] block=192.168.90.192/26 handle="k8s-pod-network.4346eff858a2ff84b96daea8aba5f7765b6ccf2cc57e99bb74f3bb02a46e0492" host="172-232-7-133" Aug 13 01:49:05.949150 containerd[1559]: 2025-08-13 01:49:05.914 [INFO][6509] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.90.196/26] handle="k8s-pod-network.4346eff858a2ff84b96daea8aba5f7765b6ccf2cc57e99bb74f3bb02a46e0492" host="172-232-7-133" Aug 13 01:49:05.949150 containerd[1559]: 2025-08-13 01:49:05.914 [INFO][6509] ipam/ipam_plugin.go 374: Released host-wide IPAM lock. 
Aug 13 01:49:05.949150 containerd[1559]: 2025-08-13 01:49:05.914 [INFO][6509] ipam/ipam_plugin.go 283: Calico CNI IPAM assigned addresses IPv4=[192.168.90.196/26] IPv6=[] ContainerID="4346eff858a2ff84b96daea8aba5f7765b6ccf2cc57e99bb74f3bb02a46e0492" HandleID="k8s-pod-network.4346eff858a2ff84b96daea8aba5f7765b6ccf2cc57e99bb74f3bb02a46e0492" Workload="172--232--7--133-k8s-calico--kube--controllers--7f9448c8f5--ck2sf-eth0" Aug 13 01:49:05.949884 containerd[1559]: 2025-08-13 01:49:05.919 [INFO][6497] cni-plugin/k8s.go 418: Populated endpoint ContainerID="4346eff858a2ff84b96daea8aba5f7765b6ccf2cc57e99bb74f3bb02a46e0492" Namespace="calico-system" Pod="calico-kube-controllers-7f9448c8f5-ck2sf" WorkloadEndpoint="172--232--7--133-k8s-calico--kube--controllers--7f9448c8f5--ck2sf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--7--133-k8s-calico--kube--controllers--7f9448c8f5--ck2sf-eth0", GenerateName:"calico-kube-controllers-7f9448c8f5-", Namespace:"calico-system", SelfLink:"", UID:"35b780d0-9cdb-470f-8c65-ede949b6d595", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 45, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f9448c8f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-7-133", ContainerID:"", Pod:"calico-kube-controllers-7f9448c8f5-ck2sf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.90.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5fa6f6628d7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:49:05.949884 containerd[1559]: 2025-08-13 01:49:05.920 [INFO][6497] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.90.196/32] ContainerID="4346eff858a2ff84b96daea8aba5f7765b6ccf2cc57e99bb74f3bb02a46e0492" Namespace="calico-system" Pod="calico-kube-controllers-7f9448c8f5-ck2sf" WorkloadEndpoint="172--232--7--133-k8s-calico--kube--controllers--7f9448c8f5--ck2sf-eth0" Aug 13 01:49:05.949884 containerd[1559]: 2025-08-13 01:49:05.920 [INFO][6497] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5fa6f6628d7 ContainerID="4346eff858a2ff84b96daea8aba5f7765b6ccf2cc57e99bb74f3bb02a46e0492" Namespace="calico-system" Pod="calico-kube-controllers-7f9448c8f5-ck2sf" WorkloadEndpoint="172--232--7--133-k8s-calico--kube--controllers--7f9448c8f5--ck2sf-eth0" Aug 13 01:49:05.949884 containerd[1559]: 2025-08-13 01:49:05.927 [INFO][6497] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="4346eff858a2ff84b96daea8aba5f7765b6ccf2cc57e99bb74f3bb02a46e0492" Namespace="calico-system" Pod="calico-kube-controllers-7f9448c8f5-ck2sf" WorkloadEndpoint="172--232--7--133-k8s-calico--kube--controllers--7f9448c8f5--ck2sf-eth0" Aug 13 01:49:05.949884 containerd[1559]: 2025-08-13 01:49:05.928 
[INFO][6497] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="4346eff858a2ff84b96daea8aba5f7765b6ccf2cc57e99bb74f3bb02a46e0492" Namespace="calico-system" Pod="calico-kube-controllers-7f9448c8f5-ck2sf" WorkloadEndpoint="172--232--7--133-k8s-calico--kube--controllers--7f9448c8f5--ck2sf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"172--232--7--133-k8s-calico--kube--controllers--7f9448c8f5--ck2sf-eth0", GenerateName:"calico-kube-controllers-7f9448c8f5-", Namespace:"calico-system", SelfLink:"", UID:"35b780d0-9cdb-470f-8c65-ede949b6d595", ResourceVersion:"796", Generation:0, CreationTimestamp:time.Date(2025, time.August, 13, 1, 45, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7f9448c8f5", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"172-232-7-133", ContainerID:"4346eff858a2ff84b96daea8aba5f7765b6ccf2cc57e99bb74f3bb02a46e0492", Pod:"calico-kube-controllers-7f9448c8f5-ck2sf", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.90.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali5fa6f6628d7", MAC:"82:e8:19:54:86:26", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Aug 13 01:49:05.949884 containerd[1559]: 2025-08-13 01:49:05.943 [INFO][6497] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="4346eff858a2ff84b96daea8aba5f7765b6ccf2cc57e99bb74f3bb02a46e0492" Namespace="calico-system" Pod="calico-kube-controllers-7f9448c8f5-ck2sf" WorkloadEndpoint="172--232--7--133-k8s-calico--kube--controllers--7f9448c8f5--ck2sf-eth0" Aug 13 01:49:05.987584 containerd[1559]: time="2025-08-13T01:49:05.987243776Z" level=info msg="connecting to shim 4346eff858a2ff84b96daea8aba5f7765b6ccf2cc57e99bb74f3bb02a46e0492" address="unix:///run/containerd/s/168587caf2abf4b43651d3c12252179075ff60214c141f7c43f2fd36383f02d1" namespace=k8s.io protocol=ttrpc version=3 Aug 13 01:49:06.041042 systemd[1]: Started cri-containerd-4346eff858a2ff84b96daea8aba5f7765b6ccf2cc57e99bb74f3bb02a46e0492.scope - libcontainer container 4346eff858a2ff84b96daea8aba5f7765b6ccf2cc57e99bb74f3bb02a46e0492. 
Aug 13 01:49:06.170194 containerd[1559]: time="2025-08-13T01:49:06.170157003Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7f9448c8f5-ck2sf,Uid:35b780d0-9cdb-470f-8c65-ede949b6d595,Namespace:calico-system,Attempt:0,} returns sandbox id \"4346eff858a2ff84b96daea8aba5f7765b6ccf2cc57e99bb74f3bb02a46e0492\"" Aug 13 01:49:06.192054 containerd[1559]: time="2025-08-13T01:49:06.191970029Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:49:06.193467 containerd[1559]: time="2025-08-13T01:49:06.193367542Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.2: active requests=0, bytes read=8759190" Aug 13 01:49:06.194416 containerd[1559]: time="2025-08-13T01:49:06.194394441Z" level=info msg="ImageCreate event name:\"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:49:06.196226 containerd[1559]: time="2025-08-13T01:49:06.196183410Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:49:06.196662 containerd[1559]: time="2025-08-13T01:49:06.196640214Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/csi:v3.30.2\" with image id \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\", repo tag \"ghcr.io/flatcar/calico/csi:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/csi@sha256:e570128aa8067a2f06b96d3cc98afa2e0a4b9790b435ee36ca051c8e72aeb8d0\", size \"10251893\" in 1.087688913s" Aug 13 01:49:06.196733 containerd[1559]: time="2025-08-13T01:49:06.196718773Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.2\" returns image reference \"sha256:c7fd1cc652979d89a51bbcc125e28e90c9815c0bd8f922a5bd36eed4e1927c6d\"" Aug 13 01:49:06.199235 containerd[1559]: time="2025-08-13T01:49:06.199197794Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 01:49:06.200384 containerd[1559]: time="2025-08-13T01:49:06.200285402Z" level=info msg="CreateContainer within sandbox \"74fe3fb1af87dd3ee880ab3ae949ae8659909d38affc57031010806df1eaf795\" for container &ContainerMetadata{Name:calico-csi,Attempt:0,}" Aug 13 01:49:06.214231 containerd[1559]: time="2025-08-13T01:49:06.214121521Z" level=info msg="Container 79cbc95bbdf7255045bd57bdb65c44186fb0a06a3acc8030cc507efc0147d6e0: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:49:06.227304 containerd[1559]: time="2025-08-13T01:49:06.227271127Z" level=info msg="CreateContainer within sandbox \"74fe3fb1af87dd3ee880ab3ae949ae8659909d38affc57031010806df1eaf795\" for &ContainerMetadata{Name:calico-csi,Attempt:0,} returns container id \"79cbc95bbdf7255045bd57bdb65c44186fb0a06a3acc8030cc507efc0147d6e0\"" Aug 13 01:49:06.227934 containerd[1559]: time="2025-08-13T01:49:06.227893091Z" level=info msg="StartContainer for \"79cbc95bbdf7255045bd57bdb65c44186fb0a06a3acc8030cc507efc0147d6e0\"" Aug 13 01:49:06.229255 containerd[1559]: time="2025-08-13T01:49:06.229220265Z" level=info msg="connecting to shim 79cbc95bbdf7255045bd57bdb65c44186fb0a06a3acc8030cc507efc0147d6e0" address="unix:///run/containerd/s/0bf890fa5edb51983ff0ed285e08bbe51a8803a614f315d5fb6ae1836f10700a" protocol=ttrpc version=3 Aug 13 01:49:06.260021 systemd[1]: Started cri-containerd-79cbc95bbdf7255045bd57bdb65c44186fb0a06a3acc8030cc507efc0147d6e0.scope 
- libcontainer container 79cbc95bbdf7255045bd57bdb65c44186fb0a06a3acc8030cc507efc0147d6e0. Aug 13 01:49:06.310707 containerd[1559]: time="2025-08-13T01:49:06.310574857Z" level=info msg="StartContainer for \"79cbc95bbdf7255045bd57bdb65c44186fb0a06a3acc8030cc507efc0147d6e0\" returns successfully" Aug 13 01:49:06.679086 systemd-networkd[1470]: calid1811f01760: Gained IPv6LL Aug 13 01:49:06.779696 kubelet[2718]: E0813 01:49:06.779664 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:49:07.511008 systemd-networkd[1470]: cali5fa6f6628d7: Gained IPv6LL Aug 13 01:49:08.497002 containerd[1559]: time="2025-08-13T01:49:08.496954364Z" level=error msg="failed to cleanup \"extract-643570946-_y0_ sha256:8c887db5e1c1509bbe47d7287572f140b60a8c0adc0202d6183f3ae0c5f0b532\"" error="write /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db: no space left on device" Aug 13 01:49:08.497651 containerd[1559]: time="2025-08-13T01:49:08.497581908Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/95/fs/usr/bin/kube-controllers: no space left on device" Aug 13 01:49:08.497651 containerd[1559]: time="2025-08-13T01:49:08.497605677Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=51276688" Aug 13 01:49:08.498154 kubelet[2718]: E0813 01:49:08.498018 2718 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/95/fs/usr/bin/kube-controllers: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 01:49:08.499887 kubelet[2718]: E0813 01:49:08.498761 2718 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/95/fs/usr/bin/kube-controllers: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 01:49:08.499887 kubelet[2718]: E0813 01:49:08.499000 2718 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zrxd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7f9448c8f5-ck2sf_calico-system(35b780d0-9cdb-470f-8c65-ede949b6d595): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/95/fs/usr/bin/kube-controllers: no space left on device" logger="UnhandledError" Aug 13 01:49:08.500194 kubelet[2718]: E0813 01:49:08.500127 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/95/fs/usr/bin/kube-controllers: no space left on device\"" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" podUID="35b780d0-9cdb-470f-8c65-ede949b6d595" Aug 
13 01:49:08.500260 containerd[1559]: time="2025-08-13T01:49:08.500235197Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 01:49:08.553999 containerd[1559]: time="2025-08-13T01:49:08.553953505Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/kube-system_kube-controller-manager-172-232-7-133_5f9ea1c169ca17f70f2b596c13773f1c/kube-controller-manager/0.log\"" error="write /var/log/pods/kube-system_kube-controller-manager-172-232-7-133_5f9ea1c169ca17f70f2b596c13773f1c/kube-controller-manager/0.log: no space left on device" Aug 13 01:49:08.602975 containerd[1559]: time="2025-08-13T01:49:08.602902917Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/kube-system_kube-controller-manager-172-232-7-133_5f9ea1c169ca17f70f2b596c13773f1c/kube-controller-manager/0.log\"" error="write /var/log/pods/kube-system_kube-controller-manager-172-232-7-133_5f9ea1c169ca17f70f2b596c13773f1c/kube-controller-manager/0.log: no space left on device" Aug 13 01:49:08.655654 containerd[1559]: time="2025-08-13T01:49:08.653937557Z" level=error msg="Fail to write \"stderr\" log to log file \"/var/log/pods/kube-system_kube-controller-manager-172-232-7-133_5f9ea1c169ca17f70f2b596c13773f1c/kube-controller-manager/0.log\"" error="write /var/log/pods/kube-system_kube-controller-manager-172-232-7-133_5f9ea1c169ca17f70f2b596c13773f1c/kube-controller-manager/0.log: no space left on device" Aug 13 01:49:08.660957 containerd[1559]: time="2025-08-13T01:49:08.660908287Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\": mkdir /var/lib/containerd/io.containerd.content.v1.content/ingest/ee940ae86eafd2ed237f0737028242a8d54542a0643eb7f563fcf78afdd9c630: no space left on device" Aug 13 01:49:08.661129 containerd[1559]: time="2025-08-13T01:49:08.661064825Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=93" Aug 13 01:49:08.661333 kubelet[2718]: E0813 01:49:08.661293 2718 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\": mkdir /var/lib/containerd/io.containerd.content.v1.content/ingest/ee940ae86eafd2ed237f0737028242a8d54542a0643eb7f563fcf78afdd9c630: no space left on device" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2" Aug 13 01:49:08.661380 kubelet[2718]: E0813 01:49:08.661349 2718 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\": mkdir /var/lib/containerd/io.containerd.content.v1.content/ingest/ee940ae86eafd2ed237f0737028242a8d54542a0643eb7f563fcf78afdd9c630: no space left on device" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2" Aug 13 01:49:08.661481 kubelet[2718]: E0813 01:49:08.661440 2718 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m2lnb,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-dbqt2_calico-system(9674627f-b072-4139-b18d-fdf07891e1e2): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\": mkdir /var/lib/containerd/io.containerd.content.v1.content/ingest/ee940ae86eafd2ed237f0737028242a8d54542a0643eb7f563fcf78afdd9c630: no space left on device" logger="UnhandledError" Aug 13 01:49:08.662801 kubelet[2718]: E0813 01:49:08.662768 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\\\": mkdir /var/lib/containerd/io.containerd.content.v1.content/ingest/ee940ae86eafd2ed237f0737028242a8d54542a0643eb7f563fcf78afdd9c630: no space left on device\"" pod="calico-system/csi-node-driver-dbqt2" podUID="9674627f-b072-4139-b18d-fdf07891e1e2" Aug 13 01:49:08.688086 kubelet[2718]: I0813 01:49:08.687485 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:49:08.688086 kubelet[2718]: I0813 01:49:08.687510 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:49:08.691181 kubelet[2718]: I0813 01:49:08.691164 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:49:08.707609 kubelet[2718]: I0813 01:49:08.707590 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:49:08.707710 kubelet[2718]: I0813 01:49:08.707690 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" 
pods=["calico-system/calico-kube-controllers-7f9448c8f5-ck2sf","calico-system/calico-typha-b5b9867b4-p6jwz","kube-system/coredns-668d6bf9bc-dfjz8","kube-system/coredns-668d6bf9bc-j47vf","calico-system/calico-node-qgskr","kube-system/kube-controller-manager-172-232-7-133","kube-system/kube-proxy-fw2dv","kube-system/kube-apiserver-172-232-7-133","calico-system/csi-node-driver-dbqt2","kube-system/kube-scheduler-172-232-7-133"] Aug 13 01:49:08.707783 kubelet[2718]: E0813 01:49:08.707721 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:49:08.707783 kubelet[2718]: E0813 01:49:08.707734 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-b5b9867b4-p6jwz" Aug 13 01:49:08.707783 kubelet[2718]: E0813 01:49:08.707743 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:49:08.707783 kubelet[2718]: E0813 01:49:08.707759 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:49:08.707783 kubelet[2718]: E0813 01:49:08.707767 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-qgskr" Aug 13 01:49:08.707783 kubelet[2718]: E0813 01:49:08.707774 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-133" Aug 13 01:49:08.707783 kubelet[2718]: E0813 01:49:08.707782 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-fw2dv" Aug 13 01:49:08.707783 kubelet[2718]: E0813 01:49:08.707790 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-133" Aug 13 01:49:08.708021 kubelet[2718]: E0813 01:49:08.707798 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:49:08.708021 kubelet[2718]: E0813 01:49:08.707806 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-133" Aug 13 01:49:08.708021 kubelet[2718]: I0813 01:49:08.707814 2718 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:49:09.375412 kubelet[2718]: E0813 01:49:09.375026 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to extract layer sha256:02d3dffb3ef10df51972f4bc886d3c12267d2c7867905840dea1b421677959b9: write /var/lib/containerd/io.containerd.snapshotter.v1.overlayfs/snapshots/95/fs/usr/bin/kube-controllers: no space left on device\"" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" podUID="35b780d0-9cdb-470f-8c65-ede949b6d595" Aug 13 01:49:09.375412 kubelet[2718]: E0813 01:49:09.375355 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\\\": mkdir 
/var/lib/containerd/io.containerd.content.v1.content/ingest/ee940ae86eafd2ed237f0737028242a8d54542a0643eb7f563fcf78afdd9c630: no space left on device\"" pod="calico-system/csi-node-driver-dbqt2" podUID="9674627f-b072-4139-b18d-fdf07891e1e2" Aug 13 01:49:10.734458 systemd[1]: Started sshd@32-172.232.7.133:22-147.75.109.163:36934.service - OpenSSH per-connection server daemon (147.75.109.163:36934). Aug 13 01:49:11.086017 sshd[6617]: Accepted publickey for core from 147.75.109.163 port 36934 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:49:11.088435 sshd-session[6617]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:49:11.095051 systemd-logind[1539]: New session 33 of user core. Aug 13 01:49:11.100592 systemd[1]: Started session-33.scope - Session 33 of User core. Aug 13 01:49:11.390278 sshd[6621]: Connection closed by 147.75.109.163 port 36934 Aug 13 01:49:11.391060 sshd-session[6617]: pam_unix(sshd:session): session closed for user core Aug 13 01:49:11.394896 systemd-logind[1539]: Session 33 logged out. Waiting for processes to exit. Aug 13 01:49:11.395683 systemd[1]: sshd@32-172.232.7.133:22-147.75.109.163:36934.service: Deactivated successfully. Aug 13 01:49:11.397618 systemd[1]: session-33.scope: Deactivated successfully. Aug 13 01:49:11.400257 systemd-logind[1539]: Removed session 33. Aug 13 01:49:13.779441 kubelet[2718]: E0813 01:49:13.779406 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:49:16.451272 systemd[1]: Started sshd@33-172.232.7.133:22-147.75.109.163:36936.service - OpenSSH per-connection server daemon (147.75.109.163:36936). Aug 13 01:49:16.780434 sshd[6642]: Accepted publickey for core from 147.75.109.163 port 36936 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:49:16.781927 sshd-session[6642]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:49:16.787364 systemd-logind[1539]: New session 34 of user core. Aug 13 01:49:16.791980 systemd[1]: Started session-34.scope - Session 34 of User core. Aug 13 01:49:17.094946 sshd[6644]: Connection closed by 147.75.109.163 port 36936 Aug 13 01:49:17.096060 sshd-session[6642]: pam_unix(sshd:session): session closed for user core Aug 13 01:49:17.099977 systemd[1]: sshd@33-172.232.7.133:22-147.75.109.163:36936.service: Deactivated successfully. Aug 13 01:49:17.103495 systemd[1]: session-34.scope: Deactivated successfully. Aug 13 01:49:17.107543 systemd-logind[1539]: Session 34 logged out. Waiting for processes to exit. Aug 13 01:49:17.110085 systemd-logind[1539]: Removed session 34. 
Aug 13 01:49:18.727839 kubelet[2718]: I0813 01:49:18.727806 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:49:18.727839 kubelet[2718]: I0813 01:49:18.727843 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:49:18.729613 kubelet[2718]: I0813 01:49:18.729594 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:49:18.742499 kubelet[2718]: I0813 01:49:18.742483 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:49:18.742598 kubelet[2718]: I0813 01:49:18.742580 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-7f9448c8f5-ck2sf","calico-system/calico-typha-b5b9867b4-p6jwz","kube-system/coredns-668d6bf9bc-dfjz8","kube-system/coredns-668d6bf9bc-j47vf","calico-system/calico-node-qgskr","kube-system/kube-controller-manager-172-232-7-133","kube-system/kube-proxy-fw2dv","kube-system/kube-apiserver-172-232-7-133","calico-system/csi-node-driver-dbqt2","kube-system/kube-scheduler-172-232-7-133"] Aug 13 01:49:18.742702 kubelet[2718]: E0813 01:49:18.742609 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:49:18.742702 kubelet[2718]: E0813 01:49:18.742621 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-b5b9867b4-p6jwz" Aug 13 01:49:18.742702 kubelet[2718]: E0813 01:49:18.742629 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:49:18.742702 kubelet[2718]: E0813 01:49:18.742637 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:49:18.742702 kubelet[2718]: E0813 01:49:18.742645 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-qgskr" Aug 13 01:49:18.742702 kubelet[2718]: E0813 01:49:18.742652 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-133" Aug 13 01:49:18.742702 kubelet[2718]: E0813 01:49:18.742660 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-fw2dv" Aug 13 01:49:18.742702 kubelet[2718]: E0813 01:49:18.742667 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-133" Aug 13 01:49:18.742702 kubelet[2718]: E0813 01:49:18.742676 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:49:18.742702 kubelet[2718]: E0813 01:49:18.742683 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-133" Aug 13 01:49:18.742702 kubelet[2718]: I0813 01:49:18.742692 2718 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:49:20.782827 containerd[1559]: time="2025-08-13T01:49:20.782148810Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\"" Aug 13 01:49:21.747156 containerd[1559]: time="2025-08-13T01:49:21.747107020Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 
01:49:21.747908 containerd[1559]: time="2025-08-13T01:49:21.747873572Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2: active requests=0, bytes read=14703784" Aug 13 01:49:21.748982 containerd[1559]: time="2025-08-13T01:49:21.748941272Z" level=info msg="ImageCreate event name:\"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:49:21.752737 containerd[1559]: time="2025-08-13T01:49:21.752450858Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Aug 13 01:49:21.753282 containerd[1559]: time="2025-08-13T01:49:21.753247690Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" with image id \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\", repo tag \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\", repo digest \"ghcr.io/flatcar/calico/node-driver-registrar@sha256:8fec2de12dfa51bae89d941938a07af2598eb8bfcab55d0dded1d9c193d7b99f\", size \"16196439\" in 971.05222ms" Aug 13 01:49:21.753516 containerd[1559]: time="2025-08-13T01:49:21.753287629Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.2\" returns image reference \"sha256:9e48822a4fe26f4ed9231b361fdd1357ea3567f1fc0a8db4d616622fe570a866\"" Aug 13 01:49:21.756171 containerd[1559]: time="2025-08-13T01:49:21.756079272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 01:49:21.759587 containerd[1559]: time="2025-08-13T01:49:21.759552548Z" level=info msg="CreateContainer within sandbox \"74fe3fb1af87dd3ee880ab3ae949ae8659909d38affc57031010806df1eaf795\" for container &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,}" Aug 13 01:49:21.776044 containerd[1559]: time="2025-08-13T01:49:21.776002365Z" level=info msg="Container 72e1ba8427120e350e95e08986233c81633c0fcf98787321874a41a068a47eeb: CDI devices from CRI Config.CDIDevices: []" Aug 13 01:49:21.785123 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount449117444.mount: Deactivated successfully. Aug 13 01:49:21.789685 containerd[1559]: time="2025-08-13T01:49:21.789649161Z" level=info msg="CreateContainer within sandbox \"74fe3fb1af87dd3ee880ab3ae949ae8659909d38affc57031010806df1eaf795\" for &ContainerMetadata{Name:csi-node-driver-registrar,Attempt:0,} returns container id \"72e1ba8427120e350e95e08986233c81633c0fcf98787321874a41a068a47eeb\"" Aug 13 01:49:21.791098 containerd[1559]: time="2025-08-13T01:49:21.791068828Z" level=info msg="StartContainer for \"72e1ba8427120e350e95e08986233c81633c0fcf98787321874a41a068a47eeb\"" Aug 13 01:49:21.793293 containerd[1559]: time="2025-08-13T01:49:21.793263626Z" level=info msg="connecting to shim 72e1ba8427120e350e95e08986233c81633c0fcf98787321874a41a068a47eeb" address="unix:///run/containerd/s/0bf890fa5edb51983ff0ed285e08bbe51a8803a614f315d5fb6ae1836f10700a" protocol=ttrpc version=3 Aug 13 01:49:21.822998 systemd[1]: Started cri-containerd-72e1ba8427120e350e95e08986233c81633c0fcf98787321874a41a068a47eeb.scope - libcontainer container 72e1ba8427120e350e95e08986233c81633c0fcf98787321874a41a068a47eeb. 
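The "connecting to shim … protocol=ttrpc version=3" entry above shows containerd reaching the per-container shim over a unix socket under /run/containerd/s/. A small sketch that only checks such a socket is accepting connections; it does not speak ttrpc, and the socket path is copied from the log line above (it will differ for any other container):

package main

import (
	"fmt"
	"log"
	"net"
	"time"
)

func main() {
	// Socket path taken from the "connecting to shim" log line above; any
	// other shim socket under /run/containerd/s/ works the same way.
	const sock = "/run/containerd/s/0bf890fa5edb51983ff0ed285e08bbe51a8803a614f315d5fb6ae1836f10700a"

	conn, err := net.DialTimeout("unix", sock, 2*time.Second)
	if err != nil {
		log.Fatalf("shim socket not reachable: %v", err)
	}
	defer conn.Close()
	fmt.Println("shim socket is accepting connections; containerd speaks ttrpc over it")
}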
Aug 13 01:49:21.870032 containerd[1559]: time="2025-08-13T01:49:21.869996540Z" level=info msg="StartContainer for \"72e1ba8427120e350e95e08986233c81633c0fcf98787321874a41a068a47eeb\" returns successfully" Aug 13 01:49:21.991598 kubelet[2718]: I0813 01:49:21.991551 2718 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: csi.tigera.io endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock versions: 1.0.0 Aug 13 01:49:21.991598 kubelet[2718]: I0813 01:49:21.991595 2718 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: csi.tigera.io at endpoint: /var/lib/kubelet/plugins/csi.tigera.io/csi.sock Aug 13 01:49:22.162997 systemd[1]: Started sshd@34-172.232.7.133:22-147.75.109.163:36036.service - OpenSSH per-connection server daemon (147.75.109.163:36036). Aug 13 01:49:22.368545 containerd[1559]: time="2025-08-13T01:49:22.368490974Z" level=error msg="failed to cleanup \"extract-227911853-HT9G sha256:8c887db5e1c1509bbe47d7287572f140b60a8c0adc0202d6183f3ae0c5f0b532\"" error="write /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db: no space left on device" Aug 13 01:49:22.369149 containerd[1559]: time="2025-08-13T01:49:22.369115739Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" Aug 13 01:49:22.369214 containerd[1559]: time="2025-08-13T01:49:22.369187278Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=22024426" Aug 13 01:49:22.369433 kubelet[2718]: E0813 01:49:22.369382 2718 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 01:49:22.369433 kubelet[2718]: E0813 01:49:22.369423 2718 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 01:49:22.369583 kubelet[2718]: E0813 01:49:22.369535 2718 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zrxd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7f9448c8f5-ck2sf_calico-system(35b780d0-9cdb-470f-8c65-ede949b6d595): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" logger="UnhandledError" Aug 13 01:49:22.370928 kubelet[2718]: E0813 01:49:22.370899 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device\"" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" podUID="35b780d0-9cdb-470f-8c65-ede949b6d595" Aug 13 01:49:22.422072 kubelet[2718]: I0813 01:49:22.421958 2718 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="calico-system/csi-node-driver-dbqt2" podStartSLOduration=206.774714276 podStartE2EDuration="3m43.421945194s" podCreationTimestamp="2025-08-13 01:45:39 +0000 UTC" firstStartedPulling="2025-08-13 01:49:05.108398008 +0000 UTC m=+222.439139772" lastFinishedPulling="2025-08-13 01:49:21.755628916 +0000 UTC m=+239.086370690" observedRunningTime="2025-08-13 01:49:22.418895134 +0000 UTC m=+239.749636918" watchObservedRunningTime="2025-08-13 01:49:22.421945194 +0000 UTC m=+239.752686958" Aug 13 01:49:22.504095 sshd[6691]: Accepted publickey for core from 147.75.109.163 port 36036 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:49:22.506197 sshd-session[6691]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:49:22.513584 systemd-logind[1539]: New session 35 of user core. Aug 13 01:49:22.519009 systemd[1]: Started session-35.scope - Session 35 of User core. Aug 13 01:49:22.784263 kubelet[2718]: E0813 01:49:22.783558 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:49:22.820949 sshd[6693]: Connection closed by 147.75.109.163 port 36036 Aug 13 01:49:22.822570 sshd-session[6691]: pam_unix(sshd:session): session closed for user core Aug 13 01:49:22.828640 systemd[1]: sshd@34-172.232.7.133:22-147.75.109.163:36036.service: Deactivated successfully. Aug 13 01:49:22.829125 systemd-logind[1539]: Session 35 logged out. Waiting for processes to exit. Aug 13 01:49:22.836018 systemd[1]: session-35.scope: Deactivated successfully. Aug 13 01:49:22.842960 systemd-logind[1539]: Removed session 35. Aug 13 01:49:25.400976 containerd[1559]: time="2025-08-13T01:49:25.400784252Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c417f2abc38fb1e7f571e895b4832cd8d84c96e5d9cac2ded4c0b2b1c7bebb21\" id:\"8d9086996cb545f360bdd68ad4538ff883cb0bf6bda1cc7861e90c4012966c80\" pid:6719 exited_at:{seconds:1755049765 nanos:400008700}" Aug 13 01:49:27.882595 systemd[1]: Started sshd@35-172.232.7.133:22-147.75.109.163:36046.service - OpenSSH per-connection server daemon (147.75.109.163:36046). Aug 13 01:49:28.222569 sshd[6733]: Accepted publickey for core from 147.75.109.163 port 36046 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:49:28.223954 sshd-session[6733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:49:28.228651 systemd-logind[1539]: New session 36 of user core. Aug 13 01:49:28.238219 systemd[1]: Started session-36.scope - Session 36 of User core. Aug 13 01:49:28.538084 sshd[6735]: Connection closed by 147.75.109.163 port 36046 Aug 13 01:49:28.538908 sshd-session[6733]: pam_unix(sshd:session): session closed for user core Aug 13 01:49:28.543151 systemd-logind[1539]: Session 36 logged out. Waiting for processes to exit. Aug 13 01:49:28.543830 systemd[1]: sshd@35-172.232.7.133:22-147.75.109.163:36046.service: Deactivated successfully. Aug 13 01:49:28.546231 systemd[1]: session-36.scope: Deactivated successfully. Aug 13 01:49:28.548415 systemd-logind[1539]: Removed session 36. 
Aug 13 01:49:28.769556 kubelet[2718]: I0813 01:49:28.769520 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:49:28.769556 kubelet[2718]: I0813 01:49:28.769558 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:49:28.771465 kubelet[2718]: I0813 01:49:28.771441 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:49:28.786364 kubelet[2718]: I0813 01:49:28.786312 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:49:28.786488 kubelet[2718]: I0813 01:49:28.786439 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-7f9448c8f5-ck2sf","calico-system/calico-typha-b5b9867b4-p6jwz","kube-system/coredns-668d6bf9bc-dfjz8","kube-system/coredns-668d6bf9bc-j47vf","calico-system/calico-node-qgskr","kube-system/kube-controller-manager-172-232-7-133","kube-system/kube-proxy-fw2dv","calico-system/csi-node-driver-dbqt2","kube-system/kube-apiserver-172-232-7-133","kube-system/kube-scheduler-172-232-7-133"] Aug 13 01:49:28.786488 kubelet[2718]: E0813 01:49:28.786466 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:49:28.786488 kubelet[2718]: E0813 01:49:28.786478 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-b5b9867b4-p6jwz" Aug 13 01:49:28.786488 kubelet[2718]: E0813 01:49:28.786487 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:49:28.786605 kubelet[2718]: E0813 01:49:28.786495 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:49:28.786605 kubelet[2718]: E0813 01:49:28.786504 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-qgskr" Aug 13 01:49:28.786605 kubelet[2718]: E0813 01:49:28.786511 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-133" Aug 13 01:49:28.786605 kubelet[2718]: E0813 01:49:28.786518 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-fw2dv" Aug 13 01:49:28.786605 kubelet[2718]: E0813 01:49:28.786527 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:49:28.786605 kubelet[2718]: E0813 01:49:28.786534 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-133" Aug 13 01:49:28.786605 kubelet[2718]: E0813 01:49:28.786542 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-133" Aug 13 01:49:28.786605 kubelet[2718]: I0813 01:49:28.786551 2718 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:49:33.599038 systemd[1]: Started sshd@36-172.232.7.133:22-147.75.109.163:50778.service - OpenSSH per-connection server daemon (147.75.109.163:50778). 
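Every pod the eviction manager ranks above is then rejected with "cannot evict a critical pod": the control-plane static pods and the Calico, CoreDNS and kube-proxy workloads are all treated as critical, so ephemeral-storage pressure on this node cannot be relieved by eviction at all. As a rough sketch of the kind of check involved (not the kubelet's actual code), a pod is typically considered critical when it is a static/mirror pod or its priority sits in the system-critical band:

package main

import "fmt"

// Pod is a pared-down stand-in for the fields the check needs; the real
// kubelet works on *v1.Pod plus its own static/mirror-pod bookkeeping.
type Pod struct {
	Name     string
	Static   bool  // static or mirror pod (e.g. kube-apiserver-172-232-7-133)
	Priority int32 // resolved from priorityClassName at admission time
}

// systemCriticalPriority is the floor of the priority band used by the
// system-cluster-critical / system-node-critical classes.
const systemCriticalPriority = 2_000_000_000

func isCritical(p Pod) bool {
	return p.Static || p.Priority >= systemCriticalPriority
}

func main() {
	pods := []Pod{
		{Name: "kube-apiserver-172-232-7-133", Static: true, Priority: 2_000_001_000},
		{Name: "calico-node-qgskr", Priority: 2_000_001_000},
		{Name: "coredns-668d6bf9bc-dfjz8", Priority: 2_000_000_000},
	}
	for _, p := range pods {
		fmt.Printf("%-35s critical=%v\n", p.Name, isCritical(p))
	}
}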
Aug 13 01:49:33.928661 sshd[6750]: Accepted publickey for core from 147.75.109.163 port 50778 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:49:33.930450 sshd-session[6750]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:49:33.935033 systemd-logind[1539]: New session 37 of user core. Aug 13 01:49:33.946980 systemd[1]: Started session-37.scope - Session 37 of User core. Aug 13 01:49:34.219783 sshd[6752]: Connection closed by 147.75.109.163 port 50778 Aug 13 01:49:34.220328 sshd-session[6750]: pam_unix(sshd:session): session closed for user core Aug 13 01:49:34.223962 systemd-logind[1539]: Session 37 logged out. Waiting for processes to exit. Aug 13 01:49:34.224663 systemd[1]: sshd@36-172.232.7.133:22-147.75.109.163:50778.service: Deactivated successfully. Aug 13 01:49:34.226490 systemd[1]: session-37.scope: Deactivated successfully. Aug 13 01:49:34.228552 systemd-logind[1539]: Removed session 37. Aug 13 01:49:37.780826 kubelet[2718]: E0813 01:49:37.780654 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device\"" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" podUID="35b780d0-9cdb-470f-8c65-ede949b6d595" Aug 13 01:49:38.821325 kubelet[2718]: I0813 01:49:38.821276 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:49:38.821325 kubelet[2718]: I0813 01:49:38.821311 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:49:38.823454 kubelet[2718]: I0813 01:49:38.823438 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:49:38.837593 kubelet[2718]: I0813 01:49:38.837570 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:49:38.837719 kubelet[2718]: I0813 01:49:38.837700 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-7f9448c8f5-ck2sf","calico-system/calico-typha-b5b9867b4-p6jwz","kube-system/coredns-668d6bf9bc-dfjz8","kube-system/coredns-668d6bf9bc-j47vf","calico-system/calico-node-qgskr","kube-system/kube-controller-manager-172-232-7-133","kube-system/kube-proxy-fw2dv","calico-system/csi-node-driver-dbqt2","kube-system/kube-apiserver-172-232-7-133","kube-system/kube-scheduler-172-232-7-133"] Aug 13 01:49:38.837785 kubelet[2718]: E0813 01:49:38.837730 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:49:38.837785 kubelet[2718]: E0813 01:49:38.837741 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-b5b9867b4-p6jwz" Aug 13 01:49:38.837785 kubelet[2718]: E0813 01:49:38.837750 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:49:38.837785 kubelet[2718]: E0813 01:49:38.837758 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 
13 01:49:38.837785 kubelet[2718]: E0813 01:49:38.837766 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-qgskr" Aug 13 01:49:38.837785 kubelet[2718]: E0813 01:49:38.837773 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-133" Aug 13 01:49:38.837785 kubelet[2718]: E0813 01:49:38.837780 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-fw2dv" Aug 13 01:49:38.837785 kubelet[2718]: E0813 01:49:38.837790 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:49:38.837970 kubelet[2718]: E0813 01:49:38.837797 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-133" Aug 13 01:49:38.837970 kubelet[2718]: E0813 01:49:38.837803 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-133" Aug 13 01:49:38.837970 kubelet[2718]: I0813 01:49:38.837812 2718 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:49:39.288001 systemd[1]: Started sshd@37-172.232.7.133:22-147.75.109.163:41904.service - OpenSSH per-connection server daemon (147.75.109.163:41904). Aug 13 01:49:39.627079 sshd[6772]: Accepted publickey for core from 147.75.109.163 port 41904 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:49:39.628569 sshd-session[6772]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:49:39.634239 systemd-logind[1539]: New session 38 of user core. Aug 13 01:49:39.637970 systemd[1]: Started session-38.scope - Session 38 of User core. Aug 13 01:49:39.928482 sshd[6774]: Connection closed by 147.75.109.163 port 41904 Aug 13 01:49:39.928715 sshd-session[6772]: pam_unix(sshd:session): session closed for user core Aug 13 01:49:39.933694 systemd[1]: sshd@37-172.232.7.133:22-147.75.109.163:41904.service: Deactivated successfully. Aug 13 01:49:39.935983 systemd[1]: session-38.scope: Deactivated successfully. Aug 13 01:49:39.937241 systemd-logind[1539]: Session 38 logged out. Waiting for processes to exit. Aug 13 01:49:39.938788 systemd-logind[1539]: Removed session 38. Aug 13 01:49:44.989811 systemd[1]: Started sshd@38-172.232.7.133:22-147.75.109.163:41914.service - OpenSSH per-connection server daemon (147.75.109.163:41914). Aug 13 01:49:45.327913 sshd[6786]: Accepted publickey for core from 147.75.109.163 port 41914 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:49:45.329379 sshd-session[6786]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:49:45.334970 systemd-logind[1539]: New session 39 of user core. Aug 13 01:49:45.343061 systemd[1]: Started session-39.scope - Session 39 of User core. Aug 13 01:49:45.629019 sshd[6788]: Connection closed by 147.75.109.163 port 41914 Aug 13 01:49:45.629818 sshd-session[6786]: pam_unix(sshd:session): session closed for user core Aug 13 01:49:45.632810 systemd[1]: sshd@38-172.232.7.133:22-147.75.109.163:41914.service: Deactivated successfully. Aug 13 01:49:45.634973 systemd[1]: session-39.scope: Deactivated successfully. Aug 13 01:49:45.637987 systemd-logind[1539]: Session 39 logged out. Waiting for processes to exit. Aug 13 01:49:45.639430 systemd-logind[1539]: Removed session 39. 
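The ImagePullBackOff entries above recur every few minutes rather than continuously because failed pulls are retried with an increasing, capped delay. A minimal sketch of that style of capped exponential backoff; the 10-second base and 5-minute cap are the commonly documented kubelet defaults and are treated here as assumptions:

package main

import (
	"fmt"
	"time"
)

// backoff returns the delay before retry attempt n (0-based) for a capped
// exponential policy: base, 2*base, 4*base, ... up to max.
func backoff(n int, base, max time.Duration) time.Duration {
	d := base
	for i := 0; i < n; i++ {
		d *= 2
		if d >= max {
			return max
		}
	}
	return d
}

func main() {
	for n := 0; n < 7; n++ {
		fmt.Printf("attempt %d: wait %v\n", n, backoff(n, 10*time.Second, 5*time.Minute))
	}
	// Prints 10s, 20s, 40s, 1m20s, 2m40s, 5m0s, 5m0s.
}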
Aug 13 01:49:48.865791 kubelet[2718]: I0813 01:49:48.865759 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:49:48.865791 kubelet[2718]: I0813 01:49:48.865793 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:49:48.867818 kubelet[2718]: I0813 01:49:48.867802 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:49:48.882089 kubelet[2718]: I0813 01:49:48.882058 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:49:48.882276 kubelet[2718]: I0813 01:49:48.882256 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-7f9448c8f5-ck2sf","calico-system/calico-typha-b5b9867b4-p6jwz","kube-system/coredns-668d6bf9bc-j47vf","kube-system/coredns-668d6bf9bc-dfjz8","calico-system/calico-node-qgskr","kube-system/kube-controller-manager-172-232-7-133","kube-system/kube-proxy-fw2dv","calico-system/csi-node-driver-dbqt2","kube-system/kube-apiserver-172-232-7-133","kube-system/kube-scheduler-172-232-7-133"] Aug 13 01:49:48.882345 kubelet[2718]: E0813 01:49:48.882288 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:49:48.882345 kubelet[2718]: E0813 01:49:48.882299 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-b5b9867b4-p6jwz" Aug 13 01:49:48.882345 kubelet[2718]: E0813 01:49:48.882308 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:49:48.882345 kubelet[2718]: E0813 01:49:48.882318 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:49:48.882345 kubelet[2718]: E0813 01:49:48.882326 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-qgskr" Aug 13 01:49:48.882345 kubelet[2718]: E0813 01:49:48.882333 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-133" Aug 13 01:49:48.882345 kubelet[2718]: E0813 01:49:48.882340 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-fw2dv" Aug 13 01:49:48.882493 kubelet[2718]: E0813 01:49:48.882353 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:49:48.882493 kubelet[2718]: E0813 01:49:48.882360 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-133" Aug 13 01:49:48.882493 kubelet[2718]: E0813 01:49:48.882370 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-133" Aug 13 01:49:48.882493 kubelet[2718]: I0813 01:49:48.882395 2718 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:49:50.690957 systemd[1]: Started sshd@39-172.232.7.133:22-147.75.109.163:58754.service - OpenSSH per-connection server daemon (147.75.109.163:58754). 
Aug 13 01:49:51.020288 sshd[6802]: Accepted publickey for core from 147.75.109.163 port 58754 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:49:51.021738 sshd-session[6802]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:49:51.027500 systemd-logind[1539]: New session 40 of user core. Aug 13 01:49:51.033032 systemd[1]: Started session-40.scope - Session 40 of User core. Aug 13 01:49:51.326323 sshd[6804]: Connection closed by 147.75.109.163 port 58754 Aug 13 01:49:51.327216 sshd-session[6802]: pam_unix(sshd:session): session closed for user core Aug 13 01:49:51.333204 systemd[1]: sshd@39-172.232.7.133:22-147.75.109.163:58754.service: Deactivated successfully. Aug 13 01:49:51.335952 systemd[1]: session-40.scope: Deactivated successfully. Aug 13 01:49:51.339195 systemd-logind[1539]: Session 40 logged out. Waiting for processes to exit. Aug 13 01:49:51.341483 systemd-logind[1539]: Removed session 40. Aug 13 01:49:51.781193 containerd[1559]: time="2025-08-13T01:49:51.780886361Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 01:49:52.440997 containerd[1559]: time="2025-08-13T01:49:52.440959335Z" level=error msg="failed to cleanup \"extract-296893164-SSdw sha256:8c887db5e1c1509bbe47d7287572f140b60a8c0adc0202d6183f3ae0c5f0b532\"" error="write /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db: no space left on device" Aug 13 01:49:52.441655 containerd[1559]: time="2025-08-13T01:49:52.441587461Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" Aug 13 01:49:52.441655 containerd[1559]: time="2025-08-13T01:49:52.441608261Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=22024426" Aug 13 01:49:52.441791 kubelet[2718]: E0813 01:49:52.441757 2718 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 01:49:52.442920 kubelet[2718]: E0813 01:49:52.441801 2718 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 01:49:52.442920 kubelet[2718]: E0813 01:49:52.442252 2718 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zrxd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7f9448c8f5-ck2sf_calico-system(35b780d0-9cdb-470f-8c65-ede949b6d595): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" logger="UnhandledError" Aug 13 01:49:52.443510 kubelet[2718]: E0813 01:49:52.443478 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device\"" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" podUID="35b780d0-9cdb-470f-8c65-ede949b6d595" Aug 13 01:49:55.414024 containerd[1559]: time="2025-08-13T01:49:55.413969566Z" level=info msg="TaskExit 
event in podsandbox handler container_id:\"c417f2abc38fb1e7f571e895b4832cd8d84c96e5d9cac2ded4c0b2b1c7bebb21\" id:\"42cf9ca195d920ef68733dbd2f583cdc23f30438167d9c3b2bfd826a3a8c14f2\" pid:6828 exited_at:{seconds:1755049795 nanos:413597549}" Aug 13 01:49:56.390289 systemd[1]: Started sshd@40-172.232.7.133:22-147.75.109.163:58766.service - OpenSSH per-connection server daemon (147.75.109.163:58766). Aug 13 01:49:56.732218 sshd[6841]: Accepted publickey for core from 147.75.109.163 port 58766 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:49:56.733620 sshd-session[6841]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:49:56.738505 systemd-logind[1539]: New session 41 of user core. Aug 13 01:49:56.744977 systemd[1]: Started session-41.scope - Session 41 of User core. Aug 13 01:49:57.039552 sshd[6843]: Connection closed by 147.75.109.163 port 58766 Aug 13 01:49:57.040477 sshd-session[6841]: pam_unix(sshd:session): session closed for user core Aug 13 01:49:57.043407 systemd[1]: sshd@40-172.232.7.133:22-147.75.109.163:58766.service: Deactivated successfully. Aug 13 01:49:57.045115 systemd[1]: session-41.scope: Deactivated successfully. Aug 13 01:49:57.047436 systemd-logind[1539]: Session 41 logged out. Waiting for processes to exit. Aug 13 01:49:57.049823 systemd-logind[1539]: Removed session 41. Aug 13 01:49:58.906480 kubelet[2718]: I0813 01:49:58.906416 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:49:58.906480 kubelet[2718]: I0813 01:49:58.906467 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:49:58.909636 kubelet[2718]: I0813 01:49:58.909612 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:49:58.938050 kubelet[2718]: I0813 01:49:58.938019 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:49:58.938256 kubelet[2718]: I0813 01:49:58.938198 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-7f9448c8f5-ck2sf","calico-system/calico-typha-b5b9867b4-p6jwz","kube-system/coredns-668d6bf9bc-j47vf","kube-system/coredns-668d6bf9bc-dfjz8","calico-system/calico-node-qgskr","kube-system/kube-controller-manager-172-232-7-133","kube-system/kube-proxy-fw2dv","calico-system/csi-node-driver-dbqt2","kube-system/kube-apiserver-172-232-7-133","kube-system/kube-scheduler-172-232-7-133"] Aug 13 01:49:58.938256 kubelet[2718]: E0813 01:49:58.938227 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:49:58.938256 kubelet[2718]: E0813 01:49:58.938238 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-b5b9867b4-p6jwz" Aug 13 01:49:58.938256 kubelet[2718]: E0813 01:49:58.938248 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:49:58.938256 kubelet[2718]: E0813 01:49:58.938255 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:49:58.938256 kubelet[2718]: E0813 01:49:58.938264 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-qgskr" Aug 13 01:49:58.938471 kubelet[2718]: E0813 01:49:58.938272 2718 eviction_manager.go:609] "Eviction 
manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-133" Aug 13 01:49:58.938471 kubelet[2718]: E0813 01:49:58.938279 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-fw2dv" Aug 13 01:49:58.938471 kubelet[2718]: E0813 01:49:58.938290 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:49:58.938471 kubelet[2718]: E0813 01:49:58.938297 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-133" Aug 13 01:49:58.938471 kubelet[2718]: E0813 01:49:58.938305 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-133" Aug 13 01:49:58.938471 kubelet[2718]: I0813 01:49:58.938313 2718 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:50:01.812765 update_engine[1542]: I20250813 01:50:01.812702 1542 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Aug 13 01:50:01.812765 update_engine[1542]: I20250813 01:50:01.812748 1542 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Aug 13 01:50:01.813192 update_engine[1542]: I20250813 01:50:01.813040 1542 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Aug 13 01:50:01.814526 update_engine[1542]: I20250813 01:50:01.813957 1542 omaha_request_params.cc:62] Current group set to beta Aug 13 01:50:01.814526 update_engine[1542]: I20250813 01:50:01.814349 1542 update_attempter.cc:499] Already updated boot flags. Skipping. Aug 13 01:50:01.814526 update_engine[1542]: I20250813 01:50:01.814363 1542 update_attempter.cc:643] Scheduling an action processor start. Aug 13 01:50:01.814526 update_engine[1542]: I20250813 01:50:01.814378 1542 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Aug 13 01:50:01.814526 update_engine[1542]: I20250813 01:50:01.814402 1542 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Aug 13 01:50:01.814526 update_engine[1542]: I20250813 01:50:01.814460 1542 omaha_request_action.cc:271] Posting an Omaha request to disabled Aug 13 01:50:01.814526 update_engine[1542]: I20250813 01:50:01.814467 1542 omaha_request_action.cc:272] Request: Aug 13 01:50:01.814526 update_engine[1542]: Aug 13 01:50:01.814526 update_engine[1542]: Aug 13 01:50:01.814526 update_engine[1542]: Aug 13 01:50:01.814526 update_engine[1542]: Aug 13 01:50:01.814526 update_engine[1542]: Aug 13 01:50:01.814526 update_engine[1542]: Aug 13 01:50:01.814526 update_engine[1542]: Aug 13 01:50:01.814526 update_engine[1542]: Aug 13 01:50:01.814526 update_engine[1542]: I20250813 01:50:01.814474 1542 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 01:50:01.817555 locksmithd[1600]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Aug 13 01:50:01.818455 update_engine[1542]: I20250813 01:50:01.818424 1542 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 01:50:01.818835 update_engine[1542]: I20250813 01:50:01.818799 1542 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Aug 13 01:50:01.880486 update_engine[1542]: E20250813 01:50:01.880420 1542 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 01:50:01.880622 update_engine[1542]: I20250813 01:50:01.880518 1542 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Aug 13 01:50:02.105982 systemd[1]: Started sshd@41-172.232.7.133:22-147.75.109.163:57676.service - OpenSSH per-connection server daemon (147.75.109.163:57676). Aug 13 01:50:02.454389 sshd[6858]: Accepted publickey for core from 147.75.109.163 port 57676 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:50:02.454766 sshd-session[6858]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:50:02.461961 systemd-logind[1539]: New session 42 of user core. Aug 13 01:50:02.478985 systemd[1]: Started session-42.scope - Session 42 of User core. Aug 13 01:50:02.769493 sshd[6860]: Connection closed by 147.75.109.163 port 57676 Aug 13 01:50:02.770006 sshd-session[6858]: pam_unix(sshd:session): session closed for user core Aug 13 01:50:02.774699 systemd[1]: sshd@41-172.232.7.133:22-147.75.109.163:57676.service: Deactivated successfully. Aug 13 01:50:02.777996 systemd[1]: session-42.scope: Deactivated successfully. Aug 13 01:50:02.779645 systemd-logind[1539]: Session 42 logged out. Waiting for processes to exit. Aug 13 01:50:02.783908 systemd-logind[1539]: Removed session 42. Aug 13 01:50:06.784256 kubelet[2718]: E0813 01:50:06.784162 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device\"" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" podUID="35b780d0-9cdb-470f-8c65-ede949b6d595" Aug 13 01:50:07.829416 systemd[1]: Started sshd@42-172.232.7.133:22-147.75.109.163:57688.service - OpenSSH per-connection server daemon (147.75.109.163:57688). Aug 13 01:50:08.163532 sshd[6872]: Accepted publickey for core from 147.75.109.163 port 57688 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:50:08.164821 sshd-session[6872]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:50:08.169840 systemd-logind[1539]: New session 43 of user core. Aug 13 01:50:08.173976 systemd[1]: Started session-43.scope - Session 43 of User core. Aug 13 01:50:08.466533 sshd[6874]: Connection closed by 147.75.109.163 port 57688 Aug 13 01:50:08.468047 sshd-session[6872]: pam_unix(sshd:session): session closed for user core Aug 13 01:50:08.472617 systemd[1]: sshd@42-172.232.7.133:22-147.75.109.163:57688.service: Deactivated successfully. Aug 13 01:50:08.474573 systemd[1]: session-43.scope: Deactivated successfully. Aug 13 01:50:08.477112 systemd-logind[1539]: Session 43 logged out. Waiting for processes to exit. Aug 13 01:50:08.480284 systemd-logind[1539]: Removed session 43. 
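The update_engine entries above post the Omaha check to the literal host "disabled", which then fails with "Could not resolve host: disabled" and is retried; that pattern is the usual sign that the update server has been deliberately pointed at a non-resolvable value instead of a real endpoint. A small sketch that reads a key=value update configuration and reports the group and server; the /etc/flatcar/update.conf path and the SERVER/GROUP keys are assumptions based on Flatcar's configuration conventions, not taken from this log:

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	// Assumed location of the update-engine overrides on a Flatcar host.
	const path = "/etc/flatcar/update.conf"

	f, err := os.Open(path)
	if err != nil {
		log.Fatalf("open %s: %v", path, err)
	}
	defer f.Close()

	conf := map[string]string{}
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		if k, v, ok := strings.Cut(line, "="); ok {
			conf[strings.TrimSpace(k)] = strings.TrimSpace(v)
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}

	fmt.Println("GROUP: ", conf["GROUP"])  // the log above reports "Current group set to beta"
	fmt.Println("SERVER:", conf["SERVER"]) // a non-resolvable value such as "disabled" stops real update checks
}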
Aug 13 01:50:08.782578 kubelet[2718]: E0813 01:50:08.782300 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:50:08.977266 kubelet[2718]: I0813 01:50:08.977233 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:50:08.977266 kubelet[2718]: I0813 01:50:08.977264 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:50:08.978988 kubelet[2718]: I0813 01:50:08.978954 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:50:08.997571 kubelet[2718]: I0813 01:50:08.997400 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:50:08.997834 kubelet[2718]: I0813 01:50:08.997776 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-7f9448c8f5-ck2sf","calico-system/calico-typha-b5b9867b4-p6jwz","kube-system/coredns-668d6bf9bc-dfjz8","kube-system/coredns-668d6bf9bc-j47vf","calico-system/calico-node-qgskr","kube-system/kube-controller-manager-172-232-7-133","kube-system/kube-proxy-fw2dv","calico-system/csi-node-driver-dbqt2","kube-system/kube-apiserver-172-232-7-133","kube-system/kube-scheduler-172-232-7-133"] Aug 13 01:50:08.998084 kubelet[2718]: E0813 01:50:08.998022 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:50:08.998302 kubelet[2718]: E0813 01:50:08.998186 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-b5b9867b4-p6jwz" Aug 13 01:50:08.998302 kubelet[2718]: E0813 01:50:08.998202 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:50:08.998302 kubelet[2718]: E0813 01:50:08.998228 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:50:08.998302 kubelet[2718]: E0813 01:50:08.998237 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-qgskr" Aug 13 01:50:08.998302 kubelet[2718]: E0813 01:50:08.998247 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-133" Aug 13 01:50:08.998302 kubelet[2718]: E0813 01:50:08.998255 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-fw2dv" Aug 13 01:50:08.998662 kubelet[2718]: E0813 01:50:08.998581 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:50:08.998662 kubelet[2718]: E0813 01:50:08.998621 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-133" Aug 13 01:50:08.998662 kubelet[2718]: E0813 01:50:08.998631 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-133" Aug 13 01:50:08.998662 kubelet[2718]: I0813 01:50:08.998641 2718 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:50:11.808615 update_engine[1542]: I20250813 01:50:11.808552 1542 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 01:50:11.808979 
update_engine[1542]: I20250813 01:50:11.808831 1542 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 01:50:11.809172 update_engine[1542]: I20250813 01:50:11.809145 1542 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Aug 13 01:50:11.809993 update_engine[1542]: E20250813 01:50:11.809963 1542 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 01:50:11.810066 update_engine[1542]: I20250813 01:50:11.810010 1542 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Aug 13 01:50:12.781782 kubelet[2718]: E0813 01:50:12.781752 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:50:13.534501 systemd[1]: Started sshd@43-172.232.7.133:22-147.75.109.163:38004.service - OpenSSH per-connection server daemon (147.75.109.163:38004). Aug 13 01:50:13.779550 kubelet[2718]: E0813 01:50:13.779511 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:50:13.866153 sshd[6886]: Accepted publickey for core from 147.75.109.163 port 38004 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:50:13.867936 sshd-session[6886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:50:13.872244 systemd-logind[1539]: New session 44 of user core. Aug 13 01:50:13.883970 systemd[1]: Started session-44.scope - Session 44 of User core. Aug 13 01:50:14.172025 sshd[6888]: Connection closed by 147.75.109.163 port 38004 Aug 13 01:50:14.172309 sshd-session[6886]: pam_unix(sshd:session): session closed for user core Aug 13 01:50:14.176937 systemd-logind[1539]: Session 44 logged out. Waiting for processes to exit. Aug 13 01:50:14.177727 systemd[1]: sshd@43-172.232.7.133:22-147.75.109.163:38004.service: Deactivated successfully. Aug 13 01:50:14.180131 systemd[1]: session-44.scope: Deactivated successfully. Aug 13 01:50:14.181288 systemd-logind[1539]: Removed session 44. 
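The recurring "Nameserver limits exceeded" warnings above mean the node's resolv.conf lists more nameservers than the kubelet passes through to pods, so only the first three (172.232.0.13, 172.232.0.22, 172.232.0.9) are applied. A small sketch performing the same count; the limit of 3 is the classic resolver limit the warning refers to, and /etc/resolv.conf is assumed as the default path (the kubelet can be pointed elsewhere with --resolv-conf):

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
	"strings"
)

func main() {
	const path = "/etc/resolv.conf" // assumed default; may differ if --resolv-conf is set
	const limit = 3                 // classic libc resolver limit the kubelet warning refers to

	f, err := os.Open(path)
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	var servers []string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		fields := strings.Fields(sc.Text())
		if len(fields) >= 2 && fields[0] == "nameserver" {
			servers = append(servers, fields[1])
		}
	}
	if err := sc.Err(); err != nil {
		log.Fatal(err)
	}

	fmt.Printf("found %d nameservers: %v\n", len(servers), servers)
	if len(servers) > limit {
		fmt.Printf("more than %d: expect the kubelet to warn and apply only the first %d\n", limit, limit)
	}
}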
Aug 13 01:50:16.779671 kubelet[2718]: E0813 01:50:16.779498 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:50:18.564817 containerd[1559]: time="2025-08-13T01:50:18.564718948Z" level=warning msg="container event discarded" container=867993db78ab5d0bf62647b61d34d9ed3d0f19de24f8cc71aa517bf63c71303f type=CONTAINER_CREATED_EVENT Aug 13 01:50:18.576037 containerd[1559]: time="2025-08-13T01:50:18.575967663Z" level=warning msg="container event discarded" container=867993db78ab5d0bf62647b61d34d9ed3d0f19de24f8cc71aa517bf63c71303f type=CONTAINER_STARTED_EVENT Aug 13 01:50:18.600351 containerd[1559]: time="2025-08-13T01:50:18.600318971Z" level=warning msg="container event discarded" container=54f7b95c3df4933177d9333b747dad3f6096e144d14babc0ce2132e2c29c42df type=CONTAINER_CREATED_EVENT Aug 13 01:50:18.616696 containerd[1559]: time="2025-08-13T01:50:18.616657756Z" level=warning msg="container event discarded" container=be4ea84125570b7f0bb4a4f1ad20524d1eb45d5fea7e7deb2dcf0568ac2cd0da type=CONTAINER_CREATED_EVENT Aug 13 01:50:18.616696 containerd[1559]: time="2025-08-13T01:50:18.616688916Z" level=warning msg="container event discarded" container=be4ea84125570b7f0bb4a4f1ad20524d1eb45d5fea7e7deb2dcf0568ac2cd0da type=CONTAINER_STARTED_EVENT Aug 13 01:50:18.646041 containerd[1559]: time="2025-08-13T01:50:18.645998486Z" level=warning msg="container event discarded" container=77300aca2d8184fb9e0635eb45faa4d319ff1a976e375c515c55461520c409cc type=CONTAINER_CREATED_EVENT Aug 13 01:50:18.646041 containerd[1559]: time="2025-08-13T01:50:18.646030855Z" level=warning msg="container event discarded" container=77300aca2d8184fb9e0635eb45faa4d319ff1a976e375c515c55461520c409cc type=CONTAINER_STARTED_EVENT Aug 13 01:50:18.646041 containerd[1559]: time="2025-08-13T01:50:18.646039895Z" level=warning msg="container event discarded" container=d81069ab95c9b72874dd3230d9d6fe82c86726f1d6cd87ae5923cd102861244e type=CONTAINER_CREATED_EVENT Aug 13 01:50:18.675365 containerd[1559]: time="2025-08-13T01:50:18.675315596Z" level=warning msg="container event discarded" container=072f7d2d45b3804aec0ce1bfc6db7bb8c0cf5ff6a3a4eb92f172d274c987b1c4 type=CONTAINER_CREATED_EVENT Aug 13 01:50:18.736126 containerd[1559]: time="2025-08-13T01:50:18.736061132Z" level=warning msg="container event discarded" container=54f7b95c3df4933177d9333b747dad3f6096e144d14babc0ce2132e2c29c42df type=CONTAINER_STARTED_EVENT Aug 13 01:50:18.791197 containerd[1559]: time="2025-08-13T01:50:18.791155092Z" level=warning msg="container event discarded" container=d81069ab95c9b72874dd3230d9d6fe82c86726f1d6cd87ae5923cd102861244e type=CONTAINER_STARTED_EVENT Aug 13 01:50:18.830451 containerd[1559]: time="2025-08-13T01:50:18.830341254Z" level=warning msg="container event discarded" container=072f7d2d45b3804aec0ce1bfc6db7bb8c0cf5ff6a3a4eb92f172d274c987b1c4 type=CONTAINER_STARTED_EVENT Aug 13 01:50:19.026243 kubelet[2718]: I0813 01:50:19.026206 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:50:19.026243 kubelet[2718]: I0813 01:50:19.026244 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:50:19.029303 kubelet[2718]: I0813 01:50:19.029242 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:50:19.045284 kubelet[2718]: I0813 01:50:19.045248 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to 
reclaim" resourceName="ephemeral-storage" Aug 13 01:50:19.045944 kubelet[2718]: I0813 01:50:19.045408 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-7f9448c8f5-ck2sf","calico-system/calico-typha-b5b9867b4-p6jwz","kube-system/coredns-668d6bf9bc-dfjz8","kube-system/coredns-668d6bf9bc-j47vf","calico-system/calico-node-qgskr","kube-system/kube-controller-manager-172-232-7-133","kube-system/kube-proxy-fw2dv","calico-system/csi-node-driver-dbqt2","kube-system/kube-apiserver-172-232-7-133","kube-system/kube-scheduler-172-232-7-133"] Aug 13 01:50:19.045944 kubelet[2718]: E0813 01:50:19.045459 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:50:19.045944 kubelet[2718]: E0813 01:50:19.045473 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-b5b9867b4-p6jwz" Aug 13 01:50:19.045944 kubelet[2718]: E0813 01:50:19.045482 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:50:19.045944 kubelet[2718]: E0813 01:50:19.045491 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:50:19.045944 kubelet[2718]: E0813 01:50:19.045501 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-qgskr" Aug 13 01:50:19.045944 kubelet[2718]: E0813 01:50:19.045508 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-133" Aug 13 01:50:19.045944 kubelet[2718]: E0813 01:50:19.045539 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-fw2dv" Aug 13 01:50:19.045944 kubelet[2718]: E0813 01:50:19.045549 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:50:19.045944 kubelet[2718]: E0813 01:50:19.045557 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-133" Aug 13 01:50:19.045944 kubelet[2718]: E0813 01:50:19.045565 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-133" Aug 13 01:50:19.045944 kubelet[2718]: I0813 01:50:19.045576 2718 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:50:19.241050 systemd[1]: Started sshd@44-172.232.7.133:22-147.75.109.163:46488.service - OpenSSH per-connection server daemon (147.75.109.163:46488). Aug 13 01:50:19.576979 sshd[6907]: Accepted publickey for core from 147.75.109.163 port 46488 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:50:19.579121 sshd-session[6907]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:50:19.584401 systemd-logind[1539]: New session 45 of user core. Aug 13 01:50:19.593249 systemd[1]: Started session-45.scope - Session 45 of User core. Aug 13 01:50:19.880119 sshd[6909]: Connection closed by 147.75.109.163 port 46488 Aug 13 01:50:19.880648 sshd-session[6907]: pam_unix(sshd:session): session closed for user core Aug 13 01:50:19.885468 systemd-logind[1539]: Session 45 logged out. Waiting for processes to exit. 
Aug 13 01:50:19.886255 systemd[1]: sshd@44-172.232.7.133:22-147.75.109.163:46488.service: Deactivated successfully. Aug 13 01:50:19.889073 systemd[1]: session-45.scope: Deactivated successfully. Aug 13 01:50:19.890660 systemd-logind[1539]: Removed session 45. Aug 13 01:50:21.780642 kubelet[2718]: E0813 01:50:21.780585 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device\"" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" podUID="35b780d0-9cdb-470f-8c65-ede949b6d595" Aug 13 01:50:21.809904 update_engine[1542]: I20250813 01:50:21.809820 1542 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 01:50:21.810441 update_engine[1542]: I20250813 01:50:21.810292 1542 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 01:50:21.810571 update_engine[1542]: I20250813 01:50:21.810539 1542 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Aug 13 01:50:21.811439 update_engine[1542]: E20250813 01:50:21.811408 1542 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 01:50:21.811467 update_engine[1542]: I20250813 01:50:21.811457 1542 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Aug 13 01:50:22.770790 kubelet[2718]: I0813 01:50:22.770752 2718 image_gc_manager.go:383] "Disk usage on image filesystem is over the high threshold, trying to free bytes down to the low threshold" usage=100 highThreshold=85 amountToFree=411531673 lowThreshold=80 Aug 13 01:50:22.770790 kubelet[2718]: E0813 01:50:22.770783 2718 kubelet.go:1551] "Image garbage collection failed multiple times in a row" err="Failed to garbage collect required amount of images. Attempted to free 411531673 bytes, but only found 0 bytes eligible to free." Aug 13 01:50:24.949835 systemd[1]: Started sshd@45-172.232.7.133:22-147.75.109.163:46490.service - OpenSSH per-connection server daemon (147.75.109.163:46490). Aug 13 01:50:25.286136 sshd[6923]: Accepted publickey for core from 147.75.109.163 port 46490 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:50:25.287827 sshd-session[6923]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:50:25.293608 systemd-logind[1539]: New session 46 of user core. Aug 13 01:50:25.299996 systemd[1]: Started session-46.scope - Session 46 of User core. Aug 13 01:50:25.424391 containerd[1559]: time="2025-08-13T01:50:25.424346030Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c417f2abc38fb1e7f571e895b4832cd8d84c96e5d9cac2ded4c0b2b1c7bebb21\" id:\"c52888cad23c5c5df822c9ad97c3ee516f0489fd0b1a7542538ac658c41e26d6\" pid:6939 exited_at:{seconds:1755049825 nanos:424118752}" Aug 13 01:50:25.598266 sshd[6925]: Connection closed by 147.75.109.163 port 46490 Aug 13 01:50:25.601306 sshd-session[6923]: pam_unix(sshd:session): session closed for user core Aug 13 01:50:25.607382 systemd-logind[1539]: Session 46 logged out. Waiting for processes to exit. Aug 13 01:50:25.608295 systemd[1]: sshd@45-172.232.7.133:22-147.75.109.163:46490.service: Deactivated successfully. 
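[Editorial note] The image-GC entries above report usage=100, lowThreshold=80 and amountToFree=411531673. As a hedged illustration only (hypothetical helper names, not kubelet source), the sketch below shows the arithmetic those fields are consistent with: free enough bytes to bring usage back down to the low threshold. The capacity value is an assumption chosen so the sketch reproduces the figure seen in the log.

```go
// Illustrative sketch, not kubelet code: derive an "amount to free" from
// current usage and a low-threshold percentage of filesystem capacity.
package main

import "fmt"

func amountToFree(usedBytes, capacityBytes, lowThresholdPercent int64) int64 {
	// Target usage after garbage collection: lowThresholdPercent of capacity.
	target := capacityBytes * lowThresholdPercent / 100
	if usedBytes <= target {
		return 0
	}
	return usedBytes - target
}

func main() {
	// Assumed capacity (~2.06 GB), picked so that a 100%-full filesystem with an
	// 80% low threshold yields the 411531673 bytes reported in the log above.
	var capacity int64 = 2057658365
	fmt.Println(amountToFree(capacity, capacity, 80)) // prints 411531673
}
```

The follow-up "only found 0 bytes eligible to free" entry then matches this picture: the target amount is computed, but no unused images remain to delete.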
Aug 13 01:50:25.614004 systemd[1]: session-46.scope: Deactivated successfully. Aug 13 01:50:25.617793 systemd-logind[1539]: Removed session 46. Aug 13 01:50:26.781241 kubelet[2718]: E0813 01:50:26.780475 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:50:28.257367 containerd[1559]: time="2025-08-13T01:50:28.257291834Z" level=warning msg="container event discarded" container=8d12ca4d3be570d9809489d13968c51eab5c04f97c3369b3f24b62fdfc6952bc type=CONTAINER_CREATED_EVENT Aug 13 01:50:28.258104 containerd[1559]: time="2025-08-13T01:50:28.257464323Z" level=warning msg="container event discarded" container=8d12ca4d3be570d9809489d13968c51eab5c04f97c3369b3f24b62fdfc6952bc type=CONTAINER_STARTED_EVENT Aug 13 01:50:28.274970 containerd[1559]: time="2025-08-13T01:50:28.274874100Z" level=warning msg="container event discarded" container=df4c05f1cdb65e4117865a2215be1ab997ca1e8c0869e786ed771a63ca4b7cdf type=CONTAINER_CREATED_EVENT Aug 13 01:50:28.351208 containerd[1559]: time="2025-08-13T01:50:28.351132527Z" level=warning msg="container event discarded" container=df4c05f1cdb65e4117865a2215be1ab997ca1e8c0869e786ed771a63ca4b7cdf type=CONTAINER_STARTED_EVENT Aug 13 01:50:28.762795 containerd[1559]: time="2025-08-13T01:50:28.762727472Z" level=warning msg="container event discarded" container=2ca4c96c3f2bc5d7d11a98330829be80628ad125b77438e59f926cbe512d5b5a type=CONTAINER_CREATED_EVENT Aug 13 01:50:28.762795 containerd[1559]: time="2025-08-13T01:50:28.762779242Z" level=warning msg="container event discarded" container=2ca4c96c3f2bc5d7d11a98330829be80628ad125b77438e59f926cbe512d5b5a type=CONTAINER_STARTED_EVENT Aug 13 01:50:29.092313 kubelet[2718]: I0813 01:50:29.092213 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:50:29.092313 kubelet[2718]: I0813 01:50:29.092253 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:50:29.093775 kubelet[2718]: I0813 01:50:29.093757 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:50:29.117199 kubelet[2718]: I0813 01:50:29.117178 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:50:29.117371 kubelet[2718]: I0813 01:50:29.117322 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-7f9448c8f5-ck2sf","calico-system/calico-typha-b5b9867b4-p6jwz","kube-system/coredns-668d6bf9bc-dfjz8","kube-system/coredns-668d6bf9bc-j47vf","calico-system/calico-node-qgskr","kube-system/kube-controller-manager-172-232-7-133","kube-system/kube-proxy-fw2dv","calico-system/csi-node-driver-dbqt2","kube-system/kube-apiserver-172-232-7-133","kube-system/kube-scheduler-172-232-7-133"] Aug 13 01:50:29.117371 kubelet[2718]: E0813 01:50:29.117354 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:50:29.117371 kubelet[2718]: E0813 01:50:29.117368 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-b5b9867b4-p6jwz" Aug 13 01:50:29.117541 kubelet[2718]: E0813 01:50:29.117379 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:50:29.117541 kubelet[2718]: E0813 01:50:29.117389 2718 
eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:50:29.117541 kubelet[2718]: E0813 01:50:29.117397 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-qgskr" Aug 13 01:50:29.117541 kubelet[2718]: E0813 01:50:29.117405 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-133" Aug 13 01:50:29.117541 kubelet[2718]: E0813 01:50:29.117413 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-fw2dv" Aug 13 01:50:29.117541 kubelet[2718]: E0813 01:50:29.117424 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:50:29.117541 kubelet[2718]: E0813 01:50:29.117431 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-133" Aug 13 01:50:29.117541 kubelet[2718]: E0813 01:50:29.117439 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-133" Aug 13 01:50:29.117541 kubelet[2718]: I0813 01:50:29.117448 2718 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:50:29.780008 kubelet[2718]: E0813 01:50:29.779931 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:50:29.976119 containerd[1559]: time="2025-08-13T01:50:29.976047868Z" level=warning msg="container event discarded" container=791f4e0b9fb5d166424e71cf19fe156e098183010c642bf34a609399458696dc type=CONTAINER_CREATED_EVENT Aug 13 01:50:30.035407 containerd[1559]: time="2025-08-13T01:50:30.035284782Z" level=warning msg="container event discarded" container=791f4e0b9fb5d166424e71cf19fe156e098183010c642bf34a609399458696dc type=CONTAINER_STARTED_EVENT Aug 13 01:50:30.663945 systemd[1]: Started sshd@46-172.232.7.133:22-147.75.109.163:43410.service - OpenSSH per-connection server daemon (147.75.109.163:43410). Aug 13 01:50:31.009969 sshd[6968]: Accepted publickey for core from 147.75.109.163 port 43410 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:50:31.011972 sshd-session[6968]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:50:31.019229 systemd-logind[1539]: New session 47 of user core. Aug 13 01:50:31.027354 systemd[1]: Started session-47.scope - Session 47 of User core. Aug 13 01:50:31.333181 sshd[6970]: Connection closed by 147.75.109.163 port 43410 Aug 13 01:50:31.334059 sshd-session[6968]: pam_unix(sshd:session): session closed for user core Aug 13 01:50:31.338339 systemd[1]: sshd@46-172.232.7.133:22-147.75.109.163:43410.service: Deactivated successfully. Aug 13 01:50:31.339656 systemd-logind[1539]: Session 47 logged out. Waiting for processes to exit. Aug 13 01:50:31.341797 systemd[1]: session-47.scope: Deactivated successfully. Aug 13 01:50:31.345367 systemd-logind[1539]: Removed session 47. 
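[Editorial note] The repeating eviction-manager blocks above follow one pattern: rank candidate pods for ephemeral-storage reclaim, refuse each one that is critical, and end with "unable to evict any pods from the node". A minimal sketch of that control flow, with hypothetical types and not the kubelet's actual implementation, is shown below.

```go
// Minimal sketch (hypothetical types, not kubelet source) of the eviction
// pattern logged above: walk the ranked pods, skip critical ones, and report
// failure when nothing was evictable.
package main

import "fmt"

type pod struct {
	name     string
	critical bool // e.g. static control-plane pods or system-critical priority
}

func evictForEphemeralStorage(ranked []pod) bool {
	for _, p := range ranked {
		if p.critical {
			fmt.Printf("cannot evict a critical pod: %s\n", p.name)
			continue
		}
		fmt.Printf("evicting pod: %s\n", p.name)
		return true
	}
	fmt.Println("unable to evict any pods from the node")
	return false
}

func main() {
	// On this node every ranked pod is in kube-system or calico-system and is
	// treated as critical, so each pass ends without an eviction.
	ranked := []pod{
		{"calico-system/calico-kube-controllers-7f9448c8f5-ck2sf", true},
		{"kube-system/kube-apiserver-172-232-7-133", true},
	}
	evictForEphemeralStorage(ranked)
}
```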
Aug 13 01:50:31.808376 update_engine[1542]: I20250813 01:50:31.808311 1542 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 01:50:31.808929 update_engine[1542]: I20250813 01:50:31.808554 1542 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 01:50:31.809068 update_engine[1542]: I20250813 01:50:31.809039 1542 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Aug 13 01:50:31.810022 update_engine[1542]: E20250813 01:50:31.809993 1542 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 01:50:31.810067 update_engine[1542]: I20250813 01:50:31.810035 1542 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Aug 13 01:50:31.810067 update_engine[1542]: I20250813 01:50:31.810043 1542 omaha_request_action.cc:617] Omaha request response: Aug 13 01:50:31.810138 update_engine[1542]: E20250813 01:50:31.810114 1542 omaha_request_action.cc:636] Omaha request network transfer failed. Aug 13 01:50:31.810179 update_engine[1542]: I20250813 01:50:31.810138 1542 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Aug 13 01:50:31.810179 update_engine[1542]: I20250813 01:50:31.810173 1542 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Aug 13 01:50:31.810251 update_engine[1542]: I20250813 01:50:31.810180 1542 update_attempter.cc:306] Processing Done. Aug 13 01:50:31.810251 update_engine[1542]: E20250813 01:50:31.810194 1542 update_attempter.cc:619] Update failed. Aug 13 01:50:31.810251 update_engine[1542]: I20250813 01:50:31.810199 1542 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Aug 13 01:50:31.810251 update_engine[1542]: I20250813 01:50:31.810205 1542 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Aug 13 01:50:31.810251 update_engine[1542]: I20250813 01:50:31.810211 1542 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Aug 13 01:50:31.810961 update_engine[1542]: I20250813 01:50:31.810431 1542 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Aug 13 01:50:31.810961 update_engine[1542]: I20250813 01:50:31.810458 1542 omaha_request_action.cc:271] Posting an Omaha request to disabled Aug 13 01:50:31.810961 update_engine[1542]: I20250813 01:50:31.810464 1542 omaha_request_action.cc:272] Request: Aug 13 01:50:31.810961 update_engine[1542]: Aug 13 01:50:31.810961 update_engine[1542]: Aug 13 01:50:31.810961 update_engine[1542]: Aug 13 01:50:31.810961 update_engine[1542]: Aug 13 01:50:31.810961 update_engine[1542]: Aug 13 01:50:31.810961 update_engine[1542]: Aug 13 01:50:31.810961 update_engine[1542]: I20250813 01:50:31.810472 1542 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Aug 13 01:50:31.810961 update_engine[1542]: I20250813 01:50:31.810739 1542 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Aug 13 01:50:31.810961 update_engine[1542]: I20250813 01:50:31.810927 1542 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Aug 13 01:50:31.811329 locksmithd[1600]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Aug 13 01:50:31.811598 update_engine[1542]: E20250813 01:50:31.811523 1542 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled Aug 13 01:50:31.811598 update_engine[1542]: I20250813 01:50:31.811556 1542 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Aug 13 01:50:31.811598 update_engine[1542]: I20250813 01:50:31.811563 1542 omaha_request_action.cc:617] Omaha request response: Aug 13 01:50:31.811598 update_engine[1542]: I20250813 01:50:31.811570 1542 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Aug 13 01:50:31.811598 update_engine[1542]: I20250813 01:50:31.811575 1542 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Aug 13 01:50:31.811598 update_engine[1542]: I20250813 01:50:31.811580 1542 update_attempter.cc:306] Processing Done. Aug 13 01:50:31.811598 update_engine[1542]: I20250813 01:50:31.811586 1542 update_attempter.cc:310] Error event sent. Aug 13 01:50:31.811598 update_engine[1542]: I20250813 01:50:31.811594 1542 update_check_scheduler.cc:74] Next update check in 40m2s Aug 13 01:50:31.811977 locksmithd[1600]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Aug 13 01:50:32.783483 containerd[1559]: time="2025-08-13T01:50:32.782913999Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\"" Aug 13 01:50:33.496528 containerd[1559]: time="2025-08-13T01:50:33.496459492Z" level=error msg="failed to cleanup \"extract-381924900-QwQj sha256:8c887db5e1c1509bbe47d7287572f140b60a8c0adc0202d6183f3ae0c5f0b532\"" error="write /var/lib/containerd/io.containerd.metadata.v1.bolt/meta.db: no space left on device" Aug 13 01:50:33.497046 containerd[1559]: time="2025-08-13T01:50:33.497017508Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\" failed" error="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" Aug 13 01:50:33.497106 containerd[1559]: time="2025-08-13T01:50:33.497085108Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.2: active requests=0, bytes read=22024426" Aug 13 01:50:33.497254 kubelet[2718]: E0813 01:50:33.497209 2718 log.go:32] "PullImage from image service failed" err="rpc error: code = Unknown desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 01:50:33.497612 kubelet[2718]: E0813 01:50:33.497256 2718 kuberuntime_image.go:55] "Failed to pull image" err="failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.2" Aug 13 01:50:33.497665 
kubelet[2718]: E0813 01:50:33.497604 2718 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.2,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-zrxd9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7f9448c8f5-ck2sf_calico-system(35b780d0-9cdb-470f-8c65-ede949b6d595): ErrImagePull: failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device" logger="UnhandledError" Aug 13 01:50:33.498900 kubelet[2718]: E0813 01:50:33.498844 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device\"" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" podUID="35b780d0-9cdb-470f-8c65-ede949b6d595" Aug 
13 01:50:36.398361 systemd[1]: Started sshd@47-172.232.7.133:22-147.75.109.163:43422.service - OpenSSH per-connection server daemon (147.75.109.163:43422). Aug 13 01:50:36.729898 sshd[6996]: Accepted publickey for core from 147.75.109.163 port 43422 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:50:36.731027 sshd-session[6996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:50:36.736295 systemd-logind[1539]: New session 48 of user core. Aug 13 01:50:36.744990 systemd[1]: Started session-48.scope - Session 48 of User core. Aug 13 01:50:37.035170 sshd[6998]: Connection closed by 147.75.109.163 port 43422 Aug 13 01:50:37.036163 sshd-session[6996]: pam_unix(sshd:session): session closed for user core Aug 13 01:50:37.039926 systemd[1]: sshd@47-172.232.7.133:22-147.75.109.163:43422.service: Deactivated successfully. Aug 13 01:50:37.042401 systemd[1]: session-48.scope: Deactivated successfully. Aug 13 01:50:37.045175 systemd-logind[1539]: Session 48 logged out. Waiting for processes to exit. Aug 13 01:50:37.047124 systemd-logind[1539]: Removed session 48. Aug 13 01:50:38.896100 containerd[1559]: time="2025-08-13T01:50:38.896008694Z" level=warning msg="container event discarded" container=cfa298bfa01c63f2814313bc2ca2902a5a814af99803fdce63b44b336bdbb6f3 type=CONTAINER_CREATED_EVENT Aug 13 01:50:38.896100 containerd[1559]: time="2025-08-13T01:50:38.896067873Z" level=warning msg="container event discarded" container=cfa298bfa01c63f2814313bc2ca2902a5a814af99803fdce63b44b336bdbb6f3 type=CONTAINER_STARTED_EVENT Aug 13 01:50:39.156265 kubelet[2718]: I0813 01:50:39.156169 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:50:39.157090 kubelet[2718]: I0813 01:50:39.156620 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:50:39.160486 kubelet[2718]: I0813 01:50:39.159702 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:50:39.185983 kubelet[2718]: I0813 01:50:39.185952 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:50:39.186200 kubelet[2718]: I0813 01:50:39.186183 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-7f9448c8f5-ck2sf","calico-system/calico-typha-b5b9867b4-p6jwz","kube-system/coredns-668d6bf9bc-dfjz8","kube-system/coredns-668d6bf9bc-j47vf","calico-system/calico-node-qgskr","kube-system/kube-controller-manager-172-232-7-133","kube-system/kube-proxy-fw2dv","calico-system/csi-node-driver-dbqt2","kube-system/kube-apiserver-172-232-7-133","kube-system/kube-scheduler-172-232-7-133"] Aug 13 01:50:39.186301 kubelet[2718]: E0813 01:50:39.186290 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:50:39.186360 kubelet[2718]: E0813 01:50:39.186351 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-b5b9867b4-p6jwz" Aug 13 01:50:39.186482 kubelet[2718]: E0813 01:50:39.186402 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:50:39.186482 kubelet[2718]: E0813 01:50:39.186414 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:50:39.186482 kubelet[2718]: E0813 01:50:39.186423 2718 
eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-qgskr" Aug 13 01:50:39.186482 kubelet[2718]: E0813 01:50:39.186431 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-133" Aug 13 01:50:39.186482 kubelet[2718]: E0813 01:50:39.186438 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-fw2dv" Aug 13 01:50:39.186482 kubelet[2718]: E0813 01:50:39.186448 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:50:39.186482 kubelet[2718]: E0813 01:50:39.186457 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-133" Aug 13 01:50:39.186482 kubelet[2718]: E0813 01:50:39.186464 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-133" Aug 13 01:50:39.186482 kubelet[2718]: I0813 01:50:39.186473 2718 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:50:39.300248 containerd[1559]: time="2025-08-13T01:50:39.300172817Z" level=warning msg="container event discarded" container=cf8808596e9944d86c6c06b036bdf15dc271c9e20b2942d501808df66f92f821 type=CONTAINER_CREATED_EVENT Aug 13 01:50:39.300248 containerd[1559]: time="2025-08-13T01:50:39.300227947Z" level=warning msg="container event discarded" container=cf8808596e9944d86c6c06b036bdf15dc271c9e20b2942d501808df66f92f821 type=CONTAINER_STARTED_EVENT Aug 13 01:50:40.706997 containerd[1559]: time="2025-08-13T01:50:40.706915444Z" level=warning msg="container event discarded" container=58faca2b388dd49391b71a9a8e39a338df8505373c8086610067d0e7f31e1aa8 type=CONTAINER_CREATED_EVENT Aug 13 01:50:40.783999 containerd[1559]: time="2025-08-13T01:50:40.783951329Z" level=warning msg="container event discarded" container=58faca2b388dd49391b71a9a8e39a338df8505373c8086610067d0e7f31e1aa8 type=CONTAINER_STARTED_EVENT Aug 13 01:50:41.447249 containerd[1559]: time="2025-08-13T01:50:41.447172918Z" level=warning msg="container event discarded" container=d3371c98dab10353f9b24b83c5f00584e60f47e52018587e00a849bd276cfd28 type=CONTAINER_CREATED_EVENT Aug 13 01:50:41.525151 containerd[1559]: time="2025-08-13T01:50:41.525107600Z" level=warning msg="container event discarded" container=d3371c98dab10353f9b24b83c5f00584e60f47e52018587e00a849bd276cfd28 type=CONTAINER_STARTED_EVENT Aug 13 01:50:41.632310 containerd[1559]: time="2025-08-13T01:50:41.632251557Z" level=warning msg="container event discarded" container=d3371c98dab10353f9b24b83c5f00584e60f47e52018587e00a849bd276cfd28 type=CONTAINER_STOPPED_EVENT Aug 13 01:50:42.099737 systemd[1]: Started sshd@48-172.232.7.133:22-147.75.109.163:37946.service - OpenSSH per-connection server daemon (147.75.109.163:37946). Aug 13 01:50:42.436956 sshd[7010]: Accepted publickey for core from 147.75.109.163 port 37946 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:50:42.438565 sshd-session[7010]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:50:42.443127 systemd-logind[1539]: New session 49 of user core. Aug 13 01:50:42.452893 systemd[1]: Started session-49.scope - Session 49 of User core. 
Aug 13 01:50:42.759243 sshd[7012]: Connection closed by 147.75.109.163 port 37946 Aug 13 01:50:42.758873 sshd-session[7010]: pam_unix(sshd:session): session closed for user core Aug 13 01:50:42.767556 systemd[1]: sshd@48-172.232.7.133:22-147.75.109.163:37946.service: Deactivated successfully. Aug 13 01:50:42.768910 systemd-logind[1539]: Session 49 logged out. Waiting for processes to exit. Aug 13 01:50:42.772200 systemd[1]: session-49.scope: Deactivated successfully. Aug 13 01:50:42.775216 systemd-logind[1539]: Removed session 49. Aug 13 01:50:43.780016 kubelet[2718]: E0813 01:50:43.779977 2718 dns.go:153] "Nameserver limits exceeded" err="Nameserver limits were exceeded, some nameservers have been omitted, the applied nameserver line is: 172.232.0.13 172.232.0.22 172.232.0.9" Aug 13 01:50:43.812558 containerd[1559]: time="2025-08-13T01:50:43.812460555Z" level=warning msg="container event discarded" container=e1ffa803e2080b8112af836cb46b273f69b34665ba26bb9821625310bd6e7a5f type=CONTAINER_CREATED_EVENT Aug 13 01:50:43.899751 containerd[1559]: time="2025-08-13T01:50:43.899698027Z" level=warning msg="container event discarded" container=e1ffa803e2080b8112af836cb46b273f69b34665ba26bb9821625310bd6e7a5f type=CONTAINER_STARTED_EVENT Aug 13 01:50:44.513339 containerd[1559]: time="2025-08-13T01:50:44.513248751Z" level=warning msg="container event discarded" container=e1ffa803e2080b8112af836cb46b273f69b34665ba26bb9821625310bd6e7a5f type=CONTAINER_STOPPED_EVENT Aug 13 01:50:46.787925 kubelet[2718]: E0813 01:50:46.787684 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device\"" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" podUID="35b780d0-9cdb-470f-8c65-ede949b6d595" Aug 13 01:50:47.824418 systemd[1]: Started sshd@49-172.232.7.133:22-147.75.109.163:37950.service - OpenSSH per-connection server daemon (147.75.109.163:37950). Aug 13 01:50:48.170564 sshd[7023]: Accepted publickey for core from 147.75.109.163 port 37950 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:50:48.172314 sshd-session[7023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:50:48.179214 systemd-logind[1539]: New session 50 of user core. Aug 13 01:50:48.183061 systemd[1]: Started session-50.scope - Session 50 of User core. Aug 13 01:50:48.486880 sshd[7025]: Connection closed by 147.75.109.163 port 37950 Aug 13 01:50:48.487450 sshd-session[7023]: pam_unix(sshd:session): session closed for user core Aug 13 01:50:48.492650 systemd[1]: sshd@49-172.232.7.133:22-147.75.109.163:37950.service: Deactivated successfully. Aug 13 01:50:48.495188 systemd[1]: session-50.scope: Deactivated successfully. Aug 13 01:50:48.496390 systemd-logind[1539]: Session 50 logged out. Waiting for processes to exit. Aug 13 01:50:48.498693 systemd-logind[1539]: Removed session 50. 
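[Editorial note] The recurring "Nameserver limits exceeded" warnings reflect the resolv.conf limit of three nameservers: extra entries are dropped and only the first three are applied (the log shows the three survivors). The sketch below is a hedged illustration with a hypothetical helper; the fourth address is a made-up example, not taken from the log.

```go
// Hedged illustration of truncating a resolver list to the three-nameserver
// limit; not kubelet code.
package main

import "fmt"

const maxNameservers = 3

func applyNameservers(all []string) (applied, omitted []string) {
	if len(all) <= maxNameservers {
		return all, nil
	}
	return all[:maxNameservers], all[maxNameservers:]
}

func main() {
	applied, omitted := applyNameservers([]string{
		"172.232.0.13", "172.232.0.22", "172.232.0.9",
		"203.0.113.53", // hypothetical extra entry, added only for the example
	})
	fmt.Println("applied:", applied)
	fmt.Println("omitted:", omitted)
}
```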
Aug 13 01:50:49.213298 kubelet[2718]: I0813 01:50:49.213263 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:50:49.213298 kubelet[2718]: I0813 01:50:49.213309 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:50:49.217332 kubelet[2718]: I0813 01:50:49.217305 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:50:49.231626 kubelet[2718]: I0813 01:50:49.231599 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:50:49.231790 kubelet[2718]: I0813 01:50:49.231757 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-7f9448c8f5-ck2sf","calico-system/calico-typha-b5b9867b4-p6jwz","kube-system/coredns-668d6bf9bc-dfjz8","kube-system/coredns-668d6bf9bc-j47vf","calico-system/calico-node-qgskr","kube-system/kube-controller-manager-172-232-7-133","kube-system/kube-proxy-fw2dv","calico-system/csi-node-driver-dbqt2","kube-system/kube-apiserver-172-232-7-133","kube-system/kube-scheduler-172-232-7-133"] Aug 13 01:50:49.231869 kubelet[2718]: E0813 01:50:49.231793 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:50:49.231869 kubelet[2718]: E0813 01:50:49.231806 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-b5b9867b4-p6jwz" Aug 13 01:50:49.231869 kubelet[2718]: E0813 01:50:49.231816 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:50:49.231869 kubelet[2718]: E0813 01:50:49.231825 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:50:49.231869 kubelet[2718]: E0813 01:50:49.231833 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-qgskr" Aug 13 01:50:49.231869 kubelet[2718]: E0813 01:50:49.231841 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-133" Aug 13 01:50:49.232005 kubelet[2718]: E0813 01:50:49.231849 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-fw2dv" Aug 13 01:50:49.232005 kubelet[2718]: E0813 01:50:49.231902 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:50:49.232005 kubelet[2718]: E0813 01:50:49.231911 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-133" Aug 13 01:50:49.232005 kubelet[2718]: E0813 01:50:49.231920 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-133" Aug 13 01:50:49.232005 kubelet[2718]: I0813 01:50:49.231929 2718 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:50:53.545403 systemd[1]: Started sshd@50-172.232.7.133:22-147.75.109.163:48168.service - OpenSSH per-connection server daemon (147.75.109.163:48168). 
Aug 13 01:50:53.882357 sshd[7037]: Accepted publickey for core from 147.75.109.163 port 48168 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:50:53.883719 sshd-session[7037]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:50:53.888921 systemd-logind[1539]: New session 51 of user core. Aug 13 01:50:53.896982 systemd[1]: Started session-51.scope - Session 51 of User core. Aug 13 01:50:54.196795 sshd[7039]: Connection closed by 147.75.109.163 port 48168 Aug 13 01:50:54.198078 sshd-session[7037]: pam_unix(sshd:session): session closed for user core Aug 13 01:50:54.203245 systemd-logind[1539]: Session 51 logged out. Waiting for processes to exit. Aug 13 01:50:54.203498 systemd[1]: sshd@50-172.232.7.133:22-147.75.109.163:48168.service: Deactivated successfully. Aug 13 01:50:54.207530 systemd[1]: session-51.scope: Deactivated successfully. Aug 13 01:50:54.211962 systemd-logind[1539]: Removed session 51. Aug 13 01:50:55.426719 containerd[1559]: time="2025-08-13T01:50:55.426676222Z" level=info msg="TaskExit event in podsandbox handler container_id:\"c417f2abc38fb1e7f571e895b4832cd8d84c96e5d9cac2ded4c0b2b1c7bebb21\" id:\"f33bee1590cb510b80cfd72e1b3aca94522c36579ca22adfecc2f8066ba221c1\" pid:7062 exited_at:{seconds:1755049855 nanos:426332462}" Aug 13 01:50:59.266988 systemd[1]: Started sshd@51-172.232.7.133:22-147.75.109.163:43146.service - OpenSSH per-connection server daemon (147.75.109.163:43146). Aug 13 01:50:59.268842 kubelet[2718]: I0813 01:50:59.268815 2718 eviction_manager.go:376] "Eviction manager: attempting to reclaim" resourceName="ephemeral-storage" Aug 13 01:50:59.269552 kubelet[2718]: I0813 01:50:59.268872 2718 container_gc.go:86] "Attempting to delete unused containers" Aug 13 01:50:59.274541 kubelet[2718]: I0813 01:50:59.274472 2718 image_gc_manager.go:431] "Attempting to delete unused images" Aug 13 01:50:59.301233 kubelet[2718]: I0813 01:50:59.301051 2718 eviction_manager.go:387] "Eviction manager: must evict pod(s) to reclaim" resourceName="ephemeral-storage" Aug 13 01:50:59.301431 kubelet[2718]: I0813 01:50:59.301413 2718 eviction_manager.go:405] "Eviction manager: pods ranked for eviction" pods=["calico-system/calico-kube-controllers-7f9448c8f5-ck2sf","calico-system/calico-typha-b5b9867b4-p6jwz","kube-system/coredns-668d6bf9bc-dfjz8","kube-system/coredns-668d6bf9bc-j47vf","calico-system/calico-node-qgskr","kube-system/kube-controller-manager-172-232-7-133","kube-system/kube-proxy-fw2dv","calico-system/csi-node-driver-dbqt2","kube-system/kube-apiserver-172-232-7-133","kube-system/kube-scheduler-172-232-7-133"] Aug 13 01:50:59.301987 kubelet[2718]: E0813 01:50:59.301922 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" Aug 13 01:50:59.301987 kubelet[2718]: E0813 01:50:59.301940 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-typha-b5b9867b4-p6jwz" Aug 13 01:50:59.301987 kubelet[2718]: E0813 01:50:59.301949 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-dfjz8" Aug 13 01:50:59.302273 kubelet[2718]: E0813 01:50:59.301956 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/coredns-668d6bf9bc-j47vf" Aug 13 01:50:59.302273 kubelet[2718]: E0813 01:50:59.302116 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/calico-node-qgskr" 
Aug 13 01:50:59.302273 kubelet[2718]: E0813 01:50:59.302125 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-controller-manager-172-232-7-133" Aug 13 01:50:59.302273 kubelet[2718]: E0813 01:50:59.302133 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-proxy-fw2dv" Aug 13 01:50:59.302273 kubelet[2718]: E0813 01:50:59.302144 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="calico-system/csi-node-driver-dbqt2" Aug 13 01:50:59.302273 kubelet[2718]: E0813 01:50:59.302152 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-apiserver-172-232-7-133" Aug 13 01:50:59.302759 kubelet[2718]: E0813 01:50:59.302610 2718 eviction_manager.go:609] "Eviction manager: cannot evict a critical pod" pod="kube-system/kube-scheduler-172-232-7-133" Aug 13 01:50:59.302759 kubelet[2718]: I0813 01:50:59.302628 2718 eviction_manager.go:438] "Eviction manager: unable to evict any pods from the node" Aug 13 01:50:59.618846 sshd[7076]: Accepted publickey for core from 147.75.109.163 port 43146 ssh2: RSA SHA256:cBmfpXKRPHuhsMLAEvSmRHUbYKxumz8ENH6QtYQprao Aug 13 01:50:59.621036 sshd-session[7076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Aug 13 01:50:59.626104 systemd-logind[1539]: New session 52 of user core. Aug 13 01:50:59.631141 systemd[1]: Started session-52.scope - Session 52 of User core. Aug 13 01:50:59.937048 sshd[7078]: Connection closed by 147.75.109.163 port 43146 Aug 13 01:50:59.938182 sshd-session[7076]: pam_unix(sshd:session): session closed for user core Aug 13 01:50:59.943088 systemd-logind[1539]: Session 52 logged out. Waiting for processes to exit. Aug 13 01:50:59.943733 systemd[1]: sshd@51-172.232.7.133:22-147.75.109.163:43146.service: Deactivated successfully. Aug 13 01:50:59.946439 systemd[1]: session-52.scope: Deactivated successfully. Aug 13 01:50:59.952224 systemd-logind[1539]: Removed session 52. Aug 13 01:51:01.781585 kubelet[2718]: E0813 01:51:01.781511 2718 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": ErrImagePull: failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.2\\\": failed to copy: write /var/lib/containerd/io.containerd.content.v1.content/ingest/4283d0a398ddaeb52dccc841c03984c7b2a083fc772ca6b92c29a9b825e542b6/data: no space left on device\"" pod="calico-system/calico-kube-controllers-7f9448c8f5-ck2sf" podUID="35b780d0-9cdb-470f-8c65-ede949b6d595"
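[Editorial note] The repeated ErrImagePull/ImagePullBackOff entries all fail while writing under /var/lib/containerd with "no space left on device". As a rough, hedged check of that condition (assuming Linux and the containerd path quoted in the errors; this is not part of containerd or kubelet), one could statfs the backing filesystem:

```go
// Rough free-space check for the filesystem behind /var/lib/containerd,
// illustrating why every layer ingest in the log above keeps failing.
package main

import (
	"fmt"
	"syscall"
)

func main() {
	const containerdRoot = "/var/lib/containerd" // path taken from the error messages above
	var fs syscall.Statfs_t
	if err := syscall.Statfs(containerdRoot, &fs); err != nil {
		fmt.Println("statfs failed:", err)
		return
	}
	freeBytes := fs.Bavail * uint64(fs.Bsize)
	totalBytes := fs.Blocks * uint64(fs.Bsize)
	fmt.Printf("%s: %d of %d bytes free\n", containerdRoot, freeBytes, totalBytes)
	if freeBytes == 0 {
		fmt.Println("no space left on device: image pulls will keep failing until space is reclaimed")
	}
}
```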